Tech News Weekly 315 Transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
0:00:00 - Mikah Sargent
Coming up on Tech News Weekly. Are you ready for this show? Let's see. Thomas Germain of Gizmodo stops by to tell us about some new guidance in the UK: judges are now allowed to use ChatGPT in legal rulings. What are the implications? Thomas Germain talks all about it. Then Sam Abuelsamid joins me to talk about that Tesla recall. Yeah, almost all of its vehicles have been recalled, and that is because of an Autopilot safety flaw. I also have two stories of the week this week, first and foremost about E3 and why it has been canceled, with a broader look at the rest of the media industry. And, of course, we've got to talk about AI, this time with a report from Stanford University that suggests our fears about students using ChatGPT for cheating are a little overblown, at least for now. Stay tuned for this episode of Tech News Weekly. This is Tech News Weekly episode 315, recorded Thursday, December 14th 2023: Why Tesla Recalled 2M+ Vehicles. This episode of Tech News Weekly is brought to you by our friends at ITProTV, now ACI Learning. IT skills are outdated in about 18 months. Launch or advance your career today with quality, affordable, entertaining training. Individuals, use code TWIT30 for 30% off a standard or premium individual ITPro membership at go.acilearning.com/twit.
Hello and welcome to Tech News Weekly, the show where every week I talk to and about the people making and breaking the tech news. I am your host, Mikah Sargent, and I've got a great show planned for you today. Up first, this is an interesting story. When I saw this fly by, I thought, okay, I've got to talk about this, because it's AI and we talk about AI a lot on Tech News Weekly, but also because it has me concerned. Okay, joining us to talk about judges being given the okay to use ChatGPT in legal rulings is Gizmodo's Thomas Germain. Welcome to the show, Thomas.
0:02:19 - Thomas Germain
Thanks for having me.
0:02:21 - Mikah Sargent
Yeah, it's great to have a chance to talk with you. So let's start by kind of going over the basics here. Can you explain the UK Judicial Office's decision to permit the use of ChatGPT and other AI tools in legal rulings?
0:02:35 - Thomas Germain
Yeah, so the Judicial Office is the governing body in the UK that oversees judges and magistrates and some other legal officials in England and Wales, and earlier this week they issued guidance which gives judges official permission to use AI for a variety of tasks in the courtroom, which raises a lot of interesting questions. I'm not really sure whether we should be freaking out about this, or whether it's not that big of a deal and this is just the march of time, but it's definitely a pretty strange world that we're living in, where AI is going to be playing probably an increasing role in the courtroom. Right now this is only something we're seeing across the pond, but I think it's only a matter of time before it comes to the United States in an official capacity, and we've already seen judges and lawyers using ChatGPT and other AI tools in the United States as well. So this is a little preview of the weird, surreal future that we're all about to enter together.
0:03:31 - Mikah Sargent
It's what's to come. So what are some of the criticisms or concerns that have been raised regarding the use of AI in legal contexts specifically? We talk a lot on this show about some of the other implications, but in a legal context, the very fact that the UK Judicial Office had to say you can use it tells you there's maybe something here to be concerned about.
0:03:57 - Thomas Germain
Right, that's a great point. The fact that they had to step in at all indicates that people aren't sure whether this is okay, and there are some weird questions that we need to answer. The concerns in the legal context are pretty much the same ones you have anywhere else in the world of AI, and I think the most important thing to keep in mind here is that, as we all know if you've tried ChatGPT or any other tool, it just kind of makes things up sometimes, and it delivers that information in a very confident way, and that's not always clear to the people who are using it.
I think there's a real tech literacy problem when it comes to understanding how these AI chatbots work, what they do, what their limitations are, and when you can and can't trust what they have to say. When we're talking about the legal system, this is often literally a question of life and death, if not actual lifetime jail sentences, important financial questions, or any other issue the legal system has to deal with. So I think the biggest concern here is that lawyers and judges are going to charge ahead and use this technology and throw it into the courtroom and their cases and filings without really understanding that it might be getting things wrong, and without doing their due diligence to check the information that they're feeding into the system. We're going to see those problems play out in real time and find out what the consequences are, because right now it's no holds barred.
0:05:21 - Mikah Sargent
Now, you mentioned this is across the pond, but here in the United States there was an incident in New York involving lawyers and AI. Tell us about what happened there, and maybe how it has influenced the perception of AI in legal proceedings.
0:05:40 - Thomas Germain
So earlier this year there was a case that got a lot of attention, with two lawyers who issued a court filing, put it before the judge, that was written by ChatGPT. The problem was, it turned out that it just made up a couple of cases and citations. So they confidently put out this document. A big part of legal filings is citing precedent and other cases, and it wrote the citations in the correct format, but the cases were entirely made up. You wouldn't know it unless you went through hand by hand and checked all of the individual citations. But that's exactly what happened. It was totally made up. The judge found out, he was unsurprisingly furious, and he fined these two lawyers.
It was a pretty small fine, $5,000, given the typical lawyer's salary, but it's a sign of a much bigger issue. This wasn't a particularly impactful case, but in another context, if it was a criminal proceeding, this could have really serious consequences, especially if something just sneaks by. There was another case, and this is a little weirder. This rapper, who's a member of the Fugees, the band that was really popular about 10 or 20 years ago, his lawyer used ChatGPT or some other AI tools in his defense when he was facing criminal charges, and he says that he lost the case because his lawyer used AI tools that weren't ready for prime time. So these are real-world concerns. We're seeing them play out, people are trying to wrestle with what the implications are and what the rules should be, and we don't really know.
0:07:11 - Mikah Sargent
I hadn't considered that you could potentially have an issue of mistrial, or grounds for saying let's do this again, because AI influenced the process and may have resulted in losing the case. That's very clever. But also, again, it matters if that sets up any precedent, and I think that's one of the unique aspects of legal proceedings, right? Precedent is set with every possible case, and you kind of pull from that. Now, I was hoping you could elaborate on the example of Lord Justice Birss using ChatGPT in a legal ruling, and I think there was a quote from Birss about ChatGPT.
0:07:56 - Thomas Germain
Yeah, it's very cute in this kind of British way, right? So this is a judge who sits on an appeals court in the UK, and he said that ChatGPT is a 'jolly useful tool.' That was the quote. In a recent ruling, he was doing research about an area of the law that he's not particularly familiar with. He went and asked ChatGPT to summarize the relevant legal issues, he was satisfied with the answer, and he copied and pasted it right into a ruling. And this guy took responsibility. He said, I'm not putting the responsibility on ChatGPT; this is my work, my name's at the bottom of it, the buck stops with me. But I think it speaks to a broader issue, which is that there are going to be judges and lawyers who use this technology who aren't really as versed, who maybe aren't as concerned about the ethical questions, who just shove this stuff into the courtroom and see what happens later.
0:08:57 - Mikah Sargent
So, speaking of this, this was more than just, hey, you can use ChatGPT, you can use AI tools. There was actual guidance provided by the Judicial Office. Can you tell us what kind of guidance was actually provided? And maybe even more, did they place any limitations on how AI can be used in the process?
0:09:21 - Thomas Germain
Yeah. So the thing that really surprised me about this document is just how basic it is. It's six pages long, in a pretty large font, there's not a lot of information in there, and it's a real primer on how AI tech works, really basic stuff, which I think speaks to the fact that the Judicial Office is assuming that legal officials are going to be using this technology without understanding the most basic things about how it works. But there were a couple of limitations that they put in there. They were really kind of soft suggestions, not hard and fast rules. They said ChatGPT and other AI tools aren't particularly useful, and may not be safe, for doing original research. If there's something you don't know anything about, asking ChatGPT and copying and pasting the answer into your ruling maybe isn't safe. And they had to point out that ChatGPT isn't pulling from a verified database of information. It's doing statistical analysis about what the most likely next word in a sentence is going to be, and that's not necessarily a hard and fast set of information that you should be using on such important cases.
There were a couple of other things. They warned that judges should be concerned about privacy issues, because, and this is something people may not recognize about consumer-facing AI tools, for the most part the companies, OpenAI, Google, reserve the right to do basically whatever they want with the information that you're typing in there. So they warned that judges should assume that anything they type into an AI tool is akin to publishing it for all the world to see. Often in the legal system there's sensitive information being handled that's not supposed to get out, and judges should be careful about not putting that in there. But the guidance is pretty vague. For the most part it's kind of free rein to do whatever you want; they're just warning that there are a couple of things here you should be wary of.
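To make that "most likely next word" point concrete, here's a toy Python sketch of the mechanism: a language model produces each new token by sampling from a probability distribution over candidates, with no lookup against any verified database of facts. The vocabulary and probabilities below are invented purely for illustration; this is nothing like OpenAI's actual models, just the shape of the idea.

```python
import random

# Invented probabilities for the next token after a prompt; a real model
# computes a distribution like this over tens of thousands of tokens.
next_token_probs = {
    "precedent": 0.45,
    "statute": 0.30,
    "ruling": 0.24,
    "banana": 0.01,  # unlikely, but never impossible: nothing fact-checks it
}

def sample_next_token(probs: dict) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The court relied on established"
print(prompt, sample_next_token(next_token_probs))
```

The point of the sketch is that fluent output and verified output are different things: nothing in that loop ever consults a citation index, which is exactly why the guidance warns judges off using it for original research.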
0:11:14 - Mikah Sargent
Understood. Now, you kind of touched on this throughout, but we've kind of entered the what's-your-take portion of the interview. Legal AI tools obviously involve complex ethical considerations around bias, accountability, and transparency. So, in your view, what's your take? Has the UK Judicial Office sufficiently addressed these concerns for judges using AI with a six-page bit of guidance?
0:11:41 - Thomas Germain
Here's what I'm worried about. I think, just like any other tool, there are safe and ethical ways to use it, and there's nothing inherently wrong about using AI or ChatGPT in the courtroom. I would be comfortable, for example, with a judge going out and doing some research on Wikipedia, as long as the judge understood that Wikipedia is edited by users, that there can be mistakes, and that they were doing their due diligence to go and check the citations and make sure that the information was sound. Nothing wrong with that. Same with AI. There's nothing inherently wrong with a judge typing a question into ChatGPT and then, if they're satisfied with it, if they've done the checking, using it in the legal system.
The problem, and I think we can see this based on just how simplistic this document is, the low level of understanding they're starting with, is that we can expect judges and lawyers to go out and use this technology without understanding how it works. That's what we saw earlier this year in New York, with lawyers just publishing these made-up legal citations, because people don't really get it, and I think that's the problem here. In a couple of years, when we're a little more familiar with AI, when we're growing up in a more native environment with this technology, it will probably become safer to use over time. But right now I think it's maybe a little too early to be giving judges and anyone else free rein to use this technology without a deep understanding of what it is and how it works.
0:13:11 - Mikah Sargent
Yeah, and the implications of this technology being used by those who choose to represent themselves, hoping it can fill in where a public defender provided to them would maybe know more about what's going on, that kind of is frightening to me. And I think you're right that in the long term, as we become more familiar with these tools, and I have a story later on about the lack of familiarity still with these tools, hopefully we will see improvements. Thomas, I want to thank you so much for writing this piece in the first place. Everybody should go to gizmodo.com to check it out. Thank you for your time today. If folks want to follow you online and keep up with what you're doing, where should they go to do so?
0:13:56 - Thomas Germain
I'm on Twitter and I have a very active TikTok account. You can find me there. It's just my name, @thomasgermain, and you can follow along. Awesome, thanks so much, Thomas. We appreciate it.
0:14:06 - Mikah Sargent
Thanks for having me on, this was fun. All right, up next I talk to Sam Abuelsamid, our car guy, about a huge vehicle recall in the United States. But first let's take a quick break so I can tell you about our first sponsor of the show. It's ITProTV. Yes, our friends at ITProTV, now ACI Learning. ACI Learning covers all your audit, cybersecurity, and information technology training needs. Our listeners, you out there, know the name ITProTV as one of our trusted sponsors for the last decade.
As part of ACI Learning, ITProTV, now ITPro, has elevated its highly entertaining, bingeable, short-format content, with new episodes added daily. ACI Learning's personal account managers will be with you every step of the way. You can fortify your expertise with access to self-paced IT training videos, interactive practice labs, and certification practice tests. One user shares, quote, "Excellent resource, not just for theory, but labs incorporated within the subscription. It's fantastic. I highly recommend the resource and top-class instructors." ACI Learning's practice labs let you test and experiment before deploying new apps or updates without compromising your live system, so MSPs love it. If you're ready to bring your team along, by the way, you can visit our special link and fill out ACI's form. TWiT listeners receive at least 20% off an ITPro Enterprise solution and can reach up to 65% in volume discounts, depending on the seats you need.
The audience asked and ACI delivered, just in time to wrap up your annual CPE. ACI is re-releasing their entire audit catalog in shorter, easier-to-digest versions. That means your CPE exams for each course will now be faster as well. Plus, you can check out their audit courses and lab releases, including Certified Information Systems Auditor and CompTIA Security+. Get the full scoop on how ACI can help you navigate the audit world by visiting acilearning.com/blog. Learn more about ACI Learning's premium training options across audit, IT, and cybersecurity readiness at go.acilearning.com/twit. For individuals, use code TWIT30 for 30% off a standard or premium individual ITPro membership. That's go.acilearning.com/twit.
We thank ACI Learning for sponsoring this week's episode of Tech News Weekly. All right, we are back from the break, and that means it is time for our next story. I will let an earlier version of Mikah Sargent tell you all about it. If you are a Tesla owner, or perhaps you're not a Tesla owner, you may have heard that a large number of Teslas have been recalled. Joining us today to talk about these Tesla recalls is the car guy himself, Sam Abuelsamid. Welcome back to the show, Sam.
0:17:04 - Sam Abuelsamid
Hey, Mikah, great to be with you again.
0:17:06 - Mikah Sargent
Great to have you here, so let's get into it. First and foremost, why has Tesla filed a recall and just how many of its vehicles are actually involved in this recall?
0:17:17 - Sam Abuelsamid
So the recall has to do with the hands-on detection and driver monitoring systems on pretty much all of the Teslas that have ever been built, about 2 million vehicles, and that goes back primarily to vehicles built from about 2015 onwards, when they started using Autopilot. Prior to that, Tesla's production volumes were pretty low, so there are probably maybe 50,000 to 75,000 cars that aren't affected, but pretty much all the rest of them are affected.
0:17:54 - Mikah Sargent
So only the oldest of the old, then, are the ones that aren't affected. Okay, yeah.
0:18:04 - Sam Abuelsamid
So this goes back to before they started building the Model 3 in 2017 and then the Model Y in 2020, when they just had the Model S and the X, which were still relatively low-volume vehicles.
0:18:14 - Mikah Sargent
Does the Cybertruck fall into this?
0:18:15 - Sam Abuelsamid
The Cybertruck does fall into this, yes, although, again, there are only a few dozen of those, at most, out there.
0:18:24 - Mikah Sargent
Yeah. Now, this of course was an NHTSA investigation that led to the recall. Do we know how long the investigation was going on in regard to this?
0:18:32 - Sam Abuelsamid
So this investigation was technically going on for about two years, but in reality, this is something that NHTSA probably should have done back in 2016. In May of 2016, there was a crash, the first known fatal crash involving Autopilot. A guy named Joshua Brown died in Florida when his Tesla Model S drove itself underneath a tractor trailer that was crossing the road he was driving down, and the National Transportation Safety Board launched an investigation at that time. When they completed their investigation, they made a number of recommendations. The way the regulatory system in the US works for transportation, the NTSB is an investigative body; they don't have any regulatory or enforcement power, but they make recommendations, and then, for ground transportation, the National Highway Traffic Safety Administration is the one that actually sets the rules and regulations. The NTSB at that time, and actually it was early 2017 by the time they finished the investigation, made a number of really good recommendations that were intended to minimize the possibility of driver misuse of the system. That's what this all comes down to: drivers using Autopilot in ways it wasn't designed for, such as driving hands-off or not paying attention while the system is operating. Among the recommendations the NTSB made back in 2017 was that all vehicles with these types of systems should have more robust driver monitoring to make sure the driver is always watching the road while using the system and that they have their hands on the wheel. And Tesla has never done a very good job of that. Until relatively recently, they didn't look for the driver watching the road at all. They only started doing that when they launched the Full Self-Driving beta.
And for hands on the wheel, they use a torque sensor on the steering column that's looking for the motion of the steering wheel, because when you think about it, when you drive down the road, there's usually a little bit of motion from your hands; you're usually not keeping the steering wheel completely steady. The problem is, that's not a very good way of detecting whether the driver's hands are actually on the wheel, because you can have both false positives and false negatives with it. If it's too strict, then if you are actually holding the wheel steady as you're driving down the road, the system can constantly warn you to put your hands on the wheel even when your hands are on the wheel, and I've actually had this happen with a number of Ford vehicles, because Ford also uses the same type of approach; they don't use a capacitive sensor in the steering wheel. On the opposite side, you can have false positives, where the system will think your hands are on the wheel, because it's easy to fool.
I think you guys have done some stories in the past about devices that you can buy, a little weight that you can hang on the steering wheel, or you can even do it by just shoving a water bottle or an orange in between the steering wheel spokes. What that does is, when auto-steer is doing little steering corrections, there's just enough weight there to counteract that, so it feels like there's some resistance from the driver's hands. So it's just not a very good solution for detecting hands on the wheel.
What other manufacturers like General Motors, Nissan, Volvo, Stellantis, and others are doing is putting capacitive sensors in the steering wheel that actually detect when your hands are touching the wheel, which is what you really want.
Then the other part of this is eyes watching the road, and again, more recent Teslas do have a camera that's mounted up by the rearview mirror, but it's an RGB camera, so it doesn't work well in low light conditions.
It's also very easy to fool. Consumer Reports did some testing, last year I think, where they basically just took a picture of a face and held it up in front of the camera, and it detected that as somebody watching the road. What you actually want is something that can do some depth perception. So again, GM, Ford, Nissan, and many others use infrared cameras mounted by the steering column or in the instrument cluster that work very similarly to Face ID on your iPhone, the same type of idea. They detect where your eyes are looking and your head position, can tell whether your face is three-dimensional instead of two-dimensional, work at night, work through polarized sunglasses, and can detect whether you are actually watching the road. Tesla does none of this. So, as a result, this recall, which all it's doing is pushing out an over-the-air software update that's supposed to enhance the systems they already have, is probably going to have little or no actual effect.
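To picture why a torque-only check invites both failure modes Sam describes, here's a minimal Python sketch. The threshold and the readings are invented for illustration; they are not Tesla's actual parameters.

```python
# Naive hands-on-wheel detector: treat any steering resistance above a
# threshold as "hands on". All values here are made up for the example.
TORQUE_THRESHOLD_NM = 0.3  # hypothetical threshold, in newton-metres

def hands_likely_on_wheel(measured_torque_nm: float) -> bool:
    """Report hands-on whenever measured steering torque exceeds the threshold."""
    return abs(measured_torque_nm) > TORQUE_THRESHOLD_NM

# A steady, attentive grip produces almost no torque -> false negative (nag).
print(hands_likely_on_wheel(0.05))  # False

# A water bottle wedged in the wheel resists auto-steer's corrections
# -> false positive (the defeat device "works").
print(hands_likely_on_wheel(0.60))  # True
```

A capacitive sensor sidesteps both cases because it measures touch directly instead of inferring it from steering resistance.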
0:23:52 - Mikah Sargent
Wow, okay. Wow, I've got lots of questions from this. So, firstly, do you think the fact that Tesla doesn't have these more advanced detection features is a selling point for some folks? And secondarily, kind of going on from that, can you explain who the person is, and what their goal is, in circumventing these systems? Is it someone who's literally trying to sleep while they're on the road? Is it that sort of libertarian moment of, I just want to see if I can circumvent this? Do they just want to have their hands by their sides? Why are people sticking oranges and water bottles in the wheel and holding up photos? I know that was a test, but what's the point of that? Why would you want to just not have your hands there? And then, yeah, is it a kind of secret or subtle selling point of Tesla vehicles that it's a little bit easier to fool that system?
0:24:58 - Sam Abuelsamid
Yeah, it's a bit of all of that, really. Back in 2016, when they launched version two of Autopilot after the Joshua Brown crash, the original supplier of the hardware they were using, a company called Mobileye, realized that Tesla was misusing the system for Autopilot. They had never intended it to be used the way Tesla was using it, so they stopped supplying their stuff to Tesla. So Tesla developed their own in-house system, and during the announcement of that in 2016, Elon Musk, in the first sentence of the press conference, and I'll send you a link, you can find the recording of this conference call online, basically said that, starting this week, all Tesla vehicles have all the hardware they need to be Level 5 autonomous, meaning that they can drive autonomously everywhere, and it's just going to be a matter of updating the software over time. Back in 2016. And a lot of Tesla fans and investors have bought into this idea that Tesla is going to be able to make their vehicles fully autonomous with nothing more than software updates, with the hardware that they have, which is eight cameras, one radar sensor, which they actually don't even use anymore, and some ultrasonic sensors.
This was never true, but a lot of Tesla fans really bought into the idea, and there are people in the tech industry who think, oh yeah, let's see what this can really do, and they're trying to push it to its limits. And Musk has done nothing to dissuade people from this idea, even though, when you first start the system, the usual end-user license agreement says the driver's fully responsible: you've got to keep your hands on the wheel, eyes on the road. But they've never really done anything to enforce that, whereas other manufacturers do. And so there are some Tesla fans that do misuse the system.
0:27:13 - Mikah Sargent
Got it. Now, I know that earlier this year there was a recall for Tesla vehicles. Was that kind of the first portion of this? There was the FSD beta software that NHTSA said was not working properly. How does that relate to this?
0:27:28 - Sam Abuelsamid
That was a different one. That one, if I recall correctly, had to do with the FSD beta software. So FSD is the Full Self-Driving system, which is not actually fully self-driving, but that's a whole other discussion. One of the things that Tesla had programmed it to do was sometimes roll through stop signs instead of coming to a complete stop, and there were a few other things like that that were not technically following the rules of the road, and NHTSA required them to push out an update to change that.
0:28:04 - Mikah Sargent
Got it. Now, you said it was programmed. Does that mean it was quite literally programmed in, as opposed to just being something that happened by accident? They said, go ahead and roll through stop signs, like you live in California? That is wild to me. I didn't realize it was like that.
0:28:18 - Sam Abuelsamid
Yeah, the idea is they were trying to make it behave the way human drivers do. Right, and there are some things that human drivers do that you probably don't want to replicate with software. Yeah.
0:28:33 - Mikah Sargent
I'm from Missouri, and so I call it the California stop, because everybody out here just rolls through stop signs all the time. I do not, so I guess I'm siding more with NHTSA in that way. Now, I feel like this technology has been under scrutiny for a number of reasons, it not being full self-driving, the claims that were made early on, and now this issue. I'm just curious: you, as someone who's watched this and knows the technology, were you even surprised to see this recall come around and to see this many vehicles, basically all Teslas, be recalled? Did it shock you?
0:29:09 - Sam Abuelsamid
Yeah, frankly, given NHTSA's complete lack of real action and taking responsibility over the last seven years, I'm kind of surprised that they even bothered. To the degree that they did anything at all, I'm glad they did, considering how long it's been that they could have.
If they had taken action back in 2017, when the NTSB made those recommendations, that would have been prior to the launch of the Model 3. The Model 3, and then the beginning of Model Y production in 2020, represent the vast majority of all of Tesla's production ever. The Model S and the Model X have always been relatively low-volume cars, but the 3 and the Y are the ones they sell in high volumes. And if they had taken action before the Model 3 went into production and required Tesla to make some real, substantive changes back then, there wouldn't have been a recall of two million vehicles now; there might have been a recall of probably 50,000 or 60,000 vehicles at most.
0:30:19 - Mikah Sargent
Got it. You briefly touched on this throughout; I'd love it if you could sum it up. There are a number of vehicle manufacturers that have Autopilot-like systems in their vehicles. How does Tesla's technology compare? Is it more advanced? Is it less advanced? Again, you touched on it: some of them seem to have more safety measures in place than others. Is that a matter of it coming from old-school car manufacturers? Where does that come in?
0:30:51 - Sam Abuelsamid
Yeah, General Motors was one of the first to launch a system like this, their Super Cruise system, which came out originally in 2017. Ford has BlueCruise, which they have on a number of models now; it's been out for a couple of years. There's Nissan's ProPilot 2.0, BMW has a system like this, Mercedes-Benz, and there are others coming to market over the next year or two that are actually hands-free driving systems. They're designed to be operated hands-free, but the thing they do is geofence them. You can only activate the hands-free component when you're on certain roads, mostly divided, limited-access highways, where there are few or no intersections and you don't have pedestrians to deal with, things like that. They use maps to geofence the systems, and they also use more robust driver monitoring for eye gaze, head position, and hands on the wheel.
Now, there are others, like the first-generation Nissan ProPilot system or Hyundai's Highway Drive Assist, that are nominally similar to Tesla and allow hands-on lane centering and auto-steer functionality. But even those, while they typically don't have a capacitive sensor in the steering wheel, typically have a shorter timeout: usually within about 10 seconds or so, if the system thinks the driver's hands aren't on the steering wheel, it'll tell you to put your hands on the wheel.
With Tesla, a lot of times the system can go minutes without alerting the driver to put their hands on the wheel, and Tesla has never geofenced their system in any way to limit where you can enable it. Even though they tell you to only use it on highways, there's nothing to stop you from using it in the city or anywhere else. As an engineer, one of the things I learned early on in my career, and that we always practiced, and I worked on safety systems like traction control and electronic stability control, was to try to anticipate the ways that drivers could misuse the system and then put in whatever you could to detect that, mitigate it, and minimize its impact. Tesla has never done that. They've said, yeah, the driver's responsible, let them do what they want, and that's, I think, a reckless and irresponsible way to do this.
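As a rough picture of the geofencing and timeout behavior Sam contrasts here, consider this Python sketch. The road classes, timeout values, and function names are all hypothetical; this is not any automaker's real software, just the two knobs he describes.

```python
# Hypothetical gate for a driver-assist feature: a map-based geofence
# plus a hands-off nag timer. All values are invented for illustration.
ALLOWED_ROAD_CLASSES = {"motorway", "divided_highway"}

def assist_allowed(road_class: str) -> bool:
    """Geofence: only enable the system on mapped limited-access highways."""
    return road_class in ALLOWED_ROAD_CLASSES

def should_nag(seconds_since_driver_torque: float, timeout_s: float) -> bool:
    """Nag timer: warn the driver once no steering input is seen for too long."""
    return seconds_since_driver_torque >= timeout_s

print(assist_allowed("city_street"))   # False: outside the geofence
print(should_nag(30, timeout_s=10))    # True: a ~10 s timeout nags quickly
print(should_nag(30, timeout_s=300))   # False: a minutes-long timeout stays quiet
```

The contrast Sam draws amounts to where those two knobs are set: no geofence at all, and a very long effective timeout.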
0:33:47 - Mikah Sargent
Yeah, it seems vastly different from what all these other manufacturers are doing, where there is at least some concern, whereas this just seems like, let them figure it out. Lastly, I'll just ask: what will Tesla owners need to do to address the recall? I imagine they don't need to go into some Tesla dealer to get a new piece or something. It's much simpler than that.
0:34:15 - Sam Abuelsamid
Yeah, basically they don't have to do anything, which is one of the nice things about over-the-air updates. Tesla will push out the update software over a Wi-Fi or cellular connection to the car, and when the car is parked, it will run the software update. The next time you get in the car, it'll flash up on the screen and say, hey, you've got a software update, here are the change notes, and you'll be on your way. So there's nothing the customers have to do, and because they're not being required to make any hardware changes, that's it. Pretty much all Tesla owners should be getting the update within the next day or so.
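For a concrete sense of that flow, here's a small Python sketch of an over-the-air update lifecycle: download in the background, install only while parked, surface the release notes on the next drive. The structure, names, and version strings are invented for illustration, not Tesla's actual software.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Car:
    parked: bool
    installed_version: str = "2023.44"      # hypothetical current firmware
    pending_update: Optional[str] = None

def receive_update(car: Car, version: str) -> None:
    """Download the update in the background over Wi-Fi or cellular."""
    car.pending_update = version

def try_install(car: Car) -> None:
    """Install only while parked, then queue the release notes for display."""
    if car.parked and car.pending_update:
        car.installed_version = car.pending_update
        car.pending_update = None
        print(f"Installed {car.installed_version}; release notes on next startup")

car = Car(parked=True)
receive_update(car, "2023.44.30")  # hypothetical version carrying the recall fix
try_install(car)
```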
0:35:01 - Mikah Sargent
Wonderful, all right. Well, Sam, it is always a pleasure to get to chat with you. I always end up learning something more than I even thought I was going to. If folks want to follow you online and stay up to date with everything that you're doing, where should they go to do so?
0:35:14 - Sam Abuelsamid
You can find me at my day job as a principal analyst at guidehouseinsights.com. You can check out the Wheel Bearings podcast at wheelbearings.media, which I do with my friends Roberto Baldwin and Nicole Wakelin every week. And you can find me on Mastodon and Threads and Bluesky as well. Just look for my name.
0:35:40 - Mikah Sargent
Awesome. Thank you so much. We appreciate you.
0:35:42 - Sam Abuelsamid
Have a great day, Mikah.
0:35:44 - Mikah Sargent
All right, thank you, Sam, and thank you, past Mikah, for that. Up next, my first of two stories of the week. This one's all about the end of E3. Before we get to that, though, let me tell you about our next sponsor of Tech News Weekly. It's Wix, who are bringing you this episode.
Web agencies out there, you are going to like this one. Let me tell you about Wix Studio, the platform that gives agencies total creative freedom to deliver complex client sites while still smashing deadlines. How? Well, first, let's talk about the advanced design capabilities. With Wix Studio, you can build unique layouts with a revolutionary grid experience and watch as elements scale proportionally by default. No-code animations add sparks of delight, while custom CSS gives total design control. But it doesn't stop there. Bring ambitious client projects to life in any industry with a fully integrated suite of business solutions, from e-com to events, bookings, and more, and extend the capabilities even further with hundreds of APIs and integrations. You know what else? The workflows just make sense. There are the built-in AI tools, the centralized workspace, the on-canvas collaborating, the reuse of assets across sites, the seamless client handover. And that's not all. Find out more at wix.com/studio. Thank you, Wix, for sponsoring this week's episode of Tech News Weekly.
So the first of my two stories of the week is about E3, the Electronic Entertainment Expo. The ESA, or Entertainment Software Association, has said, hey, look, it's time: we're not going to be doing the Electronic Entertainment Expo anymore. For years this was the premier event where people gathered to see what was going to be new in gaming. It's kind of like CES, but for games, and it was a place where especially press and other people in the trade would meet and greet and learn about what was coming out. According to VentureBeat, the show, held in June, typically drew crowds of 70,000 people to Los Angeles and gave everyone an opportunity to gather. VentureBeat spoke with Stanley Pierre-Louis of the ESA about why they made the decision to stop doing E3, and it was not surprising to hear that one of the big reasons they've chosen to end E3 is that the big companies that typically took part in presenting at E3, like Electronic Arts, Sony, and eventually even Nintendo, chose to pull out of the event and no longer wanted to have a place there.
And it's kind of interesting, because we had this situation where we had these huge conferences, CES of course being one, E3, and a vast array of huge events where lots of people gathered, and the pandemic played the first role in shifting the way we thought about these events. Due to the pandemic and the inability for folks to gather safely, companies had to pivot and figure out a way to do these things virtually, but that also led to a chance for examination, a chance for these companies to say, is this something that we need to do? And a chance for press and everyone else to say, is this something that we need to do? Is this beneficial? In being forced into that, I think it maybe sped up what would have eventually happened anyway, which was the shift away from having everyone gather together at one event. Because, if you think about it, each of these companies essentially ceded some level of control to that larger group, to the ESA, the Entertainment Software Association. It was saying, you get to put together this event, and we get to come here and have our little stage or our big stage, but others are going to be there too. We're sharing space, we're sharing eyes, we're sharing audience. And so what we saw was a lot of these companies going, we don't need something like this, we can do it ourselves, and I think that speaks to a larger shift in media in general, and that is part of the conversation with Pierre-Louis about the shift that has taken place.
I think about podcasting, right? There used to be a time when companies would want to come on to podcasts to talk about the software they make or the hardware they make or whatever it happened to be, and even that has shifted. A lot of these companies have either internally created podcasts themselves, or they've hired a company that specifically works on creating podcasts for them. And so if everybody has their own thing, if everybody can stream to wherever they want to stream, then you don't necessarily have this need anymore to gather at a place and talk about what your newest product is. In the interview, they go through what led to the decision to bring an end to E3 and what it would look like afterward, and one of the big things Pierre-Louis talks about is that these companies have so many different outlets to promote their games. One of the big developments in the gaming industry specifically, and I think this is also big in the consumer tech industry, is the rise of the content producers, the streamers, the influencers, these individuals who have small audiences that grow to be a large audience when they're all brought together. Each of these individual people has 10, 20, 50, 100,000 people watching them, and if a company shares their game with that streamer and another streamer and another streamer, then suddenly it's adding up to millions of views of a new game that might be coming out. So it's not necessarily necessary for VentureBeat or another publication to do the reviews of the games and to be that interstitial piece, that middleman, so to speak, that is perhaps in the way of the company's goals.
So video game companies are trying to think about new ways they can reach people, and that has resulted in pulling away from these traditional trade-association methods. And I liked the part of the conversation that involved looking at what once may have seemed an all-for-one, one-for-all attitude, the sense that the games industry was this entire thing as a whole, and everyone was working together to create and raise up the profile of everyone else. What's interesting about that is, I think about Hollywood, the entertainment industry, and in the early days in particular, there was that all-for-one, one-for-all attitude, because that industry saw how the government was regulating radio and television and, in some cases, the other kinds of basic needs that we have, and they said, look, before the government comes and regulates us, we're going to go ahead and regulate ourselves so that the government doesn't see a need to come in and do that. So they themselves set up the rating system, G, PG, PG-13, R, and all of the rules surrounding that, and that was agreed on as a whole, that the industry would work together. Even to this day, you see how the different unions that exist as part of the entertainment industry can bring things to a standstill if they choose to go on strike. There's still that kind of overarching group mentality there, whereas in other industries that's not necessarily the case. For gaming, early on, that was also the case.
Now you have these smaller companies, and you also have even the bigger companies, who are so focused on their own thing, and who know that they can create and share their own thing, and that leads to this separation that has resulted in something like E3 not being as viable for those companies. They also talk a little bit about whether it was smart to focus E3 on press, as opposed to focusing instead on fans and making it a fan event, something like PAX or Gamescom. Pierre-Louis says, essentially, that while there are other conferences that do that, that was never the thing E3 was going to be; E3 was supposed to be a trade show, an industry way of gathering those folks together, and it was not about the fans per se, even if some of that press ended up being fans of the games overall.
Another important theme running throughout is that you also have to look at a larger issue at play, which is that press in general, journalism, media in general, is at a turning point and is becoming less viable overall. You, as listeners of this show, are recognizing that I am doing this show alone, because even we here at TWiT have been affected by that change in the industry. And so, given the shift where the press is maybe not as powerful as it once was, in terms of its necessity and its ability to draw in eyes and ears, it's no surprise that a press event is not as viable as it once was. So what I think this boils down to, in the end, is that E3 is over, but the ESA is still looking at ways of moving forward, looking at ways to support these companies and figure out what it can do to help the industry it has supported in some way or another for so long. So we'll see what that looks like and how the ESA moves forward without E3 as its premier event every year. And with that, we'll go ahead and head into our next story of the week, where we talk about research that suggests our concerns about students using AI for cheating might be overblown.
Well, at least for now. Stay tuned. All right, so my second story of the week is about a study. This is a piece from the New York Times, and it's talking about AI. You may remember that this whole year of 2023 has had a huge focus, especially in the latter half, on artificial intelligence, and in particular generative AI. I can remember, within days of ChatGPT reaching headlines, there was this immense concern that people were going to use this technology to cheat, that there were going to be essays written by ChatGPT, that people would not do the homework they were assigned because they would have ChatGPT do it for them. And I remember, shortly after that fear came into play, we saw a number of companies putting out tools to help detect if ChatGPT was used in the creation of whatever bit of homework or essay or article it happened to be. A number of those companies reached out to different educational organizations, universities and colleges and high schools and middle schools, basically saying, look, we know this is going to be a problem, so use our tools to make sure that your students are actually writing the papers you have assigned them.
Now, technology like this has existed for a while, because there have been tools that essentially keep a student from plagiarizing: a teacher can take an essay and pop it into a tool that scans the paper and looks for instances of work taken from other places, for plagiarism. I can remember being in high school, and I remember one of the students in the class getting called aside while we were all working on different things, and the teacher, not very quietly, talking to this student and saying that they had plagiarized, and how serious it was, and how, if they did that at university, it could result in all sorts of horrible things. The student of course denied that it had happened, but the teacher then pointed at the screen to show, here's this that's been highlighted, and here's this from the piece that you clearly pulled from, at which point the student admitted to having done it. Again, those tools have been around for some time, because cheating has always been something you needed to worry about as a teacher or professor or some other educational body. But what has proven to be the case, at least for now, is that our fears about chatbots being used for cheating are overblown, the idea that we would see a sudden rampant case of people hopping into ChatGPT and using it. Not the case. Stanford University researchers surveyed students at more than 40 US high schools, and the researchers found that 60 to 70 percent of students said they had recently engaged in cheating. So roughly two-thirds of students said, yes, I have recently cheated, but that is the same percentage as in previous years. For as long as there have been surveys about cheating, roughly two-thirds of students have said, yeah, I cheated recently. That number hasn't risen.
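As an aside, the core idea behind those plagiarism scanners is simple enough to sketch: compare a submission against known sources and flag heavy overlap. Here's a toy Python version using word n-grams; real products are far more sophisticated, and the texts below are invented for the example.

```python
# Toy overlap check in the spirit of a plagiarism scanner: count how many
# of the essay's 5-word sequences also appear in a known source text.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(essay: str, source: str, n: int = 5) -> float:
    """Fraction of the essay's n-grams that also appear in the source."""
    essay_grams = ngrams(essay, n)
    if not essay_grams:
        return 0.0
    return len(essay_grams & ngrams(source, n)) / len(essay_grams)

essay = "the quick brown fox jumps over the lazy dog near the quiet river bank"
source = "a quick brown fox jumps over the lazy dog every single morning"
print(f"Suspicious overlap: {overlap_score(essay, source):.0%}")  # 40%
```

A high score doesn't prove plagiarism on its own, which is why tools like the one my teacher used highlight the matching passages for a human to judge.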
On top of that research from Stanford University, Pew Research Center in October of 2023 surveyed more than 1,400 US teenagers, looking at the awareness of ChatGPT among these teens. Nearly one-third of the teens said they had never heard of something called ChatGPT. Another 44% said they'd heard a little about it, and only 23% of students said they'd heard a lot about ChatGPT. Of the students who said they'd heard about ChatGPT, about one in five, 19%, said they'd used it to help them with their schoolwork, which works out to roughly 13% of all the teens surveyed. Again, a very small percentage in comparison to what I think the early concerns were, that we would see a huge swath of people typing stuff into ChatGPT to get the answers.
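A quick back-of-the-envelope calculation shows how those figures hang together; the percentages below are just the survey shares quoted above.

```python
# Reconciling the Pew figures quoted above (survey shares, not new data).
heard_a_lot = 0.23
heard_a_little = 0.44
heard_any = heard_a_lot + heard_a_little   # about 67% of all teens
used_given_heard = 0.19                    # about one in five of those

used_overall = heard_any * used_given_heard
print(f"Heard of ChatGPT at all: {heard_any:.0%}")                        # 67%
print(f"Used it for schoolwork, share of all teens: {used_overall:.1%}")  # ~12.7%
```

In other words, the roughly 13% figure is a share of all teens, while 19% is a share of only those who had heard of the chatbot.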
But it turns out that a lot of people don't know about ChatGPT in the first place, and, interestingly, the responses tie to some other trends as well. Again according to the New York Times, which looked through this Pew Research study: about 72% of white teens said they had heard about the chatbot, compared with about 56% of Black teens. And about 75%, three-quarters, of teens in households with annual incomes of $75,000 or more said they had heard about ChatGPT, while only 41% of teens in households with annual incomes of less than $30,000 had heard about it. So I think that, while this is something we have to be concerned about going forward, as this becomes even more available and more known, the idea that in the short term we would see this huge wave of people going, all right, the only way I'm doing my homework is by using ChatGPT, has not proven to be realistic.
Another quote from the article: among the subset of teens who said they had heard about the chatbot, the vast majority, 81%, said they had not used it to help them with their schoolwork. So again, about 19% said they had, and 81% said they did not. Of course, this is a self-reported survey, and that obviously bears keeping in mind. We know that Pew Research is good about the way they do their surveys and that they try to account for potential bias, but cheating has been a part of schooling for a very long time. According to this piece, between 2002 and 2015, 64% of students said they had cheated on a test, and 58%, nearly 60%, said they had plagiarized. Honestly, I'm just impressed that 58% of students knew what plagiarism was. That's a success as far as I'm concerned, here in the United States.
I think that when we think of these tools as tools, and consider that they are a way to potentially generate ideas, to provide some feedback, to give a jumping-off point, a starting point essentially, that is where something like this could be helpful. And so what we have to do is realize that, because these tools exist and they're out there, they're absolutely going to be used. You can block ChatGPT from being accessed on the school's Wi-Fi, as some schools did, but that's not going to keep students from using these tools. So, given that we exist in a world where these tools exist, and given that students are going to use them, what we have to do is shape everything else with that understanding. And I think that's hard to do, because education does not reshape easily and is still, frankly, playing catch-up on a number of other things before it even gets to this. I mean, we've seen how quickly generative AI is moving and growing in popularity. I did a story not too long ago, a horrible, horrible story, where male students had created inappropriate nude photographs using the faces of their fellow students and AI-generated bodies, and there was no legal precedent in place on how to handle that. So the legal system has some catching up to do. We just saw in the EU the groundbreaking set of AI rules, a set of guidance for AI and regulating AI, I guess is the best way to put it, and that's only just now happened, as we reach the end of a year that has been heavily inundated with news about and interest in generative AI. So education needs to catch up as well. And again, the fact that the tool is there means it's going to be used, and if it's going to be used, we accept it and try to figure out a way to continually engage the students and let them use it as a tool.
There was one teacher who said in this piece: I had a bunch of students in my AP history class use chatbots to generate a list of events that happened right after the Civil War, in the 1880s. It was pretty accurate, except for one event from the 1980s, during the Reagan administration. So what ended up happening there is that a teacher had an opportunity to have the students see not only how it could help them, but also how it could hurt them: sometimes these tools generate things that simply aren't true, and if that's the case, then you need to be aware of that and mindful of the fact that it might not give you the correct answer. And if it doesn't give you the correct answer, what do you do? You need to be checking this, double-checking it, making sure that it is accurate so that you are accurate. That's what I think the education should be. That's what I think is going to make the difference. We need to teach not just media literacy at this point, and not just what I would call research literacy, but also AI literacy, so that students know how they can use this as a tool while maintaining an understanding of how it can go wrong, how it might make you seem like you don't know what you're talking about, and why you need to double-check the work afterward. So, yeah, go check out that piece over on the New York Times to learn more about the research from Pew, and also from those researchers at Stanford who looked into this. And, of course, as always, we'll be keeping our eye on this and seeing how it changes over time, how teachers end up using these tools or not using them, and what works and what doesn't. Folks, that is going to bring us to the end of this episode of Tech News Weekly.
The show publishes every Thursday at twit.tv/tnw. That's where you can go to subscribe to the show in audio and video formats. If you want all of our shows ad-free, well, check out Club TWiT. For $7 a month or $84 a year, you get access to every single TWiT show with no ads. You gain access to the exclusive TWiT+ bonus feed that has extra content you won't find anywhere else, behind the scenes, before the show.
After the show, special Club TWiT events get published there. So when you join the club, it's great, because you'll get access to a huge back catalog of great content, and also access to the members-only Discord server, a fun place to go to chat with your fellow Club TWiT members and also those of us here at TWiT. It is always, as I say, on and popping in the Club TWiT Discord, so be sure to join for those things at twit.tv/clubtwit. On top of that, you also gain access to some great Club TWiT exclusive shows: the Untitled Linux Show; Hands-On Windows, which is a short-format show from Paul Thurrott that covers Windows tips and tricks; Hands-On Mac, which is a short-format show from me that covers Apple tips and tricks; and the Home Theater Geeks program from Scott Wilkinson, which has not just tips and tricks but also reviews, interviews, and questions asked and answered.
It is a great show that once existed outside of the club. We were able to relaunch it in the club and bring it back. So again, $84 a year, $7 a month, gets you access to all of that. It is an incredibly valuable subscription and we appreciate each and every one of you for your support. Thank you all for tuning in and we will see you next time. I will see you next time on Tech News Weekly. Bye-bye.
1:03:00 - Rod Pyle
Hey, I'm Rod Pyle, Editor-in-Chief of Ad Astra magazine, and each week I join with my co-host to bring you This Week in Space, the latest and greatest news from the final frontier. We talk to NASA chiefs, space scientists, engineers, educators, and artists, and sometimes we just shoot the breeze over what's hot and what's not in space books and TV, and we do it all for you, our fellow true believers. So, whether you're an armchair adventurer or waiting for your turn to grab a slot in Elon's Mars rocket, join us on This Week in Space to be part of the greatest adventure of all time.