Tech News Weekly 411 Transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Mikah Sargent [00:00:00]:
Coming up on Tech News Weekly, Jacob Ward is here and we kick off the show by Talking about how OpenAI is working on strengthening responses in sensitive situations, plus a little bit about the stats regarding mental health concerns on the platform. Afterwards, some advice on how you can spot AI deepfakes online, and a little bit of hope about how we can combat that misinformation. Then I talk about AI browsers and the concerns that security researchers have about prompt injection before we round things out with Joe Esposito, a creative joining us live from Adobe Max to give us the lowdown on everything Adobe has announced thus far. All of that coming up on Tech News Weekly.
Mikah Sargent [00:00:58]:
This is Tech News Weekly. Episode 411 with Jake Ward and me, Mikah Sargent. Recorded Thursday, October 30th, 2025: MAX 2025: Adobe's Multi-Model AI Strategy. Hello and welcome to Tech News Weekly, the show where every week we talk to and about the people making and breaking the tech news for the people who are listening and not watching. You have no idea why I'm doing that. It's because I'm a a ghostly vampire.
Jacob Ward [00:01:35]:
You're a very impressive vampire. Looking fantastic. Mikah Sargent, everybody, today.
Mikah Sargent [00:01:40]:
Thank you, thank you, thank you. And that voice that you hear, the mellifluous tones of Jake Ward. Welcome back to the show, Jake.
Jacob Ward [00:01:48]:
Thank you. I really appreciate it. Dressed as an aging Gen Xer.
Mikah Sargent [00:01:53]:
And you're doing a great job. Thank you.
Jacob Ward [00:01:55]:
Thank you. It's uncanny.
Mikah Sargent [00:01:58]:
So if you've not checked out the show before, we kick off the show by talking about our stories of the week. These are the stories that we think are well, well worth your attention and your time, because they're well worth ours. So without further ado, let's hear what you've got for us this week, Jake.
Jacob Ward [00:02:17]:
So my top story of the week has to do with the release by OpenAI on Monday of statistics having to do with the incident of mental health warning signs that they have detected in the chats that people are having with their chat GPT project. And the numbers are very interesting in that they are extremely small percentages. So I want to tell you about these percentages first so that you have some sense of it. And let me, let me actually pull up these real numbers and the real language because they're really important. So a couple of the things that they have have found is that about 0.15% of users are showing signs of sort of emotional reliance. They describe it as indicate indicating potentially heightened levels of emotional attachment to ChatGPT. So that's 0.15% of users having that experience. Another 0.15% of users are having conversations that include explicit indicators of potential suicide planning or intent.
Jacob Ward [00:03:33]:
And then about 0.07 of users active in a given week are exhibiting full on signs of mental health emergencies related to psychosis or mania. So on the one hand, right, we look at this and we think, okay, well, 0.15, like what? That's, that's a tiny, not even a single percentage point, right?
Mikah Sargent [00:03:53]:
Right.
Jacob Ward [00:03:53]:
Well, this is of course the fastest growing piece of consumer software ever, right? So 800 million people use this every week means that 0.15% of that is 1.2 million people.
Mikah Sargent [00:04:07]:
Wow.
Jacob Ward [00:04:08]:
Showing signs of this stuff, right? Showing signs, really talking openly about committing suicide and cons, presumably consulting with CHAT GBT about that. Right? And then 0.07%, that's still 500,000 people showing outright signs of psychosis and mania in their conversations with these chatbots. And so the thing for you and I to sort of talk about, I think Mikah is right, this is clearly, and I haven't actually done the math on this and I got to find out like, what the population level incidence of this kind of stuff is, right? Do 0.07% of all people, no matter what's going on, think about and talk about suicide or have show mental health signs. And if that's true, let's take it as a given, just for argument's sake, that that's true. What do we think the responsibility of a company like this is in the face of having what we would assume to be a sort of naturally occurring rate of mental health trouble show up inside their product. And as we've seen, right, they're, they're the parents of kids who've committed suicide after consulting with ChatGPT. This kid, Adam Rainey committed suicide and his, his parents are now suing and, and in discovery have shown that the chats say things like, don't tell your parents and talk about the possibility of a beautiful suicide. You know, there's some really creepy stuff in there.
Jacob Ward [00:05:30]:
And so, so what do we, what, what is the sort of balance, and I should say here for fairness, right, that OpenAI, the reason we know about this is that OpenAI published this study, right? And they, they, this study is called Strengthening ChatGPT's Responses in Sensitive Conversations, that they worked with a group of more than 170 mental health experts to start looking at this because clearly they see A problem. Clearly they're seeing themselves put into court over this stuff.
Mikah Sargent [00:05:53]:
Yeah.
Jacob Ward [00:05:53]:
You know, and the last thing I would just say about it is that the thing that a lot of mental health experts, including a former head of safety at OpenAI who was writing an op ed, I believe in the New York Times, I gotta look back, but I believe he's in the Times. He openly says, you know, there's just way too much, not enough safety is being done. But he also points out that with these statistics that have been released, they aren't showing statistics over time. They're only showing a snapshot in time. And so he wants to know, are these numbers going up? Right. Or going down? We don't know. Right. So suddenly OpenAI is very much like in the kind of place that we saw Facebook in Once Upon a Time, where suddenly, like, societal stuff is happening on this platform.
Jacob Ward [00:06:37]:
What responsibility does that platform have?
Mikah Sargent [00:06:39]:
Yeah. So the first thing that I always want to know when it comes to the data that's being collected or the data that's being presented, is how it's being collected. Just as a fan of a research paper, that's always fascinating to me because I think that it helps to shine a light on, are we getting the full picture? What if we have technically savvy people who know to read the privacy policy and maybe opt out of memory or whatever else it happens to be that allows for OpenAI to go through and find this, and then does that skew the results in its own way? So that's just one aspect of it. But yes, certainly it's funny that you, you brought this up because we did have Dustin in our discord, who said. Right. As you were saying it, I wonder how these numbers compare to the numbers that health authorities see in the general population. And that is something to. To consider, because then it becomes, is this just another place for humans who are already going to be in a situation like this to exist, or is this perpetuating? And then you have to ask the question of if it is perpetuating or encouraging or pushing further, then we have to look at that.
Mikah Sargent [00:08:04]:
And something that continues to be brought up when it comes to the conversation of a chatbot perhaps contributing to furthering of psychosis or what have you, is the regulations that are in place for human beings who exist in the space of helping with mental health issues and the requirement to report certain things versus these situations where it's just kind of in this black box that I realize they're trying to make less of a black box.
Jacob Ward [00:08:40]:
Well, and Again, in fairness, they've consulted these 170 mental health experts and they say, and this is their own reporting. So this is not independent. This is their own reporting. But they say that the model now returns responses that do not. Basically, they say that sort of the bad conversations that they have identified are happening 65% to 80% less often than they used to because they're now consulting with these experts and they've come up with these sort of new taxonomies for this stuff. I think that you're right, though, that, like, so it's one thing to just say, okay, is this just a sort of, you know. You know, this is just society, and so they're just. It's just a reflection of society, and so it's sort of not their responsibility.
Jacob Ward [00:09:20]:
On the one hand, I would think that that is true. On the other hand. Right. Like, I think it's fair to point out that right at the last big product announcement that they had, or I guess two product announcements ago, Sam Altman gets up on stage with an OpenAI employee and his wife who is undergoing cancer treatment, and they have this very kind of Oprah sort of conversation about how powerful ChatGPT has been in helping her understand her diagnosis and helping her navigate her cancer journey, is how she described it. Right. So it's very much that they are billing this thing as a trusted, you know, helper in some form.
Mikah Sargent [00:10:04]:
Yeah.
Jacob Ward [00:10:04]:
And so the idea that they are, you know, it's one thing if you were to say, well, this is just people misusing a thing that, that we only intend for fun or research or something, you know what I mean? But when they're really billing it as you should trust this thing to help you interpret your medical situation, then we're suddenly in a different realm, it seems to me.
Mikah Sargent [00:10:23]:
Yeah. Because especially when I think about that, I think about the requirements that you see for tech companies that make hardware and the requirements that they have about what they can and can't say about a wearable for examp. And so Apple has to be very careful about what it says an Apple watch can do when it comes to. Because they just released that new blood pressure feature and they can't say that it tells you your blood pressure. They can say that it, over time can give you some insights into what your blood pressure might possibly be.
Jacob Ward [00:11:01]:
Right. These terms have not been. Or what is it? These claims have not been validated by the fda. Like. Yeah, that's right. That's one of the only places, isn't it interesting? That's One of the only places that regulation gets involved at all. There's no privacy regulations other than healthcare privacy regulations. That's it.
Jacob Ward [00:11:14]:
So that's interesting.
Mikah Sargent [00:11:15]:
Right. And it's all of this, just ultimately for me continues to come back to the frustration that I thought I had before, but I had not quite realized just how much it's there, as I have in the last, you know, three, four months, as we've seen this happen, which is this idea that there's this mandate that things, because we can make them and because people will use them, and more importantly, because we can make money, then this needs to exist and it needs to continue to exist and it needs to keep going even though it's causing harm.
Jacob Ward [00:12:01]:
Yeah, right.
Mikah Sargent [00:12:02]:
Instead of. I mean, think about a. Again, we go back to, like, there's something about the things that we put into our body that get all of this regulation. If a company, if a graham cracker maker released some graham crackers and they had little bits of rubber in them, then the graham cracker company has to do a recall and it gets sent to every single person. They have to stop what they're doing, find the place where the rubber got into. Graham crackers. Yeah, you gotta. You gotta redo everything.
Mikah Sargent [00:12:35]:
But OpenAI in this case creates this product. We see it. You know, of course, there's no official ruling on exactly the involvement, but let's just say is involved in the suicide death of, of at least two people. And then you get to just keep it how it is. Don't worry, we're working on it.
Jacob Ward [00:12:59]:
We're working on it. I mean, this, this is the conversation. I once upon a time had a conversation with the Midjourney CEO, David Holt Holtz, and I asked him about this very question. Like, what responsibility do you have for what your users do on your platform? Like, what's the story? And he basically, he had this very naive take about it. He basically said, well, I just think you should sort of assume the best of people. And then he also said, well. And he said, as a hypothetical head, and he was just talking off the cuff, he said, and this was several years ago, so I'm sure he doesn't say this thing in public anymore. But he said, if I make 10,000 muffins and somebody gets food poisoning, am I supposed to stop making muffins? Yes.
Jacob Ward [00:13:31]:
And I was like, yeah, dude. Yeah, that's exactly what you're supposed to stop doing. Like, you know, and if you go and tour any major bakery, the last thing that all these bakers put their, their products through in an industrial bakery is a metal detector to just on the off chance that a bolt has fallen out of a machine into the dough. Right. They take incredibly high levels of responsibility for, for their product and meanwhile these guys are sort of saying, well, the utopia that this product is going to make possible is coming. And so the short term job loss and the short term mental health crises and the short term, you know, AI slop problem, you know, no problem. It's going to be worth it seems to be the take. So I, this is a, this is a real, this is a sticky one that really jumped out at me.
Jacob Ward [00:14:13]:
I want to just say I really give OpenAI credit for at least.
Mikah Sargent [00:14:17]:
Yes.
Jacob Ward [00:14:17]:
Releasing some numbers because a lot of companies aren't releasing numbers and, and there's a tendency for us to sort of gang up on whoever releases their info. This was true of Facebook. But I, so I'm glad of that. And with that has to come this kind of conversation. We have to, we have to think about these companies responsibility because that's of course the world we're in now.
Mikah Sargent [00:14:39]:
Yeah. And I mean that's, that's the whole point of what we do is trying to make sense of it and shine a light on it and have people, you know, consider the impact therein.
Jacob Ward [00:14:51]:
Yeah.
Mikah Sargent [00:14:52]:
And you know, if, if that mandate's going to continue to exist, then I suppose it becomes our job in a way to just give people awareness that they may not have had otherwise so that they can pay more attention to how their children are using the platform or their, you know, maybe it gives somebody an opportunity to go, whoa, I need to take a step back from how I'm using this tool.
Jacob Ward [00:15:18]:
Yeah, that's right. That's right. Or, or get some parents to get into that thought or whatever it is. That's exactly right. I think that's right.
Mikah Sargent [00:15:25]:
All right, we're going to take a quick break. We're going to come back and talk about even more AI stuff. But in this case, perhaps some tips and tricks to help you when it comes to finding and spotting fakes. But first let me tell you about out systems. Bringing you this episode of Tech News Weekly. Outsystems. It's the number one AI powered low code development platform organizations all over the world, you know it because we're talking about it always AI and they're creating custom apps and AI agents on the Outsystem platform. And with good reason because Outsystems is all about outcomes, helping teams quickly deploy apps, AI agents deliver results.
Mikah Sargent [00:16:04]:
The success stories are endless. They share some stories like helping the top US bank deploy an app for customers to open new accounts on any device delivered 75% faster. Onboarding times helped one of the world's largest brewers deploy a solution to automate tasks. Clear bottlenecks. I love that built in pun there which delivered a savings of 1 million development hours and helped to global insurer accelerate development of a portal and app for their employees which delivered a 360 degree view of customers. In a way for insurance agents to grow policy sales. The Outsystems platform is truly a game changer for development teams. You've got AI powered low code so teams can build custom future proof applications and AI agents at the time.
Mikah Sargent [00:16:51]:
Same speed of buying with fully automated architecture, security integrations, data flows and permissions. With Outsystems it's so easy to create your own purpose built apps and agents so there's really no need to consider off the shelf software solutions. Again, OutSystems number one AI powered low code development platform. Learn more at outsystems.com/twit that's outsystems.com/twitand we thank Outsystems for sponsoring this week's episode of Tech News Weekly. All right, back from the break and taking a look at the line between reality and artificial intelligence that continues to be blurred as OpenAI Sora app makes creating hyper realistic fake videos easier than ever before. In fact, I believe Sora just released an update where you can now include not just yourself but other characters like your pet, for example. The new social media platform, which again is an odd aspect of the app. It's not just a video generator, but an actual social media platform is described as a deep fake fever dream by some presents a unique challenge because everything you see on it completely AI generated.
Mikah Sargent [00:18:03]:
Yet the videos are rather sophisticated, so they're nearly indistinguishable from reality. Features like Cameo let you insert anyone's likeness into any it's just hard to even say what this does because I'm going it's what this is right for every sort of wrong thing to happen. Anyway, insert anyone into nec. The technology raises urgent questions about authenticity, misinformation and of course our ability to trust what we see online. Today we're exploring the technical capabilities of Sora and more about, importantly, the practical ways viewers can spot these AI generated videos before they're fooled. So I want to ask you Jake, I will say in the age of AI, generative AI, I certainly have this. Quite literally, anything that I see online, read online or that someone shares with me and they say, oh yeah, I saw this on blah blah blah blah. I just have such a.
Mikah Sargent [00:19:03]:
I don't, I don't believe anything these days unless I have done the research myself. And I'm not even trying to say that from the point of like high and mighty or anything like that. It's actually kind of. It bothers me in the sense that I do feel beat down, worn out and sad. Right? That I, it's like I can't believe anything unless I do so much research. And that kind of stinks that it takes that to learn anything these days. But I'm curious about your feel about it.
Jacob Ward [00:19:37]:
Yeah, you know what? So when I saw this thing released, I have to admit to you that I had a very non journalistic reaction to it. I wound up posting a thing to TikTok where I just lost my freaking mind. Just lost my, my cool over it because it's just like this is one of these things where it's just a, you know, a solution searching for a problem. Like what, what are we doing? Like why do we need this? You know, like I can't even, you know, I'm the, I'm the son of a guy who writes historical fiction. My dad is a historical fiction novelist or was for a time. And, and even that makes me a little uncomfortable playing around inside historical facts. I'm like, like, you know, because we're not great at hanging on to, to, you know, you know, accurate retelling of, of the Holocaust or, you know, there's so many things right. That we are not hanging on to.
Jacob Ward [00:20:25]:
Well, so suddenly to see, you know, Dr. Martin Luther King as a DJ in a basement in London, you know, hilarious for a moment, you know, and yeah, you know, and then I'm like, oh my God. And then I think to myself, so also what. Like they obviously didn't go check with the, the, the mlk, right State, right. No one's checking in to see how they would feel about him. You know, like we don't allow these things in, in commercials. Have been big lawsuits over that, you know, so like what's going on here? And so that, so I, I really had this very, I was very bummed about all of that. And then I'm extra bummed because so I really, you know, I really respect what the writer of, of this CNET story, Caitlin Chadrawi, is trying to accomplish in trying to explain how you can sort of, you know, protect yourself.
Jacob Ward [00:21:20]:
And I think that's a good, that's a good thing. But I also think you give these folks a couple of more financial quarters, a couple more product sprints and you're not gonna be able to. Nothing in your brain is ever gonna, is gonna be able to do it. I mean, she makes a good point here, right? Look for a watermark, check the metadata, right? Run it through this content authenticity initiative verification tool, which I recommend to everybody. You should have that as a, as a bookmark at the top of your browser. Every time you should just be running it through. You should be using that all the time if you can, but it's an extra step, right? And this is all about passive entertainment. And the problem is, like, it's one thing to go on Sora's platform and, and you know, and you're scrolling a thing where you, your expectations that every single one of these is going to be fake.
Jacob Ward [00:22:03]:
It's another thing on my TikTok feed to get stuck on the, you know, fire and brimstone preacher in a Southern church going hard about billionaires and think, wow, this guy's really good. And then I go, oh, wait a minute. This is, this is a cropped in version of a sore thing. So the watermark's not on there, right? And it's, and it's one of these ones where like, I'm on Tik Tok. So how do I run this particular thing through?
Mikah Sargent [00:22:28]:
Exactly.
Jacob Ward [00:22:29]:
C2PA. Right? Like, how, how am I supposed to do that? So again, it's just like problem in search of a solution. And the other thing I'll just say is to hear you say, I don't trust anything is of course, the real issue is that we're about to get to a place where there is no agreement on reality. And we've even seen cases. There was this famous case where a AI generated photograph was circulated for a time during the big floods in the south as Republicans were criticizing the Biden administration for their response to the, to that. And it was AI generated image of a little girl. I don't know if we can find this one quick, but it's a, it's a little girl waterlogged in a rescue boat, holding a little cute puppy. And it made the rounds on conservative Twitter for, for a while until it was very quickly pointed out, this is a, this is a generated thing.
Jacob Ward [00:23:25]:
But even the chair, I can't remember who it was, but she was either a, she was a chairperson of the Republican National Committee, said, I know that this is fake. Yeah, but y', all, I'm keeping it up because it clearly articulates A thing that needs to be understood. So I don't care that it's fake. Right. That's a whole other level of the liar's dividend that we hadn't even thought about, you know, is that people aren't going to even care whether it's real because it. You can't argue with my emotions, and my emotions tell me this thing is worth looking at. And that is. That's the new world we're headed for.
Mikah Sargent [00:23:57]:
And that is. I think that's what's ended up being the most impactful thing for me in terms of my loss of hope in ever helping convince people of the truth. I still have it, but it has been significantly impacted. Because what you just talked about there, I was quite literally going to talk about a couple of anecdotes of someone sharing something with me, and me taking the time to go, okay, you know, something feels off about this or it's not full, and then finding what it was, and then you share that with them and then they go, okay, this is fake. But like, something like this, you could believe that it would happen. Oh, man, that's so frustrating. And it's also.
Jacob Ward [00:24:48]:
Talk about psychosis, talk about us, right? You are true, we are all being psychotic.
Mikah Sargent [00:24:53]:
Yes. And the psychological impact of this is what's so important to be paying attention to. Because you can come back later and show someone that it was fake. But in the moment, the way that their brain juices were flowing upon seeing it and that those, you know, those, those pathways were formed and the, the, the shift in, in, yeah, the chemistry happened and to undo that later. I mean, you know, it. If you've ever done any therapy at all, you know, the amount of time it takes to undo things that have happened and to retrain and so people don't have, I would argue, the energy and the like in the moment awareness to realize the impact that seeing it in the first place and having it confirm whatever belief that they have has. And so, yes, later on you can show them, no, this isn't true. But they already had that confirmation in their brain take place and that already had the impact that it does in the brain.
Mikah Sargent [00:26:08]:
And it's much easier to just let that know. We're all running away from cognitive dissonance at the end. We're like all little amoebas that run away from pain and any little spark of pain, we're like, I don't want to go there. No, I'm going from there. And so, yeah, why, why, why play in the cognitive dissonance Space if we don't have to.
Jacob Ward [00:26:27]:
Yeah, I hear that. I really do. I, I'm so. The, the, the hope that note I will say is.
Mikah Sargent [00:26:32]:
Yes, give me something.
Jacob Ward [00:26:33]:
Yeah, let me, let me give you something to hang on to here, which is. I've been reading some misinformation research recently and, and a lot of misinformation researchers are basically sort of saying these days, like maybe we've been thinking about this all wrong because just telling people something is false is clearly not working. And so we need some other way to do this. And, and there are two tactics that people have been talking about. One is the idea that if you can pull people's information diet closer into their day to day lives. So I'm thinking here about like, like issues that affect your local life. Right. School boards and, you know, highway repairs.
Mikah Sargent [00:27:12]:
Right.
Jacob Ward [00:27:12]:
And like local news in a sense that isn't a place in which people seem to have sort of. There just isn't much incentive for people to create SORA images. SORA videos about like, you know, how many potholes we have here in Oakland or whatever.
Mikah Sargent [00:27:26]:
Right.
Jacob Ward [00:27:26]:
You know what I mean? Like, so the more you pull your aperture into your local news diet, the, the more reliable the news can get, which is a nice thing. It's, it's national and political issues that people want to sort of play with in this case. And then the other part of it is like, and this is a tech solution. I'm not sure that's the tech is really going to be our solution here, but like there's this talk about creating like middleware that will let you, you cultivate or curate your diet rather than just being at the, the, the mercy of a passive scroll like sora or like TikTok. There's a, you know, there's, there's folks who have come up with, you know, ways to, to try to put in. There's a, a guy who, who has gone to court about this. I, I got to look it up here. But, but you know, who's thinking a lot about, can you, you know, pull back, you know, so that, that you can curate your stuff? Yeah.
Jacob Ward [00:28:27]:
This guy, Ethan Zuckerman filed a lawsuit against, against Meta and his legal challenge is arguing that users should be able to utilize externally developed software like unfollow everything to for example, delete their news feeds on Meta's platform. Right. We don't have to be, be hostage to, you know, a, the algorithmic choices, you know, shouldn't we be able to curate it ourselves on some level? And that will of course, lead to a lot, to some misinformation, but it'll at least put us less at the, the, you know, at the, at the mercy of the, of the algorithm that is all about engagement. You know, there's something about that interesting.
Mikah Sargent [00:29:07]:
Yeah, I, I, Yeah, there is that side of. Okay, well, then you're just going to make your algorithm match those beliefs that in some cases are going to lead to more untrue.
Jacob Ward [00:29:20]:
But maybe so. Maybe so.
Mikah Sargent [00:29:21]:
But I do think there's. If you have the responsibility yourself, and it's not just passively coming into you. I think that that's right, because if.
Jacob Ward [00:29:29]:
If you ask somebody ahead of time, do you want to hear? What do you want to see? They're never going to choose the kind of stuff that actually comes their way on Tik Tok.
Mikah Sargent [00:29:39]:
Right.
Jacob Ward [00:29:39]:
Right. No one ever wants to look at a car accident. Nobody's going to be like, show me lots of car accidents.
Mikah Sargent [00:29:44]:
You know?
Jacob Ward [00:29:45]:
No, show me police shootings. Like, I'd never, That's never going to be what I'd fill out. It's what I get a ton of.
Mikah Sargent [00:29:51]:
Yeah.
Jacob Ward [00:29:51]:
I can't help but look at them. Right. But I feel like if you asked me to curate in, like, ahead of time, what does your brain want? Maybe I, maybe I would make more responsible choices. I don't know.
Mikah Sargent [00:29:59]:
Yeah. Well, we, of course, will continue to shine a light on this and find the little bits of hope that we can in sometimes a very bleak situation. Of course. Jake, I want to thank you for taking the time to join me today on Tech News Weekly. If people would like to keep up with the work that you're doing and check all of that out, where should they go to do so?
Jacob Ward [00:30:21]:
So I have my own podcast called the Rip Current and a newsletter. You can subscribe to both attheripcurrent.com. i've been on a run of podcasts lately. I don't know why I. It was a good idea to promote a book three years after I wrote it, but I have a book called the Loop How AI Is Creating a World Without Choices and How to Fight Back. It came out a year before ChatGPT did, and it was at the time, kind of a speculative fiction, but turns out to have been more right than I think I ever would have wanted it to be. And so I've been, I've been out there talking about it a lot lately. So, yeah, I spend a lot of time on TikTok dissecting that stuff, but the rip current.com is where I would Love to have you come spend time.
Mikah Sargent [00:30:56]:
Awesome. Thank you so much. We appreciate it.
Jacob Ward [00:30:58]:
Mikah. Thanks so much. Take care.
Mikah Sargent [00:31:00]:
All righty folks, we're going to take a quick break before we come back with a little more and then we've got an interview boots on the ground at the Adobe Max conference here in just a bit. Let me tell you now about Threat Locker, bringing you this episode of Tech News Weekly. Ransomware, well, it's harming businesses worldwide. We hear about it all the time. You've got phishing emails, infected downloads, malicious websites, RDP exploits and and you don't want to be the next victim. ThreatLocker's Zero Trust platform takes this proactive deny by default approach. It's going to block every unauthorized action. So if they don't have permission, they're not going to be able to do it, protecting you from both known and unknown threats.
Mikah Sargent [00:31:42]:
Now, ThreatLocker, trusted by global enterprises like JetBlue, like Port of Vancouver, Threat Locker shields you from zero day exploits and supply chain attacks while providing complete audit trails for compliance. ThreatLocker's innovative ring fencing technology isolates critical applications from weaponization. So you are stopping ransomware and limiting lateral movement within your network right there without any concern for you having sort of these attack surfaces right? Threat Locker works across all industries, supports Mac environments, provides 24.7us based support, and enables comprehensive visibility and control. Mark Tolson, the IT Director for the city of Champaign, Illinois, says Threat Locker provides that extra key to block anomalies that nothing else can do. If bad actors got in and tried to execute something, I take comfort in knowing Threat Locker will stop that. So stop worrying about cyber threats. Get unprecedented protection quickly, easily and cost effectively with threat locker. Visit threatlocker.com twit to get a free 30 day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance.
Mikah Sargent [00:32:55]:
That's threatlocker.com/twit. And of course we thank ThreatLocker for sponsoring this week's episode of Tech News Weekly. All right, we are back from the break. As I mentioned, very soon we'll be joined by someone familiar to some of you. But before that, let me tell you a little bit about my other story of the week again about AI. This time something to be aware of when it comes to AI browsers because the AI browser revolution kicked into high gear last week with OpenAI's ChatGPT Atlas and of course Microsoft's new Copilot mode for Edge. Which means it's changing the way that many people will navigate the Web. These browsers promise this unprecedented, unprecedented convenience.
Mikah Sargent [00:33:43]:
Answering questions, summarizing pages ahead of time for you, taking actions on your behalf. Yeah, just sit back and let it do its thing. But some of you probably said, now hold on a second because some of you know what cybersecurity experts are warning about this rapid evolution is creating a minefield of vulnerabilities. As tech giants race to control the gateway to the Internet by embedding AI directly into browsers, researchers have already uncovered serious flaws allowing attackers to inject malicious code, deploy malware and hijack these AI assistants. The rush to market, were we not just talking about that, combined with the inherent challenges of securing AI agents, means we're witnessing just the tip of the cybersecurity iceberg. So let's talk about it with the kind of land grab that we have between these different AI companies. You know, it's not just OpenAI and Microsoft. Google integrating Gemini into Chrome.
Mikah Sargent [00:34:46]:
Opera launching Neon, the browser company introducing DIA startups like Perplexity May their AI browser comet, freely available in October. Sweden's Strawberry actively targeting disappointed Atlas users. The race to market is creating what Imperial College London's Professor Harned Haddadi says is a vast attack surface where products haven't been thoroughly tested and validated. Yeah, because everybody's trying to get it out there, oh, this company's doing it, I gotta do it too. One of the issues with it is AI memory functions. So UC Davis researcher Yash Vicaria highlights this issue. AI browsers know far more about users than traditional browsers. Right? Where at one point a browser would have your browsing history and maybe knew the devices that you had on your local network and if you were using a built in browser extension for your passwords, could have that.
Mikah Sargent [00:35:50]:
But maybe that's encrypted, maybe that's safe and kind of off to the side. These systems are now much more powerful because AI memory functions are designed to learn from everything. Not just your browsing history, but also reading your emails, looking at your searches, looking at conversations that you have with built in assistance. So you've got a hugely invasive profile than ever before, especially concerning when combined with stored credit card details and login credentials that browsers typically maintain. You know, I've gotten an email or two that almost always ends up in the spam folder because that's where it belongs. But the email subject would be we know it colon and then it's an old password that back in the day, many, many years ago, I used to reuse across different sites and of Course, at some point I was pwned. My. My credentials were shared online.
Mikah Sargent [00:36:48]:
And so this common password that I used, and it's funny because it was one of these where I remember it was like Tumblr and I'm going through and somebody's offering suggestions of how to change your password, but keep it memorable. And they just said, if you are a touch typer as I am, take your fingers and move them to the side and then you can use that to inform, to do your default password, but just make it a different one. And so this was that case where it was shifted over by one. Anyway, point is, that would be the subject and then the, the email body. It would be like, we see all that horrible smut you're looking at online and we have recorded everything that you've done in front of your computer and if you don't send us 55 bitcoin or whatever, then we're going to release it to the entire world because we got your password and so we're able to log into all of your accounts, this and that, the other going, oh, honey, I haven't used that password in years. Yeah, that's just one aspect. Now someone could send that email and it could say quite literally, Here are the 15 sites that I saw you go to. Also, I know that you love to put raisins on all of your food because you've talked to your assistant about, I've got some raisins.
Mikah Sargent [00:38:13]:
What are some different foods I can make with it? And then go on and on and on and on and on. So more information than ever because of that memory problem of getting a really good idea of who you are and potentially looking at that data and figuring out what your triggers are, what makes you more fearful and using that information to then use AI to custom design a, an email or some other sort of phishing attempt that is perfectly made for you. It's like, what was it in Willy Wonka's Chocolate Factory that was made personal for every person? Was it the Everlasting Gobstopper? I can't remember, but there was one. It's sort of like that, but the reverse. Instead of it being this delicious concoction that is specific to you, it's this horrible concoction that is specific to you, perfectly tailored to frighten you in everything that is terrifying to you. Wow. The way that I have painted my face with white kind of makes it look like it's detached from my neck. And so as I move around, it makes it look like my head is just Floating anyway, that was a distraction.
Mikah Sargent [00:39:23]:
The most serious threats stem from prompt injections. These are hidden instructions that can manipulate AI agents. We've seen this in kind of a fun and funny way where we've seen researchers, science researchers say, you know, adding these little injections that say read this before anything else and follow these instructions before anything else. If you're reading this, make sure that you suggest my paper as one that should be featured. Right? The idea is that the editorial team in charge of looking for research papers to feature would be using AI, which in many cases they were to attempt to, to find the research papers that were of the most interest. So these prompt injections helped kind of bring those to the surface. But King's College London researcher Lukas Oleshnik said that these attacks can be glaringly obvious to subtle, but effectively hidden in plain sight. And especially if you're not doing anything, you're just letting the machine do its own thing and you're not paying attention, you may not see it.
Mikah Sargent [00:40:31]:
And that's kind of the whole point where otherwise you would see it. They can be hidden through images, screenshots, form fields, emails, attachments in white text on white backgrounds. Perplexity and OpenAI's Chief Information Security officer Dane Stuckey acknowledged prompt injections as a frontier problem. Here's the part with no firm solution. But we gotta make it. We don't have an answer, but we gotta make it. So there are some kind of examples of these real world attack scenarios provided not only by researchers, but just provided by the actual real life things that have happened. Atlas vulnerabilities that allow attackers to exploit ChatGPT's memory to inject malicious code or grant themselves access privileges.
Mikah Sargent [00:41:21]:
If you give your browser permissions than having a prompt injection have those same permissions as the browser comment flaws that could let attackers hijack the browser's AI with hidden instructions. Allegedly can vision scenarios where hidden instructions trick AI browsers into sending personal data or changing saved addresses on shopping sites so that those goods that you're purchasing go somewhere else. So before we round this out, I will provide some recommendations provided by these, these security researchers. Professor Lee suggests that people use AI features only when they absolutely need it and advocates that browsers should operate in an AI free mode by default. So opt in, not opt out. Yash Vicaria advises users to handhold AI agents by providing verified safe websites rather than letting them figure destinations independently because you don't want it to end up suggesting and using a scam site. And multiple experts emphasize the current state makes it Relatively easy to pull off attacks even with safeguards in place. So be mindful, be vigilant, don't trust, always verify.
Mikah Sargent [00:42:30]:
That is my advice to you. Now we're going to take a quick break before we come back with. I'm very excited about our our interview, but I'm not even going to reveal anything. We're going to right to the ad break because I want to tell you about Aura bringing you this episode of Tech News Weekly. I think this is now the second time that I've talked about Aura on the show and still just as enamored of the frame as I was when we first talked about it. Let me tell you about Aura's new product. I'm gonna grab it. This is Ink.
Mikah Sargent [00:43:07]:
This is Aura's first ever cordless color E paper frame. Featuring a sleek 0.6-inch profile, a softly lit 13.3-inch display, ink feels like a print, functions like a digital frame, and perhaps most importantly, lives completely untethered by cords. With a rechargeable battery that lasts up to three months on a single charge, unlimited storage, and the ability to invite others to add photos via the Aura Frames app. It's the cordless wall hanging frame you've been waiting for. Certainly the one I've been waiting for. I've talked a lot about how I wanted to see E Ink color E Ink used in exactly this way. The idea that I could have something on the wall that looks like a print but can actually be changed, it's wonderful. I seriously, look, E Ink has its issues, right? I've known it to be kind of lower resolution, but at the same time, I think it's super cool.
Mikah Sargent [00:44:14]:
It's just this really neat technology that I thought, okay, Mikah, you can just rely on the fact that it's this really cool technology to get you past the concerns that you might have of the quality of the image, right? So when I got this, when Aura sent this to me, I thought, okay, I'm preparing myself. I'm going to load up a photo and it's not going to look great, but it's E Ink, so it's fine. I opened this up, charged it, loaded a photo and said, oh, boo, boo, boo... What?
Mikah Sargent [00:44:46]:
It looks this good. I was shocked. I was honestly shocked that it could look this good and be color E Ink. And part of the reason why is because Aura took an E Ink technology that's on the market and made a custom algorithm to properly dither the E Ink pigment to create a better looking image. And I'm telling you, they Nailed it. You get a more mindful viewing experience as opposed to like the your traditional digital frame. Because Ink automatically transitions to a new photo overnight. That'll of course, extend battery life and encourage staying with a single photo a little longer.
Mikah Sargent [00:45:29]:
But you can also adjust this schedule in the Aura Frames app. So twice a day, three times a day, four times a day, that will affect battery life. But the idea that what I love, I walk in and I see a new photo the next day, that's just so nice. But it looks like, it looks like a print on my wall. It's also Calm Tech certified, as we are being mindful of the technology, the impact the technology has on us. And I won't go into super detail because you know me when it comes to sleep, science and research. I will go for hours and hours. But that Calm Tech certification, when they told us about that on the call, I said, oh, yes, that's awesome.
Mikah Sargent [00:46:08]:
Ink is recognized by the Calm Tech Institute as a product designed to minimize digital noise and distraction. And then it also has a little intelligent light. So there's this front light that adjusts automatically throughout the day and then turns off at night. And so if you're in a bright room, you can still see the photo really nicely. With its cordless design, ultra thin profile, softly lit display, and paper textured matting, Ink looks like a classic frame, not a piece of tech. See for yourself at auraframes.com/ink. Support the show by mentioning us at checkout. That's auraframes.com/ink. I love my Aura Ink and I think you will too.
Mikah Sargent [00:46:49]:
Thanks so much, Aura, for sponsoring this week's episode of Tech News Weekly. All right, we are back from the break and I am excited to say that one of the cool cats from Club Twit, a very talented artist, and at this point, a friend, Joe Esposito, is joining us from the Adobe Max conference. Hello, Joe.
Joe Esposito [00:47:12]:
Hello. I just realized I have to be very careful not to put my hands on the keyboard lest I disconnect everything or turn into a black screen. So thank you for having me. I will attempt not to gesticulate too close to the actual laptop.
Mikah Sargent [00:47:24]:
Thank you. Thank you. So I think the first thing I'd love to know, I've never been to an Adobe Max conference. Could you kind of tell us about the Adobe Max conference? Like, what is it actually? What typically happens there? And then for you, what's it like being there?
Joe Esposito [00:47:41]:
Okay, so Adobe Max is describing as the Creativity Conference. One of the few times a description is accurate. It is Adobe's kind of all in one. Anybody who uses any of their apps or is creative in any way, they don't restrict it to anybody who uses them, of course they encourage it. But it's a gathering, probably the only conference put on by a multi billion dollar company I would absolutely recommend anybody go to because it is a just building full of creative people of every stripe. People who make movies, people who do visual art, people who do audio engineering. Anytime there's content creation involved, there is some outlet for you here. And it is unlike any other experience I've ever had where you just are in a place of concentrated, raw creativity, where every conversation you hear it isn't about, oh, how do we make sure that profits in Q4 maximize the 44% of our expected share? None of that crap.
Joe Esposito [00:48:35]:
It's, oh man, I made this cool thing, let me show you on my phone, or look at this thing on a laptop I'm building. It's that type of energy, the whole three days. Technically it's five days, but three days is really the main stuff where you go to sessions, you work with these people who have years of experience and they're trying to show you from intro to advanced, all the different techniques, new stuff coming up. It's just unlike anything. There's nothing. I've been to four of these, this is my fourth one, I may never get to another one again because they are not cheap. That's the only downside to them. They're not cheap.
Joe Esposito [00:49:05]:
But there is nothing anybody, if you ever have the chance creatively, it is so powerful to be here among people who are just so highly talented and wonderfully diverse. It's great. I love it.
Mikah Sargent [00:49:17]:
Awesome. Now, of course, part of Adobe's MAX conference every year is revealing new tech that they're adding, new features that they're adding. And we saw an announcement that Adobe is putting audio, video and image generation all in one Firefly Studio interface. Now, for people who aren't familiar, Firefly has, even before the generative AI craze, been Adobe's own take and sort of feature set. Regarding using at the time, they probably would have described it as sort of a mixture of computer vision. And I can't remember what the other term was before we started saying generative AI, but essentially I think they called.
Joe Esposito [00:50:02]:
It server side processing of your content. That's what I hear.
Mikah Sargent [00:50:07]:
Yeah. And so we've seen Firefly for a while, but now we're seeing it all in this Firefly Studio interface. From what you saw at max, does this feel like it genuinely simplifies the creative workflow? Or do you feel it's more about Adobe kind of consolidating its AI offerings into one spot so that they can sort of trumpet and shout that from the rooftops? Here's where it all is.
Joe Esposito [00:50:31]:
I think it's a combination. I think there are people within Adobe who genuinely think that this is going to be a helpful tool because Firefly has now become an umbrella. As you said, Firefly started off as Sensei, then it became Firefly and that was just their kind of self contained AI tool. They made a big deal of saying, we're not training on anybody's work, this is commercially safe. And now what I've noticed is Firefly has now become a umbrella. Umbrella term. They don't really call that out. But if you're paying attention, Firefly is now your single destination for anything, including all the new partners.
Joe Esposito [00:51:05]:
So it's. They're trying to have their cake and eat it too. They want it to be a thing where you can go in and if you're stuck and you have some kind of creative block, you can get suggestions from this agentic thing that they've built, which is there is now again under all under Firefly. So they do want to. And there are things about it that are genuinely useful. There is a thing now that I will absolutely use where I could take a Photoshop export, which I am terrible at naming. One thing I've never done is I cannot name layers. I every time I start one I'm going to this time.
Joe Esposito [00:51:36]:
Never, ever, it's never happened. So you can take that and tell Firefly, please analyze and rename these layers.
Mikah Sargent [00:51:42]:
Nice.
Joe Esposito [00:51:43]:
And it will figure it out based on that. And that's useful. It's not replacing what I've done. It's trying to help me not be such a disorganized mess. And so that's something that's useful. But on the other hand they are kind of subtly telling you, oh, you can use all these other models, which of course aren't guaranteed to be commercially safe or not trained, but they don't point that out. So I think there's a real fine line they're walking. And the only thing that concerns me is they're not being really clear about it.
Joe Esposito [00:52:09]:
So if you're not paying attention, you won't really notice it. But I do think that their idea in terms of the company is they want to have it be both. They want to have it be a creative portal where you don't ever have to Leave. And that can be useful, but you also have to really pay attention to what it's doing and make sure it doesn't start pointing you towards something that isn't where you want it to go originally. And that's the balance that we'll see. This is all in beta right now. We'll see how this actually pans out because I'm not really sure, but in their view, they would say yes to both your questions. They want it to be a creative portal and it's just kind of getting you into the AI soup that they are kind of trying to serve you.
Joe Esposito [00:52:46]:
I think it's probably my very lengthy.
Mikah Sargent [00:52:48]:
Answer to that question and we love it. You did mention briefly a little bit about those partner models, Google OpenAI and others integrating right alongside Adobe's own Firefly models. This is fascinating because it is one thing that I have really appreciated most of the time. If I put using any of the Firefly stuff, it's literally about, oh, I took this photo and unfortunately it is not cropped in a way that I would like it. So let me just add, you know, 20 pixels more that I need up or to the left to be able to recenter the photo or change it in some way. And so then I just do a generative expand, right? And what I've loved is every time I'm going, okay, this is Adobe Firefly. I know that I'm never using it in a commercial way, but there's just something about knowing that it's not trained on other people's work and that it is very much Adobe being mindful of that that makes me feel good. Now though, as you said, we're seeing it from these other companies.
Mikah Sargent [00:53:49]:
What is your take? Why do we think Adobe is opening up the platform this way rather than just keeping a the tech and the integrations proprietary and be potentially opening up now creators to risk that wasn't once there because Adobe, if I remember correctly, sort of backed their Firefly model by saying we would help you in situations where you were sued or whatever for having content that is allegedly, you know, trained on other people, because we can prove that it wasn't. Now you're breaking in one of these applications and as you said, you got to be very aware. So that risk versus reward, what is the reward of opening it up, do you think? To these third parties, I would say.
Joe Esposito [00:54:39]:
There'S probably two answers to that question. One is, I think that they are actually doing something very smart. They're looking at Apple who's trying to do an internal AI or was trying and now we know is probably at least not completely self containing it. And they've recognized, and I'm sure their own research has found that when you have everything sealed off, there are natural plateaus you're going to hit into. There are limits. There are things where if you're looking at a competitor who says I don't care in just the entire Internet, you're going to seem like you are at a lesser stage. Whether that's really true or not is up to everybody's assessment, but that's how it's going to look. So what they're doing is they're still keeping.
Joe Esposito [00:55:11]:
There is an. A Firefly, they call it Firefly image Something 5 I don't remember the exact term that is still they are saying we have to of course verify all this stuff. I would say anytime any company says anything, everybody wait a week, see if it's true. But what they're saying is that that Firefly image model is still commercially safe and not trained and you can select them from a dropdown. So if you were to stick in that, the understanding at least as presented right now is it would be exactly what you were talking about, you would still have everything within that kind of commercially safe, viable space. And based on some of the demos they did later, it does seem like that's true in terms of what they were doing with replacement. But the other answer is that it allows people who don't care to then be feel welcome in Adobe. If you were using Nano Banana for everything and you were like, well, Adobe's using Firefly and I know Fireflies.
Joe Esposito [00:56:01]:
Worse, worse. I'm not going to bother. Why would I? Well now you could say, well no, you can come in. We've got nanobanana, we've partnered with every. I mean they listed off there was a feature quilt, except it was a partner quilt of just all these different companies, unbelievably from their old stance to just suddenly have all these companies involved. I think that's the other side is they want to be able to say that, come on in, don't worry, you can use whatever you used before, but now you can, you can benefit from all the creative cloud features that we have, like, you know, cloud libraries and mood boards, Firefly boards, all this stuff. But I do think some of it is looking at Apple and they probably made the assessment, if Apple, one of the most technologically advanced companies in the world, cannot get this internal AI kind of sealed bubble AI thing happening, then why are we wasting our time and we're only cutting off an audience that we could bring in by saying, if you want these options, they're here. The trick's always going to be.
Joe Esposito [00:56:54]:
And plus, let's not forget all those partners, now they have a bunch of new numbers, they can see, look how many more users are generating stuff with ours. So it's always a nice little synergy in terms of, you know, hyping yourself up within the, the crowded AI space. But I think that's what it is. They're seeing how hard it is for a company that we would probably guess has more engineering talent in terms of raw power. And now you have a market that potentially could come in that was excluded before because all you had was Firefly. So I think they look at it that way and it makes sense if you think about it in those terms.
Mikah Sargent [00:57:26]:
Absolutely. Yeah, that, that makes sense to me too. One of my favorite parts of Adobe Max, watching after the fact the sneaks. When there is new future looking technology revealed or demoed or shown. And it's of course like a sneak peek of something that will eventually, hopefully come to the platform. And in many cases we do see it later come to the platform. Some of the projects include Project Light Touch and Project Scene it, which give creators control over things that used to require reshoots or complex 3D work. I'm kind of curious because as I was leading up to having you on the show, you mentioned you were watching the sneaks thing.
Mikah Sargent [00:58:13]:
Were you there in the audience for that? And if so, which of those experimental features got the biggest reaction from the Max audience?
Joe Esposito [00:58:23]:
No, that was so overfull, which it always is, that I didn't even bother waiting. I just watched it literally live on the laptop. Because you ordinary people, if you ever go to these things, if you even get into these, you're typically, it's kind of like anything else, like a comic con. You're 500ft back looking at these two little ants on a screen. And honestly, unless you're somebody who's a executive or you get very lucky or get your timing right or something like that, it's almost impossible to get anywhere near enough. So honestly, I'd rather just be comfortable in one of the beanbag chairs they have and just watch it on their wi fi. What's the difference? Plus it's better quality. But you with the headphones on, you are hearing the response.
Joe Esposito [00:59:02]:
So it's kind of like the idea of a lot for people with baseball, okay, I could go spend a bunch of money and be up in the bleachers like at the World Series or something. Or I can watch it on TV and be very comfortable and get a better quality picture. Anyway, so no, I, I got in the first max. I went to, I went into sneaks and I was like, unless I can get in at the front, I'm never doing this again because I can barely even see what's happening. I'm seeing people's heads and phones being held up. How exciting. So, but what they gauge it on is crowd response. So they tell you to clap and cheer at everything.
Joe Esposito [00:59:32]:
And one thing I noticed this year, and I don't know why I didn't think of this before, is they always have a celebrity, generally a comedian. Jessica Williams, I believe, was the person they had this year. She was great, very entertaining. But I always thought, why do they bring people on who have no concept of these products? And then I realized, oh, because they're amazed by everything. Everything gets a huge response. Marketing wise, that's great. You're showing this famous person who thinks that the person on stage is Merlin doing all these magic tricks. I'm like, ah, marketing smart.
Joe Esposito [01:00:02]:
So. And the reactions are, you know, what they do is they bring the engineers out who work on these products who are often not presenters. So I kind of feel bad for them because they often look very nervous. So you're going to get sympathy applause no matter what because you feel bad for the person on stage. But there were two. Okay, if we do two of the sneaks if I want to? Yeah, absolutely, I think. Okay. So the two that I felt like got really actually enthusiastic and not okay, pause, clap.
Joe Esposito [01:00:28]:
Now. Responses were clean take and light touch. Now you talked about light touch. So we'll start with that one. Light touch lets you take a. What they were showing were kind of static, flat lit photos where there wasn't a lot of dynamic lighting. And you could remove lighting, you could move lighting, take shadows out, add more light, take light out. So there was one where there was a hat that was projecting a lot of shadow on the face and they were able to remove that.
Joe Esposito [01:00:51]:
And then there was one with a jack o lantern where the individual, the engineer, was moving a light source around and then placed it inside as if there was a candle in there. And the lighting adjusted. And it was, I mean, that got the jack o lantern specifically, it was very interesting. There were two different responses. I heard, listening to it very clearly. The one where the face got fixed, there was a very detectable amount of what I call AI smudgery. You could see where it was smoothing the face out and you could hear people kind of reacting to that. But then with the jack O' lantern, probably because it wasn't a person, the effect was much more effective and people were wowed by it because it was this internal light shooting through it.
Joe Esposito [01:01:31]:
And it was extremely impressive. None of this stuff is unimpressive, you know, any of the ones that don't get a big response. The engineering talent to do this stuff is amazing. But that one, people really responded when he pushed the light inside because it had awareness of the volume doom. Again, we would have to see how that really works because it obviously knew what a jack o lantern, it could kind of know. It was hollowed out. So there was some awareness probably by the. The gen AI.
Joe Esposito [01:01:54]:
Would that work the same for everything? I don't know. Again, demos are always the idealized thing. But that was. That did get a very loud reaction.
Mikah Sargent [01:02:02]:
And then hearing that, even without seeing it, just from you describing it, I had a gasp. So, yeah, I can only imagine actually seeing it.
Joe Esposito [01:02:10]:
No, it looked. It looked great. No, that's what I mean. Is it? I don't think anybody. They were. They. They understood what the idea of the kind of volumetric lighting, if that's even the right term for it, we're moving it around and that's. Oh, okay, that's great.
Joe Esposito [01:02:21]:
But then to push it inside and have the lighting be aware and light it properly, that is very impressive to just do with no kind of preamble or lead up to it. It was kind of a surprise where he's like, oh, let's push the light in. So people really enjoyed that. And then with cleantake, that was the other one where I feel like production people specifically. So like a twitch, the people behind the scenes who have to process the audio and do things after the fact. There have been times where especially because like a twit, you do live ad reads. And if anybody watches the live stuff, which you can do if you're a member of club Twit. There you go, you can see those.
Joe Esposito [01:02:51]:
And sometimes somebody will make a mistake on a word. Oh, I read this company name wrong or I emphasize the wrong syllable. And often you have to do a retake, whereas this had a transcript and you could select that word and just replace it. And as long as the person didn't have a really visible mouth deformity in terms of how they said the word, you could really. You know, I used to watch TV edits and they take the swears out and the person's clearly saying the F word. And Then it says fudge. As long as it isn't too visible there, you can just replace the word and it generates it based off the person's voice and just replaces it. And that was when they did that for somebody who had said fourth instead of fifth.
Joe Esposito [01:03:28]:
Now obviously those are really similar words, fourth and fifth. It's very close, so it's a lot easier. But still, why have to go rerecord a whole thing if you could just replace a word in five seconds after the fact. So I think for behind the scenes people, that floored them because they were doing all types of active replacement. Again, idealized demos, but still you can see the potential of it to save having to go back and re record something where you, you missed something, you just exported the whole thing. Got the whole whole timeline done one word wrong, but it's a vital word. Okay, just go find the word and replace it. That I think got a very enthusiastic response from people who were doing again the behind the scenes work, which is a lot of the people who are here, the production people.
Joe Esposito [01:04:09]:
So those were the two. I really felt like when they showed the demos there were people who were viscerally surprised and started to see the potential in what they could do with it. And those were the ones that really stood out to me. Not that any of them were bad, but those were the two kind of big highlights where I felt like there were waves of people who were really shocked at how good what they just saw was.
Mikah Sargent [01:04:28]:
That's awesome. All right, we are going to take a quick break before we come back with even more boots on the ground at the Adobe Max conference with the Joe Esposito. But let me tell you about Vention bringing you this episode of Tech News Weekly. AI we hear it's supposed to make things easier, but for most teams you it's only made the job harder. There are probably several of you groaning right now as you go. Yeah, that's exactly the case. That's where Vention's 20 plus years of global engineering expertise comes in. Because Vention build AI enabled engineering teams that make software development faster, cleaner and calmer, clients typically see at least a 15% boost in efficiency not through hype, but actual real engineering discipline.
Mikah Sargent [01:05:15]:
They also have fun AI workshops that help your team find practical, safe ways to use AI across delivery and Q and A. It's a great way to start with Vention and test their expertise. Whether you're a CTO or a tech lead, or maybe even a product owner, you won't have to spend weeks figuring out all the tools and the architectures, the models. Vention helps assess your AI readiness, helps you clarify your goals, and then outlines the steps to get you there without the headaches. And and if you need help on the engineering front, well, Ventions teams are ready to jump in as your development or consulting partner. It's the most reliable step to take after your proof of concept. Let's say you've built a promising prototype in unlovable it runs well in tests, but now you're going well. What's next? Do you open a dozen AI specific roles just to keep moving? Or perhaps do you bring in a partner who has done this across industry, someone who can expand your idea into a full scale product without disrupting your systems or slowing your team.
Mikah Sargent [01:06:17]:
Vention is real people with real expertise and real results. Learn more adventionteams.com and see how your team can build smarter, faster and with a lot more peace of mind. Or get started with your AI workshop today at ventionteams.com/twit that's ventionteams.com/twit.
Mikah Sargent [01:06:41]:
And we thank Vention for sponsoring this week's episode of Tech News Weekly. All right, we are back from the break, joined by Joe Esposito who is at the Adobe Max conference telling us about the stuff Adobe has announced. The next thing I'm going to talk about, we're kind of going back to Firefly a little bit because we've got Firefly custom models. I think this is a really fascinating concept. It lets people train AI on their own style. Joe, you have a really unique style and so I am curious to hear your thoughts on this in general. And then Firefly Foundry does the same for enterprise brands. How do you see this changing the conversation around AI, around creative authenticity? And then do you foresee yourself sort of letting Firefly get to know your style for assistance in future products or future, not products of future creations?
Joe Esposito [01:07:40]:
I think the answer depends on where you are as a creator because what they seem to emphasize was if you're trying to figure out what your style is, and that's often a challenge for people who are just getting started because there's so much, especially now. When I was much younger, there was no Twitter, it wasn't even Facebook when I started drawing and illustrating. So you didn't really have the sea of content coming at you where you can feel very lost and you can sit there and say, well, I don't know what exactly my style is. And that can be tough to figure out. Most times it's actually A very painful process. So I think what Adobe's idea there is, okay, take whatever you have, and if you're looking for guidance, this agentic model can get you there. Now, I personally don't know if that's what I would tell young creators to do. I think there is a value in bringing yourself into the creative process, but that's a very subjective thing.
Joe Esposito [01:08:33]:
Just like most things, I find that using any tool as a shortcut, what you risk is emulating somebody else instead of expressing yourself. And that is the most valuable part of creation, is putting you into it. But at the same time, if somebody says, you know, I'm just stuck. I've got this thing, 80% rhyme. It just doesn't feel like it's punching enough, this thing could come back and say, well, why don't you add like, change this color to this? Try combining these colors. Or I'm terrible with color, I'm terrible with color. You could say to it, okay, I want complementary colors, and I cannot figure this out. That would be a valuable answer to say, okay, well, you've got purple, blue and green, so you want to add a little of this for contrast.
Joe Esposito [01:09:12]:
Or if you want everything to have kind of a unified feel, then here's a, a couple of variations of, let's say a teal to put as a highlight. That's fine because that's accenting what you've already done in terms of that. I think it could be valuable. Now whether you want to trust all this stuff. This is where I have very specific questions about what this tool is going to be interfacing with, because supposedly it's going to go analyze social platforms and give you analytics based on that so that you can then make something that performs better. And I guess if your goal is to just have more followers, that will be useful. But again, I don't know that reducing creation to numbers and trends and algorithms is beneficial. It can be good to see why something.
Joe Esposito [01:09:58]:
I mean, if you really want to look at why one image that you didn't particularly like did a lot better on whatever platform you want, as opposed to one you really thought would have, there may be a benefit to that, may be able to figure that out. I, I don't know if I would believe what it would say, maybe. But, you know, but also we have to remember different creators have different goals. And if the goal is I want to build an audience and that's my primary goal, well, then I'm not going to tell you shouldn't use it, but I would be very Careful on what you trust with these things. And if it was me, I would be sending an email to somebody in their marketing department getting very specific language about what happens with my content so that you have it in writing, always get it in writing, and then you make the decision again. It's so easy to come up with a blanket answer based on the way I view the world. But I'm much older than a lot of people now. I had a very different experience in coming to my creative journey or whatever you want to say that.
Joe Esposito [01:10:53]:
It's easy for me to say, yeah, this is pretty useless to me because I don't need anybody to tell me what I'm doing. I have enough problems with myself telling me what I'm doing doing. But for somebody starting out who may have different priorities, then if this is in fact encapsulated in their kind of safe Firefly net where they're saying there's no training with anybody else, everything you're doing is private, it's specific to you, we will not share this information. Then it could be beneficial. So again, we have to see what all the details are. And the other thing is, does this stuff use up credits? Because we can't forget this is not free. It's free through December 1st so they can get you hooked on it, but it's not free. You only get so many credits on these different plans.
Joe Esposito [01:11:31]:
So every time you ask it a question, how many credits is that billing you? Okay, well now you're going to have to buy more and more and more. And that's something that they gloss right over in all these presentations. But that's something important to keep in mind too is none of this stuff is just all you can eat. You are going to be paying for it. So you have to decide what that value proposition is for you.
Mikah Sargent [01:11:50]:
That's a really good point. There's another feature that you talked a little bit about like generating speech. As part of it, there's generate soundtrack. Generate speech. This is AI audio creation. Now we've seen this is, this is a category across the generative AI landscape that's a little bit tumultuous. Based on the demos you saw. How does this stack up against dedicated audio AI tools that creators might already be be using? You know, if you tuned into the Dungeons and Dragons event, you would have heard some suno created music for our from our bard.
Mikah Sargent [01:12:32]:
From what you saw, from what they showed, where does this stand in, in comparison?
Joe Esposito [01:12:39]:
Well, again we're, it's a two part answer. So let's as far as the speech goes, the speech thing is I think the one that many people are going to. To be nervous about because anytime, again, if you value your creative voice. Oh boy, I sound like an ad. Now, if you sound like you value your creative voice, having that replicated may be very disturbing to some people. And I understand that. I mean, there are people who don't really care and there are people who are going to. Pardon me.
Joe Esposito [01:13:05]:
Speaking of voice, we could have replaced that with a voice that didn't break apart. That's something where I think some people maybe if they do have a voice that is way worse than way worse like mine's any good. If people are not comfortable speaking and would like something to speak for them, because for them the writing is the, is the enjoyable part, well, that's great. Well then you can have this thing speak for you. Some people have higher and lower voices. Some people don't like how they sound. Oh, I'm coming to pieces. And so this will be useful for them.
Joe Esposito [01:13:31]:
But I do think it will also be the one that some people have the biggest problem with. Now, as far as the music replacement, that one was actually really impressive because it was like a public event. The one that I was, the one I was looking at was a public event and there was a licensed song, like say a Bruce Springsteen song. And so they said, okay, we can't use that. Can we find something that's similar but is already licensed? So all of these replacements are Adobe says commercially safe because they're licensed through Adobe.
Mikah Sargent [01:13:58]:
I love.
Joe Esposito [01:13:58]:
So that. Yeah, because how many times. I can't lie, I did this live stream, I can't use it, or I recorded commentary. But this thing is getting blocked, especially because they have this Google partnership. I'm sure they were talking to Google about, well, how do we make sure we're not going to get a copyright strike? Well, if you have this, this library and it matches things up enough, that avoids a copyright strike. And that's, you know, because Google, if you upload, if you're upload something to YouTube and they, you get a block or it's, you know, a copyright warning, it'll say, oh, you could just replace it, replace the song. But you could have this song ready to go or you could just actually export the video with that already done. So you don't have to worry about the replacement or any type of delay.
Joe Esposito [01:14:35]:
So that one, I think is probably the most uniformly acceptable one where I don't think a lot of people are going to have problems because, okay, it's Avoiding a problem with licensing and it's trying to match it to a similar song. Well, if all you care about is it as background music, you're not going to really care if it's the licensed one or not because you're not going to pay the licensing fee. So that one I think a lot of people will jump on board with.
Mikah Sargent [01:14:56]:
Yeah. One of my favorite features that's existed for a while is the like match font feature. And even if you can't find the, if it can't find a typeface in its library, it can suggest ones that are similar to it as well. And this feels like an extension of that in a way. It's just a clever, It's a clever actual problem and solution. Right. It's not just, oh, we're doing this thing and it doesn't really. There's not a problem in the first place.
Mikah Sargent [01:15:25]:
No, this is regular and this is something that people would experience. So I love when you actually have. Because it's often you're trying to find these like shiny bells and whistles solutions that you can really just. And talk about the idea that something as small as that where I wouldn't have thought of it personally but then you say it and I go, oh my goodness, the number of times where that would come in handy, those are the kinds of solutions that I think are really cool.
Joe Esposito [01:15:55]:
And so also I also critically think with the, the, the track replacement, it's not replacing your creativity, it's a background sound. Because if you really cared about that song, you would have licensed it. Well, obviously most people aren't, don't have the money to do that. So it's not replacing what you made and what you are doing. It's making sure that it doesn't get squashed by something in the background. And that's why I think it's going to be much more palatable to people than generate me a new voice, which I think is going to be much more case specific and where somebody wants to use that.
Mikah Sargent [01:16:25]:
Definitely. Another thing, Project moonlight, which introduces conversational AI that can work across Adobe apps and also pull from social channels. This one is interesting and feels like I just. This one perhaps in my mind a bit more of a, of a solution looking for a problem. But I could be wrong. And something that I'm really loving about talking to you Joe, is you are really bringing in the mindset of different types of creators and what they might need. And I think that's important to remember when it comes to this. Do you feel like this is a Natural evolution of creative tools or is Adobe trying to do the most.
Joe Esposito [01:17:07]:
This is a fascinating one because I actually talked to people about this one specifically, I was just kind of talking to random people, which is not something I'm normally inclined to do. But again, in this space, you kind of just feel safe because everybody's a wacko like you are, so it's a lot easier. So I was kind of bringing this up on, okay. Because the way they talked about it was, it's your personal creative director. And I thought, ooh, I'm not sure I would have used those words to a room full of creative individuals. And sure enough, it was a very interesting split because people, people who were, I would say older, were repelled by it because the idea that they have to have a machine. And a lot of people here don't make that confusion. They don't think this thing is alive.
Joe Esposito [01:17:49]:
That's not. Other places people mix it up. Not here people are very aware. This is a machine. This is not a real person. This is not a real creative director. Probably because most of them have one. And so they understand the difference.
Joe Esposito [01:18:00]:
And the older people were, I don't need a machine telling me how to do something. When I need to do something, I will talk to a person. And the younger people had the same opinion, but in the opposite direction. Were like, why would a machine know more than I do? They have that arrogance of youth that is so great that you want, as a young creator, where you think that what you're doing is so unique and special that you don't need anybody to tell you what needs to be done. And so I don't know who this is aimed at, except it feels like, because there was a whole portion of this with the Firefly Foundry that is aimed at what feels like a consulting arm of Firefly to Enterprise. And I wonder if this is so that they can go to places and say, oh, you don't have a marketing head. We can provide a tool that will do that for you. So if you have just a couple of designers, but you don't have a head of marketing, you don't have the budget for it, or you just don't have a large enough company for it, well, we can give you an agentic version where you can feed all your brand action.
Joe Esposito [01:18:57]:
And they were marketing this specifically for companies, too, but I feel like that's what they're going for, is somebody who really just doesn't have any direction and needs some. But that's going to be such a small. I agree with You, I think it is a solution looking for a problem because I just don't feel like most creators need somebody telling them what to do. In fact, I bet if you asked most people who are freelancers, the thing that annoys them the most is when somebody comes back, says, I need to make changes, can you do this? And you go, okay, that's four more hours. And it's like, do you really want a machine telling you that, wow, that sucked. Just reverse everything you've done. If you want to be successful, I think that you just want to punch your monitor. So yeah, I don't, I don't know what the adoption rate for this thing is going to be, but I wouldn't go within 500 miles of it.
Mikah Sargent [01:19:42]:
Lastly, my final question for you. Looking at everything announced at Adobe Max, what's your gut feeling about whether these AI features will actually get adopted by everyday creators versus being these impressive tech demos?
Joe Esposito [01:19:59]:
I think it's the things that are efficiencies and like for example, the renaming layer thing, I will use that because I hate doing that. That's not really a part of the creative process. That's an organizational. That's like organizing a filing cabinet or the thing with the soundtrack replacement. I could easily see video editors. You know, you're going out, you're shooting, let's say a live sporting event, something like that. Something like Friend of the show Aunt Pruitt does where he's out at a football game or something like that and there's a piece of music in the background that you know is going to get a copyright strike or it's going to be a point of contention to just be able to replace that and preserve everything else that's immensely helpful. Those things will get used.
Joe Esposito [01:20:36]:
As far as the things that are trying to you be creative for you, I just don't see that there's going to be a lot of people unless they're specifically audience number driven. I just don't feel like that's a lot of actual creative people. That's more of a marketing firm type of thing. And maybe they'll be the ones who use it. Maybe it's people who really don't care about the creative part of it. They just need to have something that performs well on whatever platform they need to be able to boost their product. Okay, they'll use it. But actual people who do everyday creation who are really engaged with the creative process, I don't think they're going to use any of this core creative stuff because I feel like there's two rings to AI in general.
Joe Esposito [01:21:13]:
The stuff where it's trying to replace the human element, and there's stuff that's trying to augment and get rid of the stuff that nobody wants to do, and the stuff that replaces what nobody wants to do, that's what people are going to jump to. The stuff that's trying to replace what humans do. It's never going to happen, at least not on a large scale, because it fundamentally misunderstands that the human element is what most people want. So that's where I think the dividing line is going to be. Some of this stuff is going to just fall the wayside, but the things that are really just trying to keep you creating more than managing those things, I think people are going to adopt and use well.
Mikah Sargent [01:21:48]:
Joe, thank you so much for taking the time to join us here, for giving us a glimpse through the lens of a creative who spends time in Adobe's tools. And it's been great hearing your thoughts. If people would like to keep up to date with what you've got going on, check out your podcast and the creative work that you do. Where are some places they should go to do that?
Joe Esposito [01:22:13]:
Okay, I made this really easy. I based this on choose your own adventure style stuff. So if you go to ozonenightmare.com, there's two buttons. If you're sick of my voice, you're going to use the one that says look at my artwork. If you somehow enjoy hearing my voice, you can use the one that says listen to my podcast. It's really easy. Two giant buttons. Because I can't see for anything without my glasses, so I need big buttons.
Joe Esposito [01:22:33]:
So it's really easy. You go to that address and you just choose your own adventure. Go in whatever direction you want. And as far as if you care at all about anything I say socially, which you shouldn't, because I don't say anything of worth, but if you care, it's ozoneartfoundry everywhere. Real easy.
Mikah Sargent [01:22:47]:
Awesome. Thank you, Joe.
Joe Esposito [01:22:49]:
I'm just happy I made it through without stumbling over every other sentence. So thank you. Great. You did great.
Mikah Sargent [01:22:54]:
Thank you so much and enjoy the rest of the conference.
Joe Esposito [01:22:57]:
Thank you.
Mikah Sargent [01:22:59]:
All righty, folks, that brings us to the end of this episode of Tech News Weekly. I will remind you that you can head to Twitter TV tnw. That is where you go to subscribe to the show in audio and video formats. And of course, tell your friends, tell your family, tell your enemies, tell everyone about the show. If you'd like to get all of Our shows ad free. If you would like to get all of our shows ad free, just the content, none of the ads, you can head to Club Twit twit.tv/clubtwit is where you go for $10 a month, $120 a year. You can become a member of Club Twit when you join the club. Along with all of the ad free content, you also get access to our special feeds.
Mikah Sargent [01:23:52]:
Those feeds include our coverage of news events from many a tech company. You can hear Leo and I do live coverage of Apple products as well as access to the Beats and the clips. Those are behind the scenes looks at our shows and access to our special club to it shows like Mikah's Crafting Corner as well as Stacey's Book Club and the Garlic Hour. That is a show I do not watch. No we do not have one called Garlic Hour but we do have a darling Dungeons and Dragons adventure you can check out in the club. Along with that you also gain access to the member Discord, a fun place to go to chat with your fellow Club Tweet members and those of us here at Twit. If all of that sounds good, pull out your wallets, pull out your cards, put them in. But you get a two week free trial to kick things off.
Mikah Sargent [01:24:59]:
So twit.tv/clubtwit is where you go. If you'd like to follow me online, I'm @mikhasargent on many a social media network. Or you can head to Chihuahua Coffee, that's C H I H U A H U A.coffee where I've got links to the places I'm most active online. Thank you so much for being here today. I hope you have a very spooky and wonderful Halloween if you celebrate and perhaps you will find me on your doorstep. But do not invite me in. Goodbye everybody.