Transcripts

This Week in Enterprise Tech 528 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Louis Maresca (00:00:00):
On This Week in Enterprise Tech, we have Mr. Brian Chee and Mr. Curtis Franklin back on the show. Now, is this the year of derivative AI technology? We are seeing technology like this pop up all over the industry. We're gonna talk about where and when and how. And keeping AI as a theme, what about using it in the hiring process as well? Today we have Dan Finnigan, he's CEO of Filtered. We're gonna discuss the current challenges each organization has with the hiring process and just where this type of technology might be able to optimize them. You definitely shouldn't miss it. Quiet on the set.

Announcer (00:00:34):
Podcasts you love from people you trust. This is TWiT.

Louis Maresca (00:00:47):
This is TWiET, This Week in Enterprise Tech, Episode 528, recorded January 27th, 2023: "ChatGPT, Take My Job Please." This episode of This Week in Enterprise Tech is brought to you by Kolide. Kolide is an endpoint security solution that gives IT teams a single dashboard for all devices, regardless of their operating system. Visit kolide.com/twit to learn more and activate a free 14-day trial today. No credit card required. And by Worldwide Technology. With an innovative culture, thousands of IT engineers, application developers, unmatched labs and integration centers for testing and deploying technology at scale, WWT helps customers bridge the gap between strategy and execution. Learn more about WWT at wwt.com/twit.

(00:01:45):
Welcome to TWiET, This Week in Enterprise Tech, the show that is dedicated to you, the enterprise professional, the IT pro, and that geek who just wants to know how this world's connected. I'm your host, Louis Maresca, your guide through the big world of the enterprise, and what a big and busy world it is. And I can't guide you by myself; I need to bring in the professionals, starting with Mr. Curtis Franklin. He's Senior Analyst at Omdia, and he's the man who has the pulse of the enterprise. Curtis, it's always great to have you back, my friend. How's your week going?

Curtis Franklin (00:02:14):
Oh, it's been a busy week. You know, I have to say I have enjoyed the week, because it's got one of the rare things we get to see, and that is a victory by the good guys. It's not the biggest of victories, the feds managed to take down a ransomware group, but we'll take the victories where we can get them. And with that kind of joy and happiness, I'm ready to launch into the weekend after, of course, we've had a great time today on TWiET.

Louis Maresca (00:02:51):
Indeed, indeed. Well, thank you, Curtis, for being here. We also have to welcome back our very own Mr. Brian Chee, network architect at SkyFiber and all-around tech geek. Cheebert, any toys you wanna share this week? Any fun stuff?

Brian Chee (00:03:04):
Well, I'm getting ready to drive down to Miami to go and visit what we call our hanai godmother, kind of adopted. She used to be a New York fashion designer and, wow, created Kathy's wedding gown from scratch. It was quite spectacular. But I've also been kind of poking around trying to find what the market is for used fiber optics, and finding out that if you're willing to go with like a dozen dozen fibers, you know, 144 fibers, which is painful to splice, those are actually pretty cheap. In fact, some of the big spools, up in about the 6,000-foot range, are going really cheap, because not many people use those. So I'm trying to decide if it's worth the pain to put in a ginormous fiber backbone for the Central Florida Fairgrounds.

Louis Maresca (00:04:12):
Now, what, to put it in context, what is really cheap?

Brian Chee (00:04:18):
I found a 6,000-foot spool that, if I'm willing to pick it up from Cincinnati, would cost me a whole $16.

Louis Maresca (00:04:28):
Wow. That is cheap.

Brian Chee (00:04:30):
<Laugh> Brand new, a composite fiber cable like that, if you pick it up from the factory, is probably gonna be about $20,000 or $30,000. But so few people use them that once they go surplus, they just want the yard space back.

Louis Maresca (00:04:48):
Right, right.

Brian Chee (00:04:50):
So interesting,

Louis Maresca (00:04:51):
Interesting. You might have found a market there. Well done. Maybe

(00:04:54):
<laugh>. Thanks, Cheebert. Well, speaking of busy, we have quite the busy week in enterprise. Now, have you ever heard of derivative AI? Well, I think it's the year of derivative AI technology. We'll talk about just what that means and where we might see AI popping up in the industry. Plus, keeping AI as a theme, what about using it in your hiring process as well? Today we have Dan Finnigan, the CEO of Filtered. We're gonna discuss the current challenges companies and organizations have with the hiring process and just where technology might be able to optimize them. So definitely stick around, lots to discuss there. But let's go ahead and jump into this week's news blips. Deepfakes have no bounds. Sure, they're entertaining when you watch your favorite actors sing a song or do a dance, but what if they were also used to sign important documents and pay for things without you knowing?

(00:05:41):
Well, this article over at Ars Technica examines a technology called generative handwriting, which can actually create realistic handwritten text using deep learning. Now, the technology uses a neural network to generate text that looks like it's handwritten by a human. It's not a font, and it doesn't look like one; it even has, you know, different strokes. The technology can be used to create a variety of different handwriting styles based on large data sets of handwritten samples. The technology is still in its early stages, but it can be used to create realistic-looking handwritten documents or to generate unique handwriting styles for digital signatures. Now, Calligrapher.ai draws each letter as if it were handwritten by a human, guided by statistical weights. Those weights actually came from a recurrent neural network, or RNN, that has been trained on the IAM On-Line Handwriting Database, which contains samples of handwriting from a large data set of individuals, digitized from a whiteboard over time.

(00:06:42):
Now, since the algorithm producing the handwriting is statistical in nature, properties such as legibility can be adjusted dynamically. For me, this forces my attention more toward digitally signing things, because the more realistic documents signed by somebody else appear, the higher the risk this technology creates. It could be used to forge documents or signatures, or to forge the authenticity of documents. Now, if you think about digital signatures, the question is, are they safer? Can people forge them? Well, I feel they're much harder to forge. In fact, digital signatures are created using cryptographic techniques, and they use a combination of keys to verify the signer's identity and create a unique code. Now, the question is, can those be used in a malicious manner? There's much less chance of that. I think that, you know, maybe this technology might be forcing people to use digital signatures more.
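The cryptographic point above can be made concrete. A real digital signature uses asymmetric key pairs (e.g., RSA or Ed25519), where a private key signs and a public key verifies; Python's standard library has no asymmetric primitives, so this sketch substitutes a symmetric HMAC, which is enough to illustrate the core property being described: any change to the signed document, or use of the wrong key, makes verification fail.

```python
# Simplified illustration of why signed documents resist forgery.
# NOTE: HMAC is a symmetric stand-in here; real digital signatures
# use asymmetric key pairs so that anyone can verify but only the
# holder of the private key can sign.
import hmac
import hashlib

def sign(document: bytes, key: bytes) -> bytes:
    """Produce an authentication tag over the document."""
    return hmac.new(key, document, hashlib.sha256).digest()

def verify(document: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(document, key), tag)

key = b"secret-key-shared-out-of-band"
doc = b"I agree to pay $16 for a 6,000-foot fiber spool."
tag = sign(doc, key)

assert verify(doc, tag, key)                                   # untouched document verifies
assert not verify(doc.replace(b"$16", b"$16,000"), tag, key)   # altered document fails
assert not verify(doc, tag, b"wrong-key")                      # wrong key fails
```

Unlike a handwriting forgery, which only has to fool a human eye, defeating the tag requires breaking the underlying cryptography, which is why a forged signature fails verification no matter how convincing the document looks.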

Curtis Franklin (00:07:47):
Well, let's talk about something we haven't mentioned in a while. Let's talk about printers. They're one of the items most of us don't think about, as long as they spit out paper with ink on it. But most modern printers are actually network-connected devices with sophisticated processors inside, which means they're a target for threat actors. In the latest example of this, this week Lexmark warned customers about a critical security vulnerability allowing remote code execution, which is affecting more than 120 different Lexmark printer models. The bug, designated CVE-2023-2356, carries a CVSS score of nine out of 10. That, for those who don't follow CVSS, means it's pretty darn bad. It earned the score by being a server-side request forgery vulnerability in the web services feature of about 120, as I said, newer Lexmark devices.

(00:08:55):
Now, Lexmark has issued a firmware patch and has told customers that disabling web services on TCP port 65002 is a workaround for protection. In the case of the affected printers, the web server provides management features for the device, as it does for many other types of printers. Now, for attackers, printers are often-overlooked devices that can provide convenient launchpads for attacks that spread throughout a network. And frankly, whether the printers that sit on your network are from Lexmark or not, it's worth taking a look and making sure that all of your printers are scanned, protected, and taken into account when it comes time to secure your enterprise network.
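In the spirit of that advice, a minimal check for whether a device on your network still exposes the web-services port that Lexmark's workaround says to disable (TCP 65002) can be sketched with nothing but the standard library. The printer addresses below are hypothetical placeholders, not real devices.

```python
# Sketch: check whether hosts expose TCP port 65002 (the web-services
# port named in Lexmark's workaround). Only run this against devices
# you are authorized to scan.
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical example addresses; substitute your printers' IPs.
    for printer in ["192.0.2.10", "192.0.2.11"]:
        if is_port_open(printer, 65002, timeout=0.5):
            print(f"{printer}:65002 OPEN - disable web services or apply the firmware patch")
        else:
            print(f"{printer}:65002 closed")
```

A real inventory pass would feed this from your asset database rather than a hard-coded list, but even this much catches the "forgotten printer" case Curtis is warning about.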

Brian Chee (00:09:50):
I'd like to say thank you to the folks at Ars Technica for following this particular thread of stories. Basically, the headline is "John Deere relents, says farmers can fix their own tractors after all." Farmers now have the right to repair their John Deere tractors themselves, or through independent third parties, ending a lengthy battle with the agricultural machinery company. Last Saturday, John Deere and the American Farm Bureau Federation signed a memorandum of understanding outlining the company's responsibilities to provide diagnostic tools and software outside of the company's official authorized repair centers. The right for consumers to repair their own property, be that cars, electronics, or farm equipment, has been growing over the past few years, with some states taking action to enshrine the right for their residents. Farmers have been at odds with John Deere since 2016, when the company changed its end-user license to require that any repairs involving embedded software be carried out only by authorized technicians.

(00:11:02):
Like cars, modern tractors are now packed full of complicated electronics, and the restrictions imposed upon farmers did not go down well. Anyway, in July of 2021, US President Joe Biden weighed in with an executive order that specifically mentioned this problem. Among other actions, the order called on the Federal Trade Commission to prevent unfair, anti-competitive restrictions on third-party repair or self-repair of items, such as the restrictions imposed by powerful manufacturers that prevent farmers from repairing their own equipment. President Biden brought the issue up again six months later, saying that, quote, if you own a product, from a smartphone to a tractor, you don't have the freedom to choose how or where to repair that item you purchased. Anyway, it's worth reading. Other articles, from other publishers, are expressing their doubt that this is the end of that argument, and they're predicting that John Deere might attempt to wiggle out of the spirit of the agreement.

(00:12:09):
Meanwhile, in the IT world, the ability to repair complex gear has gotten off to a rough start, with Apple having some issues with scheduling of the special tools necessary to do some repairs under their at-home repair program. As the one-time repair person for Okidata and Toshiba printers in the Pacific, for us it wasn't an issue of getting access to tools, manuals, or parts, but rather not being allowed warranty repair checks for anyone outside of the authorized repair center world. While not a huge amount of money, it did include parts reimbursement and replacements, and some labor money, which made large-scale warranty repair a profitable endeavor for our repair center.

Louis Maresca (00:12:57):
Reversing the effects of aging is like the holy grail of science. Who wouldn't want to find a way to feel five or maybe even 10 years younger? Well, researchers at the University of Bristol and the MultiMedica Group in Italy discovered a certain gene in a population of people who are over a hundred years old that helped keep their hearts young by protecting them against diseases of aging, such as heart failure. The University of Bristol posted this news and talks about the new study, which discovered an anti-aging gene that, when administered to heart failure patients, could possibly rewind their hearts' biological age by 10 years. Now, the gene, referred to as the BPIFB4 gene, has been shown to be associated with exceptional longevity, and it's found in individuals living in blue zones of the planet, who often live a hundred years or more and remain in good health.

(00:13:47):
The gene helps to keep their hearts young by protecting them against diseases of aging and heart failure. In another study, they actually administered the gene to mice and observed a process of cardiac rejuvenation, which demonstrated that the gene could, in fact, protect against heart failure. Now, the gene could potentially be used in clinical trials and is seen as a potential target for patients with heart failure. You may be wondering, how is this gene actually activated? How can you get it? According to the study, it's activated and administered via a single dose of the mutant anti-aging gene, which has been shown to halt the decay of heart function in middle-aged mice and rewind the heart's biological clock by the equivalent of 10 years or more. I don't know about you, but this Star Trek-like medicine can't come soon enough, so we can ensure we always have a way to voyage home.

(00:14:38):
Well, folks, that does it for the blips. Next up, the bites. But before we get to the bites, we do have to thank a really great sponsor of This Week in Enterprise Tech, and that's Kolide. Now, you may know the old saying, when the only tool you have is a hammer, everything looks like a nail. You know that one. Well, the traditional approach to device security is that hammer, a blunt instrument that can't solve the nuanced problems that are out there. And even after installing clunky agents that users hate, IT teams still have to deal with mountains of support tickets over the same old issues. I've seen it with my own eyes, trust me. And they have no way to actually address things like unencrypted SSH keys, OS updates, or pretty much anything going on with a Linux device. Kolide is an endpoint security solution that's more like a Swiss Army knife.

(00:15:28):
It gives IT teams a single dashboard, that single pane of glass, for all devices: Mac, Windows, and even Linux. You can query your entire fleet to check for common compliance issues or write your own custom checks. Plus, instead of installing intrusive software that creates more work for IT, Kolide's lightweight agent shows end users how to fix issues themselves. You can achieve endpoint compliance by adding a new tool to your toolbox. Visit kolide.com/twit to find out how. That's K-O-L-I-D-E dot com slash twit. And we thank Kolide for their support of This Week in Enterprise Tech. Well, folks, it's time for the bites, and today we're gonna do a little round table. You know, I think that this might be the year of derivative AI. We will coin that phrase here first on TWiT. That's right, derivative AI is where existing artificial intelligence models and algorithms are used as basically a starting point to create new and improved versions.

(00:16:31):
Now, this approach allows developers to essentially stepping-stone, or incrementally, improve existing AI technologies, or even create entirely new models by combining different components from existing models. Sometimes we call those mix-ins, or even hybrid AI. In the past, you know, we've seen this take on new forms. My question is, how trustworthy can it be? I'm gonna bring my co-hosts in here, because we were just recently kind of conversing about this, guys, and I know that we're starting to see some things kind of emerge. In fact, one example was this recent new search engine that came out, Perplexity, I think it's called. And Perplexity is interesting because, you know, it combines the results of a search engine with ChatGPT and is able to actually produce, they say, more relevant searches. What are you guys thinking here? Is this a good thing or is this a bad thing?
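The "derivative AI" pattern being described, a conventional search engine feeding a language model, can be sketched as a small pipeline. Everything here is a hypothetical stand-in: `web_search` and `llm_complete` are deterministic stubs, not real APIs, and the prompt wording is illustrative only.

```python
# Sketch of the retrieval-augmented pattern behind Perplexity-style tools:
# run a conventional search, put the top results into the model's prompt,
# and ask the model to answer from those numbered sources only.
# `web_search` and `llm_complete` are stubs standing in for a real search
# API and a real language model.

def web_search(query: str) -> list[dict]:
    # Stub: a real implementation would call a search engine API.
    return [
        {"url": "https://example.com/a", "snippet": "Many inventors worked on early electric lamps."},
        {"url": "https://example.com/b", "snippet": "Edison's carbon-filament lamp patent was issued in 1880."},
    ]

def llm_complete(prompt: str) -> str:
    # Stub: a real implementation would call a language model.
    return "Edison's carbon-filament lamp patent was issued in 1880 [2]."

def answer_with_sources(question: str) -> str:
    results = web_search(question)
    context = "\n".join(
        f"[{i}] {r['url']}: {r['snippet']}" for i, r in enumerate(results, 1)
    )
    prompt = (
        "Answer using ONLY the numbered sources below, citing them like [1].\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    answer = llm_complete(prompt)
    sources = "\n".join(f"[{i}] {r['url']}" for i, r in enumerate(results, 1))
    return f"{answer}\n\nSources:\n{sources}"
```

The design point is that the final answer carries a numbered source list the reader can check, rather than a bare, confident-sounding assertion, which is exactly the trust question the panel is raising.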

Curtis Franklin (00:17:36):
Well, I will leap in. The issue gets down to a single definition, and that is something that those of us who deal in things like analytics engines and databases are aware of. The phrase is "source of truth," and whether you have a single source of truth for whatever results you're giving. Now, in things like business analytics, in most business databases, what you desperately want is a single source of truth. In other words, you want to have one canonical source from which all your analytics draw, from which all of your, say, decisions on the creditworthiness of a particular customer draw: a single source of truth. The problem is out in the real world. You know, if you're asking a question like, what's the best soccer team? Well, getting a single source of truth on that is likely to be difficult, because there are a lot of ways to define that.

(00:18:58):
So what we're trying to do is find systems that do what Perplexity has done in this great example that we're seeing. When you go and do something like a Google or Bing or DuckDuckGo query, what you get back is a page of results, at least one page, sometimes hundreds of pages of results. And as the consumer of information, you can go through those and decide which ones are authoritative, which ones are biased, which ones you like, which ones you don't. You get to make the decision. Especially if you ask something through a tool that is tied to a voice generator, as a tool like ChatGPT could easily be, when you hear it, you're likely to hear one answer, the top answer. And if you hear that in a human-sounding voice, the odds are pretty good you're going to accept it as authoritative. So one of the big issues that people all over the AI world are wrestling with is how do you take this ambiguity that exists in the real world and build it into something that is likely to be accepted as authoritative on its face by the vast majority of consumers? It's a tricky question, and one that involves not just programmers, but ethicists and user interface designers and a whole bunch of folks in the industry.

Louis Maresca (00:20:47):
<Affirmative>. Well, as you probably noticed on your video, you probably see another guest here. This is our guest, and we're bringing him in a little early: Dan Finnigan, CEO of Filtered. Dan, we wanted to bring you in because obviously a lot of the technology that Filtered is using is AI-based, and, you know, we're starting to see this kind of uptick. We know OpenAI's technology has created a lot of buzz around the industry, and we're starting to see it show up in a lot of different enterprise technologies. Now, in this particular case, we're talking about the merging of, you know, search results that are indexed data, potentially, you know, untrusted sources, with, you know, obviously that of GPT-3's database. What do you think of that?

Dan Finnigan (00:21:31):
I think the point just raised is exactly dead on. You know, a lot of people are, you know, brainstorming use cases for ChatGPT and wanna build a business on top of it. It's the hottest topic here in Silicon Valley. But there is some practical major point, and you just raised it, which is, who owns the data? Whose data is this? Where did it come from? And, you know, one of the obvious use cases that has been played around with, with some humor, is you don't need to call a lawyer anymore; just ask ChatGPT, it'll write a contract for you. And that's the professional community that's gonna get the most angry. There's no trademark consideration, no copyright consideration, which is just a legal version of the same issue raised: where did this come from? Who says this? You know, ChatGPT sounds extremely confident, and it seems like every answer it spits out is dead on. But the reality is we don't know where the information came from, and we don't really know what's true or not true. And I don't have any ideas on how it's gonna solve that problem.

Brian Chee (00:22:47):
Well, I think I'd like to jump in here. I'd draw an analogy to Wikipedia. There are some Wikipedia entries that, shall we say, are a little mushy. The Wikipedia people try to go and put in comments saying you need sources, you need, you know, citations and so forth. But especially when I was teaching high school or even younger college kids, they had this really bad habit of using Wikipedia as a be-all end-all source of information. And I basically said, no, no, no, no. It is contributed by the community, and we don't always know who contributed it. So in certain industries, like in the IT world, Wikipedia tends to be nice and accurate, especially when there are a lot of contributors. But I actually found some entries in the plumbing industry that were wildly inaccurate. So I think we're gonna start seeing something really similar in the world of AI. Where is the data coming from? How well does it do? Actually, when the hosts were considering, you know, how we were going to pose this conversation, Lou brought up something about his kids, and I'm gonna throw it at Lou, because that was a heck of a nice way of saying, you know, kids, do your homework. Lou, what did you do to your kids?

Louis Maresca (00:24:23):
The interesting thing is, you know, obviously we were talking a little bit about the voice assistants that are out there today. A lot of people use them, you know, in their daily lives to throw questions at them, right? You know, who invented the light bulb, or whatnot, right? And a lot of times they throw questions at these devices that are complex, things whose answers combine not only historical data but a little bit of assumption. And so, you know, they'll throw some questions at it like, you know, what is the brightest star in the universe? Well, there is some data around that, but, you know, obviously we don't know exactly what the brightest star in the universe is, because we haven't cataloged all the stars.

(00:25:06):
And I think the interesting thing is what Alexa comes back with. Sometimes it comes back with, hey, you know, this is from another reader or another user on the internet, and we've been able to provide this. They've added these little vocal tics, I guess you could say, to their answers, to basically say, hey, this is not necessarily correct, but this is what other people have said, or maybe what other people have given a thumbs up on. And I think it's helpful, because my kids have sometimes said, wow, that doesn't seem right to us, or, you know, that doesn't seem like the right answer; maybe we should go look it up and research some more. And I think it's actually really helpful. I would go back to, you know, what everyone's saying about GPT-3. I've gone as far as to actually ask it, are these websites, you know, trustworthy? Is this information good? And it said yes, and I've gone and looked at the site, and it's not useful; it's not a hundred percent accurate. In fact, I've seen things where it's gotten, you know, actual names of things completely wrong.

Dan Finnigan (00:26:12):
There's just no accountability, right? When you go on Google, you can see, well, where'd this website come from, and you develop the skill to judge yourself. If you read an academic paper, I mean, sometimes it's painful, right? Every fourth word there's a little number, because there's a citation to some reference to another academic paper. You can't just say things in an academic paper without saying where you got that assertion from. And I'm on the board of academia.edu, and we enable academics to publish papers. Citations are the most important feature of the platform. ChatGPT is the exact opposite of that. It's an exact summary writer. It's gonna have to be untangled to be useful, but otherwise there's zero accountability.

Curtis Franklin (00:27:13):
You know, Dan raises an interesting point, and it brings up a fascinating area where we've seen ChatGPT find some of the most controversial use, and that is in academia. We know, because of some, we'll call them experiments, that have been done, that ChatGPT has generated academic abstracts for papers where the papers have been accepted. We now know that ChatGPT has, in at least one or two cases, been listed as a co-author on a paper, leading some to say that ChatGPT can't be a co-author on a paper. And right now there are teachers all over, especially in secondary education, who are worried that any homework they send home that has an essay component is going to come back with something that was generated by ChatGPT.

(00:28:26):
Now, there are ways to look at this. I think it's fascinating, and it has a whole bunch of sort of Skynet science fiction aspects about it. But there are AI engines that can analyze text to see if it was generated by an AI, and part of me says, we'll just let the machines fight it out and we'll sit back and, you know, have a pizza. But this is a legitimate issue, because it's sort of the next step. I'm old enough to remember that when I was in high school, I could not use a calculator in math classes, right? It wasn't until I got to university that I could use a calculator. Now, of course, there are calculators for elementary school kids. So we've accepted that tool as part of a legitimate academic program. I suspect that ultimately we will reach a point where tools like ChatGPT find their use as legitimate tools, but getting there, defining what the parameters of that use are, defining how to signify that a particular piece of text was generated by ChatGPT,

(00:29:57):
is gonna take some time. And as our guest pointed out, one of the big issues is that ChatGPT is absolutely crap at citations. And believe me, when you go to graduate school especially, you spend half of your first term learning the correct format for citations for whichever school you are part of, whether you're, you know, using APA style or Chicago style or some other style of citation, because citing wrong is equivalent to plagiarism, right? And at most universities, plagiarism is instant dismissal. So it's a high-stakes game if someone wants to use this. But it's gonna be fascinating to watch over the next two to five years.
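On the AI engines Curtis mentions that analyze text for machine authorship: real detectors score text with a language model (unusually low perplexity plus low "burstiness" is the commonly cited signal), which is far beyond a transcript sketch. As a toy, stdlib-only illustration of the burstiness half, variation in sentence length, which tends to be flatter in generated prose, here is one such statistic. It is illustrative of the idea only, nowhere near a usable detector.

```python
# Toy "burstiness" score: the population variance of sentence lengths
# (in words). Human prose tends to mix short and long sentences; very
# uniform lengths are one weak signal sometimes cited for machine
# generation. Illustration only -- not a real detector.
import re
from statistics import pvariance

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pvariance(lengths) if len(lengths) > 1 else 0.0

flat = "The cat sat here. The dog ran off. The bird flew away."
varied = "Stop. The old spool of fiber on the Cincinnati dock cost sixteen dollars. Why?"
assert burstiness(flat) == 0.0          # identical sentence lengths
assert burstiness(varied) > 0.0         # mixed short and long sentences
```

A production detector would combine many such signals with model-based scoring, which is exactly why the "machines fight it out" framing fits: both sides are statistical.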

Dan Finnigan (00:31:07):
Just this week someone used the calculator analogy, and I think you and I went to school around the same time, and I remembered years later thinking, thank God I didn't have a calculator when I was in kindergarten or junior high school, because I had to learn the basics. I worry, with ChatGPT and these text tools, that writing is fundamental to thinking and communicating. And I do think every teacher out there is talking about this and literally doesn't know what the implications of this are for the education of our children, and what it could get in the way of teaching, if we're not careful. I think it is fascinating. There are clearly a lot of use cases; they're gonna be highly convenient and beneficial. But I mean, this is the first time in my tech industry experience that I've seen something where, ooh, this feels unknowable right now.

Louis Maresca (00:32:22):
Well, we have a lot more to talk about when it comes to AI, and we're gonna actually bring Dan back in in just a moment. But before we do, we do have to thank another great sponsor of This Week in Enterprise Tech, and that is Worldwide Technology. I have followed Worldwide Technology for a while, and they are trailblazers in the world of technology. They're at the forefront of innovation, working with clients all over the world to transform their business. At the heart of WWT lies its Advanced Technology Center, or ATC. Now, the ATC is a research and testing lab that brings together technologies from leading OEMs. There's more than half a billion dollars of equipment in there that they've invested in their lab. And the ATC offers hundreds of on-demand and schedulable labs featuring solutions that include technologies representing the newest advances in cloud security, networking, primary and secondary storage, data analytics and AI, DevOps, and so much more.

(00:33:16):
Now, WWT's engineers and partners use the ATC to quickly spin up proofs of concept and pilots so customers can confidently select the best solutions. This helps cut evaluation time from months to weeks. And with the ATC, you can test out products and solutions before you go to market. Access technical articles, expert insights, demonstration videos, white papers, hands-on labs, and other tools that help you stay up to date with the latest technology. Not only is the ATC a physical lab space, WWT also virtualized it. That's right: members of their ATC platform can access these amazing resources anywhere in the world, 365 days a year. Now, while exploring the ATC platform, make sure to check out WWT's events and communities for more opportunities to learn about technology trends and hear the latest research and insights from their experts. Whatever your business needs, WWT can deliver scalable, tried and tested, tailored solutions.

(00:34:21):
WWT brings strategy and execution together to make a new world happen. To learn more about WWT and the ATC, and to gain access to all their free resources, visit wwt.com/twit and create a free account on their ATC platform. That's wwt.com/twit. And we thank WWT for their support of This Week in Enterprise Tech. Well, folks, it's my favorite part of the show: we actually get to bring in a guest to drop some knowledge on the TWiT riot. And we bring back on Dan Finnigan. He's CEO of Filtered. Welcome to the show, Dan. We didn't get a chance to welcome you before.

Dan Finnigan (00:35:00):
Oh, thank you very much. It's really a privilege to be here. It's fun.

Louis Maresca (00:35:04):
Thank you. All right, just before we get started talking more about technology, AI, all that stuff: our audience comes from all different levels and points in their career, and they love to hear people's origin stories. Can you maybe take the audience through your journey through tech and what brought you to Filtered?

Dan Finnigan (00:35:21):
Sure. Well, I was a lost graduate student in my twenties, you know, didn't know if I was gonna be a professor of communications. I was at the University of Pennsylvania at the Annenberg School, and I was in the graduate program, and I thought that's what I was gonna do. And then I needed to get a job, and I found a job as a fundraiser and administrator of a research center in the School of Engineering and Applied Science led by Professor Dave Farber. And he introduced me to this thing called the internet. This is in the late eighties. And I got lost down the rabbit hole of Usenet groups. In fact, my first stereo speaker set I bought from a recommendation from a Bell Labs engineer, 1988 or so, '89. And then I became a freelance reporter at night to make extra money.

(00:36:22):
I'd just been married, had a kid, was in the PhD program, and was also working at this research center. So I was pretty lost. And then I met a funder of our research center, Bell Atlantic. He's kind of my first mentor, who said, you're a reporter at night, aren't you? And I go, yeah. Well, we're gonna do an experiment with the Philadelphia Inquirer where you publish newspapers over the telephone lines into your TV set. And I was fascinated by that. They called it videotext. It would broadcast via that line that your old TV would swipe through. And then I saw, in 1994, the Mosaic browser, and I thought, oh my god, the internet's gonna become like an Apple Macintosh. And I've been pursuing the industry ever since. I mean, I led a team as an analyst at the LA Times.

(00:37:30):
We bought 3% of Netscape for 3 million bucks, and six months later I was trying to buy this thing called akebono.stanford.edu/yahoo, excuse me, and a thing called the Monster Board. Because newspapers were threatened by the emerging business models on the internet. So I've been in it ever since. I've been extremely fortunate, right place at the right time. I've had a wonderful career in technology. And I guess I always say to young people in their twenties: relax, you're gonna live a long life. Try things. Your only job right now is to figure out what you're not gonna like.

Louis Maresca (00:38:22):
<Laugh>. That's right, that's right. Now I think, you know, the interesting thing about Filtered itself, now that we go back into the company, is the fact that the hiring process is somewhat broken in the tech industry. You know, there are a lot of different corporations, and they all do it differently, some of them well, some of them badly. I've been doing this for 20 years. I can tell you that I sometimes like to keep myself fresh. I'll go interview at companies every year, just to kind of get an idea of what's out there and also keep up on interview processes. I can tell you a lot of them don't do a good job. They just don't do a good job. You know, for instance, resumes are not always a good indicator of talent. Whiteboards are not always a good gauge of whether someone is good at something or not. And I think that organizations have a lot of problems. What problems are you seeing in this space?

Dan Finnigan (00:39:18):
Well, I think recruiting of engineers is one of the most important and painful processes in any company, because every company's becoming a tech company now, really. Everyone's moving their business to the cloud and trying to digitally transform themselves. And what it means is you've got a lot of people who don't know how to recruit engineers, recruiting engineers. You know, the pain points that we hear about, that you alluded to: on the recruiting side it can be, I don't really know what I'm doing, I know how to read resumes, and the people in engineering don't invest the time that I need them to invest. The people in engineering feel like, oh my God, you don't know what you're doing, and you keep passing along candidates who aren't what I described and aren't qualified. And most importantly, the candidates themselves feel like the questions they're asked are irrelevant.

(00:40:23):
They don't have a sense of what the job would really be like if they worked there. It's too time consuming; I've got to interview with everybody. And so it is broken, and it is painful. And what appealed to me about Filtered: it was the second time I joined as the CEO of a venture-funded, very small startup. I'm the 20th employee here; at Jobvite, I was the 10th employee. I love startups. But what really appealed to me was the notion that, doesn't it just matter if you can do the job? And when I met the founders, it was after the pandemic. You know, interviewing now is all virtual. We used to only interview people within a 40 mile radius of our office. Now, oh my God, I can interview anybody anywhere. Engineers are all over the world.

(00:41:18):
But they don't know how to do it. And it's really about getting the job, having the skills for the job. There's this movement in recruiting called skills-based hiring. In a world of DEI, office culture doesn't really matter as much anymore. Can you do the job? And the founder told me, I kept submitting candidates to engineering teams that weren't being interviewed because they just didn't have the right resume. Right. And I just felt like, if they could prove that they could actually do it, cuz they like doing it and they're good at it and they do it all the time, then performance over pedigree. That was the phrase the founders coined that really appealed to me.

Louis Maresca (00:42:10):
That's cool. That's interesting. Yeah, I have a similar experience; I've seen it in organizations. In fact, I was at a conference a while back and I had met an individual. Really liked them, you know, really highly skilled at what they were doing. Great conversations, great ideas. A lot of my coworkers were with me, and we knew of an open position in another company, and we said, hey, you should apply, you should go there, because I think you'd be a great candidate. And they went and did. They got the contact information, and I'd say within two months they messaged me and said, hey, I never got a call. I never got, you know, called up. I said, send me the resume that you sent them. And so they did, and I looked at it, and it was a great resume.

(00:42:50):
It was definitely on point, even with the position they were applying for. So I emailed the hiring manager and said, you should take a look at this person, they're highly qualified. The guy got hired. And I think the interesting thing about that is, where did it go wrong? Why didn't his resume get through the system? And what they found out was the recruiter was using somewhat of a filtering technology that tried to match skills on a resume to job descriptions, and it didn't match. They just never got through the first layer of that, and they got filtered out. Is AI helping this? Is it making this better? Because I still think companies are using this type of filtering technology.

Dan Finnigan (00:43:33):
Well, that's a really good question. You're talking to someone who may have been one of the first to build a machine learning algorithm, when I was the CEO of HotJobs in 2004, five and six, that would read a marketing document called the job description and read a marketing document called the resume, and then track the data of how far down the funnel in the recruiting process people got, to recommend jobs to people when they applied and recommend candidates to recruiters when they were searching. And it starts with the fundamental raw data, which is these marketing documents and keywords in resumes. It created a phenomenon, especially in engineering, where people would just pack their resumes with the names of skills that they thought mattered. And it's easy to exaggerate. Just cuz you've used Java a couple of times doesn't mean it's the same Java skillset as someone who's been working only in Java for five years, but the software sees it as the same thing and you get recommended. So I do think that AI and machine learning algorithms in recruiting software accelerate the patterns, which may not be very accurate.
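Dan's keyword-packing point can be sketched in a toy example. This is purely illustrative: the job text, resumes, and scoring below are made up, and this is not HotJobs' or Filtered's actual system. A naive matcher that scores a resume by keyword overlap with the job description will rank a skill-stuffed resume above a deeply experienced one:

```python
# Toy illustration of naive keyword matching between resumes and a job
# description. Hypothetical data; real recruiting systems are far more complex.

JOB_DESCRIPTION = "Senior engineer: Java, Spring, SQL, AWS, microservices"

RESUMES = {
    # Five years of focused Java work, described in plain prose.
    "experienced_dev": "Led design of a Java trading platform for five years",
    # Thin experience, but the resume is packed with skill names.
    "keyword_stuffer": "Java Spring SQL AWS microservices Java Spring AWS",
}

def keyword_score(resume: str, job: str) -> int:
    """Count how many job-description keywords appear in the resume."""
    keywords = {w.strip(":,").lower() for w in job.split()}
    words = [w.strip(":,").lower() for w in resume.split()]
    # Counting every occurrence rewards repetition -- the flaw Dan describes.
    return sum(1 for w in words if w in keywords)

ranked = sorted(RESUMES,
                key=lambda name: keyword_score(RESUMES[name], JOB_DESCRIPTION),
                reverse=True)
# ranked puts "keyword_stuffer" first, even though the other
# candidate has the deeper, single-skill experience.
```

Because every repeated skill name adds to the score, padding a resume games the ranking exactly the way Dan describes, which is why the conversation moves toward demonstrated performance instead of keyword overlap.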

Louis Maresca (00:45:06):
Right. I want to get into that a little bit more, because obviously, you know, I like talking about resumes because they tend to uncover biases in processes. Let me go back to the example I gave before around this person. They actually had a master's in history, but they had worked in the industry for 15 years in technology. A great leader who used to manage large teams, I'd say 30, 40 people, up to a hundred people at some point, and was really successful there. And their resume didn't necessarily say that. It just said they were a manager, that they, you know, drove technology, were successful, here are some of the things they achieved, that kind of thing.

(00:45:57):
And then of course their background being history. You know, and the interesting thing is, the job was for a large director position that managed large teams. And I think what came back was, this person probably is not qualified, cuz not only did they have a history degree, but they just really haven't been able to manage at that scale before. And so they were again weeded out because of those types of biases. These types of AI models, are they able to say, hey, based off of this person's history and the companies they worked for, it looks like they potentially have the leadership skills, even though they don't come out and say it inside the resume, and so still allow them to apply or be part of the candidate pool? Is this something that AI can do? Can it push aside some of those biases?

Dan Finnigan (00:46:47):
Eventually, I think it can. It requires inclusion in the recommendation engine, if you will, of some element of job performance data, right? We're still really early on in that. I mean, Workday was one of the first companies to promise that it would take the feedback loop and success factors of performance data in a company and have that reinforce the recommendation of candidates. No one's done it yet. There are other startups that claim that they've done a good job of that. We'll eventually get there. One way to deal with it is, you know, human-trained algorithms are what really work best anyway. And what we tried to do at my previous company, and what I really strongly believe in at Filtered, is to leverage the expertise of the users of our software and learn from it.

(00:47:52):
So if we're gonna have expert people interview engineers in our live rooms product, well then, what could we put in front of them that we can later use as invaluable data to determine whether they like the candidate? But I think the most difficult thing that you raise is that the recruiting process is oftentimes one of risk aversion. Mm-Hmm. <affirmative> And people in HR and talent acquisition don't want to be made fun of by their engineering leaders because they passed along a candidate that wasn't qualified. So they try to really focus on only the ones they know are the types of people they would hire. So it takes leaders taking risks to expand the criteria they're willing to consider for jobs. I've talked about this for years. I mean, not just in engineering, but if you notice, nearly every job requires a college degree. That's ridiculous. And so by definition, we need to be more open-minded when we consider talented people for jobs.

Louis Maresca (00:48:56):
Right, right. Well, I do wanna bring my co-hosts back in, cuz they have a lot of experience here, and of course they have a lot of questions, I'm sure. Cheebert, you wanna go first?

Brian Chee (00:49:06):
Yeah, I'm just gonna bring up that old saw about garbage in, garbage out. You know, we all know that old saw, and I always have thought that an AI is only as good as what you teach it. And that's why I'm hoping. You know, cuz I started off as a tech and grew up, and I realized very, very early on in my career that I stink as a manager. But I'm a great team lead. And the problem is, I've dealt with hundreds of HR groups now, and not a single HR group differentiates between a team lead and a manager. It's frustrating. And I had a very interesting conversation with a director at IBM Global Services, and he says, you should be working for us. I told him, I've applied to IBM nearly 20 times. In between every single job I've had in my entire career, I've applied at IBM, and you guys blew me off. And I just got dead silence out of them. So I guess the thing I'd like to pose to you is, how are you teaching your AI? Do you have groups of people? Are you trying to identify the people that actually know how to interview, to teach your AI how to interview?

Dan Finnigan (00:50:32):
Oh, good question. We at Filtered try to simulate the job as closely as possible, whether that's front end programmer, backend programmer, full stack, data science, or DevOps. To simulate the job as it would be in the environment, as if we shipped you the laptop that you would use on the first day of the job. And then we ask you not just to exercise one little particular skill, but to build a little mini application. And the reason we do that is we think doing the job allows someone to demonstrate their skills, as opposed to just describing them. Now, to get to your point about AI: right now, as a small, fast-growing startup, I would argue we don't have enough data to be smart enough to make recommendations yet. I remember years and years ago I was on a panel with some AI startup, this must have been like seven years ago.

(00:51:34):
And he was talking about all the algorithms you could do. And I remember saying, you don't have any data, so you can't do anything yet. And it took years for Jobvite to build enough data to have actually strong, I think very good, well-trained data to rank candidates in the system. At Filtered, right now we're collecting data and we're building data models, cuz I don't want to get this wrong. If we're asking applicants to create a shopping cart that draws data from an API and displays it in a UI, that simulates the kind of front end work and backend work you would be doing. I think that's good enough. We're sharing with our customers additional signals about that candidate that you would not get if they just took a coding test.

Brian Chee (00:52:28):
Well, one more question for you. In dealing with all these various HR groups, the number one complaint was, I never have enough people to check people's references. And it occurs to me that reference checking, especially credential checking, is a service that might be perfect for an AI. Is this something the industry's actually asking for, or am I just wishing in the wind?

Dan Finnigan (00:53:02):
<Laugh> Oh, you're right. I mean, we've already used the analogy of Google. What made Google different was it was an algorithm that took into account the traffic and referrals to other websites. And so the websites that got ranked higher were the ones that the crowd on the web was starting to like. And we don't do that when we just match a resume to a job description. So references do matter. With blockchain technology, a lot of companies got funding to see if they could be the ultimate source of truth, as was stated earlier, about someone's background and work experience: verify they are who they are, they did what they did, verify their education, but then also verify certain references. We all know LinkedIn tried to do this with their comments and recommendations, and it became kind of spam, right? So there's the references that a candidate gives you, and then there's the back channels people work when they go on LinkedIn. There have been some companies that have been creative in leveraging the social web, but AI and machine learning applied to someone's reputation, I haven't seen it yet. It's certainly possible, but as you said, it's about garbage in, garbage out. And I'd be really careful about where you get the reference information.

Curtis Franklin (00:54:49):
One of the things that I am interested in is looking at it from a slightly different perspective, and that is from writing the qualifications. One of the things that I had fun with this last week, and I'm gonna be writing an article about this for Dark Reading next week: I went over to ChatGPT and asked it to write a couple of ads for me. Yeah. One for a security analyst and one for a cyber researcher. And I have to say, I've seen worse.

Dan Finnigan (00:55:27):
<Laugh>. 

Curtis Franklin (00:55:29):
And you know, one of my questions, and I think it'll be interesting, I don't know. I think you could look at it as either ChatGPT is very good at doing this, or, as an industry, we're using an awful lot of the same verbiage in all our ads. You know, is there any advantage to using verbiage in an ad that is tailored specifically for your instance? Or, when it comes to something like a researcher or an analyst or any technical position, is one set of words pretty much as good as another for a given level of expertise in a company?

Dan Finnigan (00:56:21):
Wow, this is a huge topic, and in the recruiting world you could have a whole panel just on this <laugh>. And this actually covers the garbage in, garbage out concept better than anything. There's been a lot of interesting work around job descriptions lately. They used to be tiny little things, because you had to advertise them in expensive media. Then they became very wordy things on a job board. And then, when they were optimized for job boards, like search engine optimization, they became really wordy documents. By the way, one of the biggest topics around ChatGPT and recruiting is that companies are already starting to use ChatGPT to write job descriptions. And you asked, you know, is that good or bad? I don't know. One thing that's been the most fascinating application of AI and machine learning to recruiting was the discovery that certain words in a job description attract women.

(00:57:26):
And certain words repel women; certain words attract people of color, and certain words repel people of color. And if you think about it, it's what you said: we are just repeating the same habit over and over. People typically just cut and paste a job description. They go online, they Google "job description for" whatever the role is, and cut and paste it. They're moving fast, they don't have enough time, and not enough thought is put into job descriptions. And if you ask candidates, and there have been a lot of surveys on this, from Glassdoor to Indeed, candidates don't think job descriptions mean anything. And most often, job dissatisfaction within a year of being hired is traced back to the original job description. Like, it's not the job they described.

Curtis Franklin (00:58:22):
It's

Dan Finnigan (00:58:23):
Interesting. And I think not enough thought is put in by hiring managers about what they really want. And recruiters, of course, want it to be a very broad job description so that it attracts as many candidates as possible. And if you try to get a recruiter or talent acquisition person to write a very narrow job description, they'll push back. But maybe that's what we need more of: descriptions that more literally describe the job the person would do.
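The gendered-wording research Dan mentions can be sketched as a simple word-list scan over a job description. The word lists and sample ad below are short, made-up illustrations, not the actual research lexicons or any vendor's tool:

```python
# Toy scan of a job description for gender-coded wording, in the spirit of
# the research Dan mentions. The word lists here are short illustrative
# samples, not a complete or authoritative lexicon.

MASCULINE_CODED = {"aggressive", "dominant", "competitive", "rockstar", "ninja"}
FEMININE_CODED = {"collaborative", "supportive", "interpersonal", "nurturing"}

def coded_words(job_description: str) -> dict:
    """Return which sample masculine- and feminine-coded words appear."""
    words = {w.strip(".,!").lower() for w in job_description.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We want an aggressive, competitive rockstar to join our collaborative team."
report = coded_words(ad)
# report flags "aggressive", "competitive", and "rockstar" as
# masculine-coded and "collaborative" as feminine-coded.
```

Even a scan this crude makes the point: the boilerplate people cut and paste carries loaded wording they never consciously chose, which is exactly why it keeps getting repeated.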

Curtis Franklin (00:58:55):
Well, my last, very brief question is this: if you are a traditional recruiter, how nervous are you about AI in the industry?

Dan Finnigan (00:59:11):
Well, think of all the kinds of white collar work identified by McKinsey a few years ago, about what work will be automated and what work won't be automated. ChatGPT has quickly, in a matter of weeks, opened our eyes that a lot of what we used to call white collar work is gonna be automated. And in the recruiting process, certain steps are gonna be automated. At Filtered, you can interview and prove and demonstrate that you can create front end programs, that you know how to take data and write an algorithm, without taking up someone's live interview time. That essentially automates the initial interviewing process. So as a recruiter, the analogy I always use is that when the internet first came out, everyone thought the realtor's job would go away. You could just go online, look at a home, and buyers and sellers could meet, and who would need realtors? Back before the internet, it seemed like half of all Californians were realtors.

(01:00:21):
There are fewer of them now, but they're more productive. And I would say for the recruiting industry, that's what's gonna happen. It's gonna force recruiters to focus on the things they can do that ChatGPT can't, which is to have a real conversation with a hiring manager about their goals and what they seek, and have a real conversation with a candidate about their hopes and dreams, and try to be the matchmaker. That usually is what attracted people to that profession. But a lot of the other work that they do, what I would call busy work in recruiting, is gonna be automated by AI and machine learning.

Louis Maresca (01:01:03):
I think so as well. Well, Dan, time flies when you're having fun. This has been a great conversation. Thank you so much for being here. Now, we're getting a little low on time, but we wanted to give you a chance to tell the folks at home a little bit more about Filtered, where they can go, where their organizations can get started, that kind of thing.

Dan Finnigan (01:01:20):
Well, hiring managers in engineering and data science and DevOps, and certainly the talent acquisition professionals and recruiting professionals who work with them and support them, can come to Filtered and get a demo and see how we can essentially automate and streamline their recruiting process. We customize job simulations, so it is literally like giving the engineer the laptop they would use on the first day on the job, in a virtual terminal form. We auto-score the results of the simulations. And this is so that the qualified candidates who can actually do the job get advanced in the process, and candidates learn what the job would really be like.

Louis Maresca (01:02:09):
<Laugh>. Well, thank you again, Dan. Well, folks, you've done it again. You sat through another hour of the best darn enterprise and IT podcast in the universe, so definitely tune your podcatcher to TWiET. I want to thank everyone who makes this show possible, especially my wonderful co-hosts. Starting with our very own Mr. Brian Chee. Chee, what's going on for you in the coming week, and where can people find you to get ahold of you?

Brian Chee (01:02:32):
Oh, all kinds of fun. But I will go and make a mention: Dan didn't say it, but his website is filtered.ai, so make sure you're going to the right website. If you go to filtered.com, you're gonna get some interesting results. <Laugh> Yeah, so, you bet. Anyway, as the producer for the show, you know, I don't wanna produce the show in a vacuum. I'd love to hear suggestions from you. One of the easiest ways right now for me is Twitter, and my Twitter handle is A D V N E T L A B, advanced net lab. This is a leftover from my days as an oceanographic researcher at the University of Hawaii. I brag about the undersea observatory a lot. I will actually post all kinds of interesting things. We will also make sure we highlight some of the threads that we're trying to do.

(01:03:37):
Obviously there are lots and lots of people interested in AI. It is a white hot topic, so obviously we're gonna try and get guests in the AI thread. We're also going to hit things like IoT, you know, zero trust, you name it. We want to hear from you. We want to hear what you want to see and what you want conversations about. So throw it at me on Twitter. You're also welcome to throw it at me on my email: I am cheebert, spelled C H E E B E R T, at twit.tv. You can also send email to twiet at twit.tv, and that'll hit all the hosts. Oh, and for the people that have been asking, Cheebert is a leftover from my days teaching at the University of Hawaii in computer science. We had a Dilbert naming convention for our servers in the lab, and of course, because we had Dilbert, I had to be Cheebert. That's the story. Want to hear from you, everybody. Stay safe.

Louis Maresca (01:04:43):
Thank you. Cheaper. Appreciate you being here. Well, we also have to thank of very own Mr. Curtis Franklin. Curtis, what's going on for you? The coming weeks? Where could people find you in all your work?

Curtis Franklin (01:04:54):
Well, as always, people can find me at Omdia. Our subscribers get first crack at most of what I do over there. I write for Dark Reading. I'm doing stuff on LinkedIn, and I am getting ready, starting to make my plans for the RSA Conference, which is coming up in just a couple of months. I've already got my plane tickets. I've got a hotel room booked at an outrageously high price. And I'm looking forward to seeing members of the TWiT Riot out there in San Francisco. In the meantime, you can find me on Twitter. I'm at KG4GWA. I'm on Mastodon, KG4GWA on sdf.org. You can also find me on LinkedIn, and I'm on Facebook. I'm practically everywhere. So find me online, reach out. I'd love to know what you think about TWiET and what we should be doing here for your benefit.

Louis Maresca (01:06:13):
Thank you, Curtis. Well, folks, we also have to thank you as well. You're the person who drops in each and every week to get your enterprise goodness. And we wanna make it easy for you to watch and listen and catch up on your enterprise and IT news. Go to our show page right now: that's twit.tv/twiet. There you'll find all the amazing back episodes, the show notes, the co-host information, our guest information, and of course the links to the stories that we do during the show. But more importantly, next to those videos you'll find those helpful subscribe and download links. Support the show by getting your audio version or your video version of choice, and listen on any one of your devices, cuz we're on all of them. And we wanna make sure that you subscribe and support the show. Plus, you may have also heard, we also have Club TWiT.

(01:06:56):
That's right. It's a members-only, ad-free podcast service with a bonus TWiT+ feed you can't get anywhere else, and it's only $7 a month. That's right. And there are a lot of great things that come with it. One of them is exclusive access to the members-only Discord server. That's right: chat with hosts and producers, have separate discussion channels, lots of fun stuff on there, a lot of great channels to talk about. Plus they also have some special events that are exclusive to members. So definitely check that out. Join Club TWiT, be part of that: go to twit.tv/clubtwit. Now, you may have also heard, Club TWiT offers corporate group plans as well. It's a great way to give your team access to our ad-free tech podcasts. The plans start with five members at a discounted rate of $6 each per month.

(01:07:45):
And you can add as many seats as you like to that. This is really a great way for your IT departments, your developers, your tech teams, any one of your teams to really stay up to date with access to all of our podcasts. And just like regular memberships, you can join the TWiT Discord server and get that TWiT+ bonus feed as well. So definitely join Club TWiT at twit.tv/clubtwit. Now, you may have also heard us talk about it: it's TWiT audience survey time. That's right, it's the annual survey. It really helps us understand our audience, you, so we can help make your listening experience even better. It only takes a couple minutes, and it's about to end January 31st. So go right now to twit.tv/survey23 to take it. Don't wait; the last day is January 31st.

(01:08:30):
It's coming up, so please get your voice in. We really wanna hear from you. Check that out right now: twit.tv/survey23. Now, after you subscribe, impress your friends, your family members, your coworkers with the gift of TWiET, cuz we talk a lot about fun tech topics on this show, and we guarantee they will find it fun and interesting as well. So definitely share the gift of TWiET. If you wanna watch this show live, we do this show live at live.twit.tv at 1:30 PM Pacific time on Fridays. Come see how the pizza's made, all the behind the scenes, all the banter that we do during the show and after the show. Check that out at live.twit.tv. And if you watch the show live, jump in our IRC channel as well: irc.twit.tv. We have some amazing people in there.

(01:09:11):
They always give us some great ideas. They give us some good topics, some good questions, and of course some good show titles as well. So definitely check out all the great people in there at irc.twit.tv. If you wanna get ahold of me, you can always hit me up on Twitter at @LouMM. Also, LinkedIn's a great place as well; I talk to a lot of people on there. In fact, I have a backlog that I have to go through, and I will go through it right after the show, because I wanna make sure I get back to everybody on there. So definitely hit me up at LouMM on Twitter, or on LinkedIn, Louis Maresca on there. Direct message me with show ideas, topics, whatever. I even talked to somebody recently about getting started in programming in tech.

(01:09:51):
So definitely hit me up on there, anytime, anywhere. Appreciate it. If you wanna know what I do during my normal work week at Microsoft, please go to developers.microsoft.com/office. There we post all the amazing and great ways for you to customize the Office experience, to make it more productive for you, to work with the workflows that you have, to automate things. Check that out. Definitely check out our newest and latest and greatest thing, Office Scripts, which allows you to write cross-platform macros that run anywhere in the cloud. You can even schedule them and run them through Power Automate. So definitely check that out at developers.microsoft.com/office. I wanna thank everyone who makes this show possible, especially Leo and Lisa. They continue to support This Week in Enterprise Tech each and every week, and we couldn't do the show without them, so thank you for all their support over the years.

(01:10:41):
Of course, I wanna thank Mr. Brian Chee one more time. He's not only our co-host, but he's also our tireless producer, and he does all the show bookings and the planning for the show. So thank you, Cheebert, for all your support over the years. And of course I wanna thank all the engineers and staff at TWiT. And before we sign out, I want to thank our editor today, Mr. Anthony. He makes us look good after the fact; he takes out all of my mistakes, so I appreciate that, Anthony. Of course, I also want to thank our technical director, our TD for today, Mr. Ant Pruitt. He makes it look easy, but it is not easy to make us look good, so thank you, Ant, for all your support. And of course, he does an amazing show here on TWiT called Hands-On Photography, where you get to learn about different techniques and different things within the world of photography every week. What's going on this week on the show, Ant?

Ant Pruitt  (01:11:29):
Thank you, Mr. Lou. This week I get to play around with a couple of toys, a drone in particular. People have asked about drones, and I share my thoughts and, you know, stuff like getting Part 107 certified, and why that matters, and why a lot of folks probably ain't gonna do it. So check it out: twit.tv/hop for Hands-On Photography.

Louis Maresca (01:11:55):
You wanna see some serious drone footage? Check out the new video around the Mission: Impossible movie that's coming out soon, with the cliffhanging

Ant Pruitt  (01:12:02):
Drone footage. Oh, yes, please. Pretty amazing. Yes, please,

Louis Maresca (01:12:06):
<Laugh>. Thanks, Ant. Well, until next time, I'm Louis Maresca, just reminding you: if you wanna know what's going on in the enterprise, just keep quiet.

Mikah Sargent (01:12:16):
If you are looking for a midweek update on the week's tech news, I gotta tell you, you gotta check out Tech News Weekly. See, it's all kind of built in there with the title: you get to learn about the news in tech that matters. Every Thursday, Jason Howell and I talk to the people making and breaking the tech news, get their insights and their interesting stories. It's a great show to check out: twit.tv/tnw.
