Transcripts

This Week in Enterprise Tech 532 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Louis Maresca (00:00:00):
On This Week in Enterprise Tech, we have Mr. Brian Chee and Mr. Curtis Franklin back on the show today. Now, have you ever tried to build your own authentication system from scratch? The question is why. We'll definitely talk about that. Plus, performance on your website is definitely hard. Today we have Tim Kadlec, Director of Engineering and fellow at Catchpoint. We're gonna talk about application performance monitoring, plus experimenting for your application in the future. Lots of important stuff to talk about, so you definitely shouldn't miss it. Quiet on the set.

Announcer (00:00:32):
Podcasts you love from people you trust. This is TWiT

Louis Maresca (00:00:46):
This Week in Enterprise Tech episode 532, recorded February 24th, 2023: Render Time Is Money. This episode of This Week in Enterprise Tech is brought to you by Melissa. Over 10,000 clients worldwide in industries like retail, education, healthcare, insurance, finance, and government rely on Melissa for full-spectrum data quality and ID verification software. Make sure your customer contact data is up to date. Get started with 1,000 records cleaned for free at melissa.com/twit. And by Miro. Miro is your team's visual platform to connect, collaborate, and create together. Tap into a way to map processes, systems, and plans with the whole team. Get your first three boards for free to start creating your best work yet at miro.com/podcast. And by Worldwide Technology. With an innovative culture, thousands of IT engineers, application developers, and unmatched labs and integration centers for testing and deploying technology at scale, WWT helps customers bridge the gap between strategy and execution. To learn more about WWT, visit wwt.com/twit.

(00:02:08):
Welcome to TWiT This Week in Enterprise Tech, the show that is dedicated to you, the enterprise professional, the IT pro, and that geek who just wants to know how this world's connected. I'm Louis Maresca, your host, your guide through this big world of the enterprise. And I definitely can't guide you by myself. I need to bring in the professionals, the experts in their field. He is our very own senior analyst at Omdia, the man who eats and sleeps the enterprise and enterprise security, and he's our very own Mr. Curtis Franklin. Curtis, it's great to see you, my friend. How are you this week?

Curtis Franklin (00:02:36):
I'm doing very well. It's been a good week. I've had a number of good research calls with companies, looking at how different companies measure and understand risk, which is something I'm gonna be diving into more and more this year. You know, risk is one of those things that we all talk about, but what's difficult is finding a common language to talk about risk. And so that's one of the big things I'm looking at for the coming year. And it's exciting. It's exciting trying to find a way that all of us can talk about something together.

Louis Maresca (00:03:13):
<Laugh> It's always good to come together. Thank you, Curtis. Great to see you. Well, speaking of coming together, we have to bring in Mr. Brian Chee. He's our resident network expert. He's also the network architect at SkyFiber, and he's an all-around tech geek. Cheebert, you're a busy man. You're gearing up for something important coming up soon, right?

Brian Chee (00:03:31):
Yes, the Central Florida Fair starts this Thursday and I'm looking forward to that. It's also going to be my very first board meeting. I get officially installed as a board member, so that's gonna be fun. And I'm actually really looking forward to really good carnival junk food. Elephant ears, you know, that kind of thing. It's been a really, really long time since I had any of that, and I'm kinda looking forward to it. I think what I'm gonna do is eat really, really healthy up till Thursday, and then I'm gonna have fair food. Oughta be fun.

Louis Maresca (00:04:11):
Sounds good to me. Sounds good to me. Well, it's great having you guys here. Thanks for being here. Well, we should get started, cuz there's lots to talk about for sure. Now, have you thought about building your own authentication system from scratch? The question you should be asking yourself is why. We'll definitely talk about that. Plus, your company wants to deliver great websites, right? Well, you need to iterate over time and improve the performance and reliability. Today we have Tim Kadlec, Director of Engineering and fellow at Catchpoint. We're gonna talk about application performance monitoring, plus maybe even some experimentation as well in the future. So lots to talk about here, lots to discuss, so stick around. But before we do, let's go ahead and jump into this week's news blips. Now, I wanna start the blips this week by talking about some startups, some small businesses, and just how they're pivoting around the AI wave that's coming.

(00:04:58):
It's interesting to see the direction of the industry as it unfolds. Now, this TechCrunch article talks about a company called Voicemod, and they have been in the field of digital signal processing for over a decade. And if you wanna talk about pivots, they actually originally focused on creating fun, really funny sound emoji effects and reactions for gamers to augment their voice chats. Now, Voicemod is going to stay in the gaming industry; however, they want to go all in on AI and generative, synthesized voices and sounds. And in their case, they're actually starting with real-time voice generation: tools to entirely create synthesized or unreal voices in real time to overlay on actual people's voices. Now, this is avatars plus voice changers, if you think about it. Now, we have seen corporations use text to speech, right?

(00:05:47):
We see a lot of those large models happening right now, and they're actually able to generate in a more natural way. But this is actually speech-to-speech generation. Now, think of this almost as for people singing, but completely altering their sound in real time. It almost feels like deepfake technology, but in real time, and it brings up a bunch of ethical questions for me. Now, Voicemod acquired an audio effects startup last year called Voctro Labs, which has helped them get closer to and allowed them to actually develop text-to-speech and text-to-song capabilities, but they're actually able to clone actual musicians. Sounds pretty creepy. Now, there's a great tech talk on that as well, so definitely check it out. The technology seems like it has a lot of applications, but they're gonna start targeting enterprise applications as well: think being able to augment customer service and sales representatives using generative capabilities to provide a more familiar experience to customers for calls and other voice interactions. Now, using data around the customers, they can provide an experience that may be backed by a human, but what they hear is something more familiar, more localized. What do you think of that? Ethical? Maybe even a bit scary.

Curtis Franklin (00:06:54):
Well, if you're like many people using Apple systems, you've got something of a smug thing going on when it comes to device security. Rising user counts mean rising risk, though, and some recent research highlighted in a Dark Reading article shows that at least a bit of that smugness might be completely unjustified. A new class of bugs that could, quote, allow bypassing code signing to execute arbitrary code in the context of several platform applications, end quote, in Apple's iOS, iPadOS and macOS has been uncovered. Researchers say these new vulnerabilities could allow an attacker to escalate privileges and make off with absolutely everything on a targeted device. Trellix researcher Austin Emmitt says that attackers could potentially gain access to a victim's photos, messages, call history, location data, and every other kind of sensitive data, to the point of completely wiping data from a device and denying the legitimate owners access to the machine.

(00:07:58):
Now, the vulnerabilities in this class range from medium to high severity, with CVSS ratings between 5.1 and 7.1. For those of you who don't live inside CVSS, those are moderately high but not absolutely catastrophic. Apple has grouped them into two CVEs, and there's no indication that they've been exploited in the wild. The failure in this case arises from NSPredicate. That's a class that enables app developers to filter lists of objects on a device. The problem is that the syntax of NSPredicate is a full scripting language. In other words, through NSPredicate, the ability to dynamically generate and run code on iOS has been an official feature forever. That's according to the researcher Emmitt. In proof-of-concept attacks, Trellix found that an attacker could use NSPredicate to execute code in root-level processes that allow entry into parts of the Mac like, oh, the calendar, address book and photos, and into every piece of hardware, storage and software on iOS and iPadOS devices.

(00:09:13):
Now, the good news is that the attacker must already have access to a victim's machine to run these exploits, and there are known ways to harden a system against giving that kind of access. More good news comes from the fact that Apple has already remediated the vulnerabilities. A hint to you that you should be accepting all those "update your system" messages that have been flying around the past few days. The bad news is that these vulnerabilities point out that Apple hardware is not uniquely hardened against vulnerabilities. The perception of greater security comes from the combination of hardware, software, and application provisioning infrastructure, along with user habits, that forms the greater Apple ecosystem. Break or even badly bend any one of those, and attackers could find themselves inside the walled garden.

Brian Chee (00:10:17):
I love this article. Its title: "The Big Reuse: 25 Megawatt-Hours of Ex-Car Batteries Go On the Grid in California." And this story comes to us from Ars Technica. Well, last week a company called B2U Storage Solutions announced that it had started operations at a 25 megawatt-hour battery facility in California. On its own, that really isn't news, as California is adding a lot of battery power, but in this case the source of the batteries was a little unusual: many of them had spent an earlier life powering electric vehicles. The idea of repurposing electric vehicle batteries has been around a while. To work in a car, the batteries need to be able to meet certain standards in terms of capacity and rate of discharge, but that performance declines with use. Even after a battery no longer meets the needs of a car, however, it can still store enough energy to be useful on the electric grid.

(00:11:19):
So it was suggested that grid storage might be an intermediate destination between vehicles and recycling. Well, considering that electric vehicle news items where customers are hit with massive costs to replace a battery pack are becoming all too frequent, it is my hope that reuse of electric vehicle battery packs might provide some sort of trade-in value, so that the overall cost of battery replacement stops being close to the cost of a new vehicle. Another pain point is that battery recycling, or the lack thereof, quickly negates much of the green value of a typical electric vehicle. Now, keep one thing in mind: the US Forest Service has been recycling Nissan Leaf batteries for quite a while to power remote ranger stations. I imagine that I should someday be able to recover some of the value of a vehicle battery pack, either by it being remanufactured into municipal battery storage, or maybe even being reconfigured into a residential battery system.

Louis Maresca (00:12:31):
NVIDIA has been looking for ways to step up its game when it comes to AI and machine learning. However, large language models like those behind ChatGPT require a lot of horsepower: farms of servers and lots of compute power. Well, NVIDIA is looking to reduce the time and the number of devices required to do the compute on this data. Now, according to this CNBC article, NVIDIA thinks that they have what they need in their new platform coming soon, and they currently power several AI applications with their $10,000 chip, the A100, as they call it. It's able to perform many simple calculations, which is required for training and using neural network models at scale. Now, hundreds of GPUs are required to train artificial intelligence models like large language models. Now, the chips need to be powerful enough to crunch terabytes of data quickly to recognize patterns.

(00:13:19):
Now, after that, GPUs like the A100 are also needed for inference, or using the model to generate text, make predictions, or even identify objects inside of photos. No matter its capabilities, the horsepower required to power new AI technologies means companies require access to a large number of these A100s. In fact, Stability AI, the company that helped develop Stable Diffusion, an image generator, uses around 5,400 A100 GPUs to power its technology. Now, let's put this in perspective for a second. There's some data from research firms that says the OpenAI ChatGPT-based model inside Bing search could require eight GPUs to deliver a response to a question in less than one second. Now, if we extrapolate some data from that, that would mean they would need over 20,000 eight-GPU servers just to deploy the model in Bing to everyone, suggesting that it could actually cost $4 billion in infrastructure spending.
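As a quick sanity check, the figures quoted above hang together as rough back-of-the-envelope math. This small sketch just restates the article's estimates; the server count and dollar figure are the report's estimates, not measurements:

const servers = 20_000;              // estimated eight-GPU servers to serve Bing-scale traffic
const gpusPerServer = 8;
const totalSpendUsd = 4_000_000_000; // the roughly $4 billion infrastructure estimate

// Implied cost per server and total GPU count, derived from those same estimates
console.log(`~$${(totalSpendUsd / servers).toLocaleString()} per server`);   // ~$200,000
console.log(`${(servers * gpusPerServer).toLocaleString()} GPUs in total`);  // 160,000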

(00:14:17):
That's why NVIDIA debuted their H100 platform chip last year, which focuses on optimization for transformers used in large language models, an increasingly important technique that many of the latest and top AI applications use. Now, this new platform can train models many times faster than the previous generation, which is quite fast. That means corporations will need fewer chips to produce even more output in the quest for real-time processing at scale, even though it's just getting started. Personally, I think that ChatGPT and AI investments are pushing the industry in the needed direction to start making some serious progress in the world of AI, and chips like the H100 will definitely power it. Well, folks, that does it for the blips. Next up, the bites. But before we get to the bites, we do have to thank a really great sponsor of This Week in Enterprise Tech.

(00:15:08):
And that's Melissa. Melissa, a leading provider of global data quality, identity verification and address management solutions, today announced its successful partnership with TomTom, a global pioneer in satellite navigation for consumer use. By layering TomTom's comprehensive global address data, location data and country data on top of its industry-leading data quality, enrichment and identity verification solutions, Melissa has earned its reputation as the address expert. Multi-language and multi-format support is key to the postal address, as not every country is alike. Now, TomTom ingests country data into one global standardized data set while supporting multiple languages. TomTom handles the various addressing nuances country by country, making the Melissa solution stack efficient and seamless. This capability has enabled Melissa to increase its global support across 240 countries and territories. Greg Brown, VP of Global Marketing at Melissa, said: by adding TomTom's rooftop precision capability to our solutions, Melissa's customers can rest assured that their data is not only clean and verified, but also pinpoint accurate and high value to business operations.

(00:16:26):
This ensures that restaurants are found, packages are delivered to the right place, on-demand drivers find their passengers, and so much more. Poor data quality can cost organizations an average of $15 million a year. That's a lot of money. Now, the longer it stays in your system, the more losses your business could accumulate. By leveraging TomTom maps, points of interest, seven-digit postal codes, address points and routing APIs, Melissa provides a best-of-breed address engine critical to business needs worldwide. Since 1985, Melissa has specialized in global intelligence solutions to help organizations unlock accurate data for a more compelling customer view. Melissa continually undergoes independent security audits to reinforce its commitment to data security, privacy and compliance requirements. They are SOC 2, HIPAA and GDPR compliant, so you know your data is in the best hands. Make sure your customer contact data is up to date. Get started today with 1,000 records cleaned for free at melissa.com/twit. That's melissa.com/twit. And we thank Melissa for their support of This Week in Enterprise Tech. Well, folks, it's time for the bites. Now, authentication and user management: it's more than just storing usernames and passwords in a database. Am I right, Curtis?

Curtis Franklin (00:17:50):
<Laugh> You're absolutely right, Lou. And you know, at first glance, you might think that authentication would be pretty simple. Now, it is the bedrock of application and system security, making sure that the people who are trying to log into your system are who they say they are. And as you say, some people say, well, you've got a database of your users with all their credentials, you make sure that what they type in matches the credentials, boom, you've got authentication. As you point out, it's not that simple. You've got to have a secure database. You have to have multifactor authentication to make sure that they are who they say they are. And you have to have ways to manage these accounts. One of the key pieces of security is making sure that when someone leaves the employ of your company, you take away their user credentials.

(00:18:48):
If they change jobs, you have to make sure that their new privileges match the new job, and, oh, by the way, that you take away the privileges from the old job. There are all kinds of things that go into this. And for most organizations, it is not what the people in the C-suite like to call a core competency. You've got people who write applications, you've got people who maintain applications. You probably don't employ a lot of people who specialize in authentication, unless you are a company like Descope. Now, Descope is a new company that was featured in a recent Dark Reading article, and their whole premise is pretty simple: that developers shouldn't be spending time and resources on authentication and user management if it's not a core part of the service they're building. Now, they say that with their authentication and user management platform, developers can simply go in, create authentication flows, add multifactor, manage access privileges, all of these without being security experts.

(00:20:13):
Guru Chahal, a partner at Lightspeed Venture Partners, one of the venture companies that gave them money, said authentication is too important to be done incorrectly, but it's too complicated and time-consuming to be done in-house. That's the basic conundrum, and it's one that Descope says they can solve. Now, identity attacks are on the rise, and the Descope platform supports different kinds of developers: no-code, low-code, those who like SDKs, those who like APIs. And the nice thing is that if you've got up to 7,500 monthly active users for business-to-consumer, or up to 50 tenants for business-to-business applications, it's free. Now, I mentioned that venture capitalist who gave a quote. Descope has raised $53 million in seed funding. For those who don't live in that world, that's a pretty hefty chunk of seed money. And it's got a founding team that's worked together for years, something that venture capitalists really like to see. So Brian, I'm gonna come to you first. You know, we've got all these different ways of building applications. Why shouldn't we expect the development teams just to roll their own authentication programs? Isn't this sort of basic blocking and tackling for developers?

Brian Chee (00:22:02):
You bet. And I have seen many, many teams say, I can do better. No-code, low-code is for lazy programmers, I don't want to do that. Well, here's the problem. What happens if your industry says all of a sudden that passwords and, say, fingerprint reading aren't good enough? We have to add in pulse detection behind the fingerprint, or we have to make sure there's actually a human heartbeat behind that retina scanner. How do you change that? New technology and changing technology on how you authenticate is going to happen. You know, as we start moving towards zero trust, something's gonna change. We're gonna start saying, oh, we have to have dongles, or we have to have this or that or the other widget. And large changes, especially when you're changing the base technology, mean that if you rolled your own, you have to drag back your DevOps team, brush off all the notes, try to figure out what they did, you know, what are those variable names, you know, that's a big one, and then rewrite it. In the meantime, how many hundreds of thousands or millions of dollars in lost revenue have rolled in your door? I like this in that now you can start adding more building blocks. I was a big, big supporter of the concept of building-block programming a long,

Curtis Franklin (00:23:45):
Long,

Brian Chee (00:23:46):
Long time ago. You know, back in the FORTRAN days. I liked bringing in pieces that have been tested, have been stressed, have been, you know, all kinds of this, that and the other thing, and that other people, lots of other people, have tested and made sure, oh, it works. So I like the concept of having authentication done by a company that spent the money, spent the effort, did the testing, talked to people about what they really need, and being able to use dramatically more complex and more robust authentication than maybe my DevOps team has time and money to create themselves. You know, and for those of you out there, anything is better than .htaccess. Come on, people <laugh>, you know, we need to do better. We need to move towards zero trust, and we're not gonna do that if we're trying to reinvent the wheel time and time again.

Curtis Franklin (00:24:51):
You know, that's a good point. And Lou, I want to turn to you because, you know, the quote that we gave earlier talked about core competency being part of what an application developer does. From your experience, is designing an authentication system something that is right in the wheelhouse of most application development teams? Or is it something that most teams leave to someone else who specializes in authentication to provide, even if it's someone else in-house?

Louis Maresca (00:25:37):
That's a good question. I would say it's definitely not in the wheelhouse of a normal application developer, whether a microservice developer, backend, or even a front-end developer. I would say there's a good chance they're not experts, and they're gonna need to have some specialized people in the organization. I can tell you, for instance, I design and architect a lot of microservices, and to be able to hand off authentication tokens between those services so that they can acquire the right data in the context of the right user, that's very complicated stuff. And so being able to handle that, and being helped and supported by other organizations or by other products, is definitely very useful. I did want to comment directly on this company Descope. You know, they're in a very, I would say, highly competitive market, right?

(00:26:25):
We have established companies like Okta, backed by Auth0. There's, you know, Stytch, Transmit Security. You know, I think they're differentiating themselves by offering this workflow system, or their screen editor, that allows that kind of drag-and-drop, non-code-based, low-code, no-code solution, so that users don't have to write apps or code to be able to support these scenarios. You know, that would be super useful for organizations who don't really understand this space. But the problem is, you know, companies like Okta have lots of resources and they've already been in this space for a long time. I don't see why they can't go build something like this themselves that just overlays on top of what they're already offering. In fact, they have a lot of low-code and no-code solutions already, like Okta Workflows and stuff like that. So it's a very interesting startup, I just think they're entering, you know, shark-infested waters, I guess you could say, and I'm curious to see if they will survive. But it's a great solution. It's a great opportunity to help customers get into the authentication space more easily.
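To make Lou's earlier point about handing off authentication tokens between microservices a little more concrete, here is a minimal sketch of the kind of token validation each service ends up needing, using the open-source jose library. The issuer URL, audience, and function name are placeholders for illustration, not any particular vendor's API:

// Validate a caller's bearer token against the identity provider's published keys
// instead of implementing credential checks in every service.
import { createRemoteJWKSet, jwtVerify } from "jose";

const JWKS = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json") // hypothetical issuer
);

export async function requireUser(authorizationHeader: string | undefined) {
  if (!authorizationHeader?.startsWith("Bearer ")) {
    throw new Error("missing bearer token");
  }
  const token = authorizationHeader.slice("Bearer ".length);

  // Signature, expiry, issuer, and audience are all checked by the library
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: "https://auth.example.com/",
    audience: "orders-service", // placeholder audience
  });
  return payload; // contains the user and tenant claims downstream code relies on
}

Even this "simple" path involves key rotation, issuer and audience checks, and expiry handling, which is exactly the kind of work teams tend to underestimate when rolling their own.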

Brian Chee (00:27:32):
I got something. You know, I really and truly think this solution is great for the small and medium businesses that are still running large apps. You know, not everybody can afford Salesforce. Not everybody can afford Microsoft CRM. Those have well-developed authentication systems, but if you have to build something yourself, that's when something like this really comes to bear. And I think it's really going to be useful for the IoT teams, because those are gonna be inherently small and inherently semi-custom. So not having to build an authentication system means I can much more quickly meet changing guidelines without having to test, test and test some more.

Curtis Franklin (00:28:25):
You know, that's an interesting point, that companies have to test. Companies have to be sure that something like this is going to work. And to Lou's point, I have watched a lot of companies over the years, and one of the interesting things to ask startups is what's their exit strategy? Because it's gonna be one of two things: going public or being purchased. And I have not spoken to this company, I don't know what theirs is. But given those sharks in the water, some of them very large and well established, I wouldn't be surprised if being acquired isn't high on their list of the ways this comes along and finishes out. Well, that's it for authentication. And I suspect that we've got a really good guest waiting in the wings. Lou, why don't you take us in that direction?

Louis Maresca (00:29:31):
Thank you, Curtis. Yes, we definitely do. Next up, our guest who drops the knowledge on the TWiT riot. But before we do, we do have to thank another great sponsor of This Week in Enterprise Tech, and that's Miro. Now, I have a quick question for you. Are you and your team still going from tab to tab, tool to tool, losing brilliant ideas and important information along the way? It happens daily, I can tell you that. With Miro, that doesn't need to happen anymore. That's right, Miro is a collaborative visual whiteboard that brings all of your great work together, no matter where you are, whether you're working from home or in a hybrid workplace. Everything comes together in one place online. At first glance, it might seem like just a simple digital whiteboard, right? But Miro's capabilities run way beyond that. It's a visual collaboration tool packed with features for the whole team to build on each other's ideas and create something innovative from anywhere.

(00:30:21):
Now, shorten time to launch so your customers get what they need faster. With Miro, you need only one tool to see your vision come to life, whether it's planning, researching, brainstorming, designing or feedback cycles; they can all live on a Miro board across teams. And faster input means faster outcomes. In fact, Miro users report the tool increasing project delivery speed by 29%. That's huge. You can view and share the big picture overview in a cinch. When everyone has a voice and everyone can tap into a single source of truth, your team remains engaged, invested, and most importantly, happy. Cut out any confusion on who needs to do what by mapping out processes, roles, and timelines. You can do that with several templates, including Miro's swimlane diagram. Strategic planning becomes easier when it's visual and accessible. Tap into a way to map processes, systems, and plans with the whole team, so they not only have one view, but have a chance to give feedback as well.

(00:31:21):
If you're feeling meeting fatigue, I know I am, Miro users report saving up to 80 hours, 80 hours per user per year, just from streamlining conversations. Now, ready to be part of more than a million users who join Miro every month? Get your first three boards for free to start working better together at miro.com/podcast. That's M-I-R-O dot com slash podcast. And we thank Miro for their support of This Week in Enterprise Tech. Well, folks, it's my favorite part of the show. We actually get to bring in a guest to drop some knowledge on the TWiT riot. Today we have Tim Kadlec. He's Director of Engineering and fellow at Catchpoint. Welcome to the show, Tim.

Tim Kadlec (00:32:04):
Thanks Lou. Happy to be here.

Louis Maresca (00:32:07):
Now, our audience comes from all different chapters in their careers, and they love to hear people's origin stories. Can you maybe take us through your journey through tech and what brought you to Catchpoint?

Tim Kadlec (00:32:17):
Sure, yeah. Boy. All right. So I guess, you know, I'd always kind of dabbled with it. Mostly I was a geeky kid <laugh> and decided, you know, I wanted to have a website so I could write about basketball history, actually, when I was in junior high. And that's how I taught myself HTML: bought a magazine at a store that was like, teach yourself HTML, read through it and whipped that up. But I never really thought about doing anything with it professionally, and I didn't touch it for a while after that, until I was working at RadioShack, actually quite enjoying that. And then my wife saw a posting for an agency that was looking for an entry-level web developer and suggested I look at that instead. As it turns out, from a longevity perspective, that was the right call <laugh>.

(00:33:04):
So, yeah, you know, early on it was a couple of small agencies. I learned a lot there; it was a really rapid pace. Ended up writing a book, which kind of worked as a catalyst for getting to do a little bit more consulting with some organizations. Got really into performance probably a little over a decade ago. And yeah, as I did that, that kind of led to, you know, me writing more and more, getting to meet more people. You know, I got to work at Akamai for a bit, Snyk in the early days, I think when there were maybe, you know, eight, ten people there when I came on. And then in between all that, doing a lot of performance consulting, so working with a lot of enterprises actually to fix front-end performance issues, you know, get tools up and running, get the processes honestly in place too, to make sure that things were tested and checked. And then, yeah, I had the opportunity to come to Catchpoint and work on WebPageTest, which I'd been using for over a decade. So that was a very easy sell.

Louis Maresca (00:34:06):
Fantastic. That's a great journey. I think my first book was HTML for Dummies, and I put it down right away and moved on to something else for years. So you definitely have a lot more experience than me with that. But I do wanna jump into performance right away, cause I think this is an important thing. You know, I run a lot of sites on my side, and I can tell you one thing: building your own monitoring system, you know, obviously you can use the performance counters in the browser and you can build your own thing that acquires the data and then sends it somewhere, and then you have to do your own analytics, and it becomes kind of complex, and you're not always getting the right data, you're not always getting the right points in your site. What are you seeing organizations doing here? Are they rolling their own solutions, like I've tried to do in my past?

Tim Kadlec (00:34:50):
<Laugh> Well, you know, you don't see that many do it for long, let's put it that way. I think there are very few organizations that are rolling their own monitoring and then sticking with it. It's actually fairly common, you know, as a first step. Somebody decides, you know what, oh, we can probably do this on our own. We're gonna throw a little bit of JavaScript on the page to collect some metrics and shove it into Google Analytics or somewhere else where they can surface it in Grafana. You know, maybe they're gonna run Lighthouse for some front-end performance checks and, again, shove that into a Grafana instance. But they usually don't stick with it, because of all the things you just talked about. Like, it's, you know, making sure that you're getting accurate numbers and real-world metrics from those results. There's a lot of work that has to happen behind the scenes for that to happen.

(00:35:38):
You know, if you are trying to monitor real user stuff by actually injecting code on your site, it's really easy to cause a lot of performance headaches that way. You know, so there's getting that right. And then honestly, I think one of the biggest things that companies struggle with, even with out-of-the-box monitoring products, is: all right, I've got this data, now what? Right? Like, getting the data and putting it somewhere is only part of the problem. The next thing is your team has to be able to look at it and decide, what the heck do we do with this information? And, you know, that's where, again, rolling your own puts a lot of that onus on, you know, a couple of people on your team to be able to figure that out for you. Whereas, you know, some of the out-of-the-box offerings and stuff that are available, some SaaS solutions, can actually start to get you, you know, at least a few steps down that path.
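For illustration, here is a minimal sketch of the kind of "little bit of JavaScript on the page" Tim describes, assuming a hypothetical /rum collection endpoint and metric names rather than any specific product's API:

// Read browser navigation timing and beacon it to your own endpoint.
function collectNavigationTiming(): void {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;

  const metrics = {
    page: location.pathname,
    ttfb: nav.responseStart,                        // time to first byte
    domContentLoaded: nav.domContentLoadedEventEnd, // DOM ready
    load: nav.loadEventEnd,                         // full load event
  };

  // sendBeacon queues the POST without blocking the page
  navigator.sendBeacon("/rum", JSON.stringify(metrics));
}

// Wait until the load event has fully completed before reading loadEventEnd
window.addEventListener("load", () => setTimeout(collectNavigationTiming, 0));

Getting from a beacon like this to trustworthy dashboards (sampling, outlier handling, attribution) is where most home-grown efforts stall, which is Tim's point.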

Louis Maresca (00:36:21):
So the one thing that I really think Catchpoint does well is the fact that it helps improve user experiences. I can tell you from the experience of building web applications for 15 years that customers expect performance to be, you know, really good, and that it improves over time. You know, they kind of expect, especially with a large web application, that the performance and the experience will be very fluid. You know, the whole concept of if it doesn't load in 200 milliseconds, then, you know, users will leave the page or whatever. And I think the problem is we don't really necessarily understand what experience users are having. I know a lot of organizations say, hey, you know, my page is pretty fast, I'm pretty good. But I don't think they really understand the type of experience users are having. Sometimes they lose users because of just the performance and the experience they're having. What can organizations do to help with that, to give them the knowledge and the information they need to know what their users are running into and where they can improve?

Tim Kadlec (00:37:24):
Yeah. I mean, so you hit on it, right? Like, look, I geek out on making things faster. I get a sick, weird thrill when everything goes down by another couple hundred milliseconds or whatever, right? But it is important to remember that we don't work on performance or make things more performant for the sake of making them more performant. We do it because it does provide a better experience. You know, I've got a site, WPO Stats, that I've been maintaining with Tammy Everts for a few years, and all we do is collect business case studies out there that show, you know, demonstrable impact of performance. And there are, you know, studies out there that connect it to everything: your conversion rate, bounce rate, engagement, you know, your server bandwidth bills, like literally everything you could possibly think about.

(00:38:11):
You know, even just how many visitors. Like, one of my favorite stories comes from YouTube, where they, you know, experimented and rolled out YouTube Feather, a super lightweight version of their video page. And what they found out as a result was that they ended up getting traffic from countries that they had never gotten traffic from before, because YouTube was just too heavy. And now, all of a sudden, you know, all these other folks were having their daily <laugh> productivity levels dip because they were able to access YouTube instead. You know, so it's connected to all of that. It has a massive impact on the total experience. So part of it is, yeah, you know, trying to tie it back to that. You know, one of the first things, if any company is trying to look into performance, I always try to advise: first get something out there where you're actually able to see where you are today.

(00:39:02):
You want it to be as real world as possible. It's important not to, like, test on your own machine over a cable connection on a souped-up MacBook Pro. Like, that's not what your users are experiencing, right? That's gonna give you the best of the best results. That's useless information. <Laugh> You need to find out what they're looking at. You need to stress test it: throw it under heavy throttling, low-powered machines. That's the way to find the actual issues. And then you need to, like, yeah, see what you can do up front to tie it to something business related. Because if you want any momentum in your organization, you have to be able to show that, hey, look, it's actually impacting users, and when we fix this, it actually helps our organization, our business metrics, something. Cause if you can't do that, you know, it's gonna be hard to continue to allocate resources to performance tools or people in the organization going forward.
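A rough sketch of what "throw it under heavy throttling, low-powered machines" can look like in a lab script, here using Puppeteer with Chrome DevTools Protocol commands; the URL and throttling numbers are illustrative only:

import puppeteer from "puppeteer";

async function throttledLoadTime(url: string): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const cdp = await page.createCDPSession();

  // Roughly "slow 3G": ~400 kbps each way, 400 ms of added latency
  await cdp.send("Network.enable");
  await cdp.send("Network.emulateNetworkConditions", {
    offline: false,
    latency: 400,
    downloadThroughput: (400 * 1024) / 8,
    uploadThroughput: (400 * 1024) / 8,
  });
  // Pretend the CPU is 4x slower than the test machine
  await cdp.send("Emulation.setCPUThrottlingRate", { rate: 4 });

  const start = Date.now();
  await page.goto(url, { waitUntil: "load" });
  const elapsed = Date.now() - start;

  await browser.close();
  return elapsed;
}

throttledLoadTime("https://example.com").then((ms) =>
  console.log(`Load event under throttling: ${ms} ms`)
);

Dedicated tools such as WebPageTest do this (and much more) with calibrated device and network profiles; the sketch just shows the underlying idea.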

Louis Maresca (00:39:56):
Let me get your opinion about something. I work with a lot of organizations, and some of them feel that, you know, the whole concept of perceived performance is easier than actual performance. This is the concept of lazy loading things in. So this way customers think that it's there, and then they kind of load in things over time and, you know, they feed things in there, they load the shell of the page and then they load things under the covers. And I see this a lot in large web applications out there. Some of the biggest enterprise applications in the world today do this. And they do this because they don't want to go in and, like you were saying, fine-tune things, maybe because they don't have the data, maybe cuz they don't understand how to do it. Do you see this a lot with a lot of organizations?

Tim Kadlec (00:40:37):
Yeah. So, man, you're gonna make me bust out the soapbox here pretty quick, aren't you? <Laugh> So first let me say, when, like, I don't know, like five years ago, when perceived performance really was starting to get popular as a topic, like it was starting to emerge in a lot of blog posts and talks and stuff like that, I was all in on it. You know, I think there's a lot that can be done to make things feel faster, and if you carefully apply those things, there's a really solid use case for it. So, you know, one of the patterns that you see a lot, we call these the skeleton screens, right? So if you've ever gone to, it's usually a JavaScript-heavy site, you go to it and maybe you see, like, the little gray boxes and a few, like, maybe some dummy text, and then the content kind of populates on top of it, right?

(00:41:18):
It sort of replaced the, you know, the spinner pages from, you know, decades ago or whatever. And not even decades ago, unfortunately. But, you know, those skeleton pages, those were one of those things. The funny thing is, the origin story of that comes from a native app called Polar, and Luke Wroblewski was the person who had, you know, started Polar. It was this mini native app where, you know, the whole use case was you just had an A-or-B of two photos, and it was a question, and you'd quickly do it. It was all about micro-interactions: how quick could you do it? And what they found is that they had some limiting constraints on the network that were sometimes causing delays. And so they played with these skeleton screens and found out that people thought it felt a lot faster.

(00:42:04):
But the key to the whole, so he wrote this post about it, and it kind of took off. Everybody started jumping on the skeleton screens, but we lost track of what perceived performance was meant to do in the first place. Like, if you go back to that first conversation, that first post that he had about it, they had done everything they could. They had optimized the heck out of this thing. They'd made sure that it was lightweight. They had optimized the network calls. They just ran into a natural constraint of running something on a mobile device, which is that occasionally you're gonna get a slow network. And they're like, well, what can we do to make that experience better? And then they would, you know, try to populate it with as much real data as possible up front and just load in a few pieces after.

(00:42:42):
Now fast forward to today, and when you see that pattern, it's usually like there's nothing there except for a few boxes, and it sits there, because the reality is people are using it, like you said, to sort of mask the actual issue. So it's a bummer, because again, I think perceived performance matters. I think it's actually a very useful technique when it's applied well, but you can't use it to pretend that you don't have actual performance issues. You have to address those, because, you know, the perceived performance stuff isn't gonna mask that. And if you have those underlying issues, all it's gonna take is one slow network call or one person on an underpowered device, whatever it is, and they're gonna suffer through this really slow behind-the-scenes experience. And nothing that you do from a perceived performance standpoint is gonna help at all. It's just gonna frustrate at that point.
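For readers who haven't seen the pattern, here is a bare-bones sketch of a skeleton screen; the endpoint, element IDs, and class names are made up for illustration. Note that it only masks latency: the fetch underneath is exactly as slow as before, which is Tim's warning:

// Show lightweight placeholder boxes immediately, then swap in real content.
async function renderFeed(): Promise<void> {
  const container = document.getElementById("feed")!;

  // 1. Paint placeholders right away so the page doesn't look frozen
  container.innerHTML = Array.from({ length: 5 })
    .map(() => `<div class="skeleton-card" aria-hidden="true"></div>`)
    .join("");

  // 2. Fetch the real data; the underlying request is just as slow as before
  const response = await fetch("/api/feed");
  const items: { title: string }[] = await response.json();

  // 3. Replace the placeholders with actual content
  container.innerHTML = items
    .map((item) => `<div class="card">${item.title}</div>`)
    .join("");
}

renderFeed();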

Louis Maresca (00:43:36):
Great, great information. I do wanna bring my co-hosts back in in a moment. I do have one more question, and this kind of steps into what Catchpoint is doing in some of these spaces. I know a lot of this is making sure that organizations can do more proactive testing and verification of, you know, the performance of their site. Now, you have some pretty cool tools. I've actually used them myself, WebPageTest being one of them. Can you tell us a little bit about what they're doing, how they're helping organizations in this space?

Tim Kadlec (00:44:04):
Yeah, so, you know, Catchpoint is just massively comprehensive in terms of the number of products and the different angles from which we can tell you about performance impacts, right? Like, we can get down to telling you that, look, there's an outage on Verizon networks in Boston. You know, we can go down to that level of the network stack, and then with WebPageTest we can go, like, extremely comprehensive on the front end. And so we have all these different data points. But as I said, one of the things that I think is really important is making sure that there's action there, something you can do about it. Like, what's that next step? When I'd consult with organizations, often I'd come in and they'd have some sort of performance monitoring or tooling in place, but what you'd hear often was a couple of things.

(00:44:48):
First off, maybe they'd tried to do some performance work on their own, and maybe they even sunk months of work into it in a couple of situations and saw no return on their efforts. And that was frustrating, and that completely derails any chance you have of developing a performance culture. The other bit is, if they did have these monitoring products in place and, you know, the data didn't look good, or the data had a blip or a spike or something, they would do their due diligence. They'd try to figure out why. But when they couldn't explain it, like, it wasn't clear why the data regressed, or it wasn't clear why this tool was showing very different results than a Lighthouse run in Chrome on their own machine, they'd start to lose trust in the monitoring product.

(00:45:34):
So one of the biggest things that I think is really, really important to do, and, you know, we've got a couple of concrete examples with WebPageTest in the last year, is really closing that gap between "something's going on" and "here's the solution to it." So for WebPageTest, that was kind of the catalyst for one of the big features we launched last year: experiments, these no-code experiments. So the idea is you run WebPageTest, it'll grab all the performance data it's always done, but then we'll have our own set of opportunities. These are things where we're like, okay, you know, this optimization tends to work, you have some render-blocking resources, you're loading something that's a single point of failure and it could bring down your site. These are suggestions, and that's great, but then the next step is, okay, as a developer, as a team, when I see these things, I need to understand: is it worth my time?

(00:46:26):
And so that's where the experimentation came in. What we do is we're actually using some cloud, you know, edge workers and stuff like that, edge computing, to be able to proxy the site on an experiment run. And then we apply those optimizations on the fly, and then we spit out results at the end. So the idea is, let's say, yeah, you've got a third-party script, it's blocking the render of the page, we think it's gonna slow things down. You click a button that says, you know, make sure that it doesn't block, run the experiment, and at the end of it you can see exactly, like, hey, on a 3G network it looks like this saves you 1.5 seconds. Or sometimes it's, you know, we applied the optimization and it didn't save you any time. And that's useful information too, cuz then you're not wasting six months chasing it down, right? But it's really about, yeah, what can we do to close that gap between something's going on, to knowing how to fix it and whether or not you should spend time on it.
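As a rough illustration of the general idea of proxying a page and applying an optimization on the fly (this is not Catchpoint's actual implementation), here is a sketch written against Cloudflare Workers' HTMLRewriter API that marks render-blocking scripts as deferred:

// Fetch the origin page at the edge, rewrite blocking <script> tags to use
// `defer`, and serve the modified page so its performance can be measured.
export default {
  async fetch(request: Request): Promise<Response> {
    const originResponse = await fetch(request);

    const rewriter = new HTMLRewriter().on("script[src]", {
      element(el) {
        // Only touch classic, render-blocking scripts
        if (
          !el.hasAttribute("async") &&
          !el.hasAttribute("defer") &&
          el.getAttribute("type") !== "module"
        ) {
          el.setAttribute("defer", "");
        }
      },
    });

    return rewriter.transform(originResponse);
  },
};

An experiment run can then compare metrics for the rewritten page against the original to estimate whether the change is worth pursuing.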

Louis Maresca (00:47:22):
It's fantastic. These are some tedious things that you do over time, and these types of tools definitely reduce that barrier for people. So, great stuff. I do wanna bring my co-hosts back in, but before we do, we do have to thank another great sponsor of This Week in Enterprise Tech, and that's Worldwide Technology. WWT is at the forefront of innovation, working with clients all over the world to transform their businesses. Now, at the heart of WWT lies their Advanced Technology Center, or ATC. Now, the ATC is a research and testing lab that brings together technologies from leading OEMs. There's more than half a billion dollars in equipment invested in the lab. The ATC offers hundreds of on-demand and schedulable labs featuring solutions that include technologies representing the newest advances in cloud, security, networking, primary and secondary storage, data analytics and AI, DevOps, and so much more.

(00:48:14):
WWT's engineers and partners use the ATC to quickly spin up proofs of concept and pilots so customers can confidently select the best solution. Now, this helps cut evaluation time from months to weeks. Now, with the ATC, you can test out products and solutions before you go to market, access technical articles, expert insights, demonstration videos, white papers, hands-on labs, and other tools that help you stay up to date with the latest technology. Not only is the ATC a physical lab space, but WWT has also virtualized it. That's right, members of the ATC platform can access these amazing resources anywhere in the world, 365 days a year. Now, while exploring the ATC platform, make sure you check out WWT's events and communities for more opportunities to learn about technology trends and hear the latest research and insights from their experts. Whatever your business needs, WWT can deliver.

(00:49:12):
Scalable, tried and tested, tailored solutions. WWT brings strategy and execution together to make a new world happen. To learn more about WWT's ATC and gain access to all their free resources, visit wwt.com/twit and create a free account on their ATC platform. That's wwt.com/twit, and we thank WWT for their support of This Week in Enterprise Tech. Well, folks, we've been talking with Tim Kadlec, Director of Engineering at Catchpoint. We've talked a lot about performance here, but I do wanna bring my co-hosts back in, cuz they're chomping at the bit here; they have lots to ask. Curtis, actually, I wanna throw to Brian Chee first, cuz I think he has some questions. Brian?

Brian Chee (00:50:02):
Well, I'm gonna say Curt and I were doing some amazing work probably about 10, 15 years ago for, in those days, different magazines, and eventually we kind of converged. And the whole idea was we wanted to go and create what-if scenarios to be able to go and do testing. I would actually borrow statistics from various large corporations and build simulations. Well, some of the tools I used to use were WebLOAD and WinRunner from Mercury Interactive. You know, I see Tim nodding his head; almost everybody's heard of them. But it was a really, really complex system and almost required a computer science degree to really get the instrumentation to work. Now, I put that into what I call proactive testing, whereas instrumentation, where you're finding out the performance of pages that have been built, that's more reactive testing. So let's go and drag up some dirty laundry from the US federal government: healthcare.gov. When it first opened, it had one heck of a meltdown. Now, if these cool tools for proactive testing had existed, what kinds of things could the healthcare.gov team have done to try and prevent that meltdown?

Tim Kadlec (00:51:36):
That was a bit of a classic meltdown, for sure. Yeah, you know, to be fair, I do think there probably were some tools available that they may have been able to take advantage of. I know there was a lot more happening there in terms of trying to rush that out and stuff that probably, you know, meant that they kind of bypassed that, which is an issue. I'm sure you're very familiar with that, you know, Brian, with all the work that you were doing back there too. It's just, like, you know, making sure that performance isn't an afterthought, but it's something that you're actually prioritizing and baking in from the start. That was step number one, probably, right? Making sure that they treated it that way, as a first-class, first consideration around the experience of their product.

(00:52:23):
But yeah, I mean, you know, a lot of it was just load-testing related, if I recall correctly. There were some front-end issues, like there was a lot of heavy stuff on that page. You know, again, if you'd run that through a tool ahead of time, doing that proactive testing that you talked about, it absolutely would've been able to surface, you know, hey, there's a lot of stuff going on here. There's a lot of server load, there's a lot of front-end weight being sent down on the page, and all of this is going to be bad <laugh>. You know, so I'm not sure. I never got the behind-the-scenes story, so I do not know if it was a matter of, you know, they didn't test at all, or they tested but didn't test at scale or under real enough conditions, or, you know, what the deal was there. But it is a perfect classic example of why, yeah, that proactive testing and being on top of it and, you know, checking that throughout, before you're ever hitting production, is absolutely critical, because when you don't do it, the impact can be catastrophic, as they learned.
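A toy sketch of the kind of proactive load test being discussed: ramp concurrency against a staging URL and watch how response times degrade. Real load-testing tools do far more (scripted user journeys, distributed load, pacing); the target URL here is a placeholder:

// Fire batches of concurrent requests and report average response time per batch.
async function timedGet(url: string): Promise<number> {
  const start = Date.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so timing includes the full download
  return Date.now() - start;
}

async function loadStep(url: string, concurrency: number): Promise<void> {
  const timings = await Promise.all(
    Array.from({ length: concurrency }, () => timedGet(url))
  );
  const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
  console.log(`${concurrency} concurrent requests: avg ${avg.toFixed(0)} ms`);
}

async function main(): Promise<void> {
  const target = "https://staging.example.com/"; // placeholder staging URL
  for (const concurrency of [1, 10, 50, 100]) {
    await loadStep(target, concurrency);
  }
}

main();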

Brian Chee (00:53:31):
Yeah. And, you know...

Tim Kadlec (00:53:32):
But there was a taint that they never got rid of, that stain, really. Like, look, anytime it's politics, there's always gonna be stuff, right? But that launch haunted that entire thing for quite a while.

Brian Chee (00:53:46):
Oh yeah. Well, and, you know, flipping the coin, actually, testing has more than just a performance aspect to it. One of my favorite hacks for Apache was, let's just ramp up the authentication against .htaccess and see how fast I can crash it. Being able to test things like authentication, making sure that a system doesn't let you have access to something that you don't have the rights to, is all part of testing. And so I gotta imagine WebPageTest has some components there. I remember seeing all kinds of really cool widgets and literally having table-driven choices, and also database-driven choices, so that I could pretend I'm a user. Those tools have existed for a long time, and even though the tools I used were really expensive, WebPageTest is not as expensive as a lot of people think. And the excuse of "I can't afford it" isn't there anymore, is it?

Tim Kadlec (00:54:54):
No, I mean, it's definitely priced in a range where, you know, if you want to start rolling with WebPageTest, there's a free tier, but there's also a tier that I think starts at 15 bucks a month. You know, you can afford 15 bucks a month if you're an enterprise, for sure, at least to kind of kick the tires and do at least a little bit of the testing on there. But you're right, it's not just perf. And even, you know, even from the facet of WebPageTest, we've kind of expanded into a few different areas. Like, WebPageTest started in 2008. Pat Meenan built it when he was at AOL because they needed a perf tool. So it's always had a hardcore performance focus, but we found out that, you know, we had the ability to help people do the right thing when it comes to performance, so why not try it on a few other things?

(00:55:40):
So there is some built-in security testing. You know, we do some checks around security headers. We do some checks around, you know, different vulnerabilities. We actually tap into the Snyk database to see if there are any vulnerable client-side, you know, components being used. There's actually some accessibility testing that we provide as well. We run an accessibility audit on every single test to see how you're stacking up against those guidelines. We actually have the ability to grab the entire accessibility object model that the browser uses under the hood. Like, we are starting to look at, what can we do to encourage just a healthier website in general, and not just limit it to perf? Because you're right, like, all of that stuff matters. And, you know, I don't know what your experience is, but mine has definitely been: the more manual that testing process is, the more likely it is for things to be missed. And so when there's an opportunity to automate it, when there's an opportunity to make it just part of something that happens without somebody remembering to go do it, you've gotta take advantage of it.
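As a small illustration of the automated security-header checks Tim mentions, here is a sketch that requests a page and flags missing response headers; the header list is a tiny illustrative subset, not WebPageTest's actual rule set:

// Request a URL and report which common security headers are present or missing.
const EXPECTED_HEADERS = [
  "strict-transport-security",
  "content-security-policy",
  "x-content-type-options",
  "x-frame-options",
];

async function checkSecurityHeaders(url: string): Promise<void> {
  const res = await fetch(url, { redirect: "follow" });
  for (const header of EXPECTED_HEADERS) {
    const value = res.headers.get(header);
    console.log(value ? `OK      ${header}: ${value}` : `MISSING ${header}`);
  }
}

checkSecurityHeaders("https://example.com");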

Curtis Franklin (00:56:47):
Well, now we say that we've gotta take advantage of it, but now I've gotta say, I have heard a lot of managers point over at their agile development methods handbooks and say, we've got to kick it out. We're going to sprint, we're gonna kick it out. The users will let us know if there's a problem. Is there an economic reason to build testing in, rather than just letting our users, you know, they'll tell us if there's something they don't like. Why should we spend even a small amount of money, and more important, development time, on testing when our users will do it for us?

Tim Kadlec (00:57:38):
That is... yeah, I've heard that before. So first off, you know, I think there are mountains of studies at this point showing that when the performance isn't there, it does have a negative impact on everything. There are concrete percentages, concrete dollar amounts. I think several large companies, like Amazon and Walmart, I'm trying to remember the third, there are three companies that all found a hundred milliseconds could make a dramatic difference, like a million-plus in annual revenue, just from that 100 millisecond difference. And that's fine, that's good. Being able to demonstrate that there is that economic interest in doing this is important. But I think more specific to this question, it's like, why care about it upfront, right? Like, why not let the users figure it out and then we solve it later?

(00:58:27):
And my favorite, favorite study on this is one that Google did where, you know, it's Google, so they get to do all sorts of those crazy experiments that the rest of us would never get to do. They did one experiment where they artificially slowed down search results. They wanted to see what impact it had. And it was a negligible amount, again in the hundreds of milliseconds. I don't remember exactly what it was; it wasn't significant. So they served up, basically to a large chunk of their users, the search results as they were, and then to another chunk, search results with this little artificial slowdown. And sure enough, they found that artificially slowing it down meant less engagement. People were less likely to dig deeper in the search results.

(00:59:10):
They were less likely to click through. You know, their engagement metrics went down, like you would've expected based on everything else we've ever seen related to this. That's not the interesting part. The interesting part is after: they removed the delay, they brought the speed back up to par, so everybody's getting the same experience. Now, what they found out was that the people who had that initial slow experience continued to interact less with Google for months after that. Eventually it corrected, eventually they got back to parity in terms of what the other cohort of users was doing, but it was a several-month thing where people were just not engaging as much, because that initial experience had left a bad taste in their mouths. And so it took a long time to recover from that. So could we rush something out with our Agile, you know, sprints, get something out, and then wait and see what the users say and try to tweak it after? Absolutely we could. We know that during that period, until the problem gets discovered, we're gonna be hurting our revenue numbers and our conversion numbers, but worse, we also know that even after that, it's gonna take a while for that initial experience they had to, like, fade out of their heads. It's the whole, you only get one chance to make a good first impression thing. That absolutely holds true with user experience.

Curtis Franklin (01:00:33):
Well.

Tim Kadlec (01:00:34):
So yeah, it's risky to do that, for sure. Like, you're literally giving away money when you make that choice.

Curtis Franklin (01:00:40):
<Laugh>. I, I like that. Because no one likes to give away money.

Tim Kadlec (01:00:44):
<Laugh>. No, no, no. I don't think so.

Curtis Franklin (01:00:46):
Yeah. If you are doing the testing, you've talked about building it in. How thoroughly do you have to integrate you, the test vendor, with all of the other development infrastructure out there? With the ticketing, with the entire way of dealing with the problems that you surface through the testing? Are you integrating with, you know, Jira, with all of the ticketing companies? Or do you find that it's best for testing to stand alone? How does that work?

Tim Kadlec (01:01:28):
Ah, that's a great question. I'm a huge, like, you-have-to-integrate person, and I'll explain. So we do. First off, both Catchpoint, all the products offered by Catchpoint, as well as WebPageTest specifically, you know, there are different integrations available. Some issue trackers, some things like GitHub and stuff like that. A lot of CI/CD processes. That's a very common point of wanting to integrate with your monitoring tool, so that you can automate performance testing when you submit, like, a pull request, for example. Like, immediately check what the perf impact is. You know, issue trackers are a great point of integration, because then you can see an issue in your monitoring tool and immediately file that off to an issue tracker somewhere so that your team can deal with it.

(01:02:12):
Those integration points matter a lot because, you know, what happens otherwise is performance becomes a thing where only the select few who have access to the monitoring tool continue to use it and check into it on a regular basis to see what's going on. Those are the people who are gonna catch things. And that's a small group, and it's probably a group that's gonna shrink over time, because you're making them leave their workflow. You're making them leave the tools that they're using every day throughout the rest of their day to go check this one experience. On the other hand, if you've got this tight integration with an issue tracker, if you've got it set up so that, whether you're using Jenkins or GitHub or whatever it is, when you're making a deploy or you're putting up a pull request, you're automatically checking performance and getting the results right there in that build process or in that PR approval process.

(01:03:12):
Now the performance information is coming to where the engineers and the developers are, pushing it off to Slack, where teams are already communicating in other ways. You know, these integration points matter because now they're not having to go out and find this information. It's getting back to that proactive versus reactive thing to some extent, right? Like, we're proactively inserting this information where they are. And so yeah, I think that's absolutely critical. And honestly, if you're not doing that, as an organization taking advantage of your tools that way, or as a vendor not making a point to prioritize that, you're basically condemning that tool to kind of sit off in a corner. And I guarantee over time people just aren't gonna use it as much, because they're gonna have to go out of their day-to-day workflow to get there.
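
(To make the CI/CD integration Tim describes concrete, here is a hedged Python sketch of a performance-budget gate: it reads metrics produced by whatever test tool runs against a pull request and fails the build if a budget is exceeded. The metric names, file path, and budget numbers are hypothetical and are not Catchpoint's or WebPageTest's actual result format.)

    # Hypothetical CI gate: fail the build when test metrics exceed budgets.
    import json
    import sys

    BUDGETS_MS = {                      # made-up budgets, for illustration only
        "first_contentful_paint": 1800,
        "largest_contentful_paint": 2500,
        "time_to_interactive": 3500,
    }

    def check(results_path):
        """Compare a metrics JSON file against the budgets; return failures."""
        with open(results_path) as f:
            metrics = json.load(f)      # e.g. {"first_contentful_paint": 2100, ...}
        failures = []
        for name, budget in BUDGETS_MS.items():
            value = metrics.get(name)
            if value is not None and value > budget:
                failures.append(f"{name}: {value} ms exceeds budget of {budget} ms")
        return failures

    if __name__ == "__main__":
        problems = check(sys.argv[1] if len(sys.argv) > 1 else "perf-results.json")
        for line in problems:
            print("PERF BUDGET EXCEEDED:", line)
        sys.exit(1 if problems else 0)  # a nonzero exit blocks the merge in CI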

Louis Maresca (01:04:02):
Tim, I love this space. I could talk for hours about it. Really great audience.

Tim Kadlec (01:04:05):
Yeah, you and me.

Louis Maresca (01:04:06):
Great information today. Thanks so much for being here. You know, time flies when you're having fun. Since we're running low on time, I wanted to give you a chance to tell the folks at home where they can learn more about Catchpoint, where they can go to get more information about it, and maybe how they can get started with it.

Tim Kadlec (01:04:20):
Yeah. So the best place for Catchpoint is catchpoint.com. There's also a blog at Catchpoint that you can check out where there's a lot of good information that gets posted, a lot of really good, deep technical insights as well; it's not just all marketing stuff. WebPageTest itself also has its own separate site, webpagetest.org, and there's also a blog for WebPageTest that's much more centered on front-end performance specifically at the moment. But yeah, both of those are great resources to learn more. From the front-end performance space, if you're interested, we also do a fairly frequent WebPageTest Live over on Twitch, where we'll have guests on sometimes to interview them. You know, we just had Nic Jansma on talking about a new RUM Archive project that they're working on over at Akamai. But then other times it's live audits. So there are a number of different resources to kind of dig in.

Louis Maresca (01:05:21):
Fantastic. Well, folks, you've done it again. You sat through another hour of the best enterprise and IT podcast in the universe, so definitely tune your podcast to TWiT. I want to thank everyone who makes this show possible, especially my co-hosts, starting with our very own Mr. Brian Chee. Cheebert, what's going on for you in the coming weeks, and where can people find you?

Brian Chee (01:05:42):
I'm getting ready for the Central Florida Fair. I'm also thinking I need to go and start building some electric carts. I must have toys, and I'm retired; what's not to like about building stuff, you know? Anyway, I've been throwing out all kinds of ideas, and I'm not sure I'm ever gonna do it, but it's fun talking about them. And I do a lot of that on Twitter. My Twitter handle is A D V N E T L A B, advanced net lab. And people throw ideas at me. Sometimes people throw some flames at me because they don't agree, but that's all right. I don't mind people not agreeing with me. This is a free country, and I like to hear contrary opinions. But you're also welcome to throw email at me. My email address is cheebert, spelled C H E E B E R T, at twit.tv. You can also send email to twiet at twit.tv, and that'll hit all the hosts. We've got lots of interesting guests and interesting topics. I've been trying to fulfill the threads that you folks have been suggesting, and I think I'm going to make some of you very, very happy in the coming weeks. Take care and stay safe.

Louis Maresca (01:07:08):
Thank you, Cheebert. Well, we also have to thank our very own Mr. Curtis Franklin. Curtis, it's great to have you here. Thanks so much for being here. Tell the folks out in the world where they can find you and all your work.

Curtis Franklin (01:07:18):
Well, you can always find me on Twitter. I'm at KG4GWA. I'm also over on Mastodon, KG4GWA at mastodon.sdf.org. I'm on LinkedIn, Curtis Franklin. I'm just all over the place. You're welcome to follow me, contact me, hit me up with messages on any or all of those. I'd love to hear from people, and I would love it if we got together in person at one of the conferences I'm going to be attending. There's Enterprise Connect coming up in March. There's RSA Conference coming up in April. And believe it or not, I'm already starting to think about my presentation at Black Hat in August in beautiful Las Vegas. So let me know. I'd love to hear from members of the TWiT Riot.

Louis Maresca (01:08:18):
Thanks, Curtis. Well, we also have to thank you as well. You're the person who drops in each and every week to watch and to listen to our show, to get your enterprise and IT goodness. We wanna make it easy for you to catch up and listen to your enterprise and IT news, so go to our show page at twit.tv/twiet. There you'll find all the amazing back episodes, the show notes, the co-host information, the guest information, and of course the links to the stories that we do during the show. But more importantly, there you'll see those helpful subscribe and download links right there next to the video. Support the show by getting the audio version or video version of your choice, and listen on any one of your devices, any one of your podcast applications, cuz we're on all of them. So subscribing definitely supports the show.

(01:08:58):
Now, you might have also heard we have Club TWiT as well. That's right. It's a members-only ad-free podcast service with a bonus TWiT+ feed that you can't get anywhere else, and it's only $7 a month. That's right. There are a lot of great things that come with Club TWiT, but one of them is exclusive access to the members-only Discord server. I'm on it right now. You can chat with hosts, you can chat with producers, and there are separate discussion channels, lots of great ones in there. Plus they also have special events. That's right, join in on those for sure. Lots of fun discussions, lots of channels. So definitely join Club TWiT, be part of that movement. Go to twit.tv/clubtwit. Now, don't forget as well, Club TWiT offers corporate group plans too. That's right. And it's a great way to give your team access to our ad-free tech podcasts.

(01:09:42):
Plans start with five members at a discounted rate of $6 each per month, and you can add as many seats as you like. This is really a great way for your IT department, your sales department, your developers, your tech teams to stay up to date with access to all of our podcasts. And just like regular memberships, they can join the TWiT Discord server as well and get the TWiT+ bonus feed. Just join Club TWiT at twit.tv/clubtwit. After you've subscribed, I want you to impress your friends, your family, your coworkers with the gift of TWiT, cuz we talk a lot about some fun tech topics on the show, and I guarantee they will find it fun and interesting as well. So have them subscribe. Now, if you've already subscribed, we do the show live. That's right, 1:30 PM Pacific on Fridays.

(01:10:23):
We're doing it live right now. You can go to live.twit.tv, you can see the stream, you can come see how the pizza's made, all the show stuff behind the scenes, before and after, all the banter, all the fun we have here at TWiT. Definitely join the live stream at live.twit.tv. And if you're gonna watch the show live, you gotta jump into the IRC channel as well. Go to irc.twit.tv. We love the chat room. We have a lot of great discussions and a lot of great characters in there each and every week, so we thank them for being there and being part of the movement. So definitely check out irc.twit.tv. Now, I definitely want you to hit me up. I'm on Twitter at twitter.com/lum, where I post all my enterprise tidbits. I'm on Mastodon at lum at twit.social. You can direct message me, or you can hit me up with messages in the channels.

(01:11:06):
Feel free to do that. I love having conversations with all of you. So definitely, whether it's show ideas or whatever, hit me up there. If you wanna know what I do during my normal work week at Microsoft, definitely check out developers.microsoft.com/office. There we post the latest and greatest ways for you to make your Office more productive for you. I wanna thank everyone who makes this show possible, especially Leo and Lisa. They continue to support This Week in Enterprise Tech each and every week, and we couldn't do the show without them. So thank you for all your support over the years. I wanna thank all the staff and engineers at TWiT. And I also wanna thank Mr. Brian Chee one more time. He's not only our co-host, he's also our tireless producer. He does all the bookings and the planning before the show, so we couldn't do the show without him.

(01:11:52):
So thank you, Cheebert, for all your support. And before we sign out, we have to thank our editor for today, because they make us look good after the fact. They cut all my mistakes out, so thank you for making me look good. And of course we have to thank our technical director for today, the talented Mr. Ant Pruitt, because he does an amazing show on TWiT called Hands-On Photography, which I learn from each and every week. In fact, the lighting I have in this room is because of that show. So Ant, what's going on this week? I wanna learn some stuff.

Ant Pruitt (01:12:20):
<Laugh> Mr. Lou, that's pretty funny. Thank you so much. And yes, your lighting looks pretty daggone awesome, sir. <Laugh> This week on Hands-On Photography, we talked about changing the color of objects again, but this time it is with video. So if you want to change the color of something in your video, there is a way to do that, and I show an example of me just changing my shirt from one color to another. So check it out: twit.tv/hop.

Louis Maresca (01:12:50):
Love it. Thank you, Ant. Well, until next time, I'm Louis Maresca just reminding you, if you want to know what's going on in the enterprise, just keep quiet.

Rod Pyle (01:13:01):
Hey, I'm Rod Pyle, editor in chief of Ad Astra magazine. And each week I join with my co-host to bring you This Week in Space, the latest and greatest news from the final frontier. We talk to NASA chiefs, space scientists, engineers, educators, and artists, and sometimes we just shoot the breeze over what's hot and what's not in space books and TV. And we do it all for you, our fellow true believers. So whether you're an armchair adventurer or waiting for your turn to grab a slot in Elon's Mars rocket, join us on This Week in Space and be part of the greatest adventure of all time.
