This Week in Enterprise Tech 516 Transcript
Please be advised this transcript is AI-generated and may not be word for word.
Time codes refer to the approximate times in the ad-supported version of the show.
Louis Maresca (00:00:00):
On This Week in Enterprise Tech, we have Mr. Brian Chee and Mr. Curtis Franklin back on the show today. Now, patching is a challenge for any organization. What if I told you there's patching as a service? We'll talk about what that can do for your organization. Plus, true observability is a North Star for any organization. OpenTelemetry co-founder Morgan McLean is here to talk about the OpenTelemetry standard and just what it can do for your cloud applications. You definitely shouldn't miss it. Quiet on the set!
Podcasts you love, from people you trust. This is TWiT.
Louis Maresca (00:00:43):
This is TWiET, This Week in Enterprise Tech, episode 516, recorded October 21st, 2022: Single Pane in the Glass. This episode of This Week in Enterprise Tech is brought to you by Thinkst Canary. Detect attackers on your network while avoiding irritating false alarms. Get the alerts that matter. For 10% off and a 60-day money-back guarantee, go to canary.tools/twit and enter the code TWIT in the "How did you hear about us?" box. And by NordLayer. NordLayer is a secure network access solution for your business. Join over 6,000 fully protected organizations. Go to nordlayer.com/twit to get your first month free when purchasing an annual subscription. Welcome to TWiET, This Week in Enterprise Tech, the show that is dedicated to you, the enterprise professional, the IT pro, and anyone who just wants to know how this world's connected. I'm your host, Louis Maresca, your guide through the big world of the enterprise. But I can't guide you by myself; I need to bring in the professionals, the experts in their fields. Let's start with our very own Mr. Brian Chee. He's net architect at SkyFiber. He's a network expert, a security professional, plus he's an all-around tech geek. He likes to play with things; I love that about him. Chee, how are you doing, my friend? How did your week go? What's keeping you busy?
Brian Chee (00:02:00):
I'm doing good. I'm actually getting ready and trying to get all the pieces together and packed away so that we can get ready for Maker Faire. I was visiting some folks in Gainesville last weekend, well, last week, a couple days ago. They're running, you know, a FIRST team, basically a high school robotics team, and they're going to be running the Learn to Solder portion. So I went through and did some, you know, train-the-trainer stuff with them and made sure I stressed safety, and boy, those kids are talented. You know, I really wish I was able to teach high school again. That was fun <laugh>. So big, big shout out to them. And looking forward to Maker Faire is basically everything I'm concerned with at the moment. And thank you to everybody for the well wishes; my back is much better. Thank you very much.
Louis Maresca (00:03:02):
Now, safety first, of course. And of course no sniffing either. No sniffing, that's an important tactic. Now, Maker Faire is coming up pretty soon, right?
Brian Chee (00:03:11):
Yep. November 5th and 6th at the Central Florida Fairgrounds.
Louis Maresca (00:03:14):
Fantastic. Looking forward to that. Well, speaking of Maker Faire, we also have our very talented security professional. He's Mr. Curtis Franklin, he's senior analyst at Omdia. Curtis, how are you, my friend? What's keeping you busy, and how is Maker Faire prep going for you?
Curt Franklin (00:03:32):
We, we are also neck deep in Maker Faire prep. My dear wife and I have been down at the makerspace etching printing plates that we will use for the Printing with a Steamroller project. Looking forward to that. We're also gonna be in the makerspace's booth for most of the time during Maker Faire. So, got a lot going on. And then there's the whole thing about, you know, even though Orlando Maker Faire is coming up, the cyber criminals of the world just won't take a break. So I've been having to continue to follow them. Got lots of interesting stuff going on. Looking forward to talking a little bit about it at the end of the show.
Louis Maresca (00:04:19):
Now, October is Cybersecurity Awareness Month, right? Does that mean some extra stuff's going on this month for presentations and such?
Curt Franklin (00:04:27):
Well, you know, it's been interesting. I've done a couple of presentations. I've been interviewed by a couple of journalists for Cybersecurity Awareness Month. It's interesting, the idea of a cybersecurity awareness month. You know, this is one of those things where on the one hand you would hope that people would be aware of cybersecurity year round, but on the other hand, it's nice to have some special time to spend on it. Help keep people aware of things like, oh, I don't know, passwords and the importance of keeping them unique, safe, and secure. So I think it's overall a good thing. Like I said, I've been interviewed a couple of times and do intend to have at least one more good article in Dark Reading before it's all over.
Louis Maresca (00:05:17):
Indeed, indeed. Well, speaking of spending special time, it's been quite the busy week in the enterprise, so we're gonna spend some special time on it right now. Patching hosts and base images, clients, devices, services: it's a challenge for any organization to keep them all up to date. What if you had patching as a service to handle it all for you? We'll get into what that means. Plus, true observability is a North Star for lots of organizations out there. OpenTelemetry co-founder Morgan McLean is here to talk about the OpenTelemetry standard and just what it can do for your cloud applications. Lots of exciting stuff to talk about, so definitely stick around. But like we always do, we do have to jump into this week's news blips. We've talked a lot about just how vulnerable GPS is. In fact, we talked about it in episode 420 of TWiET; lots of things are synchronized with GPS clocks today.
And with all the vulnerabilities built into it, organizations like the US Army are looking for new solutions. Well, according to this MIT Tech Review article, Starlink just might be able to fit that role. Back in 2020, Todd Humphreys of the Radionavigation Laboratory at the University of Texas at Austin proposed the idea of using Starlink's constellation of satellites to add a more robust service in place of GPS. Now, unfortunately, Starlink and Musk didn't want to veer away from their focus of getting their network up and scaling out at that time. So for the last couple years, Humphreys has been reverse engineering the signals sent from those thousands of Starlink satellites in low Earth orbit to the ground receivers. In fact, they even built a little radio telescope to eavesdrop on their signals, which is kind of cool. They've been able to discover that Starlink relies on a technology called orthogonal frequency-division multiplexing, or OFDM, an efficient method of encoding digital transmissions.
And it was originally developed by Bell Labs in the 1960s. Now, what he found was that the regular beacon signals from the constellation are designed to help receivers connect with the satellites, but they could form the basis of a useful navigation system even without the help from SpaceX. Now, in the paper where he posted the findings, he describes the first steps toward developing a new global navigation technology that would actually operate independently of GPS, or even its European, Russian, and Chinese equivalents. In fact, the synchronization sequences, which are predictable, repeating signals beamed down from the satellites in orbit to help receivers coordinate with them, were sent frequently enough for them to actually be used for positioning. Each sequence also contains clues about the satellite's distance and velocity as well. Now, with the Starlink satellites transmitting about four sequences every millisecond, that was enough for them to actually figure out positioning. I really love this type of innovation: thinking really outside the box and solving real-world problems with some of the newer technology that's out there. I, for one, hope that Todd and his team are able to get SpaceX and Starlink on board and start evolving an old technology like GPS.
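The article doesn't include Humphreys' actual signal processing, but the geometric core of any such system — recovering a receiver's position from range estimates to transmitters at known positions — can be sketched in a few lines. What follows is a toy 2D trilateration with made-up beacon coordinates, not the real Starlink or GPS math (which works in 3D and must also solve for receiver clock bias):

```python
import math

def trilaterate(beacons, ranges):
    """Solve for a 2D receiver position from three beacons at
    known positions and measured ranges, by linearizing the
    circle equations and applying Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Subtract beacon 1's range equation from beacons 2 and 3 to
    # cancel the x^2 + y^2 terms, leaving a 2x2 linear system.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (r2**2 - r1**2)
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (r3**2 - r1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - a21 * b1) / det)

# Three hypothetical beacons; the receiver is really at (3, 4).
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(b, true_pos) for b in beacons]
x, y = trilaterate(beacons, ranges)
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

With noisy real-world measurements you'd use more satellites and a least-squares solve, but the linearization trick — subtracting one range equation from the others — is the same starting point.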
Curt Franklin (00:08:12):
We're all busy, I get that, and our brains get filled up with all kinds of important information. But taking the time to come up with truly unique passwords can be the most important security step we take, especially since pretty much all cyber criminals are using an existing list of passwords to do most of the heavy lifting for their initial attacks. Rapid7 recorded every attempt to compromise its honeypots over a 12-month period and found that there were just over half a million password permutations used in credential attacks. All but 0.03% of those passwords are included in a common password list called the RockYou2021 file, which has some 8.4 billion entries. According to the article in Dark Reading, it would seem that attackers, being typical lazy humans, are sticking to a common playbook. Tod Beardsley, who's director of research at Rapid7, says nobody, 0% of attackers, is trying to be creative when it comes to unfocused, untargeted attacks across the internet.
Rapid7 researchers focused on the common passwords used by attackers rather than those being used in defense, so the analysis applies to attackers' guesses in brute force attacks. These brute force attacks have risen dramatically during the COVID-19 pandemic, with password guessing becoming the most popular method of attack in 2021. Now, it's important to note that the half million passwords Rapid7 saw represent less than a hundredth of 1% of all the permutations possible in the RockYou2021 data set. And it's even more important to note just how generic the attacks really are. Beardsley says that the traffic Rapid7 is seeing indicates that these off-the-shelf attacks happen with essentially no custom configuration. So what's an enterprise to do to up its security game? Organizations should continuously monitor systems for default and easily guessable passwords, which means running the RockYou2021 list of stolen credentials against exposed and internal systems. Rapid7 also recommends paying particular attention to external-facing SSH and RDP servers, as well as IoT systems that may not have easy-to-change, or even possible-to-change, passwords.
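Rapid7's advice — run the common-password list against your own accounts before the attackers do — is easy to automate. Here's a minimal sketch in Python; the five passwords below are a tiny stand-in set, since the real RockYou2021 file has billions of entries and a production check would use a bloom filter or a k-anonymity API rather than an in-memory set:

```python
# Tiny stand-in for a common-password list like RockYou2021.
COMMON_PASSWORDS = {"123456", "password", "admin", "qwerty", "letmein"}

def audit(credentials):
    """Return the account names whose password appears in the
    common-password set, sorted for stable reporting."""
    return sorted(user for user, pw in credentials.items()
                  if pw in COMMON_PASSWORDS)

creds = {
    "svc-backup": "123456",           # default-ish, gets flagged
    "jdoe": "c0rrect-h0rse-staple!",  # not in the list
    "root": "letmein",                # gets flagged
}
print(audit(creds))  # ['root', 'svc-backup']
```

The same loop works against service-account inventories or exported credential stores, which is exactly the "run the list against your own systems" exercise the article recommends.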
Brian Chee (00:10:47):
Thank you to Ars Technica for this story about how the FTC, the Federal Trade Commission, is looking at fixing the concept of appliance repair, but, you know, needs to step up its game and go beyond just manuals. Anyway, the story goes: the Federal Trade Commission is considering new rules that would require any appliances touting a familiar yellow EnergyGuide label to also include information on how consumers can repair their own products. Citing its own Nixing the Fix report, the FTC states that repair information will strengthen consumers' right to repair damaged products without the need to go back to the manufacturer. This could save customers money, allow non-licensed dealers and repair techs to better compete, and protect the environment. Right to repair advocates are energized by the proposed rulemaking, the publication of which was unanimously approved. "This is a big deal," says Kyle Wiens, CEO of repair advocate and store
iFixit, in last Monday's press release. "It's hard to think of a more impactful consumer-facing repair policy move from the FTC, or a more surefire way to get repair instructions into the hands of more consumers who need them." Wiens noted that appliance manuals, whether provided by the company or written by iFixit members on the company's wiki-style site, are harder to come by than those for small electronics. Most people don't want to take apart devices that weigh hundreds of pounds and draw heavy power or flammable gas to learn more about them. So the huge push for the right to repair seems to center around our friends at John Deere, and farmers fighting to be able to repair their aging equipment and having lots and lots of friends in Congress to complain to. The right to repair devices, to get more life out of business-critical machinery, has become a battle cry in the open source world, farming, and Silicon Valley. The problem was certainly illustrated when the TWiT studio folks tried out the Apple repair-at-home system and found it more than a little cumbersome, when the parts arrived barely before they were forced to send the extremely expensive toolkits back.
Louis Maresca (00:13:19):
Botnets are still a thing, folks. In fact, if you have a VM exposed to the internet, you can bet that some malicious actor's gonna want to add it to their botnet attacks. Now, according to this Bleeping Computer article, security researchers have observed malicious campaigns leveraging a critical vulnerability in VMware Workspace ONE Access to deliver various malware, including the RAR1Ransom tool that locks files in password-protected archives. Now, the issue leveraged in the attacks is called CVE-2022-22954. It's a remote code execution bug triggered through server-side template injection. Now, in the latest campaign, researchers noticed that a threat actor deployed the Mirai botnet for DDoS attacks, the GuardMiner cryptominer, and the RAR1Ransom tool. Now, VMware released security updates when the flaw was disclosed on April 6th, which was actually quite a while ago now. Once the proof-of-concept exploits became publicly available, the product quickly became a target for threat actors.
Now, starting in August, Fortinet actually saw a change in the attacks, which went from targeted data exfiltration attempts to the cryptominers, file lockers, and DDoS enlisting from a Mirai variant. Now, as always, although VMware released a fix several months ago, the report indicates that many systems still remain vulnerable. Now, the danger is how they've shifted from limited-scale targeted attacks to large-scale infections using the entire malware set, while the inclusion of RAR1Ransom exposes companies to the risk of actual data loss. Now, what this goes to show you is, if a patch comes out, don't even go take a coffee break, because you know you gotta get upgrading and deploying that pipeline, right? Well, folks, that does it for the blips. Next up we have the bites, but before we get to the bites, we have to thank a really great sponsor of This Week in Enterprise Tech, and that's Thinkst Canary. If there's anything we've learned from this last year, it's that companies must make it a priority to layer the security of their networks.
Now, one of these layers needs to be Thinkst Canary. Now, unfortunately, companies usually find out too late that they've actually been compromised, even after they've already spent millions of dollars on IT security. Now, attackers are sneaky, right? Unbeknownst to companies, they actually prowl your networks looking for the valuable data. But the great thing about Canary is that they've turned this into an advantage for you. Now, while attackers browse Active Directory for file servers and explore file shares, they'll be looking for documents and they'll scan for open services across the network. Now, Thinkst Canaries are designed to look like the things that hackers want to get to. Canaries can be deployed throughout your entire network, and you can make them look identical to a router or a switch, a NAS server, a Linux box, or a Windows server, so attackers won't know they've been caught.
Now, you can put fake files on them and name them in ways that get the hacker's attention, and you can even enroll them in Active Directory. Now, when attackers investigate further, they give themselves away and you're instantly notified. Now, Canarytokens act as tiny trip wires that you can drop into hundreds of places, and the Canary is designed to be installed and configured in minutes, and you won't have to think about them again. Now, if an alert happens, Canary will notify you any way you want, and you won't be inundated with those false alarms. In fact, you can get alerts by email or text message, on your console, through Slack, webhooks, syslog, or even just their API. Now, data breaches happen typically through your staff, right? And when they do, companies often don't know they've been compromised. In fact, it takes an average of 191 days for a company to realize there's been a data breach.
Canary solves that problem. Canary was created by people who have trained companies, militaries, and governments on how to break into networks. And with that knowledge, they built Canary. You'll find Canaries deployed all over the world, and they're one of the best tools against data breaches. Visit canary.tools/twit, and for just $7,500 per year, you'll get five Canaries, your own hosted console, upgrades, support, and maintenance. And if you use code TWIT in the "How did you hear about us?" box, you'll get 10% off the price for life. We know you'll love your Thinkst Canaries, but if you're not happy, you can always return your Canaries with their two-month money-back guarantee for a full refund. That's canary.tools/twit, and enter the code TWIT in the "How did you hear about us?" box. And we thank Thinkst Canary for their support of This Week in Enterprise Tech. Well, folks, it's time for the bites.
Now, patching hosts, your base images, clients, services, devices: it's a challenge, right? A challenge for any organization, and it takes a number of cycles for any team. In fact, it takes a bunch of cycles for my team to actually figure out the right way to do it, and to do it securely and efficiently. Now, if your organization doesn't have the resources to stay on top of things, what does that mean for your security? What does it mean for all of your services, right? Well, what if there was a possibility of patching as a service? Could that help? What do you think, Curtis? Can it help?
Curt Franklin (00:18:35):
Well, I think we can all understand that patching, second perhaps only to choosing good passwords, is about the most important thing an organization can do to reduce the chance that there will be a breach. It is critical to isolate risks, to ensure that vulnerabilities don't become exploitable, and to ensure that workflows aren't interrupted due to allowing software to fall out of supportable versions. Now, the security risk is enormous. Verizon's 2022 Data Breach Investigations Report, which is one of the more significant publications in the industry, found that 70% of successful cyber attacks exploited known vulnerabilities with available patches. The problem? While everyone agrees that patching is important, it may not be urgent, at least not in the minds of the team. And especially if you have a smaller team, those urgent tasks tend to mount and mount and mount until you have no time for the important things.
So what can you do? One option that's becoming more popular is outsourcing. Outsourcing the process of patching can save an organization time and money, can lead to improved security, and can provide a verifiable SLA, or service level agreement, that guarantees that patches will be applied within specific timeframes. But like everything else, it comes with costs, and in this case those costs can go beyond just the dollars that have to exchange hands to contract with the service. One example: an organization can lose visibility and control over the patches. The IT team might not know precisely which patches have been applied and which still need to be applied. Now, I think most people would say that if your organization has the capability, taking care of patches themselves is probably the best idea. So small to mid-size organizations that don't necessarily have the resources to keep up with patching are the first wave of organizations taking advantage of this. And we're seeing more and more companies from different areas add the capability of patching. For example, there are companies like On Analytics that have added co-managed patching as a service to their security solutions. So security companies are adding patching; discovery companies, those that go out and do inventories to tell you what versions of everything you're running, are beginning to add patch management and patching as a service to their portfolios.
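That verifiable SLA is itself something you can check with a few lines of code. A hedged sketch follows: the severity tiers, day counts, and CVE names below are invented for illustration, since a real contract defines its own windows.

```python
from datetime import date, timedelta

# Hypothetical SLA windows by severity; real contracts define their own.
SLA_DAYS = {"critical": 7, "high": 14, "medium": 30}

def sla_breaches(patches):
    """Given (name, severity, released, applied) records, return
    the names of patches applied outside their SLA window.
    An applied date of None means the patch is still outstanding,
    so it is judged against today's date."""
    today = date.today()
    breaches = []
    for name, severity, released, applied in patches:
        deadline = released + timedelta(days=SLA_DAYS[severity])
        if (applied or today) > deadline:
            breaches.append(name)
    return breaches

patches = [
    ("CVE-A", "critical", date(2022, 4, 6), date(2022, 4, 8)),   # within 7 days
    ("CVE-B", "high",     date(2022, 4, 6), date(2022, 5, 20)),  # applied late
    ("CVE-C", "medium",   date(2022, 4, 6), None),               # never applied
]
print(sla_breaches(patches))  # ['CVE-B', 'CVE-C']
```

Feeding this kind of check from the provider's patch reports is one way to keep the visibility that outsourcing otherwise takes away.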
But still, what you are doing here ultimately, and you have to recognize this, is paying a third party to take responsibility for a critical part of your infrastructure and your processes. Now, as these groups take over more and more accounts, they are relying more and more on automation. And this is the same kind of automation, in many cases, that your company could apply if it chose. Now, from where I sit, patch management software, the applications that automate patch management, are absolutely critical, because they can take care of going through and patching not only all of your servers but the various endpoints that need to be kept up to date. And they can do it without requiring that you devote multiple individuals to the task. We've seen in the past that there are critical pieces of the infrastructure that require extraordinary access to things like administrator credentials. And if those pieces of the infrastructure are compromised, if those services are compromised, well, then your administrator credentials are compromised. So is it worth it? Where do we come down? Brian, I want to turn to you first and ask you, what do you think about this? Is it a risk worth taking for every company? Or should companies really carefully look at what their capabilities are before they consider a service like this? You know, this is one of those questions that I keep asking
Brian Chee (00:23:44):
Over and over again. You know, how much trust am I going to put in the outsourcing? You know, whether it's someone sweeping the floors and taking out the trash, where we're giving them key cards so they can get in after hours, or credentials, administrator credentials, so they can patch the machines. Now, I know there's some granularity. Active Directory definitely has it; I'll show my age, Novell NetWare certainly had it. But things like Unix, some of it just doesn't have enough granularity. So no matter what, patching as a service is going to require a level of trust in what you're outsourcing. Now, is it better to just leave it unpatched? No, that's asking for trouble. But I gotta foresee that patching as a service is going to come with a lot of liability insurance.
You know, as a certain type of consultant, I carried a million dollars, with an extra million-dollar umbrella, in liability insurance in case I messed up. What are the patching as a service companies going to do? How are they going to make you whole if your administrator credentials have been stolen because of their negligence? Those are questions you definitely have to ask yourself. If you are a small company and you just can't keep up, you know, having unpatched systems needs to be weighed against, you know, outsourcing your authority, outsourcing the credentials. There's a lot of interesting things that need to happen. It is a really, really new industry and there's a lot of questions, just like service level agreements for things like clouds have changed radically over just the last couple of years. I think patching as a service is going to have to, you know, grow up a bit. But it sounds like a really, really good idea, especially if you just can't keep up.
Curt Franklin (00:26:14):
Well, Lou, it's good to see your face here, because you were gonna be next. I was going to ask you, you know, you deal with the creation of the software, and so from where you sit, I mean, how important is it? Is this something that companies should consider moderately important, or is keeping up to date on patches something that should be elevated beyond the merely important to the urgent and critical level for most companies?
Louis Maresca (00:26:47):
See, now I thought I was gonna get away scot-free here without having to answer that question. You know, patching is a challenge, and I think it should be at the top of the list for every organization. I mean, we just talked about it, even in the case of VMware, where people's VMs were still being exploited because of the fact that they didn't take the time to patch, you know, and there was already a fix for it. You know, this happens with many organizations. I work with a ton of organizations that are architecting services with cloud native services and applications. And they don't take the time to, you know, do even a manual deploy of an update to their services. You know, if you're using things like Kubernetes or Docker containers, you sometimes have to go and patch the host VMs that are running these things.
And, you know, sometimes that can affect your services, your applications, and your software. And I can tell you that that's a challenge for an organization to have to deal with. You know, why are there some side effects based off of these updates? And, you know, do I have to take my service down? Am I patching in the right order? Are, you know, services running or not running? It just becomes a very big challenge and a very big time sink, and, you know, it costs a lot. And like we were saying in this article and the discussion, it sometimes takes a lot of resources and a lot of knowledge to do this right, efficiently and reliably. And I think that it should be top of mind for any organization, because it could mean that their software and services are vulnerable, and so is their data and their customers'. And so taking it seriously could mean the difference between a business, an organization, an application being up and one being down, and one being no longer trusted. So I think it's definitely something they need to put as priority zero for most things that they do.
Curt Franklin (00:28:35):
Well, I think it's important to note that it is something that most companies are aware of, to the point that having a cloud service provider keep up with the patching and updates is one of the reasons many companies give for moving to the cloud in the first place. It's complicated, it's critical, and most companies would rather not have to do it. But if they do, a service might be the way to go if the conditions are met. Well, that's it on this story. We appreciate all of the attention people are giving to this critical topic, and attention is something that I think Lou is ready to give to one of our sponsors here.
Louis Maresca (00:29:25):
Thank you, Curtis. That's right. Next up we have our guest, but before we get to our guest, we do have to thank another great sponsor of This Week in Enterprise Tech, and that's NordLayer. Now, we talk a lot about SASE, you know, secure access service edge, on the show, and you need SaaS security, you need threat prevention, you need secure remote access, plus you wanna adopt zero trust. Well, NordLayer safeguards your company's network and data. There has been a huge surge of ransomware attacks, and with employees choosing to work remotely, business networks have become more vulnerable than ever. Now, NordLayer secures and protects remote workforces as well as business data, and it can help you ensure security compliance. Now, with NordLayer, it's easy to start: it takes less than 10 minutes to onboard your entire business on a secure network. You can easily add new members, create teams and private gateways, and even do things like IP allow listing, site-to-site connections, and network segmentation.
And in fact, you can even set up secure network access. Now, if you're looking for a way to get started, go right now to nordlayer.com/twit and get one month free, and see how good NordLayer actually is. Now, that's not all. That's right, because NordLayer is also easy to combine, as it's completely hardware free, and it's compatible with all major operating systems, allowing you to implement security features across all of your teams. There's features like two-factor authentication, single sign-on, biometrics, ThreatBlock, and smart remote access. Now, NordLayer is easy to scale, as you can choose a plan unique to your business requirements and your growth rate. You'll have everything you essentially need in one place, where you can check the server usage and you can even view the activity log. Now, one NordLayer user said, "We were looking for an easy way to securely connect our remote workforce to our infrastructure.
This is it. Awesomely quick, friendly, and efficient support got us up and running in no time." Another one actually says, "Simple to install and operate, no funny business, and so fast that our teams don't even notice they're using it." Most modern businesses are already adopting network solutions like SASE, zero trust, and hybrid work security. NordLayer has all of that and much more. Don't leave your business vulnerable; try NordLayer today and join more than 6,000 fully protected organizations. Also, don't forget, October is Cybersecurity Awareness Month, so now is the time to safeguard your company with NordLayer. If you wanna secure your business network, go to nordlayer.com/twit to get your first month free when purchasing an annual subscription. That's nordlayer.com/twit. And we thank NordLayer for their support of This Week in Enterprise Tech. Well, folks, it's now time for the guest, where they actually get to bring in some knowledge and drop it on the TWiET riot. Today we have, from OpenTelemetry and Splunk, Morgan McLean. Welcome to the show, Morgan. Really excited to have you here. Hi, glad to be here. Now, before we jump into the world of observability and instrumenting your code: our audience has a really large spectrum of experience out there, and they love to hear people's origin stories. Can you take us on a journey through tech and what brought you to OpenTelemetry and Splunk?
Morgan McLean (00:32:41):
Certainly, yeah. I mean, as a child I was always very into computers, but in university I had notions of going into mechanical engineering and things like that. But I was in the co-op program, and it was during the global financial crisis, and it seemed like the only companies that were hiring were software companies. So I did a variety of internships at Microsoft and BioWare and a few other firms, and ended up working full time at Microsoft on high-scale web services for a number of years. Specifically, it was the Xbox marketplace, the Windows Store, the back end for the retail store network that I think might not be around anymore. But I always ran into challenges debugging it. Whenever anything went wrong, we'd be on call, we'd have to go fix it, and our tools were very basic and insufficient for solving a lot of these problems.
At some point, Google came around and chatted with me and gave me an opportunity to go head up development, or head up product management, of what they call distributed tracing and APM tools, which was incredibly exciting. And it was an opportunity to go build the next generation of DevOps tools that I'd always wanted to use. So I took advantage of that, and at Google I was building those and growing our customer base. And this is really how we got into OpenTelemetry: we discovered that the biggest challenge for a lot of these tools is the ability to extract data out. So you can build the world's fanciest, coolest analytics system that gives you tons of great insights, but the biggest issue you have is that people use tens of thousands, if not hundreds of thousands, of different types of infrastructure and software packages and operating systems and libraries that they want to instrument.
And so we started a project called OpenCensus that later became OpenTelemetry, which solves this not just for Google and not just for Splunk, but for every single backend processing solution in existence. OpenTelemetry provides a way that those combinations of millions of different pieces of software, the things that you want production performance signals from, can send that data to any telemetry analytics system, whether that is Splunk Observability Cloud or Splunk Enterprise or Lightstep or Microsoft's solutions or what have you. It can send that data in a format that is understood, and it'll send all the data you need. So it solves a major problem, and that's basically how my whole career's journey has taken me to where I am today.
Louis Maresca (00:35:02):
That's fantastic. Yeah, you're definitely right. I definitely deal with this on a daily basis, working with other organizations and internally: instrumenting code, collecting the right telemetry, having it flow through all the different tiers, and making sure that you make use of it and are able to develop monitors and automation workflows off of it. It becomes a very difficult thing, and I guarantee it's a very rich environment; it has its own complexities. Now, obviously there's client and service telemetry, there's metrics and analytics, there are special things in the world of cloud native, and there's tons to think about there. In fact, I was recently working with an organization around client design and services, and we had a conversation around Adobe Analytics, Google Analytics, Snowplow Analytics, Twilio Segment. Of course there's the concept of the Prometheus scrapers to kind of build into these packages <laugh>. Just lots of things to think about here. Can you maybe take us through the landscape and what organizations should be thinking about?
Morgan McLean (00:35:59):
Yeah, and it's a bit of a history lesson there too, right? Not my history in this case, but the history of monitoring and alerting, and really observability, which is the term people are using today. When we wind the clock back many, many years, you had logs, really, that everyone was using, and even if you went back to old web services providers in the mid-to-late 1990s, they would be using this, right? I'm not stating this as my own experience, I would've been too young, but certainly if you went back in time to, like, Hotmail when they were a startup, they would've had some kind of logging that they were paying attention to, that they would use to debug issues and form other insights over time.
Monitoring also started to get added to that. Monitoring is kind of a confusing term because it's both the concept of, you know, setting triggers and alerts and things like that, but classically it also refers to using metrics. So you would start to capture metrics from your infrastructure: your CPU consumption, your memory consumption, things like that. Those became sort of the two core pillars that a lot of organizations would rely on. They would start to add metrics from their core applications as well, so the request rate and latency and throughput of the various endpoints that people are accessing, and they would trigger alerts based off of that. Starting a few years ago, and this is really when OpenCensus and OpenTelemetry come into the fold, distributed tracing started to become more popular, and that allows you to basically see how a single request passes through a chain of microservices.
So, I'm trying to think of a good example. If you have an e-commerce system and your customers are having difficulty checking out the products that they're purchasing from you, when they click checkout, that presumably goes from your front end, the actual website they're interacting with, to your backend services, and hops around a bunch of different services there. What a distributed trace will show you, for a given interaction, is the exact chain of service hops that took place. So if the issue was five or six layers deep in your nested services, you can very quickly find that and then remediate it. Distributed tracing got added over time, and now there are newer solutions like distributed profiling, which allows you to see how a single function call, effectively a line of code, would impact the performance of your application, and various other solutions that are being added to this.
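To make that checkout example concrete, here's a tiny pure-Python sketch (the names are illustrative, not the actual OpenTelemetry API) of how spans that share a trace ID and record their parent can be stitched back into the chain of service hops, and how you'd walk down to the hop where the time went:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of a distributed trace: every span shares a trace_id,
# and each hop records its parent so the full call chain can be rebuilt.
@dataclass
class Span:
    trace_id: str
    span_id: str
    parent_id: Optional[str]   # None for the root (the "checkout" click)
    service: str
    duration_ms: float

def slowest_path(spans):
    """Walk from the root down, always following the slowest child,
    a crude way to see where a bad checkout request spent its time."""
    by_parent = {}
    for s in spans:
        by_parent.setdefault(s.parent_id, []).append(s)
    path, node = [], by_parent[None][0]          # start at the root span
    while node:
        path.append(node.service)
        children = by_parent.get(node.span_id, [])
        node = max(children, key=lambda s: s.duration_ms) if children else None
    return path

trace = [
    Span("t1", "a", None, "frontend", 900.0),
    Span("t1", "b", "a", "cart", 120.0),
    Span("t1", "c", "a", "payments", 700.0),
    Span("t1", "d", "c", "fraud-check", 650.0),  # the layer buried in the nest
]
print(slowest_path(trace))   # → ['frontend', 'payments', 'fraud-check']
```

A real tracing backend does exactly this reconstruction, just across millions of spans, which is why the culprit several layers deep shows up instantly instead of after a whiteboard session.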
So those are the different concepts. And just to relate this back to OpenTelemetry: OpenTelemetry captures most of these types of data today, and it's adding more over time. OpenTelemetry allows you to send these to different backends for analysis. So you mentioned Prometheus. Prometheus is, like OpenTelemetry, a Linux Foundation / Cloud Native Computing Foundation project that provides a backend for analyzing those metrics, right? You can use Prometheus as this giant time-series database to go store all of the metrics that you capture, and you can build alerts off of them, and you can go inspect them and build really nice dashboards using Prometheus and also tools like Grafana on top of it. There are similar solutions for logs, and classically there are similar solutions for distributed traces and profiles and other types. And OpenTelemetry allows you to send these types of data to the destinations that you want.
Sometimes the destinations are siloed; the ones I mentioned tend to focus on a single type of data, right? There are others that capture all the different types of data. For example, I work on Splunk Observability Cloud. It uses all of those different data types to give you very strong observability: the ability to see all of your services and infrastructure, how they interact, and how to fix them. It solves that problem. There are other solutions as well; Datadog is one that's very popular, and there are many, many others. But that's fundamentally the power it's giving you. And I think your initial question was, should people be paying attention to this? The answer is yes, absolutely. If you are running web services at any important level of scale, you need these solutions, not only to solve problems when you have an outage, but also just to speed up your development, right?
Like I mentioned, I worked at Microsoft. One of the biggest challenges that I faced there was that we were building a brand new set of e-commerce systems, and there was a very small number of people who could tell you all of the services that we built or were building and how they interacted. To have any meeting of significance where you wanted to change something or build some huge enhancement to that system, you would need to have one of those people in the room who could draw all the different services on the whiteboard. You couldn't really do any sort of architectural design without those people, because you wouldn't know how certain things interacted; you wouldn't know how to extend a system. An observability solution like the ones I mentioned will show you how all of your infrastructure and services interact, right? It'll draw that whiteboard drawing for you instantly. And indeed it'll tell you the state of everything right now, if it's broken, where, you know, how to solve specific problems, and much, much more than that.
Louis Maresca (00:40:39):
So let me jump in here, because, obviously, they should think about it, right? They should really keep in mind that they need to do this. Now, I named a bunch of solutions out there, and you named a bunch of solutions. Some of them, like for instance OpenTelemetry, you said find a way to collect data, do it in a strongly typed way with a bunch of data types, and let you send it to any backend. Now, there are solutions out there, like for instance, I just recently talked to Snowplow, that do something similar, but then they also have a backend. And so they kind of promote the fact that it's very tightly integrated; they can do a lot more things with it because of that. Now, what is the advantage of having this kind of separation of concerns?
Morgan McLean (00:41:16):
Yeah. So I think the power for a true observability system comes from having all of those different types of data. You're right, there have been various point solutions, even in the open-source space. I mentioned tracing. If you look at tracing, there are really two major open-source backends that are popular: there's Zipkin, which is a bit older, and then Jaeger, which came out a few years ago. Those are both very, very popular, and they historically came with their own instrumentation that would capture spans for distributed traces and send those back. In some languages the instrumentation would be quite good, and in others it might be quite limited. But pretend for the sake of argument that it was great and did everything you wanted, right? That still only gives you traces. And so when you want this full solution that actually tells you what the heck is going on with all your backend services and the clients connected to them, that speeds up your development and allows you to debug problems very quickly,
you need all of these different signals. Historically, there hasn't been a single solution that provides all of them, and that's where OpenTelemetry really steps in. The other big thing to note about OpenTelemetry is that, after Kubernetes, it's the next biggest project in the Cloud Native Computing Foundation. It's huge. It has integrations for thousands or tens of thousands of things; I don't know the exact count, but basically anything you want, it has an integration for. So it can provide integrations in places where some of these point solutions classically wouldn't be able to, because they would need some hilariously large team of engineers to provide them, which they didn't have.
Louis Maresca (00:42:43):
Now, I have read a little bit about this. There are obviously two standards here: there's OpenTelemetry and there's OpenMetrics. What is the difference between these two specs?
Morgan McLean (00:42:51):
It's a good question. OpenMetrics is mostly, I think, Prometheus people working on it, and so it adheres very closely to the Prometheus metrics model that has always existed. I believe the first draft of OpenMetrics came out last year. It's mostly a description, effectively in English, or, you know, human language: this is a metric; a metric can exist in these various formats; you can do these things to metrics; if you're performing an aggregation, here's the mathematically correct way to perform it. I don't believe, and my knowledge here might be out of date, but I don't believe OpenMetrics has any implementations. I think Prometheus is the implementation of OpenMetrics. OpenTelemetry has its own specification, its own definition of metrics, how they interoperate, how they work, the things you can do.
It also has a full implementation, right? There are language libraries for every single language you would use, there are language-specific agents for every language you would want, and there's a collector agent that captures all of this. It has all the components you need. And it implements these both with its own model, which it can export to any destination you want, and it can also receive metrics from Prometheus, using OpenMetrics, from its own OpenTelemetry components or various other metric-sending components like Telegraf or others. So you can use OpenTelemetry natively, and I think most people do, to pick up metrics from your applications and your systems and everything else and send those wherever you want. But you can also use it effectively as a router, where it's pulling in metrics directly from Prometheus. And indeed it can process these transparently: you could pull them into OpenTelemetry and then export them again as Prometheus or OpenMetrics data, and it would do the processing you want. Nothing would go wrong; it just works.
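That router pattern looks roughly like this in an OpenTelemetry Collector configuration. This is a sketch: the endpoints are made up, and the component names and fields should be checked against the current collector documentation.

```yaml
receivers:
  prometheus:                 # scrape existing Prometheus-format targets
    config:
      scrape_configs:
        - job_name: "app"
          static_configs:
            - targets: ["localhost:9090"]

exporters:
  prometheusremotewrite:      # send onward as Prometheus data...
    endpoint: "http://prometheus.example.internal/api/v1/write"
  otlp:                       # ...and/or as OTLP to any backend you like
    endpoint: "backend.example.internal:4317"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite, otlp]
```

The pipeline fans the same metric stream out to both exporters, which is the "send to many places at the same time" idea Morgan describes later in the conversation.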
Louis Maresca (00:44:40):
I could geek out about telemetry and metrics for a long time, and my team can tell you that, but I do wanna bring my co-hosts back in, because they're chomping at the bit here. Let's start with Curtis first.
Curt Franklin (00:44:49):
Thanks, Lou. Morgan, one of the things that we hear a lot about in the industry, you know, people will use the phrase "a single pane of glass" <laugh> as sort of shorthand for what they think the ultimate situation might be, where you have one screen that shows you absolutely everything a person could want to know about the most complex infrastructures in the known universe. Is that the goal of what you're working toward here, or is that the wrong way to look at where this could be going?
Morgan McLean (00:45:25):
It's certainly a way this is going. To clarify, it's probably important to distinguish: I'm one of the co-founders of OpenTelemetry, and I work on it. I also work on observability tools at Splunk, on Observability Cloud specifically, where I'm responsible for a number of things. But I think in both cases, generally, yes. And I'm sort of caveating this somewhat, but I think generally observability tools, and thus OpenTelemetry, which provides the data to a great number of observability tools already, are gonna be focused on this single-pane-of-glass scenario, where you have an observability tool that shows you effectively everything you need in your entire estate. So it's important that OpenTelemetry capture each of those different types of signals, with as much fidelity, from as many sources as possible, and that it can correlate them.
One of the big points of using OpenTelemetry is that it can pull in these different types of data and, like I mentioned, actually perform those correlations. So you can do things like, for a given span, see what service it was from, and then correlate that with various logs that were captured from the same service, and with host metrics or application metrics that are also related to it. That allows you to build that single-pane-of-glass experience; you can't really do that effectively without those correlations. That being said, there will probably always be a place for certain point solutions, right? If you look at the observability market today, there's certainly a lot of consolidation. You see that at Splunk, and you see that at a lot of other firms as well, where they're attempting to build a single pane of glass because it's very useful and very powerful.
But there are still tools that are designed to delve into databases with great depth, right? That give you very deep insights that maybe only a database tool would. And there are other specific examples of that. I think someone mentioned Google Analytics; there are still a lot of web analytics tools that are very specialized. Over time, the number of specialized tools may shrink to some extent because the big single-pane-of-glass tools simply absorb their capabilities. There may also still be pockets where you have niche tools with very specific capabilities that may pull in their data from OpenTelemetry, because it has the data they need, or in some cases might pull it in from specialized sources, if those are so unique, so niche, that maybe OpenTelemetry doesn't support them.
Curt Franklin (00:47:42):
Well, speaking of niche sources, one of the collections of niche sources is operational technology, or, for those not in the business, IoT on an industrial scale. We've been talking a lot about telemetry from the IT perspective, but as you look at OpenTelemetry, is this something that would have application in that OT world, to be one of the things that makes it easier to bring OT observability and telemetry and IT together into one collection of data that can be analyzed?
Morgan McLean (00:48:29):
Over time, I think, definitely. OpenTelemetry historically has been very focused on backend services, the things you're running on cloud infrastructure or in your own data centers. That focus has broadened over time to also include client applications, so capturing critical performance telemetry from Android, iOS, web applications, things like that. That's still under development; it's not quite GA yet, but it is a thing the community is very engaged on. IoT devices, I think, are also an interesting extension of this. It isn't a thing the community has explicitly focused on yet, but I think that's a very clear next step at some point in the future, to capture telemetry from those sources, because they're being used in production ways. They're probably not being used within the data center to actually host live web services, but they are still critical sources of telemetry that you want processed somewhere, and there's also a lack of standards in that space. So I think OpenTelemetry could bring a bunch of value there, and it would also be just a natural extension of OpenTelemetry's vision once the immediate next steps are already filled out.
Curt Franklin (00:49:31):
Well, I know that Brian wants to come in, but I do have one more question, and it gets to another one of those niche applications. It seems to me that this would be an ideal solution for things like scientific instrumentation, where one of the critical points is gathering data from all kinds of different systems. Is that something that you have thought about? I mean, whether or not you're getting active queries, when you think about this, is that one of the places that you let your mind go?
Morgan McLean (00:50:04):
You know, it's funny: the IoT question you asked, I've heard before, and it's something I've mulled over. It hasn't been a top priority for the project because we've been so focused on web services and infrastructure and then front-end applications, but I suspect for the IoT part, if you'd polled anyone in the community at any point in the last year, they'd be like, yeah, you know, eventually we'll get to that. No one has mentioned scientific measurement equipment, and I appreciate it does actually fall, in some cases, into the IoT category. I don't know why not, right? It's another place that would benefit from these standards. I don't know if the people who use it, or the kinds of people who use it, have contributed; they haven't classically been part of the OpenTelemetry community. That being said, that's actually really interesting, and I could definitely see us trying to do that at some point. I don't really have a better answer because I haven't thought about it before. That's very, very clever; it's a very good question.
Brian Chee (00:50:58):
Actually, that's a good place for me to jump in, because I am in the scientific community, and time-series data has been a huge challenge. Yep. And let me give you a clue: the vast majority of time-series data, do you know how it's stored? Flat files, comma-delimited.
Brian Chee (00:51:17):
Right? Yeah. I've been yelling like crazy in the community, saying, hey, you know, guys, guys, guys, we need to do better. And I actually got a grant from Splunk to go and do this, but they just didn't want to eat the learning curve. Anyway, that's enough.
Morgan McLean (00:51:42):
I could definitely see value there. I remember, I have two degrees, one in applied physics, and I definitely remember various lab sessions where we were collecting, yeah, you know, gigabytes of data and just spitting it out in CSV, and then using Python code or even Microsoft Excel to cut through it. Definitely.
Brian Chee (00:51:59):
I used to teach...

Brian Chee (00:52:01):
Yeah, I used to teach Ocean 318 and 418, where we taught students to use Arduinos and Raspberry Pis to build their own instrumentation. Anyway, the real question I'm asking is, gee, this is all sounding really good. I love the direction of standards, and the capability of going to a single pane of glass, or maybe it should be a customized pane of glass, seems good. But let's ask the question for the people that want to get started with the OpenTelemetry you're describing: do I have to run Splunk? How do I...
Morgan McLean (00:52:40):
No, no. To be clear, I could have been more explicit initially: OpenTelemetry is not a Splunk project, though we're one of the biggest contributors. Yeah. There are many: there's Splunk, Lightstep, Microsoft, Google, Amazon, New Relic...

Brian Chee (00:52:54):
Where do people go? How do you get started?
Morgan McLean (00:52:58):
Yeah, so there's a website: opentelemetry.io is the core project website, and it has guides on how to get started. The code is available on GitHub. It's a totally open, pure open-source project, not even open core, where there's a single vendor behind it. There are various components, and depending on your requirements you might need various components to get started. Typically you use the OpenTelemetry Collector; this is effectively an agent that runs on a host, and it captures host metrics and logs and things like that. Then, if you're running your own applications that you want instrumented, you would use one of the language components. For Java, for example, there's a Java SDK and a Java automatic instrumentation agent, and you would use one of those (the instrumentation agent tends to be the much easier one to get started with) to capture spans for distributed traces and perform context propagation inside of your Java application. So you would just deploy it alongside your JVM, reference it once, and it just works. It will send data to the collector, and the collector sends data to wherever you want to send any of that OpenTelemetry information. And it's probably important to note: you can send data to many places at the same time, so you can use multiple analytics solutions on the exact same streams of data.
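Under the hood, that context propagation typically rides on the W3C Trace Context "traceparent" HTTP header, which is what lets a trace ID follow a request across service hops. A minimal pure-Python sketch of the header format (hand-rolled for illustration, not the OpenTelemetry SDK):

```python
import re
import secrets

# W3C "traceparent" header: version-traceid-spanid-flags, all lowercase hex.
def make_traceparent(trace_id=None, span_id=None, sampled=True):
    trace_id = trace_id or secrets.token_hex(16)   # 16 bytes -> 32 hex chars
    span_id = span_id or secrets.token_hex(8)      # 8 bytes -> 16 hex chars
    return f"00-{trace_id}-{span_id}-{'01' if sampled else '00'}"

def parse_traceparent(header):
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        raise ValueError("malformed traceparent")
    return {"trace_id": m.group(1), "span_id": m.group(2),
            "sampled": m.group(3) == "01"}

# A downstream hop keeps the trace_id but mints a new span_id, so every
# hop in the chain can later be stitched back into one trace.
hop1 = make_traceparent()
ctx = parse_traceparent(hop1)
hop2 = make_traceparent(trace_id=ctx["trace_id"])
assert parse_traceparent(hop2)["trace_id"] == ctx["trace_id"]
```

Agents like the Java auto-instrumentation one inject and read this header on every inbound and outbound call automatically, which is why "reference it once and it just works" holds.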
Brian Chee (00:54:06):
Cool. Yeah, because one of the frustrations in the world of electrical IoT, you know, smart meters and things like that, is needing to go and serve multiple masters. Yes. That has always been a big deal. And the other problem is that a lot of systems, like automatic transfer switches, smart meters, and things like that, are actually really primitive. Yes.
Morgan McLean (00:54:34):
I went down the deep end this year on home automation with Home Assistant and various other open-source home tools, and I'm actually trying to get my actual meter to work with it right now.
Brian Chee (00:54:43):
Yeah, I've been doing a lot of work with, like, the GE smart meters for commercial uses; even those are really primitive. So here's the question: is the community working on trying to have some sort of collector that can front-end the less intelligent devices, like PLCs, transfer switches, even the smart...
Morgan McLean (00:55:08):
There's nothing there specifically for, say, scenarios that involve PLCs or some of these IoT devices yet. That being said, architecturally, the OpenTelemetry Collector, the agent that I mentioned earlier, you can deploy it as a web service, and in hundreds of thousands of organizations around the world they have it deployed as a web service to basically proxy all their outgoing telemetry from their backend web services and infrastructure and the things they're running. Architecturally, that is the same way you would probably use it in an IoT scenario, where you have it running on a host or as a container somewhere in your network, and it is going and querying or receiving data from your various IoT devices. Again, the integrations, the receivers, don't exist in the OpenTelemetry Collector yet for most of these IoT sources.
So like I mentioned, I'm trying to get my power meter to work. It has some adapter that sends out data using, like, MQTT, for example. As far as I know, there's no MQTT metric receiver in OpenTelemetry yet. There might be, perhaps I'm just mistaken, but I don't think there is one. But as soon as someone goes and builds that and adds it to the collector, you're basically good to go, right? Architecturally, the rest will just work, and OpenTelemetry can send those metrics to Prometheus or flat files or even CSVs if you want them, or really anywhere else you want that data to be sent and processed. So most of the components, all of the really hard part, is done for that scenario. It just needs a few receivers to be written, and it'll just work at that point.
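For what it's worth, the receiver Morgan describes mostly amounts to mapping topics and payloads onto metric points. A hypothetical pure-Python sketch (no such receiver exists in the collector as of this conversation, and the topic layout and field names here are invented for illustration):

```python
import json

# Hypothetical sketch of what an MQTT -> metrics bridge might do. The
# 'home/meter/power' topic and the payload schema are made up; a real
# receiver would also handle timestamps, QoS, and batching.
def mqtt_to_metric(topic, payload_bytes):
    """Map an MQTT topic plus JSON payload to a flat metric point."""
    data = json.loads(payload_bytes)
    return {
        "name": topic.replace("/", "."),      # home/meter/power -> home.meter.power
        "value": float(data["value"]),
        "unit": data.get("unit", ""),
        "attributes": {"source": "mqtt"},
    }

point = mqtt_to_metric("home/meter/power", b'{"value": 412.5, "unit": "W"}')
print(point["name"], point["value"])   # home.meter.power 412.5
```

Once points look like this, the collector's existing pipeline can batch them and ship them to Prometheus, a file exporter, or any other backend, which is why only the receiver end is missing.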
Brian Chee (00:56:38):
Well, you know, speaking of the hard part <laugh>, let's talk about compliance and security. That's been a really big challenge in the IoT world, and if you're doing business in Europe, you know, GDPR is a big deal. Yep. Have you guys had conversations about this? Is that being designed in?
Morgan McLean (00:56:58):
So security is a first-class thing that we're very concerned about and take action on in the community. To be clear, you mentioned GDPR: the types of telemetry being captured by OpenTelemetry today, unless someone explicitly extended it to do this, are not the types of things that you'd generally be concerned about with GDPR. This is backend performance telemetry from hosts and from your infrastructure and your services, so things like request latency, throughput, stuff like that. Though I suppose someone could, like an end user who's using it could go extend the metadata, add things like that. But generally this has not been a concern. That being said, it's a good point, and as OpenTelemetry expands to capture performance and production telemetry from client applications, that will become a bigger concern.
And so it is a thing that we will need to take note of. To go in a slightly different direction with your question about security, though: I do think OpenTelemetry effectively provides significantly more security to companies. There was an example last year of a firm that supplied IT management agents to customers, including, I think, Microsoft and the US government, that was compromised. The agents that they supplied in turn to their customers were compromised, and thus Microsoft and the US government and various other large organizations got hacked. And that was just because they had a private third-party agent that they couldn't validate the code of, that they had to deploy to manage their IT infrastructure, and that agent was compromised. If you are in a high-security environment and you care about this sort of thing, OpenTelemetry is absolutely fantastic for you. Sure, you can take the pre-compiled builds of the different components that we have available on Maven and NuGet and various other language-specific places.
And you can just go grab those and put them into your application, and that works; that's very easy. But if you're more concerned, OpenTelemetry lets you control your software supply chain when it comes to the agents you're using to capture telemetry, because you can just build the whole thing from source, right? It's all on GitHub. Even the exporters that send data to a specific, maybe private, endpoint, that's all open source. You can validate that code and run your own builds. So if you adopt OpenTelemetry instead of proprietary third-party agents, you are, in my mind, dramatically improving the security of your organization and your backend services, because you're not letting the compromise of a vendor compromise all of your production workloads.
Louis Maresca (00:59:28):
Unfortunately, time flies when you're having fun. Morgan, thank you so much for being here. Great, great topic; we probably could talk about it forever. I love geeking out on data, but we wanted to give you a chance to tell the folks at home where they can learn more about Splunk and OpenTelemetry, and how they can get started, obviously.
Morgan McLean (00:59:42):
Yeah, so for OpenTelemetry, it's opentelemetry.io and github.com/open-telemetry. Those are the two best places to go to get started with the project. We love more people using OpenTelemetry, and we love having more people enter the community and start contributing. It's a very, very healthy, friendly community; we just adore having more people there. If you wanna learn more about Splunk, Splunk really has two products related to this. There are the core Splunk products, Splunk Enterprise and Splunk Enterprise Cloud, and there's Splunk Observability Cloud, which uses the sort of new types of data that OpenTelemetry sends natively. It's our first-class way of getting data in; we don't have our own agents for Observability Cloud anymore, we just use OTel. And for that you can go to splunk.com, I think slash observability, and we also have various landing pages for OpenTelemetry with Splunk.
Louis Maresca (01:00:29):
Fantastic. Thanks again. Well, folks, you've done it again, you sat through another hour of the best thing enterprise podcast in the universe, so definitely tune your podcast catcher to TWiET. I wanna thank everyone who makes this show possible, especially my co-hosts. Starting with the very own Mr. Curtis Franklin. Curtis, what's going on for you in the coming weeks? Where can people find you and all your work?
Curt Franklin (01:00:51):
Well, I am working on a bunch of different research pieces, trying to get a lot of things done before the end of the year, both in the training space, I'm working on cybersecurity awareness training, that's employee training, and beginning to work on professional training, and starting to do some research on things like AI and security. I would love to know what people think about AI and security and whether their company is using it. Drop me a note. You can send a direct message to me on Twitter; I'm @kg4gwa. You can follow me on Facebook or on LinkedIn, or even shoot me a note or a comment on one of my articles on Dark Reading, at darkreading.com/omdia. Love to hear from you; I always appreciate hearing from members of the TWiT riot.
Louis Maresca (01:01:51):
Thank you, Curtis. Well, we also have to thank our very own Mr. Brian Chee. Cheebert, what's going on for you in the coming weeks? Where can people find you, and where can people find you at Maker Faire?
Brian Chee (01:02:01):
Well, I'm gonna be wandering around at Maker Faire, but you know, I was actually very quickly typing away with my thumbs, sending the opentelemetry.io link to a bunch of researchers I used to work with at the University of Hawaii. I used to teach Ocean 318 and Ocean 418, and OpenTelemetry sounds like a righteous way of organizing and data mining the data that we bring in from the field, especially since, I'll betcha, if I ask, that OpenTelemetry doesn't require a ton of bandwidth, which is perfect for field ecology. Anyway, I love to hear from you folks on what you're doing, and you can hear what I'm doing. I do a lot of stuff on Twitter; I'm @advnetlab, Advanced Net Lab. You're also more than welcome to toss me some email; I'm cheebert, spelled C-H-E-E-B-E-R-T, at twit.tv, or you're also welcome to send email to email@example.com, and that'll hit all the hosts. We'd love to hear from you, and you folks have some great ideas, guys and girls, and I want to hear 'em. Talk to me. Take care.
Louis Maresca (01:03:26):
Thanks, Cheebert. Well, folks, we also have to thank you as well: you, the person who drops in each and every week to get your enterprise goodness. And we want to make it easy for you to watch and listen and catch up on your enterprise and IT news, so go to our show page right now, twit.tv/twiet. There you'll find all the amazing back episodes that we've done, of course the show notes, the co-host information, the guest information, and all the links to the stories that we do during the show. But more importantly, next to those videos you'll get those helpful subscribe and download links. That's right, there you go: get the audio version or video version of your choice, and listen on any one of your podcast services or podcast applications, 'cause we're on all of them. So definitely subscribe and support the show. Plus, you may have also heard we have Club TWiT.
That's right, it's a members-only ad-free podcast service with a bonus TWiT+ feed that you can't get anywhere else, and it's only $7 a month. There are a lot of great things about Club TWiT. One of them is the exclusive access to the members-only Discord server. You can chat with hosts and producers, and you can have side discussions in different channels; there are lots of great channels on there, plus special events. They have a lot of great events you can listen to, and of course lots of great discussion and fun. So definitely join Club TWiT and be part of the movement. Go to twit.tv/clubtwit. You've got to also remember that Club TWiT offers corporate group plans as well. It's a great way to give your team access to all of our ad-free tech podcasts. The plans start with five members at a discounted rate of $6 each per month.
And you can add as many seats as you like. It's a great way for your IT department, your developers, your tech teams, and your sales teams to really stay up to date with access to all of our podcasts. And just like regular memberships, you can join the TWiT Discord server and get that TWiT+ bonus feed as well. So definitely join Club TWiT at twit.tv/clubtwit. Now, after you subscribe, impress your friends, your family members, and your coworkers with the gift of TWiT, 'cause we do talk a lot about fun tech topics on this show, and I guarantee they will find it fun and interesting as well. So definitely share it with them and have them subscribe. Now, if you've already subscribed and you're available at 1:30 p.m. Pacific time on Fridays, we do this show live. That's right, live at firstname.lastname@example.org. There are all the amazing streams that we have from different stream providers, and you can come see how the pizza is made: all the behind the scenes, the banter, the fun that we have here on TWiET.
Definitely watch the show live if you have a chance. And of course, if you're gonna watch the show live, you have to jump into the infamous IRC channel at email@example.com. I am in there right now; I'm in there every week, and I try to be in there sometimes over the weekends, 'cause there are a lot of great characters in there and we get a lot of great conversations. So definitely join that IRC channel as well. Definitely hit me up. I'm at twitter.com/LouMM, where I post all my enterprise tidbits. I like direct messages from people like you; you give me a lot of great show ideas. And also hit me up on LinkedIn as well, because I get a lot of direct messages there on show ideas and options. And of course, you can always hit the hosts at firstname.lastname@example.org as well.
Now, if you want to check out what I do during my normal workweek at Microsoft, definitely check out developers.microsoft.com/office. There we post all the latest and greatest ways for you to customize your Office experience, and definitely check out Office Scripts, because my Office Scripts services team is one of the big teams behind Office Scripts, which let you run macros in the Power Automate environment, even in an unattended way. Definitely check that out, because it's a new, more powerful way for you to customize your experience. I want to thank everyone who makes this show possible, especially Leo and Lisa. They continue to support This Week in Enterprise Tech each and every week, and we couldn't do this show without them. So thank you for all the support over the years. Of course, thanks to all the staff and the engineers at TWiT, and of course I want to thank Mr.
Brian Chee one more time. That's right, he is our co-host, but he's also our tireless producer as well. He does all the show bookings and the planning for the show. We really couldn't do the show without him, so thank you, Cheebert, for all your support. And of course, before we sign out, I want to thank our editor, because they make us look good after the fact and cut out all of my mistakes. So thank you, sir. And of course, thank you to our TD here today, Mr. Anthony. Thank you for all your support and for making this show seamless. As usual, you're a master there. Thanks again, Anthony. And until next time, I'm Louis Maresca, just reminding you: if you want to know what's going on in the enterprise, just keep quiet.
Ant Pruitt (01:07:49):
Hey, what's going on, everybody? I am Ant Pruitt, and I am the host of Hands-On Photography here on TWiT.tv. I know you've got yourself a fancy smartphone or a fancy camera there, but your pictures are still lacking. Can't quite figure out what the heck shutter speed means? Watch my show, I've got you covered. Want to know more about ISO and the exposure triangle in general? Yeah, I've got you covered. Or if you've got all of that down and you want to get into lighting, you know, making things look better by changing the lights around you, I've got you covered on that too. So check us out each and every Thursday here on the network. Go to twit.tv/hop and subscribe today.