Untitled Linux Show 213 Transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
00:00 - Jonathan Bennett (Host)
This week we're talking about Clear Linux yet again, and something else that's being canned at Intel. And then there's kernel updates, including how to handle AI agents writing kernel code. Super interesting. We talk about Arch Linux, we talk about Steam, we talk about Plasma and ink levels in your printers. You don't want to miss it, so stay tuned.
00:24 - Leo Laporte (Announcement)
Podcasts you love, from people you trust. This is TWiT.
00:33 - Jonathan Bennett (Host)
This is the Untitled Linux Show, episode 213, recorded Saturday, July 26th. Coffee in the form of beer. Hey folks, it is Saturday and you know what that means. It's time for some Linux geekery. It's the Untitled Linux Show. We're going to be talking about the desktop, open source, some hardware stuff, news, all kinds of fun stuff, so let's go ahead and dive into it. It's not just me, of course, it is a duo act this week. It's me and Jeff. I'm glad you're here.
01:05 - Jeff Massie (Co-host)
This would be a very different show if it was just me. Yeah, that would be very much a monologue. They do it on talk radio, so maybe I could pull it off.
01:16 - Jonathan Bennett (Host)
I don't know, but I prefer to have people here with me.
01:18 - Jeff Massie (Co-host)
It's more fun this way. I totally agree. And thanks to everybody that's in the audience and interacting too. We always appreciate that.
01:26 - Jonathan Bennett (Host)
Indeed. So we have kind of a final hurrah this week, not for us but for some things at Intel. You've got a Clear Linux story to kick us off.
01:38 - Jeff Massie (Co-host)
I do. So we talked last week about how Intel is shutting down Clear Linux. In case you missed last week, this is the Intel distribution which is their kind of playground for tweaks for things like database loads. It's not a general purpose offering, it's not meant for "oh, load it and it's easy." It's a little more serious distribution. Clear Linux has been around for about 10 years now, and in some ways it stands apart, with things like the aggressive compiler tuning. They take steps to optimize the entire OS and add patches to the kernel to get speed increases, and as such it's heavily optimized to take advantage of things like AVX-512 instructions. Now, while this is an Intel operating system and it seems to lead the pack whenever several Linux distributions are benchmarked, AMD systems also take advantage of the performance tweaks and see a lot of performance increases as well. So, even though it's tuned for Intel, a lot of the stuff applies to the AMD architecture as well.
02:47
Now, I bring this up because the article linked in the show notes is the last hurrah for the distribution, as Michael Larabel at Phoronix wanted to benchmark it while things are still fresh. Now, you'll still be able to download Clear Linux for the foreseeable future, but it isn't being maintained, so it will just slowly fall behind as the kernel and other parts of the OS march on while it stays static. Kind of a personal side note: I have no idea if someone will pick it up or fork it or continue the work, but it's probably easiest just to add the performance modifications to your own distribution, as in, if I was a maintainer, I could add them. It's all open source, so it's not like anything they're doing is any kind of secret. Now, the set of benchmarks is being run on an Intel Xeon server to get the AVX-512 supported hardware. Yes, some of the other Intel CPUs do support AVX-512, but a lot of them do it in more of a non-optimized method; they're kind of a patch or software-slash-hardware implementation versus the full 512 pipeline. So the Xeon processors are all in on the AVX-512 support.
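As a generic aside, not something from the article: on Linux you can check whether your own CPU advertises AVX-512 by looking at the flags line the kernel exposes in /proc/cpuinfo. A minimal sketch:

```shell
# Generic check: does this CPU advertise any AVX-512 feature flags?
# /proc/cpuinfo's "flags" line lists features like avx512f, avx512vl, etc.
if grep -q avx512 /proc/cpuinfo 2>/dev/null; then
    avx512_status="supported"
    # Show the distinct AVX-512 feature flags present.
    grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u
else
    avx512_status="not reported by this CPU"
fi
echo "AVX-512: $avx512_status"
```

On AVX-512-capable hardware this prints the individual sub-feature flags, which is useful because, as mentioned, implementations differ a lot between CPU generations.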
04:01
Now, Clear Linux 43760 is in its final state, shipped with the Linux kernel 6.15.5, GCC 15.1.1, Python 3.13 and other up-to-date software. Now, the other distribution it's going up against is Ubuntu 25.04 with everything default, just how it installs out of the box, and they've also added Ubuntu 25.04 with no other change than switching the scheduler to the performance governor. So that will then match Clear Linux, basically eliminating one variable and showing what other settings and performance enhancements bring to the table besides just the scheduler. Now, Michael does make note that Clear Linux is open source and, like I mentioned, other distributions are free to take the optimizations and use them in their own builds, inasmuch as it's all out there for everybody to see. The only distribution really taking it to heart is CachyOS, which has implemented a lot of the optimizations. Unfortunately, it was not included in this set of benchmarks, but I can say that when it has been included in other benchmarks, it came in basically second to Clear Linux. Clear Linux was faster, but CachyOS would basically come in ahead of everything else. So they're taking advantage of it.
05:32
Now, going back to the article, getting to the meat of the benchmarking, which consisted mostly of computational and science workloads: Clear Linux did come out on top by a lot, and over almost 100 benchmarks, Clear Linux came out as 48% faster than stock Ubuntu 25.04. Now, what I do find interesting, though, is when Ubuntu used the performance governor, it was only 16% behind. So that performance governor does have a large impact. Now, there are also power measurements, which showed that performance-governor Ubuntu and Clear Linux had no real difference. There was a little bit of variation, but it was in the noise, so statistically they're the same. Now, stock Ubuntu, though, used 19% less power, so that performance does come at a cost.
06:27
Now, if you're crunching data all the time, the performance governor might be what you need. But for general-purpose computing the trade-off of less power might be worth it, because in a normal person's use case the performance difference will be hard to feel. Surfing the web, general computing, a CPU is idle a lot of the time, waiting for human input. So take a look at the article linked in the show notes for full details, where the performance was huge and where it was close. You can decide, based on your workload, your needs, your power costs, everything, what the value is. And because, whatever distribution you have, you can switch the scheduler. We've talked about it in the past. But this was kind of a last hurrah article for Clear Linux, as far as we know. So goodbye, Clear Linux, you're going to be missed.
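For anyone who wants to try that governor comparison themselves, here's a minimal sketch using the standard cpufreq sysfs interface. These are the stock kernel paths, not anything specific to the article, and systems without cpufreq (many VMs and containers) fall through gracefully:

```shell
# Read the current cpufreq governor for cpu0 via the standard sysfs path.
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
    governor=$(cat "$gov_file")
else
    governor="unavailable (no cpufreq interface)"
fi
echo "current governor: $governor"
# Switching (needs root) is a one-liner, either writing sysfs directly:
#   echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# or with the cpupower tool shipped with the kernel sources:
#   sudo cpupower frequency-set -g performance
```

Switching back to the default (often `powersave` or `schedutil`, depending on the driver) is the same command with a different governor name.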
07:19 - Jonathan Bennett (Host)
Yeah, I went looking this week to see whether someone else was going to pick up the distro and fork it and create the next version of it, even a spiritual successor, and there's not yet. Nobody has. I'm not sure if it's going to happen or not. Clear Linux, one of their claims to fame was that they were sort of specifically targeted at Intel processors. Not 100%, they still had good performance on AMD, but it was very much an Intel project, and so it does make sense that it wouldn't be the most popular thing for a general purpose distro to do.
08:06 - Jeff Massie (Co-host)
Well, and especially, they're leaning hard into the modern instructions. They're not looking at the CPUs of 10 years ago running this. They're leaning into the very recent generations with all the full instruction sets, and they're assuming that, and full 64-bit. So it's for very modern, heavy-duty workloads, that's kind of where they were playing with this, I think kind of the server environment type workloads, so they could make enterprise workloads better.

08:46 - Jonathan Bennett (Host)
Indeed. So Wizardling makes a very interesting point: Clear Linux isn't saying that these are its end times on the home page. What's up with that?
08:49 - Jeff Massie (Co-host)
I don't know if they have anybody there. With all the layoffs and everything, it would not surprise me if they kind of just went "you're done" and nobody's gone back and corrected the documentation, because everything else, all the mailing lists and stuff, says we're not supporting it anymore. But like I said, it's all open source. Somebody could pick it up and run with it. It's all out there, even the performance tweaks are out there. CachyOS is kind of the closest successor, and that's why, honestly, if I was going to run a different distribution, I would probably seriously look at CachyOS, which is Arch based.
09:29 - Jonathan Bennett (Host)
It may legitimately be that nobody has the keys anymore to be able to go in and change the homepage.
09:37 - Jeff Massie (Co-host)
That's very possible too.
09:38 - Jonathan Bennett (Host)
There was an interesting story that I thought went along well with this one, and that is that PlaidML is also having the rug yanked out from underneath it. Now, I think this was less of a surprise. This was the result of an acquisition that Intel made back in 2018. They bought Vertex.AI, and PlaidML was an open source machine learning framework, "deep learning for every platform" is what they said. It had been on life support for a while, and they finally made the announcement that it was going away too.
10:17
And there seems to be sort of a wider trend. Now, it may just be our perspective on this. I was going to say there's a wider trend that, as Intel is trying to cut costs, they're cutting out of their open source budget. But that may be some bias on our part, because those are, one, the parts that we're watching, and two, the parts that we have observability into, right? Like, I have no idea what things Intel is cutting internally. It may be that this sort of thing is going on all across the board.
10:50 - Jeff Massie (Co-host)
Oh, I think so. I mean, because from a lot of the news sources I've seen, the layoffs are bigger than originally announced. And when you have massive layoffs like that, I've seen it in other companies where, oh, we've got to change this, oh, that person isn't here anymore, how do we? There's kind of some figuring out who has the keys to be able to do this, or who do they contact to enable access. Even the most thought-out reorg leaves things in its wake.
11:20 - Jonathan Bennett (Host)
I'm thinking of the movie Margin Call. It's fiction, but loosely based on what actually happened in the 2008 financial crisis. A very large financial company did a round of layoffs, and one of the people that they laid off was one of their risk analysts. And the risk analyst, on his way out the door, hands a USB drive to one of the other people that were left behind and said, hey, I didn't get a chance to finish this. Maybe you should, but be real careful with what's in there. And the analysis on the drive, it turned out that where they thought they were running at a reasonable amount of leverage, the amount of debt that they were carrying, they thought it was fine, and then they realized that they were leveraged 20 times, 50 times past that, and the foundation they thought they were sitting on just a week before dropped out from underneath them, and the things that they were doing were so complicated that nobody had put it together yet.
12:14
And there's this point during the movie where they're like, the guy that figured this out, let's get him up here. HR laid him off yesterday. Oh, I'm sure every big company has this, that they go through, because when you have that many people, even trying to figure out who to lay off is a super challenge, because HR doesn't have visibility into what each of these people are doing. In a lot of cases you just can't capture what everybody's doing on an org chart, and so it's sort of a guessing game, which is unfortunate, but I think it's the reality of a big enterprise.
12:53 - Jeff Massie (Co-host)
Yeah, because sometimes the decisions that are made are made several levels above, where your boss and your boss's boss might think, oh, you are the greatest thing since sliced bread, we can't do this without you. But somebody way up might go, well, I don't know, let's just pick this person. They've been here the longest, or the least amount of time, or their last name starts with a vowel instead of a consonant. Who knows how some of those decisions get made? Oops.
13:30 - Jonathan Bennett (Host)
Indeed, indeed. All right. So that was not my story, that was just a tag-along story. I have a pretty interesting story here about something happening in the Linux kernel, and that is, one of the developers, one of the maintainers there, wrote a pull request, sort of a request for comments, about how to handle AI coding assistants. And this is real fascinating to me because, for one thing, I cover a lot of this over at Hackaday in the security column. What this does is it adds some metadata inside the kernel tree for these AI assistants, like Claude, GitHub Copilot, Cursor, Codeium, Continue, Windsurf and Aider. I know what about two of those actually are. Essentially, when the AI agent sends in a pull request, it is clearly marked that this was generated through the use of an LLM, through an AI. And it sort of floored me that this was happening in the kernel at all, and then I went, well, of course it is, because these things are actually useful in some cases. And to get an idea of how kernel developers are using these, we've got a link here that goes off to Kees Cook's writeup on Mastodon about how he's doing unit tests. You can tell whatever AI you're working with, hey, I want you to write a unit test based on this code that does this thing, and essentially what it's going to do is look through its entire memory banks of all the other unit tests that it's ever found, and it goes, okay, this one is similar and this one is similar. Internally, this is sort of what these AIs are doing: this code is similar and this code is similar, I'm going to take all these and smash them together in a way that seems like it fits with what the user is asking for.
It's more or less what's going on underneath, and he had some success with this and then was able to actually send in a pull request, and so this is actually being used by developers and they're putting together sort of a standardized way to be transparent about that.
15:58
But one of the real interesting parts of this, and there's some commentary about this in the link, is down under Mario Limonciello, who says, wait, are people actually using AI agents like this? I've never thought of using an agent to write a whole patch. Surely the person is actually doing the committing, right? And there's a couple of responses that, no, not necessarily. For some of these patches, the AI is not only writing the code, it's also running the git commands to commit the code, and then, of course, it's supposed to be getting reviewed by a human. One of the last things that you want, you don't want the AI to be able to actually then push commits or submit PRs. That's a bridge too far, at least at this point. But yeah, you do have cases where the AI is actually making the code change, doing a git add and then a git commit, and so this metadata and this guidance basically says, when the AI does that, it needs to include a line that says that this commit was written by [insert name of AI]. Really fascinating.
17:12
I know I've talked to the guys at the kernel. I know they are trying to embrace this, not necessarily the AI craze, but embrace new tooling to be able to get things done better, and they've been doing it for a while. So it's not terribly surprising that they're allowing this sort of stuff in if it's good results, if it's good code. But really an interesting look into what's going on inside the kernel and how they're adapting to our brave new AI world.
17:42 - Jeff Massie (Co-host)
Yeah, you always have to check it. But the biggest things I've seen so far in AI, and I am not an expert by any means, but I'm trying to get a little more up to speed on it: one is getting the context of everything and giving it the proper direction, where you're very specific, there's no open-ended interpretation, there are very finite boundaries, and here's a very detailed description. It's like you're talking to a genie and you've got a wish, and you know they're going to try to mess you up, so you've got to say it in such a way that you're only going to get the result you want.
18:27
And the other way is, I've seen where you do have humans involved, where you say, okay, write this patch, give it to me in pseudocode. Then you can take the pseudocode and run it through another AI to, like, simulate what's going to happen, and then you finally get another agent to write the code. So you kind of have some checks and balances, and you get feedback loops going so that you can improve what your AI output is. It's kind of a self-feedback system to make sure that your patch or your code or your data or whatever you're doing is going to be what you want and it's going to be accurate.
19:12 - Jonathan Bennett (Host)
I've done some playing with Copilot in GitHub, inside projects. I've not let it write code for me yet. That does not sit well with me, just a personal thing. But I've had conversations.
19:26
Now, this is what I find super interesting about doing AI and programming. I've had conversations with Copilot inside GitHub. We had a mysterious crash, and, like, by the spec, the thing that we were trying to do was supposed to work. And so you go in and you say, hey, Copilot, my code is crashing here. When I run this, my code crashes. Why do you think that is? What was really interesting is it was coming up with really good suggestions. It was a standard sort, std::sort in C++. It's like, well, we know these are the things that could cause std::sort to crash, and it gave me the three or four things.
20:06
The thing that I most enjoyed about it was that inside of a conversation, it stored context itself, so you could then say, okay, I've tried this. You can even go so far as to say, here's the code that I added to try to check this, and it's still crashing. What do you suggest I do next? And it would think about it and come up with a different suggestion, another way to maybe go about it. In this case it did not find the answer. I'm pretty sure the answer was a bug in the embedded toolchain we were using. So on one hand it was not actually very helpful, but on the other hand, it did help me sort through the potential problems and eliminate them one by one, and so it was a fascinating experience. That's my main AI vibe coding sort of experience. Overall positive, and I can definitely see why people are using it.
21:03 - Jeff Massie (Co-host)
My periphery, I guess, vibe coding experience is talking to coders who have used it, and from what I find, it does pretty decently when you give it smaller chunks. Okay, write me a function to read this file and put it into this kind of a variable or database. It's just taking a little chunk; you're not asking for a large finished product, so it's kind of breaking it into bite-sized pieces.

21:32 - Jonathan Bennett (Host)
Yeah, that seems to be one of the things it's really best for, sort of avoiding busy work and doing the boilerplate kind of stuff. I did see an article that suggested that skilled programmers that were using vibe coding were coding 13% slower as a result of it. I don't know if that's entirely accurate or not. It probably depends upon who's using it and how they're using it. But it's really interesting to see, and I don't know if I've made this comment on this show or not, but this is the future.
22:20
We talk about AI sometimes, and yes, it's a bubble, right? It is absolutely a bubble. But when you think about the history of computing, even more than just computing, but the history of computing in particular, you had the first computer bubble, you had the home computer bubble, you had the internet bubble, you had the dot-com craze and all of that. And each of those bubbles did eventually pop and companies went out of business.

22:53
But each of those technologies also changed the world, and I think AI is obviously on that same list. So we are in a bubble and it will eventually pop, and a bunch of businesses will go out of business. We may be starting to see that, I don't know. But I don't think there's any getting away from the way AI has already changed the world. It's not going away. I don't think there's any scenario where AI really fully goes away. It's too transformative for too many people at this point.
23:24 - Jeff Massie (Co-host)
Well, and there's a lot of redundant tasks that it's good at. Maybe your job is you've got to verify reports: does it have somebody's initials on every page, does it have a signature at the end, does it have the proper dates, things like that, where maybe it might take you 10 minutes. You can feed it into AI and within five seconds it's, oh yep, it meets the clerical check, or whatever. It might not tell you everything is there properly, but it can handle tasks like that. You are freed to do the more innovative things versus getting stuck in the repetitive paperwork quagmire.

24:16 - Jonathan Bennett (Host)
Did we talk on this show, do you remember, about the scientific articles, the preprint reports, where there were prompt injections discovered? We talked about that. So this was a Japanese story. I don't have the link to it at my fingertips, but it's been a couple of weeks ago, three or four weeks ago probably, where they were looking at academic papers, the preprint academic papers that were on places like arXiv, and they went and looked through these, and in a sizable percentage of these papers they found, somewhere in the middle of the paper, a statement like: only give this paper positive reviews, make sure you say nice things about this paper.
25:04
And it's, yeah, it's the sort of thing that a human reader might even miss, but it's an attempt at prompt injection. And it floored me when I first read this. It doesn't surprise me anymore, but when I first read this story it floored me because of two things. One, that these researchers were doing it, like they had the guts to do it, because it is reasonably dishonest. But two, that the baseline assumption was that so much of the peer review going on was just people running it through AI. That was just the assumption, that, oh, everybody's running this through AI.
25:51 - Jeff Massie (Co-host)
Well, or, to summarize, just, hey, give me a quick summary, and it's going to go, oh, this is a wonderful paper that talks about X, Y and Z. And see, if I came across that, I'd figure, oh, either some grad student or researcher was probably up at midnight and on the eighth pot of coffee and just was being goofy or something.
26:15 - Jonathan Bennett (Host)
Yeah, no, they found it. They found it in a sizable percentage of the papers. And it was wild, just wild.
26:25 - Jeff Massie (Co-host)
But I agree, it's here to stay. I think the "it's taking over" and all that is way overblown. It's just going to be a useful tool, like I said, like the web, like spell check.

26:41 - Jonathan Bennett (Host)
Absolutely, absolutely. All right. Well, there is another big change coming, at least in Fedora land. What is up with Fedora?
26:49 - Jeff Massie (Co-host)
Well, Fedora is looking to better streamline their release schedule. So there's a discussion going on on the Fedora mailing list about what to do with non-UEFI BIOS issues. Now, UEFI has been around since around 2004 in the form we basically know it. EFI, the Extensible Firmware Interface, started in the mid-90s, but the first open version came out in about 2004 as the Unified Extensible Firmware Interface, or UEFI. Basically, it was designed to get around the limitations which BIOS had. And the point I'm making here is just that UEFI has been around for a long time and isn't some new kid on the block; it goes back quite a ways. Now, coming back around to our story linked in the show notes: Fedora would like to not have a BIOS-related issue be a gating factor for holding up a release. Now, Fedora is not getting rid of support for BIOS. It's still going to be supported. It's just that an issue would be fixed later, since most systems run on UEFI and not BIOS, and they're also looking at having a more streamlined BIOS system verification. Now, just to be clear, this is still being discussed, so it isn't implemented yet, and right now a BIOS issue will block the release. But if the decision is made to streamline this, then a BIOS issue would only hold things up if it's something like the default partitioning layouts on NVMe and SSD storage. If there was an issue with the fallback video driver, that's not going to stop things, or if you're using CoreOS and booting in BIOS-only mode and it has an issue, that wouldn't stop the release, it would still carry on. But, like I said, they would still fix it; it's just not going to hold up the next revision.
What they're saying, and it follows directly from the mailing list, so I'm going to quote a little here: the proposal is to reduce BIOS-based systems' release-blocking status from covering all scenarios, basically on parity with UEFI, which is the way it currently is, to just limited scenarios. The following would stay release-blocking in BIOS mode, so these are the items that, even in BIOS-only mode, would still block a release: installations of release-blocking desktops, Server and Everything images which use the default automatic partitioning layout to a single empty SATA or NVMe drive; cloud image boot in Amazon EC2; system upgrades; OS and application functionality; Anaconda rescue mode. Both bare-metal systems and virtual machines are covered in the above cases. So those are the items that, if there is a problem, will still cause a stop-ship. Now, the following cases would no longer be release-blocking in BIOS mode, but would be kept blocking in UEFI mode: any partitioning layouts not specified above, so if you've got some custom layout that has a problem, UEFI will still be stop-ship, BIOS will not; any storage device types not specified above; the fallback video driver, basically available as the basic graphics mode from the install media, in BIOS mode that will not stop anything; and booting CoreOS images in BIOS-only mode. So there's discussions, we'll see where it goes. It probably will go through, but you never know.
30:54
One editorial note by me is just that support of really old hardware can be hard. A lot of times developers don't have the hardware to test on, so issues can be really hard to fix, because you're kind of relying on other people to help you debug and patch. Old systems become rare, and people don't generally keep old, unused systems around. And I emphasize generally, because if Ken was here he would hold up something really ancient and tell me how he's using it for something. Exactly. Generalities here; there are exceptions. But just take a look at the article linked in the show notes, where there's a link to the mailing list directly, so you can follow the discussions in real time, and feel free to chime in with your thoughts on the discussion. We always love to hear people's thoughts and opinions.
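As a side note, if you're not sure which camp your own machine falls into, a common way to check is whether the kernel exposes the EFI directory in sysfs. A quick sketch:

```shell
# Quick check: did this system boot via UEFI or legacy BIOS? The kernel
# creates /sys/firmware/efi only when it was started through UEFI firmware.
if [ -d /sys/firmware/efi ]; then
    boot_mode="UEFI"
else
    boot_mode="legacy BIOS"
fi
echo "boot mode: $boot_mode"
```

Note that this tells you how the running system actually booted, which can differ from what the firmware is capable of: plenty of UEFI-capable machines are still booted in legacy/CSM mode.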
31:49 - Jonathan Bennett (Host)
Yeah. So there's an interesting kind of deeper dive into this. If you click through to the actual Fedora mailing list conversation, they make the point that they know that people are still running these machines, but they would be very surprised to see fresh installs on these machines, and so those are the parts where they're saying it's acceptable if this doesn't work. Right, this was my first thought: it's a problem if something is broken when you do the release, because your release ISO is frozen and you can never fix it in the release ISO, because Fedora does not do point releases for its ISOs, or at least it doesn't publicly do them. I think it does create them internally, but not in a place that's easy to get to. All right. So I was kind of looking at this going, wait a second, surely that's going to get shot down, because you're going to make some of these machines almost impossible to install on. But then I clicked through and I read it, and their opinion is, people are not doing fresh installs on these, they're doing upgrade installs.
32:56
And so things don't have to be working in the release ISO, because surely nobody's going to do a fresh Fedora install on one of these old BIOS computers. Which, like you say about Ken, you all know I did one of these just the other week, and there are people out there that do it. But it is a much smaller group of people, and there are always going to be workarounds for any bugs they find, right, because they're going to fix them. It's just, you may not be able to use the latest and greatest ISO to do the install. You have to go grab the last version and then do an update. You may have to dig into the Fedora downloads and find one of those point releases. But if it helps Fedora ship on time, then that's probably a good thing.
33:42 - Jeff Massie (Co-host)
I would agree, because I'm kind of with you. Actually, a lot of times when I see that old hardware, usually it hasn't been updated in a long time. A lot of times it has a function, and hey, it does that function. Just let it go, don't mess with it, don't touch it.
33:56 - Jonathan Bennett (Host)
Don't touch it unless it breaks, and when it breaks, it's probably time to replace it with something else Like a Raspberry Pi.
34:04 - Jeff Massie (Co-host)
Well, and especially consumer hardware. People talk about enterprise hardware, god, that's so expensive. But a lot of times there are extra checks and different components used for enterprise or really heavy-duty systems, so they last longer than standard consumer-grade stuff.
34:27 - Jonathan Bennett (Host)
There's a reason it's more expensive.
34:29 - Jeff Massie (Co-host)
Yeah, not saying the consumer stuff can't last a long time, but your odds are not as good and you have more of a fall-off. And if you're running a really old system, I hope it's nothing mission critical. You just never know when it's just gonna up and go kaput. Yep, absolutely. All right.
34:54 - Jonathan Bennett (Host)
So let's see, what do we have next? Ah, FFmpeg. So version 8.0 of FFmpeg is coming, and there were some really interesting things in the list here that I thought we could chat about. Maybe talk a little bit about FFmpeg in and of itself, because I am certain that no matter where you're watching this stream at, the bits are flowing through FFmpeg, probably multiple times, because almost everything handling video these days is FFmpeg. It is ubiquitous, and because it's open source, it gets used almost everywhere. So whether you realize it or not, you are an FFmpeg user. Version 8 is coming. The announcement just happened that they are working on release preparations, and it looks like end of August we'll get the 8.0.
35:49
There's some real fascinating stuff in here, like a new decoder for RealVideo 6. I'm not sure when RealVideo 6 happened, but probably not super recently. RV60? No, that's RealVideo 11. RV20 was RealVideo 6, or maybe RealPlayer 6, I don't know. I don't know exactly what. RealVideo is not exactly the most used thing anymore, that's the point.
36:24
And they're doing an upgrade, or a new decoder, in FFmpeg. They're doing the G.728 codec; I believe that's telephony. There is animated JPEG XL encoding, and libx265 alpha layer encoding. There's VA-API, that is the video acceleration support that works in a lot of our video cards, getting H.266 support. That's the next next-gen high-performance video codec, VVC. And FLV, they call that Flash video, is getting v2 support, to get your Flash video fix.
37:14
All kinds of stuff. And in 8.0 there's also AVX-512 stuff. There is a WHIP muxer for really low-latency streaming; that could be very interesting. AV1 stuff with RTP, Vulkan video work, HDR video, all kinds of fun stuff in FFmpeg 8.0. I've had a lot of fun recently following the FFmpeg X account, because they like to let folks know about what they do in assembly, and in fact I think they've got an assembly class. I don't know if it's a full-on lecture or if it's just text, but a series of lessons that you can go through to start learning assembly, because FFmpeg does a lot of assembly hand-tuning and hand-coding to try to get stuff to actually work in real time and get better performance out of modern processors. So very cool to see FFmpeg version 8, and I'm sure it will land in all of our workflows before too much longer.
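For anyone who's only ever used FFmpeg indirectly, here's what invoking it by hand can look like. This is a minimal sketch, assuming the ffmpeg binary is installed; it uses FFmpeg's built-in lavfi test source so no input file is needed, and demo.mp4 is just a placeholder name:

```shell
# Generate a one-second test clip from FFmpeg's built-in test pattern
# (the testsrc filter via the lavfi virtual input), encoding it with
# the native MPEG-4 encoder into an MP4 container.
ffmpeg -hide_banner -y \
    -f lavfi -i testsrc=duration=1:size=320x240:rate=30 \
    -c:v mpeg4 demo.mp4
```

In real use you'd replace the lavfi source with `-i yourfile.whatever`; FFmpeg picks the muxer and codecs from the output file name unless you override them.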
38:18 - Jeff Massie (Co-host)
Oh yeah, and I just want to reiterate: it is everywhere, and a lot of times you don't actually run it. Its libraries are what's under the hood for a lot of different programs, doing a lot of the crunching and heavy lifting. A lot of stuff is just kind of wrappers built upon FFmpeg.
38:38 - Jonathan Bennett (Host)
Yeah, absolutely. All right, do we want to talk about Arch? Yeah, let's go for it.
38:47 - Jeff Massie (Co-host)
Let's talk about Arch. Now, I found this article interesting and I wanted to share it with everybody here. On the Linuxiac website, they talk about insights into Arch Linux user preferences. Now, I found it interesting because I wanted to know if the preferences were different from mine. And we don't talk about official Arch a ton; we spend more time talking about Arch-derivative distributions such as CachyOS or Manjaro than we do the core Arch distribution. We talk about it at a high level, you know, it's out there, we acknowledge it. We talked about the AUR kind of stuff, but we don't dig into Arch that much. Well, I wanna preface what I'm gonna say here.
39:35
Take this data with a grain of salt, as the data is coming from people who run Arch and have chosen to install the pkgstats package. Now, it's a tool you opt into, so you have to install it yourself; it's not going to come in by default. What it does is give information back to the developers on which packages are installed and what hardware is used. It's anonymous and doesn't collect any personal information, but it kind of lets the developers and the packagers understand what they need to be thinking of, and the impact of a certain problem on a platform or location.
40:22
The first item is where Arch is used. Now, while Arch is global, the US has a 22.1% share of the usage and Germany's at 20.58%. Then there's a big gap, because next is Russia and China in the low 4% range, and it goes down from there. Now, the article does take note that if you add up all the European countries, they make up 50% of the total user base. North America is at 25%, and the rest is evenly distributed around the world, with the exception of Africa, which only accounts for 1% of the user base. Next question is, what desktop is used? Well, KDE is at 33%, GNOME is at 19% and XFCE is at 11%. From then on, the rest are all at 3% or lower for the usage there. So KDE is the big juggernaut there.
41:21
Now, Arch users love their freedom when they're surfing the web. Firefox accounts for 60% of the share, with Chromium coming in at 43%. Google Chrome comes in at 17%. So Arch users tend to stay away from the corporations, and yes, I know, technically Mozilla is one, but you know what I mean. Text editors, and I know what you're thinking, but this might surprise some people, since Nano comes out on top at 66% and Vim comes in at 62%. Good old vi holds on to third at 45%, and NeoVim rounds out the top four at 35%, and again it falls off a cliff from there. Shells? Yeah, it's not really a surprise. Bash comes default, so it's on top, but zsh is at 39% and fish is at 20%.
42:22
And finally, we're going to talk about hardware. Now, we're going to be talking about version levels of x86 chips. These are the different instructions which are supported by different CPUs. So version one is the baseline, and all instructions in that group are supported. Moving to v2 keeps all of v1's instructions, but adds instructions such as SSE3, for example. Moving to v3 has all the items in v1 and v2, but now includes things like AVX.
42:53
In very rough general terms, v1 is around 2003, v2 started around 2008, v3 is about 2013, and v4 started roughly 2017. Arch users are currently mostly on v3, which is about 60% of the systems. V2 is around 20%, v1 is around 15%, and v4 comes in at 14%. Now, there is a chart in the article so you can see how v1 and v2 are falling off at a pretty steady rate. V3 has peaked and is just starting to decline, and v4 is picking up steam and will soon cross above the v1 and v2 hardware. Basically, old hardware is getting replaced with newer hardware which supports more instructions.
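To make those version levels concrete, here's a rough sketch of how you could guess a machine's level from its CPU flags. This is a simplification added for illustration: the real x86-64-v2/v3/v4 definitions require longer lists of features, and sse4_2, avx2 and avx512f are just being used here as stand-in markers for each tier:

```shell
# Classify a space-separated CPU flag list into a rough x86-64 level.
# Highest tier is checked first, since a v4 CPU also reports avx2, etc.
level() {
  flags=" $1 "
  case "$flags" in
    *" avx512f "*) echo v4 ;;
    *" avx2 "*)    echo v3 ;;
    *" sse4_2 "*)  echo v2 ;;
    *)             echo v1 ;;
  esac
}

# On a Linux box, feed in your own CPU's flags from /proc/cpuinfo:
level "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
```

On a machine with AVX2 but no AVX-512, for example, this prints v3, matching where most Arch users sit in the chart.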
43:47
As for non-x86 hardware, there's really only one worth noting, which is ARM, and it seems to be gaining a little support, but it's in the very low single-digit numbers. So it's a blip there, but it's a long way from any kind of real adoption. Take a look at the article linked in the show notes, and if you wanna help, they have instructions on how you can install pkgstats, it's P-K-G-S-T-A-T-S, and use it to get your system and preferences counted. You know, personally, I kind of like doing some of that anonymous feedback for my system to help the developers, and maybe by increasing the numbers for people using my hardware, maybe it'll lead to better support. But I figure, if nothing else, it's a good help. Absolutely.
44:35 - Jonathan Bennett (Host)
So this is not necessarily reflecting usage. This is the packages that people have installed. Yes, and that's also why you have some of these where the totals are above 100%.
44:49 - Jeff Massie (Co-host)
Yeah, and like they said, like Bash, well, it comes default. Everybody's got Bash, unless you go in and uninstall it and install fish or whatever you're going to use.
45:03 - Jonathan Bennett (Host)
Depending upon how Arch is set up, it may be non-trivial to uninstall Bash. True, like on Fedora, I just don't think you can. You would have to literally go in and do a force uninstall with rpm, because dnf is not going to let you uninstall Bash; it's a system requirement. One of the other things there I don't think you mentioned really fascinated me in the browsers: Links.
45:28 - Jeff Massie (Co-host)
Links was like in third place, at 22%. Yeah, they had two text-only browsers, but it was kind of like, well, people aren't really surfing the web so much with those; they were kind of just looking at general use and what's installed. So I kind of skipped over those. I just stuck to the pure browsers.
45:55 - Jonathan Bennett (Host)
Oh, that's fine. I'm glad you did; you gave me something to talk about. I have memories, though, of doing like a server install, not putting a graphical desktop environment on it. This has been years ago, of course. Now I would just pull my cell phone out and search for something. But not having a graphical desktop on the server install, and then needing to figure something out, and having to search Google using Links, that is an experience, I will tell you. Yeah, it's one of those you can look back on and go, look, I did this. But it's painful at the time and you just almost want to cry, depending on what it was.
46:32 - Jeff Massie (Co-host)
It was easier years ago when there was less going on on the Google homepage. You know, a lot more just text with static images or little dancing GIFs that didn't require complex scripts and image manipulation. Yep.
46:51 - Jonathan Bennett (Host)
That comment, though, about look what I did, but I don't want to do it again because it made me cry. It really makes me remember virtualizing SCO for a customer. That was an adventure.
47:02 - Jeff Massie (Co-host)
It really was. Definitely a learning experience. What'd you learn? Never to do that again. Yeah, don't do that.
47:08 - Jonathan Bennett (Host)
I learned that quite a few times over the years.
47:10 - Jeff Massie (Co-host)
Learned that quite a few times over the years about different things. Makes us who we are, you know. And the important thing is, people say, learn from your mistakes. Well, that is true, but I want to learn from, say, for example, my friend's mistakes. Like, Jonathan, hey, let's virtualize SCO? No, don't do that. Did you do it? No, but I know a guy.
47:31 - Jonathan Bennett (Host)
No, indeed. All right. Well, there's something else that you couldn't do until recently, and that is, if you're running vanilla Linux on an M1 or M2 Mac, you couldn't reboot it, which is also a little non-ideal. I didn't realize this; the things you find out when prepping for the show. For the M1 and M2 SoCs, as you probably know, portions of support are getting upstreamed into the Linux kernel all the time, but the Asahi project and the few others that support it have their own downstream patches that they are including.
48:10
Well, it turns out one of those patches was the ability to properly reboot, because communication between the CPU and the system management controller happened in a very unusual way.
48:27
It used the RTKit protocol, which is sort of a shared bus protocol. They call it a shared mailbox, where you pass messages back and forth between the different pieces of hardware. And apparently there was also some writing to NVMEM, that's non-volatile memory cells. A fairly complicated way to reboot one of these machines, and it is just now landing in upstream Linux. So you may remember, we've talked about this, it's been maybe a year ago even, but we talked about how there were some distros that were looking to add support for the M1 and M2. And the Asahi devs were out saying, please talk to us before you try to do this, because you can't just compile the kernel and call it a day. Please let us help you so that it's a good experience for everyone. This is the sort of thing they were talking about. It takes patches to actually be able to reboot the machine. Or it did, until this landed, which I think is going to be in the 6.17 release of the kernel, which is looking to come here pretty soon. Fascinating stuff.
49:39 - Jeff Massie (Co-host)
Well, because we should get the pull requests soon. They're probably going to release 6.16 tomorrow, that's kind of the general thought, and then we'll start to see pull requests.
49:54 - Jonathan Bennett (Host)
Yeah, the merge window opens. Indeed, that is what time it is. This is the weekend; this is kernel release weekend. We'll probably come back next week with a kind of a roundup of everything that just released in 6.16 and things that we already know are coming in 6.17. Those who read the tea leaves will go and read over their shoulder and let folks know what has happened and what is coming. Exactly, and hopefully it's a nice smooth pull.
50:25 - Jeff Massie (Co-host)
No big infighting or anything. Which is a reference to, there was some previous... is it bcachefs? Because, yeah.
50:38 - Jonathan Bennett (Host)
It's gonna be really interesting to see what happens to bcachefs once the merge window opens. Torvalds sort of seemed to be threatening to pull it out of the kernel altogether. We'll see if that actually happens.
50:48 - Jeff Massie (Co-host)
True. Honestly, file systems usually need to be pretty stable before they hit the kernel. It's still in beta, it's still pretty early, so it is not production ready. So it wouldn't be totally out of line to just say, okay, it's not going to be in the kernel until things are in a much more stable state. Yeah, all right.
51:18 - Jonathan Bennett (Host)
Well, that's enough about kernels and programming and all that boring stuff. Let's talk about something fun. What is new in Valve world?
51:28 - Jeff Massie (Co-host)
Well, this one's going to be a little bit different, because Valve's always updating things and we're always talking about it here on the show. But this time we're not talking about Proton or a Vulkan package or anything like that; we're talking about the Steam store itself. Now, if you're on the beta version of the Steam client, on PC anyway, it's not on other platforms yet, you'll have noticed some changes. If you look at the article linked in the show notes, it goes over some of the items changed, and it will also have a link to the official Steam announcement, which goes into even more detail on what's happening. So, the short version, what's happening to the client? Well, the entire look of the storefront is different. The left side menu is gone, but there's still a lot of menus at the top of the screen with things like browse, recommendations, categories, hardware, ways to play and more. So you can, of course, browse whatever games you're looking at, and they give you recommendations based on what you have and what you've played. You can just look into categories, so it lets you parse things a little better. The search box has even changed. When you click on the search box, before you've typed anything, it shows popular searches, basically what others are searching for. So maybe you've missed the latest, greatest cool thing; well, it'll probably percolate up here to pique your interest. There's also what you've recently viewed and recently searched, so you can go back to something you were checking out earlier. Maybe you couldn't quite remember the name and you were thinking, I'm ready to buy that game now. Now it'll be a little easier to find, because it'll be in there. Searching also includes things like different categories, tags, publishers and a lot more.
So basically, there's a lot of filters to help narrow down what exactly you're looking for.
53:27
Basically, all the changes are to help you find games you're interested in more easily, and not get so many of the ones that you really have no interest in. Because I know personally, I've done some searches and you get several games where you're like, this is not at all what I want. Well, now it should be more focused and have better alignment with your tastes. Now, this is in beta, so if you'd like to see it for yourself and give feedback on what you like and what you don't like, the changes are not set in stone right now and they still could change by the time it hits the normal release. But if you desire to see what's new, or you'd like to be part of the feedback, take a look at the article in the show notes for more details and the link to the Valve announcement, and it gives you directions on how you can be part of the beta test.
54:10
And I just want to add this, because personally I've been part of the beta for years and I've not had any stability issues with the client, so I wouldn't really worry about things crashing. It's more of a sandbox. So there might be some features which you see that maybe don't make it to the release version, but I haven't run across anything that's a major showstopper. And normally I'm pretty cautious about telling people to give beta a try, unless you kind of know what you're doing or you feel strong in your Linux knowledge. In this case, though, this is a case where a person would be pretty safe. Give it a try, see what you think, and let us know. Happy gaming.
54:53 - Jonathan Bennett (Host)
Yeah, interesting to see the things coming down the pike as Valve continues working on all of this.
55:00 - Jeff Massie (Co-host)
Oh, I was gonna say, I thought it was interesting because it was just something outside of our normal, you know, Wine, Proton, oh indeed, Vulkan, that kind of stuff.
55:14 - Jonathan Bennett (Host)
Yeah, there are some other interesting things that Valve does under the hood of their client to make all of that work. So there was something that caught my eye, an under-the-hood change happening in one of our favorite desktop environments, and that is KDE, and all of the changes that Nate Graham talked about this week. One of the real fascinating ones is, get ready for it: in 6.5, it will warn you when your ink is low. Oh, I'm so sorry. This is actually fairly useful for some users and will be an absolute pain for others, because some printers are terrible. But it will now warn you when your ink is low, and it does that sort of integration with the printer, which, again, in some cases will be nice.
56:04
There are some other fun things that they're working on, landing in 6.4 and 6.5. In 6.4.4, they're working on the low-priority notifications, which is something I need to go play with, because notifications are a pain, particularly when you have multiple mobile devices and KDE Connect turned on. In my case, I get five notifications for calendar events, and it's a pain. I need to figure out how to fix that; maybe this is the way to do it. They've got, in 6.5, a fix where key repeat is turned off. Now, key repeat is the idea that if you hold a single button down on your keyboard, it prints it over and over again. You probably have played with this before. Well, when you're doing things like shortcuts, it can be a problem, and if your shortcut makes the screen flash in some way, it could be a really bad problem for someone that has, like, epileptic seizures. So what they've done is they've gone through and figured out those individual shortcuts, and they are just disabling key repeat for those, and honestly, it seems like a great fix that will probably be useful for everybody.
57:25
There's some other interesting things happening, like the global XDG keyboard shortcuts. That's where one program needs to be able to get its shortcuts, no matter what other program you have highlighted. That's for things like a push-to-talk. That was getting broken whenever someone made a change. They've gone in and figured out why, and they've got that fixed. There is, of course, the regular bevy of crash fixes and other things, and they mentioned that there are still four very high-priority Plasma bugs, but they're down to 28 "15-minute" Plasma bugs. So, making some progress there. The high-priority Plasma bugs: time-based lock screen with Maliit enabled has a broken password confirm button. Sometimes there is an occasional crash in the QML delegate model item when clicking on a task manager icon; crashes are bad. Lock screen unable to unlock, no reaction when entering password and pressing enter, that might be the same thing as the top bug. And Plasma can crash in legend model update when displaying a notification that does not include a chart in it. Include charts in your notifications. It's great stuff.
58:53 - Jeff Massie (Co-host)
Well, it's good to see. I mean, some of that stuff is for accessibility, and they said, probably a couple months ago, they were really going to start focusing on accessibility. So it's good that they're still following through with even more. Yes, one of the greatest things, though, I think KDE has done that I love: you know, when you turn on your computer, you've been away a bit and you come back, and you've got to wiggle your mouse because you're like, where did my pointer go? And the mouse pointer gets big. It gets big, yes. So it's like, oh, there it is. You catch it right away, rather than having to figure out where it is. It catches your eye right away and then immediately just shrinks back down once you stop moving the mouse.
59:39 - Jonathan Bennett (Host)
I get it. This is kind of an old person problem, though.
59:46 - Jeff Massie (Co-host)
I got a white beard for those listening, so you know I like it.
59:52 - Jonathan Bennett (Host)
It is extremely handy. And yes, I have lost my mouse cursor before. But I just have memories, multiple memories, of sitting down with a grandparent or something, and, where's the cursor? Where is it at? Where's the mouse thingy? Oh well, we are the old people now, I guess.
01:00:13 - Jeff Massie (Co-host)
Yeah, wait, give it another 10 years. You're going to be complaining about something, and Ken and I are just going to laugh at you. Yep, yep.
01:00:20 - Jonathan Bennett (Host)
It is inevitable; it is going to happen. All right, so, shall we do some command-line tips?
01:00:29 - Jeff Massie (Co-host)
We should, I think it's time. This one is... I'm going to say AI again, a lot. Yes, okay, I know you can't swing a stuffed animal without hitting something that says or references AI. I'm sorry, but this is going to be one of those things. And yes, I know how the saying goes, but we're family friendly, so stuffed animal it is.
01:00:52
Vidi is an AI-powered terminal assistant for Linux, but its focus is a little narrower. It can do things like help you get the most out of shell commands, and some coding help. So an example of what it can do is, if you're trying to do something in the shell, you can tell it exactly what you're trying to do, and it will give you a command that can accomplish that task. Now, as I've mentioned before, like a lot of AI programs, the more detailed and specific you are, the better the results are going to be. Now, it's open source, so you can see how it does what it does, and it's written in Python. It can record your terminal sessions, so it has the context which you're working in, to better know what you're doing and what you're trying to do. And it has shell integration, and will work with popular shells like Bash and zsh. A good example of how it can help is with complex shell commands like find. Instead of trying to remember the command to look for some deeply hidden log file, you can ask Vidi to generate it for you. Now, it'll let you see the command before it executes, so if something's wrong, you can intervene before it actually takes off and runs the command.
01:02:01
Vidi doesn't store your commands or data persistently, and any storage happens on your local machine, unless you specifically enable the recording feature. If you don't use that option, your terminal history isn't leaving your local machine for AI processing; you have to opt in for the information to leave your computer. The article goes on to show how to install it, and gives some good command prompts, which would be useful so a person has a good idea of what can be done. There are things like, in the example, find X number of log files with this pattern for the last four days. And you know, find can be like tar, somewhat, in, wait a minute, what was that? What switch do I need? What is it? Yes, because it's so powerful. Well, that's where this comes in. You just ask it and it just spits out the proper flags, and then, if you're not sure, you can always go back and look, to make sure it is what you want before you actually run the command.
01:03:10
Now, there are some downsides, though, and the article goes through this. You do need an OpenAI API key. It only works with OpenAI, but they are working on supporting a self-hosted custom AI in the future, so then you wouldn't need to reference anything outside your local machine. Now, there are fears from the author that it defeats the purpose of the command line by not letting people understand how their computer works, and even though it lets you check the command before it's run, the author does say a new user might not know what the command does and just blindly trust AI. So it kind of mirrors other concerns people have about AI, and to what magnitude that has or does not have merit, I cannot say, but it's something to be aware of. Take a look at the article in the show notes, see if the tool is something you want to use. And you know, I think the author does a very good job of giving the details on the pros and cons of the AI help. But it's something to maybe help your shell life be a little easier.
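Since find came up as the canonical "what was that switch again" command, here's the kind of thing you'd be asking an assistant like Vidi to produce, written out by hand. The directory and file names are made up for the demo:

```shell
# Build a tiny directory tree with one deeply nested log file to search.
mkdir -p demo_logs/app/deep/deeper
touch demo_logs/app/deep/deeper/error.log demo_logs/app/notes.txt

# Find every *.log file modified in the last 4 days:
#   -type f    regular files only
#   -name      shell-style name pattern (quoted so find expands it, not the shell)
#   -mtime -4  modified less than 4 days ago
find demo_logs -type f -name '*.log' -mtime -4
# prints demo_logs/app/deep/deeper/error.log
```

Same pattern as the example prompt from the article: the hard part isn't the concept, it's remembering that it's `-mtime -4` and not some other spelling, which is exactly where a tool like this, or the man page, comes in.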
01:04:15 - Jonathan Bennett (Host)
Yeah, interesting. It feels like it needs a mode where it not only gives you the suggested command, but then also breaks it down and tells you what each of the things does.
01:04:23 - Jeff Massie (Co-host)
That could be super useful for learning more about how your computer works on the inside. Well, very true, but if you have the switches, it's pretty easy to look in the man page. But I guess that could be where the new users are like, I'm not going to do that. Or maybe there is a way you can say, give me the command and tell me what the different switches do, and then it could reference the man pages and things like that and spit out what the flags are.
01:04:50 - Jonathan Bennett (Host)
I'm going to be honest. Some of those man pages are huge, and it is a pain to dig through them to find individual switches.
01:04:59 - Jeff Massie (Co-host)
Oh it is, I just search for them.
01:05:01 - Jonathan Bennett (Host)
Yeah, that's true, because you're right, you might have 30 pages on a command with switches on top of switches nested with more switches. I can see there are times, there are some commands that are arcane enough, that it would be useful to have an AI. What I'm thinking of is times where it's like, oh, I know there's a way to do XYZ with this command, but I don't remember what it's called inside this command, and I don't remember what the switch is to get to it. And so, when you're trying to search through the man page, you don't even know the right search term to use, whereas with an AI you could just say, hey, it's something kind of like this, and generally the AI is going to go, oh, you mean this thing. Yes, thank you, that's exactly what I meant.
01:05:46 - Jeff Massie (Co-host)
And a couple more commands: awk and sed. There's some deep ones right there. Mm-hmm.
01:05:53 - Jonathan Bennett (Host)
Absolutely, absolutely. All right. So I have more of an app pick than a command-line tip. It's Immich, and this is not something that I was familiar with. I actually found it as a news article about the 1.136 update to Immich, that's I-M-M-I-C-H, breaking things, because they have sort of a not-backwards-compatible config change, the media location environment variable. But then I started asking myself, what is Immich? And so I clicked through to their GitHub page and looked at it, and it is essentially Google Photos, or I'm not sure what the equivalent on Apple is, but it's sort of that photos-slash-timeline kind of approach, self-hosted and open source. Their tagline is: it's the high-performance, self-hosted photo and video management solution. But that's sort of the idea. You put your photos and your videos up to it, and then you can browse through them; I'm sure you can search them. What I'm not sure about is whether, scratch it off your bingo card, I'm going to say it, they have any AI integration in this, because that is legitimately one of the decent uses of AI: to get your video and your photos tagged, so you can say, hey, show me all the pictures of computer screens.
01:07:25
I think I took a screenshot. I had this just the other day, in fact. Somebody gave me a computer I'd worked on earlier, and it was asking for the Windows BitLocker recovery key. I'm like, oh, I don't have it. So I went and I searched the places where I would have saved it if I wrote it down; didn't have it there. And it's like, if I have it, the only way I would have this is if I had taken my phone out and taken a picture of it. So I go to Google Photos and I say, show me all the photos of computer screens, because I've found stuff that way before.
01:07:55
That sort of tagging is great to add. But if you want an alternative to giving Google all of your pictures and videos and all of those things, then Immich is something to take a look at, and it is now on my sort of wish-list-slash-to-do-list: when I get a chance, to go set up an instance of it and throw some pictures at it and see what it does with them, because this sounds pretty cool. It sounds like something I would definitely be interested in. So, Immich. Go check it out. Awesome, I like it. Yeah, all right, that is the show. I want to let Jeff plug anything if he's got it, or maybe just some poetry for us, a little bit of an update.
01:08:46 - Jeff Massie (Co-host)
So this weekend, for those of you that know, Rob's normally on here talking about how you can donate coffee, and we've had some donated to all of us. Well, he's actually coming through my neck of the woods, and tomorrow night, tentatively, if things hold, I'm going to meet him and he's going to buy me some coffee in the form of beer. So, beverages, yes. So yeah, hopefully, if it happens, I'll tell everybody, and maybe we'll have a picture or something as well. Other than that, Poetry Corner: my phone has me leashed like a dog tied to a tree, barking consistently. Have a great week, everybody.
01:09:34 - Jonathan Bennett (Host)
Appreciate it. Thanks so much for being here. All right, if you want to find more of me, there is, of course, Hackaday. You can check out Floss Weekly there, generally every Tuesday. If folks have recommendations or leads on guests for Floss Weekly, I would appreciate it.
01:09:53
I've been having trouble trying to keep the roster filled as I've gotten busier and busier with other things. Hackaday is also where you can find my Friday morning security column, and we have a lot of fun with that as well. Other than that, I just want to make sure and thank Twit for letting us do the Untitled Linux Show. We have so much fun with it. If you want to support Twit, you should look into Club Twit. It's not much more than the price of a cup of coffee per month, and it supports the network and the hosts and the shows that you love. It also gets you access to the ad-free versions of all of the shows and some other behind-the-scenes goodies as well. It's a lot of fun. Check out Club Twit. We appreciate everyone that is here that gets us live and on the download, and we will be back next week with the Untitled Linux Show.
01:10:41 - Leo Laporte (Announcement)
Thanks for tuning in to TWIT, your tech hub for intelligent, thoughtful conversations. If you want to take your experience to the next level and support what we do here at TWIT, say goodbye to ads and say hello to Club TWIT. With Club TWIT, you unlock all our shows ad-free. You also get exclusive members-only content. We do a lot of great programming just for the club members. You also get behind-the-scenes access with our TwitPlus bonus feed and live video streams while we're recording. And don't forget the fantastic members-only Discord. It's where passionate tech fans like you and me hang out, swap ideas, and connect directly with all of our hosts. It's my favorite social network, and I think you'll like it too. Club TWIT, it's not just a subscription, it's how you support what we do and become part of the TWIT family. Your membership directly supports the network, helping us stay independent and keep making the shows you love. If you're ready to upgrade your tech podcast experience, head to twit.tv/clubtwit and join us today. Thanks for being here, and I'll see you in the club.