Transcripts

Untitled Linux Show 158 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

00:00 - Jonathan Bennett (Host)
Hey folks, this week we're talking about PCI and processors. We're talking about the OpenShot release, the next version of Ubuntu, there's a big PipeWire release, and the Raspberry Pi is emulating NUMA for big wins. You don't want to miss it, so stay tuned.

00:18 - David Ruggles (Co-host)
Podcasts you love. From people you trust. This is TWiT.

00:25 - Jonathan Bennett (Host)
This is the Untitled Linux Show, episode 158, recorded Saturday, June 29th: Processor Weeds. Hey folks, it is Saturday. You know what that means. It is time to get your geek on. We're going to talk about Linux and open source and all of that fun stuff. It's going to be good.

00:46
We were doing a little bit of geeking out before the show; I was telling everyone about my device tree woes, trying to get the mainline kernel, trying to get Fedora, actually Fedora 41, working on the Turing Pi RK1. And I know that there is a kernel bug, and I thought I had the kernel bug patched, and it still doesn't work. And I was telling everybody about that. We're not going to get into that, though; that's not exactly what the show is about. Of course, it's not just me talking about my devices. I've got the crew with me, and we're going to have a lot of fun today. We've got stuff to talk about. Stuff's going on. And to get us kicked off, well, it's Jeff, and guess what? He's going to talk about hardware. I mean, who'd have thought? We're going to let Jeff

01:37 - Jeff Massie (Co-host)
take it away and give us maybe some history and a look to the future about what's going on with a particular bus. Yes, and you know, Jonathan gives me heck sometimes about how I don't always have command line tips that are strictly command line. Well, this week it is. But my first story is only kind of Linux-esque. I'm going to talk about the PCI Express roadmap. While it doesn't directly deal with Linux, it's very closely tied to it, and the PCI interface is pretty core to all of our systems.

02:08
And I'm going to answer some questions, like why does it take so long to come up with revisions, and what does the future look like? I want to go over this because we cover a lot of other interconnect and I/O topics when we're talking about Linux. We have NVLink, we've talked about Infinity Fabric, io_uring, and many other software solutions to try to be more efficient at moving data. And as AI and other modern workloads come online, being able to move more data, faster and better, becomes very important. So I thought it'd be fun to take a little dip into the hardware side just to see what's going on there, since it ties in quite a bit with what we do. When you look at it, CPU and GPU vendors have settled into roughly a two-year cadence between product releases, and PCIe has settled into a three-year cadence, and this is actually defined by PCI-SIG, the organization that controls the development of the PCI Express specification. If you look at the article linked in the show notes, Jeffrey Burt has an interview with Richard Solomon, the head of PCI-SIG, and he talks about why it does take so long to get to the next version of PCIe. What he says is that it takes a while between when the specs are completed and when they can get silicon and develop a compliance program. So PCI-SIG tries to hurry as fast as they can, but until someone can actually make something that uses the PCIe version they're checking, actual hardware, they have to wait to verify that everything's okay.

03:53
Now, we've talked about this in the past, where there's a silicon pipeline: once you know what you're going to do, it takes time to actually draw out the design, create simulations in the computer, create reticles and print the patterns on wafers, electrically test everything, package it, and get it to a level where it can fully be utilized for testing in a compliance scenario. So it just takes time to get all this done. For example, right now we're in the middle of 2024 as we're recording this. The PCIe 6.0 specification was released in January of 2022, but the preliminary compliance program isn't expected until the second quarter of 2024, which is basically about now, and the integrators list isn't going to be out until 2025. So you not only have to get a compliance program, you then have to find out who is actually able to meet the standard. For example, PCIe 7 is projected for a standards release in 2025, but they're not expecting a compliance program until 2028.

05:02
So even though the standard's out, it doesn't mean people can just start making hardware right away. They could, but then you run the risk of having an issue with something you've designed not being compatible, and then you're going to have to start over, and then you're behind and you're worse off than if you'd waited for a full compliance program so you can verify everything. Now, sometimes there are hiccups. For example, PCI Express 4 had a little bit of a hiccup and took a little longer to get out, but in general they are staying on that three-year cadence. But one thing to keep in mind, while maybe the standards aren't coming out fast enough for you: every next version turns into a doubling of the data rate. So PCIe 5, which we have now, is 32 gigatransfers a second and uses NRZ signaling. The 6.0 standard is going to come with 64 gigatransfers a second, and it's going to have PAM4 signaling.

06:07
NRZ stands for non-return-to-zero, and this is how most people think digital signaling works: a low voltage value, maybe zero volts, for example, represents a zero, and a high voltage value, maybe one volt, for example, is a data bit of one, a direct correlation. PAM4 is just where an electrical signal can have four values: the lowest voltage would equal zero-zero in bits, the next, say 25% up in voltage, is zero-one, and so on until the highest voltage equals one-one. So instead of one bit from a low voltage to a high voltage, you now have more defined voltage levels, and you get two bits in that same signaling range. You've effectively doubled your data transfer while not speeding up your signal; you haven't had to double your frequency.
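As a tiny illustration of that doubling, a sketch using the signaling rates from the published specs: PCIe 6.0 keeps roughly the same symbol rate as 5.0 but carries two bits per symbol.

    # NRZ carries 1 bit per symbol; PAM4 carries 2 (four voltage levels).
    # At the same symbol rate, the raw per-lane bit rate therefore doubles.
    symbols=32   # gigasymbols per second, roughly the PCIe 5.0/6.0 rate
    echo "NRZ  (PCIe 5.0): $((symbols * 1)) Gbit/s per lane, raw"
    echo "PAM4 (PCIe 6.0): $((symbols * 2)) Gbit/s per lane, raw"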

06:58
There are a lot more details about signaling and different PAM levels, but that's the real high-level, kind of layman's understanding of it; that's all you need to know for this. And Richard Solomon does go on to admit that PCIe is not the fastest protocol you could have, but it is very flexible, by changing how many lanes you can use, and by the fact that it's backwards compatible. So you can plug a PCIe Gen 1 card into a PCIe Gen 6 slot and it will work just fine, or if you have an x16 slot, it'll work with an x1 card. So you know.

07:39
I suggest everyone take a look at the article linked in the show notes. It goes into a lot of detail, with a lot of charts on transfer speeds, what the future is, what the standards look like. There's a nice chart in there that compares, for example, each generation and how many lanes you're using, so that you can see that right now even one lane on PCIe 5.0 is about as fast as 16 lanes on PCIe 1.0, and it's kind of interesting. And they talk about the future and the projected needs for data rates. There's a lot of detail in there, and it would take me an hour to go over it all, but I wanted to pique everybody's interest to take a look at the article and see what's the backbone of our computer, what's behind it. Yeah, PCIe.
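To put rough numbers on that comparison, here's a quick sketch using the published per-lane rates and encoding overheads; the article's charts are the authoritative version of this.

    # Usable per-lane bandwidth in GB/s = GT/s * encoding efficiency / 8 bits.
    # Gen 1/2 use 8b/10b encoding (80%); Gen 3 and later use 128b/130b (~98.5%).
    gens=(1 2 3 4 5 6)
    rates=(2.5 5 8 16 32 64)
    effs=(0.80 0.80 0.985 0.985 0.985 0.985)
    for i in "${!gens[@]}"; do
        perlane=$(echo "${rates[$i]} * ${effs[$i]} / 8" | bc -l)
        x16=$(echo "$perlane * 16" | bc -l)
        printf "PCIe %s.0: %5.2f GB/s per lane, %6.1f GB/s at x16\n" \
            "${gens[$i]}" "$perlane" "$x16"
    done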

08:28 - Jonathan Bennett (Host)
A couple of thoughts here. One, PCIe has really taken over as the data bus in modern computers, because your chiplet interconnect is built on top of PCIe. When AMD first built their glued-together chiplet design, it was basically based on PCIe. It has become the standardized link. And then the other interesting thing that comes to mind with these specifications: just because a specification comes out does not mean that every engineering challenge around it has been solved. When the industry group makes the specification, then the engineers at your various companies start doing the engineering work to solve the problems, to actually be able to make the things. That problem is not a solved problem until they figure it out, which also sort of explains why there's such a lag time between the spec being announced and people actually getting hardware in their hands.

09:28 - Jeff Massie (Co-host)
It's fascinating stuff. Oh yeah, and another example is DDR5. When it was first in development, there were certain parameters where they were like, how do we test this? Because of how the chip operated, there were a lot of people having to figure out how they could verify that the signal and the chip were doing what they were supposed to do when you didn't have direct access to certain signals. Yep, so it's a common problem in engineering. Yeah, yeah, for sure.

09:58 - Jonathan Bennett (Host)
Very interesting though. Yeah, it is. Let's go to David and talk OpenShot. I've been thinking about trying out OpenShot. I usually use Kdenlive for my video editing, and it's just a little clunky at times, and really, the OpenShot siren has been calling to me. I think I may have to give it a try. So sell me on OpenShot, David.

10:24 - David Ruggles (Co-host)
All right. Well, it's funny, unlike sometimes, there was no attempt at a segue there. But I was actually going to start with Kdenlive, because that's what I've used in the past. Now, I may get stoned for saying this, and I apologize, because we are a Linux show, but most recently, for my limited video editing needs, I've been using Blackmagic's Resolve, because I've been living in the Blackmagic ecosystem on the hardware side and stuff. But it's closed source; it's freely available; pros and cons. Obviously, they're trying to sell their hardware and their upgrades and stuff. So anyway, I came across this article, which I've linked in the show notes; I linked to a Phoronix article, and then I actually went through that into OpenShot's press release on it.

11:29
But they've released OpenShot 3.2.0, and it looks pretty. That is, new themes are one of their key selling points on this, but also an enhanced timeline and improved performance; they have a whole list of bullet points of things that they've fixed, that they've touched, that they've improved. Their TL;DR is: it's a game changer with new themes, improved features, and enhanced performance to supercharge your free, open source video editing on Linux, Mac and Windows, sorry, Windows and ChromeOS. They've fixed issues with key bindings. They've improved high-DPI display support. One of the things that they pull out, well, first, they've got a new Cosmic Dust theme. It's a preview, but, you know, if you're not using dark mode, what are you doing? But one of the cool things they've done in the timeline is, if you've got gaps, maybe you cut a little bit of video out, or just some alignment issues and stuff, you can right-click on a space between elements in your video, click Remove Gap, and it automatically brings everything to the right of that point earlier in time, tightening it up, instead of having to individually select everything and move it over, even preserving any other gaps in case you still need those for some reason.

13:03
So, just a lot of performance enhancements, a lot of usability enhancements, and something that I'm probably going to try out. It is cross-platform, as I already mentioned, so there's no reason not to use it. And on the Linux side, as was mentioned, Kdenlive is a little clunky. It's functional, but it is clunky. It's what I have the most experience with, so this looks quite nice. I'm interested to try it out when I get a chance. But I was excited to see ongoing development there.

13:47 - Jonathan Bennett (Host)
Yeah. So, actually, it looks a lot like Kdenlive. I went ahead and grabbed the new version of it and have it here installed. It's surprising how much it's laid out like Kdenlive. It may be because, I don't know which older program it's kind of emulating here, whether it's like the old Vegas or one of those, but obviously they're inspired by the same ideas of where the clip library should be and where your preview should be, and they've got the timelines at the bottom. But I'll have to give it a try. I find myself doing a little bit more video editing than I did there for a while, so I'll take a look at it.

14:25 - Jeff Massie (Co-host)
Yeah, I'd really like to hear kind of a head-to-head and get people's opinions on what they like. Seems like those are the two big heavy hitters for Linux. Yeah, non-professional.

14:39 - Jonathan Bennett (Host)
Yeah, those are the two big, I think, open source video editors, at least the ones that most people know about. There are some others out there, but, like you say, at least some of them are closed source and/or paid. So I don't know, there's room for somebody to come in and make a really good one, like Ardour, I think, or maybe one of these will continue to improve and become really, really good. We will see. All right.

15:08
Speaking of video and audio on Linux, I've got a story here. PipeWire 1.2 is out, and I think we've talked about this a bit, but it's got some neat stuff in it, and one of the things that I'm most excited about is asynchronous processing. So one of the problems with PipeWire, and I believe JACK before it, was that everything had to march along at the same time, and so if you had one audio device that had problems, it would cause those problems for everything. So, for example, if you had a USB device that was lagging, then the rest of the system would have to wait for each of those packets to come in to be able to update. And with asynchronous processing, it basically says that if you've got a problematic source, you can just ignore it. You don't have to worry about it. The sound will come in and it'll get processed as it comes in, but you don't have to wait for it; the whole system doesn't have to wait for it. And so that is neat.

16:08
And then the other thing that really interests me is, oh, where did it go? It's being able to cast audio and sort of discover audio sources, and of course now I'm not seeing the name of it. It's like OpenCast, maybe, is that what they call it? But anyway, it's the system to be able to cast audio all around the house, and PipeWire, now in 1.2, can automatically discover those, which is really pretty interesting.

16:45
So I've got some Raspberry Pis sitting around that I think I need to install that on. I think that'll be fun. So anyway, if you want it, the place that I figured it would come to first is kind of interesting. I figured, oh, Fedora is going to get it first, because Fedora is kind of like the premier place.

17:06
I had to go and pull it out of Rawhide, so essentially the Fedora 41 repositories, where 41 is still heavily under development. Apparently it's already in Debian Sid, so if you're on that side of the fence you can go grab it. We've got it installed on the desktop behind me, but I've not rebooted it yet to make it live, so I'll be playing with it this week. We'll see how that goes. You don't want to test on air, see what happens. Well, this on-air machine is Pop!_OS, and so I imagine that trying to get that bleeding-edge package on Pop!_OS is going to be a little more involved. I know how to do it on Fedora. Oh, I'll say testing in production.
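For the curious, here is roughly what that looks like on Fedora; a sketch, not a recommendation, and it assumes the fedora-repos-rawhide package is there to provide the repo definition.

    # Install the Rawhide repo definitions (they ship disabled by default)
    sudo dnf install fedora-repos-rawhide
    # Pull just the PipeWire packages from Rawhide, leaving the rest alone
    sudo dnf upgrade --enablerepo=rawhide 'pipewire*'
    # After restarting the services (or rebooting), confirm what's running
    pipewire --version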

17:42 - Jeff Massie (Co-host)
That's the way we run, right, I mean?

17:43 - Jonathan Bennett (Host)
Sometimes, yes, absolutely. Sometimes that is exactly how we run. Eh, what are you gonna do? All right.

17:51 - Jeff Massie (Co-host)
Yes, Jeff? Oh, just a comment. I just love how much PipeWire has come along and really matured.

18:00 - Jonathan Bennett (Host)
Oh, it's great. It's, um, it's mostly great. Every once in a while I still run into problems, but on the whole it's pretty good. You can do some cool stuff with it. The fact that when they made PipeWire, they said, let's make it transparently replace the PulseAudio backend and transparently replace the JACK backend, that was a stroke of genius. That was masterful, because if you didn't do either of those, then it would have just been another standard. It would be the XKCD comic of, we have 57 competing standards, we should make one to replace all of them; now we have 58 competing standards. But because it integrated, it didn't just replace, it actually integrated the old standards, and you could continue using the old standards with the new program, at that point it did actually replace them. So you still have some holdouts, of course, just like there are people that say, get off my lawn with this PulseAudio, it's ALSA or nothing. You have the same thing now with people sticking with JACK or PulseAudio, some of them for legitimate reasons, some less so.

19:06 - David Ruggles (Co-host)
And then you're like, well, don't look, but it's actually Pulse under the covers. You're just using the interface you're happy with. Yeah, that's true.
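You can see that for yourself: the regular Pulse client tools happily report PipeWire as the server underneath. For example:

    # pactl is the standard PulseAudio client utility; on a PipeWire system
    # the compatibility layer answers and identifies itself (version varies).
    pactl info | grep "Server Name"
    # Typical output: Server Name: PulseAudio (on PipeWire 1.2.0)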

19:18 - Jeff Massie (Co-host)
It's really easy these days to use PipeWire and not know about it. The only thing I know of that directly talks to ALSA these days, and I've talked about it before a few times on the show, is the DeaDBeeF music player, and it will connect directly to ALSA. But it's a little tricky, because to make it work you can't have anything else talking to ALSA, so you've got to make sure you don't have, like, YouTube paused in the background or anything else. If it's playing, that's the only thing that's playing, because ALSA plays one thing; all the mixing and everything, all the magic, happens outside of ALSA.

19:54 - Jonathan Bennett (Host)
Yeah, yeah. All right, so let's chat about Intel. This is interesting. Jeff is going to tell us about the Sierra Forest CPUs, and I'm curious. This is sort of a spider web issue; there are a lot of things that are connected to this. And I'm going to talk, maybe, once you finish, real quick about some ARM stuff that I just discovered this week. But take it away, sir, and tell us about Sierra Forest.

20:26 - Jeff Massie (Co-host)
Yeah. So our friend Michael Larabel over at Phoronix is back for more benchmarking. This time he's testing Clear Linux, which is optimized for industrial loads. It's one of the fastest distributions out there, but it's not meant for daily driving. It's optimized for things like handling large databases and computation, things like that. So as we're going through here, don't think, oh, I found my new gaming distribution. That's not what it's for.

20:52
When Michael benchmarks Clear Linux, normally he's using one of Intel's high-core-count Xeon processors, and they have features like AVX-512 support and other features that normal consumer chips, like the 14900K or 14700, don't have. They've got feature sets for those enterprise workloads, and that's where Clear Linux is firmly set. We've shown in the past that, while Intel's Clear Linux is focused on Intel, AMD does get some speed advantages as well, albeit not quite as much, because Clear Linux is specifically tuned for Intel. But AMD gets some fringe benefits once in a while. This time is a little different, though. Michael is testing the Xeon 6700E Sierra Forest series of processors. These are all E-core server processors and do not have the higher-level functions that the performance-core chips have, or the mixed chips with both P and E cores. For those who don't remember, P is the performance cores; E is the efficiency cores that are cut down and more focused on power savings and things like that. And those efficiency cores usually don't have things like the AVX-512 instructions. So he wanted to see what Clear Linux would do if it was missing some of those features in the hardware. The platform was two 6780E processors providing a combined 288 cores, and it had 512 gigabytes of RAM, and the OS was Clear Linux 41900.

22:54
That was the latest version. They had the latest version of Arch using the 6.9 kernel and GCC 14.1.1. They had Ubuntu 24.04 LTS out of the box, meaning no special features or software tweaks, just how it installs. But he also had Ubuntu 24.04 LTS with the P-State performance governor turned on, which we've talked about in the past; it speeds things up a little bit at the cost of some power efficiency. When looking at the geometric mean of all the test results, Intel's Clear Linux was around 27% faster than Ubuntu 24.04 out of the box, and the same amount for Arch Linux, but versus Ubuntu 24.04 with the P-State performance governor it was only about 14% faster. So using the performance governor actually closed the gap somewhat, but not totally.
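If you want to try that governor tweak on your own machine, it's exposed through cpufreq; a minimal sketch (cpupower ships in kernel-tools or linux-tools, depending on the distro):

    # Check which governor each CPU is currently using
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
    # Switch every CPU to the performance governor (trades power for speed)
    sudo cpupower frequency-set -g performance
    # Or do the same thing by hand through sysfs
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance | sudo tee "$g" > /dev/null
    done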

23:56
If you look at the power, Clear Linux did use the most power, more than the other distributions. But based on the performance and the small amount of extra power, roughly 10 watts, Clear Linux gave a really strong performance-per-watt value. So yes, it used more power, but the performance was so much greater that it actually was more efficient. So what does this tell us? It tells us that Intel's Clear Linux performs very well even when the hardware is missing optimizations like AVX-512 and other server-specific features. It still gives great value if you have the workloads that can take advantage of it.

24:33
For example, the benchmarking tests consisted of a lot of code compiling, simulation, encryption, compression, encoding and decoding. There are no gaming benchmarks in here, no Blender runs. It's all heavy-duty scientific and engineering tasks. Take a look at the article in the show notes and you can look over the vast array of benchmark tests to find out what closely matches your workload.

25:01
Now, I will be honest and say I was a little surprised that, even though there were only E-cores in the hardware, Clear Linux still had a good advantage over the other distributions. Now, why don't the other distributions optimize like Clear Linux does? Well, it's because Clear Linux makes a lot of assumptions and requires certain hardware minimums that would exclude some older processors. In past shows we've talked about Fedora and Ubuntu looking at going to a higher minimum CPU level; that's what Clear Linux already does. They say you've got to have a much more modern processor with a lot more features. So while Fedora and Ubuntu, for example, need to make sure they work on a wide range of hardware, Clear Linux doesn't care as much about that, and they're really optimizing only for very modern hardware. So, like I said, take a look at the article and see what you think.
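As an aside, if you're curious where your own machine lands against those rising baselines, a recent glibc's dynamic loader can report which x86-64 microarchitecture feature levels it considers supported on your CPU:

    # On glibc 2.33 and later, the loader knows the x86-64-v2/v3/v4 levels.
    # Lines marked "supported" indicate what this CPU can run.
    /lib64/ld-linux-x86-64.so.2 --help | grep "x86-64-v"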

26:01 - Jonathan Bennett (Host)
Yeah, what I was referring to earlier is this piece of ARM hardware that I'm working on. It's an eight-core chip, and I realized belatedly that those eight cores are the successor to ARM's big.LITTLE. They now call it DynamIQ, but it's still the same big.LITTLE idea, and it's fascinating to me that that started over on the ARM chip side, and now we've got Intel using it in their blend of P and E cores on x86 chips. It's just interesting that that concept has migrated in that direction.

26:44 - Jeff Massie (Co-host)
Yeah, you know, there's so much of that. Everybody borrows from everybody else. Sure, of course.

26:52 - Jonathan Bennett (Host)
Absolutely. Um, let's talk about Ubuntu, David. Are we going to talk about Ubuntu? We are. What are we going to talk about with Ubuntu?

27:04 - David Ruggles (Co-host)
Uh, they might finally be catching up to Fedora. Oh, in a couple of months, eventually. So I've got an article here linked in the show notes from It's FOSS, and it is: Ubuntu 24.10 finally makes Wayland default for Nvidia users. As we've mentioned before, I am team green in my daily driver, and it's not necessarily because I'm really a big Nvidia fanboy or anything; it's just that at the price point I had and the hardware that was available, it was the best option. So I've always had, as many people have always had, challenges trying to get Nvidia to work well in Linux, and we've discussed that at length, so I'm not going to rehash all that. But the important part is that Wayland support for Nvidia continues to move forward. So with Ubuntu 24.10, which is their intermediate release, or interim release, it's a non-LTS release that is supposed to be coming out in October this year, barring any unforeseen delays, they are finally going to default to Wayland.

28:33
So far, even with the LTS that was just released, 24.04, and we even spoke at length about that in the past, they are sticking with X11. And they mentioned a few specific things and said, why not do it in 24.04? At that time, the developers felt that there were still a few known issues with the implementation when it came to heavy use cases like VFX, visual effects, and AI/ML, artificial intelligence and machine learning, and a few other things, so they felt it was too early for a switch. But they're confident they can do it this interim release; it's going to be stable enough to function, and it's also going to give them the opportunity to dig into any unknown issues and bugs and really give it a good test run before the upcoming 26.04 LTS release, which is less than two years away. So it's definitely a longer release cycle than a rolling release or anything that's derived from a rolling release, like Fedora and Rawhide, which we've talked about. But it's exciting to see that the development is continuing and everything is migrating to Wayland and all the advantages that are there.

29:57 - Jonathan Bennett (Host)
Yeah, and so this is of course their non-LTS. So this is where they can get a little crazier.

30:04 - David Ruggles (Co-host)
A little crazier. They still are not quite as aggressive about things as Fedora is. Yes, I was going to say crazy, but that's not it. Aggressive, I like aggressive, that's the better way to describe it.

30:20 - Jonathan Bennett (Host)
We're aggressive about being on the edge.

30:23 - Jeff Massie (Co-host)
Well, the LTS kind of threw a bit of a wrench in that, so they were a little more nervous. But with the .10 releases they can put the pedal to the metal, because they're all short releases, and go nuts.

30:40 - David Ruggles (Co-host)
I'm looking forward to installing that. I've been using Kubuntu for a little while and it's worked. Of course I haven't switched back recently, so no shade now, but at the time it worked better out of the box. I don't remember the technology, but it's the USB-C connection, DisplayPort Alt Mode, that's what it was. I couldn't get that to work in Fedora, and it worked out of the box with Ubuntu. Interesting video stuff. Anyway, yeah. So good.

31:26 - Jeff Massie (Co-host)
I'm actually looking at trying the daily build of 24.10 to see if they've added Plasma 6 in there yet. Last I knew they hadn't, because it hadn't hit some of the Debian testing yet, and that's usually where they pull their packages from. But I don't know, I look at the package lists and I don't see it. But I don't know if that's all-inclusive. Yeah, I mean, we've already got 6.1.1 out.

31:52 - Jonathan Bennett (Host)
What's up, about two?

31:55 - Jeff Massie (Co-host)
Way behind.

31:55 - Jonathan Bennett (Host)
I just hit 6.2? Yeah, jump to the end, I guess. Uh, 6.1, it's got some fun stuff in it. Something really surprised me. So I'm running KDE, and I just updated and did a reboot for KDE purposes just the other day, and I noticed the weirdest thing with my mouse. Because I have two monitors, you get to the edge right between the two, and the mouse would stop for a second, and then you have to keep going and then it would pop over. And I realized it's got a sticky spot. Like, I'm sure there's a way to tune it or turn it off, but it's sticky there, so the mouse will stop for just a moment.

32:31
Going between screens, it's like, oh, that's really interesting. And because KDE has the hotspots where you can run a window up to the corner and it'll put it in the top quarter of the screen, or you can run it all the way to the side and it'll put it, you know, half screen, it makes it a whole lot easier to hit those now that there are these sticky spots between the monitors. It's like, I never knew I needed it, but now that it's there it's like, oh, that's pretty nice. It's just a little too sticky. I need to go in and tweak the stickiness down just a tiny bit.

33:06 - Jeff Massie (Co-host)
It's like an accessibility option, like you said, just being able to hit those corner and edge bits of the screen.

33:13 - David Ruggles (Co-host)
Yep, as you said, that's the beauty of KDE. You'll be able to go find some settings and tweak it.

33:18 - Jonathan Bennett (Host)
Somewhere. Yeah, it may not be in the settings app yet, but there'll be a command line setting to do it for sure. Yeah, it's there somewhere.
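For anyone who wants to go digging before the settings UI catches up, KDE config values can be poked from the command line with kwriteconfig6. To be clear, the group and key names below are illustrative guesses, not confirmed settings; check the Plasma 6.1 release notes or your kwinrc for the real ones.

    # Hypothetical example: the EdgeBarrier group/key names are guesses.
    kwriteconfig6 --file kwinrc --group EdgeBarrier --key EdgeBarrier 25
    # Tell KWin to reload its configuration
    qdbus org.kde.KWin /KWin reconfigure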

33:29 - Jeff Massie (Co-host)
I don't need the super glue, I just need the Elmer's school glue.

33:32 - Jonathan Bennett (Host)
Yes, yes, yes, let's have it at about the consistency of maple syrup. Not honey, but maple syrup. That'd be perfect. Oh, all right. So, speaking of, there is another something that is just getting done baking, and that is GNOME 47. GNOME 47 is finally going to make X11 optional, and so you can now compile GNOME with no X11 support in it. It's not going to be a hard dependency; it's now an optional dependency, I guess is the correct way to put that. And that's really interesting, especially when you start looking at different distros that are moving from X11 over to Wayland, but also are trying to move to only supporting Wayland. I think this is yet another step in that direction, and we've talked about this before.
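In practice that lands as a build option. As a sketch, mutter (GNOME's compositor) already exposes an x11 meson option, so a build without X11 looks roughly like this; treat the exact option spelling as something to verify against the tree you're building:

    # Configure and build mutter with X11 support compiled out.
    # Check meson_options.txt in your checkout for the real option name.
    meson setup build -Dx11=false
    ninja -C build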

34:37
But something really big that is about to happen is the end of support for CentOS and RHEL 7. By the way, those of you running CentOS 7, you need to have a game plan, because I think this is the week it stops getting security updates. And it was one of the Fedora maintainers that really got me started thinking about this: the fact that X11 has security support at all is because it is still actively used in Red Hat products, and that's about to end.

35:13
So, yeah, it's going to be something. It's probably time to move away from X. It really is. And it's funny, in one of the Fedora chat rooms I'm a part of, I asked somebody, he wanted to stick with X11, I'm like, what are you guys going to do the next time, inevitably, when there's a big security problem found in X11 and nobody's there to fix it? He's like, oh well, I'm pretty sure that all the big security problems have been found in X11 already. Okay, bud.

35:49 - David Ruggles (Co-host)
Okay, every security problem has been found, until the next one's discovered. If you really have to have X11, you just got to switch to a BSD derivative.

36:00 - Jonathan Bennett (Host)
Yeah, I guess that's going to be the option, that's going to be what you do. No, obviously, same as with systemd, there will continue to be distros where that's their thing, they are X11 distros, and that's fine. The only thing that really worries me about that is we are very quickly moving into this world where there's nobody who understands the X11 code base still doing maintenance on it, and it's going to cause problems.

36:31 - Jeff Massie (Co-host)
Well, it could be that, okay, you've got that old distro out there that, by gosh, we're going to stay X11, we're not going to touch systemd, we're only going to have ALSA, we're not going to have Pulse or PipeWire, and it's going to be so archaic that even if you hacked in, oh my gosh, I can't do anything. If anything does go wrong, it's going to be like COBOL in 1999, and you'll be pulling programmers out of retirement.

37:00 - Jonathan Bennett (Host)
Yeah, that happens, that does happen. The next couple of years of Linux are going to be really interesting, especially with the Wayland thing.

37:11 - Jeff Massie (Co-host)
Yeah, the old security through non-functionality.

37:15 - Jonathan Bennett (Host)
Yeah, I mean there is something to that, but at the same time, I like having functionality. Yeah.

37:26 - Jeff Massie (Co-host)
It's kind of nice. Turn on my computer, look how safe it is, you know.

37:32 - Jonathan Bennett (Host)
It's true, it's true. Not very useful, also true. All right. So, Jeff, it's a continuation on a theme.

37:46 - Jeff Massie (Co-host)
Yeah. Man, you know, my next story is about Intel and P-states.

37:50 - Jonathan Bennett (Host)
Yeah, and I always give a lot of love to team red, but you know, sometimes you've got to give love to team blue as well. So before you say that, I've got to say, Intel does a lot for open source, like genuinely they do, and if you go and look at the companies that are submitting to the kernel, Intel is always way up there. We give AMD a lot of credit for making their graphics drivers open source; Intel did it first. I'm not going to sit here and tell you that they are absolutely the good guys and have perfect ethics in everything, because there have been some stories in the past of Intel versus AMD. But Intel is one of the better open source citizens out there. Yeah, they're almost the good guys.

38:46 - Jeff Massie (Co-host)
This is true, and actually the machine I'm on right now is running an AMD, but I was a sliver away from running Intel. I think at the time it was either a 13700 or a 14700K, and the only reason I didn't was because at the time Linux was still working out the P versus E core scheduling, which they've now got sorted out. But here's where my next story comes in. This is going to be a little different from my last story, because in the last story we only talked about E-cores, and now we're going to talk about P-State patches to better tune Linux so that scheduling is better on hybrid CPUs. Again, hybrid means P and E cores, performance and efficiency cores, and a lot of that goes to, not only do we want performance these days, we want power efficiency. The power our computers are pulling, the heat they're giving off, it just gets crazy. That's where E-cores come in, so that your heavy workloads go to the P's, and your E-cores take care of all the background stuff that you don't care about as much. If it takes a millisecond longer to update your weather app, who cares? It can run in the background on something more efficient. Now, the article in the show notes has a direct quote from Rafael Wysocki on how all this works in much, much greater detail, and I'm just going to give you the summarized version, because I didn't want to go into all of it, and there are parts of it that are kind of over my head.

40:37
I'm not the programmer here, but this patch is coming because future Intel CPUs are going to have hyper-threading removed, at least on some of them. Now, removing hyper-threading does have an impact on performance. When I've seen the difference, it might be a 25% to 35% performance difference in heavy workloads. But for things like gaming, if you have six cores, it doesn't really matter whether you've got six cores or six cores with 12 threads; you won't really see a difference, nothing that is human-observable. Now you might think, well, that's actually terrible, why is Intel going backwards? Well, it comes down to simplification of your cores. You can have greater IPC, instructions per clock, faster clocks, better power efficiency. So the idea is, overall it's going to be a win in performance. And there's some discussion in the Discord in the background of the show about CISC, complex instruction set chips, and RISC, reduced instruction set chips. Well, this is heading a little more towards RISC, because the instruction set is staying somewhat the same, but they're removing a lot of complexity from the core.

42:12
To handle all that hyper-threading, you have to be able to schedule things, predict things. So this streamlines stuff. Going back to the patch, it adds the idea of asymmetric CPUs. That means it's going to calculate the maximum turbo frequency and instructions per cycle, and it's not going to assume symmetry. So basically, it's going to better understand the performance of each CPU in multi-CPU systems, and it will be able to assign workloads based on the performance of each individual CPU. And there are features in there so that if you add a CPU or take one offline, it can update and help adjust the scheduler to keep the workloads the most efficient. It's going to recalculate things when things go offline, when things come online, and rebalance the loads. And this also means that, because it's not assuming symmetry, you might have a CPU core with hyper-threading and you might have one without, so it's going to better match the hardware. This is kind of like when they first introduced the P and E cores and had to figure out how to understand which load goes to which core. Well, this is doing the same thing, only it's now taking into account, what if hyper-threading isn't there? And what if it is there on some cores and not in other places?

43:47
So this patch is basically for Lunar Lake, which has yet to be released. Intel's putting the optimizations in for their next set of CPUs, which will be out in a few months. There are rumors of later this fall, maybe early 2025; they kind of bounce around, and we'll see when they show up. Which Internet source you listen to has a little bit of variance on when things are going to happen. But it's not in this release candidate; it's not in the 6.10 kernel, which is going through the release candidates right now. They're pretty sure it's probably going to hit 6.11, which should be out about next month, which, another month or two after that, should be ready for the hardware when it comes out. So just want to give a heads-up here: the scheduling work is going to be ahead of the actual release. And I would love to hear people on the Discord give us your thoughts on the Intel CPU roadmap, how it's looking, and how you think Linux is going to run on this new generation of CPUs.
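You can already see that kind of asymmetry on today's hybrid parts from user space, for instance:

    # Per-CPU max frequencies differ between P and E cores on hybrid chips
    lscpu --all --extended
    # Thread siblings: two CPUs listed = an SMT pair (P core), one = no SMT
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list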

45:03 - Jonathan Bennett (Host)
So the really interesting thing that I see is that there is a movement in a lot of different places to this multi-core, multi-thread, but not-all-the-cores-are-the-same design. That is starting to become a thing in a lot of different places. I'm assuming AMD is going to have to pick it up too, just to be able to compete. Or they'll feel like they need to, at least. I don't know that there's necessarily something they have to do, but I don't know.

45:30 - Jeff Massie (Co-host)
You know, it's possible. They're doing pretty good gains. The rumors are that the next chips coming out are going to be quite a bit faster, and they're really going into the cache, because there are now rumors that on the 12- and 16-core chips, when they add the 3D cache, it's going to be across all the cores. Right now, on the 12- and 16-core chips in the 7000 series, only some of the cores have the 3D cache; some do and some don't. So they have high-performance ones that don't have the cache, and they have ones that run slower that do have the cache, and they're running slower because of the cache and how they operate with the memory and the cores.

46:26
So that's kind of like the P versus E, and there were some scheduling issues there where they were trying to sort out how you predict which load goes where. You know, I want my gaming to go to my X3D cache cores, and I want my science computation to go to the performance cores, and it's sorting that out. So they had the same problems that Intel did, and they've got that pretty much squared away. But that's also the reason why I'm running a 12-core. I've got a 7900. I chose one without the 3D cache because I didn't want to run into that scheduling issue. I wanted everything symmetrical.

47:06 - Jonathan Bennett (Host)
Yeah, there are weird wrinkles to this too. Like, I was troubleshooting a live audio problem, trying to do audio processing on a desktop, this has been years ago. But one of the solutions for the problem I was having was to go in and turn off SMT, because there is inherently a bit of contention over the real core when you're doing SMT; essentially, what SMT does is take two process threads and feed them into one physical core at the bottom. So when you're doing audio processing, and it's very latency-sensitive, one of the solutions was to turn off SMT so that you knew your instruction thread was going to always be there right when you expected it to be. So it's very interesting to me that Intel is looking in this direction of doing SMT on only some parts. I'm assuming what's going to happen is probably some of their efficiency cores are going to be non-SMT enabled.
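These days you don't even need a BIOS trip for that; modern kernels expose a runtime SMT switch in sysfs:

    # Check whether SMT is currently active (1 = yes)
    cat /sys/devices/system/cpu/smt/active
    # Turn SMT off at runtime (kernel 4.19+); write "on" to re-enable
    echo off | sudo tee /sys/devices/system/cpu/smt/control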

48:11 - Jeff Massie (Co-host)
I believe they're not now. I believe the E cores already are that way.

48:17 - Jonathan Bennett (Host)
Oh, is that already the case?

48:18 - Jeff Massie (Co-host)
I believe so. Maybe somebody in the audience can verify that, but I believe they're already that way. Yeah. Okay, because that's part of the efficiency: they run slower, they don't have the hyper-threading, they don't have a lot of the features. They're just a simpler core. But when you look at what runs on your machine, a lot of the time you don't need the fastest, greatest cores.

48:44 - Jonathan Bennett (Host)
So this patch is even, is even going to help with potentially those you know, current gen cores. Then, um, there's something else there that I'm thinking about that that could be useful for intel, and that is that the uh, the, the user space uh scheduler, is about to get merged. I think that may also be coming with 6.11. That's the one where Torvalds came out and said look, I know people still have problems with this, but dang it, we're going to merge it because I said so as the BDFL, the benevolent dictator for life. He came out and said we're going to do it. And it seems to me that you could see in the future Intel, rather than doing things right in the kernel, they could opt to say well, we're going to make a little user space driver that is going to be the scheduler and put some of their secret sauce in there. I don't know, we'll see.

49:36 - Jeff Massie (Co-host)
Well, user space would have a lot of information on the load, so they can feed a lot of information to the kernel. A couple of things I wanted to call out here from the audience: Keys 512 said there is only hyper-threading on P cores. And I did also want to call out AE4KO, who said asymmetrical will also help in cloud computing, where it may be possible to scale processors up and down dynamically. No, that's true. Yeah.

50:03 - Jonathan Bennett (Host)
All kinds of interesting stuff. There are things that you do on Linux machines in data centers that we just wouldn't think about doing on the desktop. Like, I'm pretty sure you can hot-swap processors in the data center; that's a thing that Linux supports, being able to turn a processor off, pull it out, and put another one in. It's just craziness.

50:23 - Jeff Massie (Co-host)
And I do want to say I've had some processor design classes.

50:30
So when people say, well, they shouldn't do this or they shouldn't do that: the smallest thing you can think of, people have written papers on, they've had huge discussions, they've run simulations. So whenever they're doing something, I will guarantee you there are a thousand PowerPoint slides, there are gigs of data, there's probably terabytes of simulation data, all behind how it's going to work. And even then, some of those things don't necessarily work the way they think they're going to.

51:03 - Jonathan Bennett (Host)
They still get surprised.

51:05 - Jeff Massie (Co-host)
Yes. In the real world, simulation to silicon is not a perfect thing. There's some art in there and there's some voodoo, and ideally you want it as close as you can, because you're going to find all these problems in your simulations before you actually decide you're going to make silicon.

51:30 - David Ruggles (Co-host)
But it doesn't always work that way. Yeah. So I've just been sitting here, because I love this conversation, but I don't really have anything to contribute. That point about the difference between simulation and silicon goes back to your first story, about why those PCIe standards take so long to go from paper to something you can hold in your hand.

52:01 - Jonathan Bennett (Host)
Yeah, yeah. I'm reminded of watching a certain space company down in Texas that has a habit of blowing their rockets up to learn more about them. You know, that's kind of an absolute object lesson in, look, they simulated it, but it still surprised them.

52:19 - Jeff Massie (Co-host)
Oh yeah. And when you make something work, you learn, but when something fails, you learn as well.

52:26 - Jonathan Bennett (Host)
If you're doing it right. Lessons learned, yeah.

52:27 - Jeff Massie (Co-host)
If you're doing it right, you learn something when it fails, yeah. So you take all that and you feed it back in, and you keep all this data, and you just keep building on it to create this knowledge base. So ideally it comes out right the first time; that's the goal. But I just want to make the point: anything you see, especially from AMD and Intel and Nvidia, any of those, if you see them do something, there has been a tremendous amount of thought by rooms full of people way smarter than any of us.

53:03 - Jonathan Bennett (Host)
Yeah, particularly, maybe particularly, if they do something to the point that they actually fab it, right? Because taping out and building something on a fab, that's a big, expensive deal, and it costs a lot of money to buy fab time and actually build something in silicon.

53:32 - Jeff Massie (Co-host)
Yeah, you have to have a lot of buy-in. Even if you own the fab and you own the equipment, and I won't even, you know, ignore all the depreciation and ongoing upkeep, ignore all that, just saying let's make this new part in silicon, we want to make even one wafer.

53:50 - Jonathan Bennett (Host)
You're into it, you know, a million dollars. That's what's really interesting about, I think it's Google that does it, they call it Tiny Tapeout, maybe, and it's where they hold this open competition for, hey, send us your integrated circuit ideas, things that you want to actually build on silicon, and they'll pick like 40 or 50 of them and do a single silicon run where they put all these designs on a single wafer. And I think, when they actually send them back to people, they all have every design on them, and you just connect it depending upon which one you want, I think is how it works. But that's a pretty cool project in and of itself, because it lets sort of mortal people have a chance at doing a tapeout, which is something that you just don't get to do. So that's pretty neat.

54:47 - Jeff Massie (Co-host)
That came to mind. Oh yeah, but I do want to preface: when I say it costs a million dollars, that's just my random estimation, not any kind of actual inside knowledge, because that's not the area I deal with. But also keep in mind, some of those projects where they do that, people probably go, well, how can they afford to do that? They're doing that on much older technology. That is not the process they're fabbing the latest, greatest CPUs and GPUs on. They're not running that at two nanometer; they're much larger. So if you're running, okay, we put it on an eight-inch wafer, we're running an i-line process, we're using a 0.5-micron geometry, okay, then it's a lot cheaper.

55:44 - Jonathan Bennett (Host)
I pulled it up real quick, and I don't immediately see what node size they're on for Tiny Tapeout. But it is a pretty cool project.

55:53 - Jeff Massie (Co-host)
Yes, I'll say, I bet you it's i-line. I would bet dollars it's i-line or something even bigger than that.

56:03 - Jonathan Bennett (Host)
I will find out, because now I'm curious. But in the meantime, we are going to go to the Eclipse IDE, the Eclipse...

56:12 - David Ruggles (Co-host)
Theia, I think, so your guess is as good as mine.

56:16 - Jonathan Bennett (Host)
We're going to let David tell us about it.

56:19 - David Ruggles (Co-host)
I'm reading it, so I get to butcher the poor name. So, another interesting thing. My first story was about open source video editing versus closed source but free, and now I keep that same theme and talk about integrated development environments, or IDEs, which are very common. They're one of the best ways for coders to write code, because an IDE integrates development and debugging, basically your whole environment. You can run it and debug it and you get feedback, you have linters and things. So IDEs are the way programmers write programs.

57:15
Well, probably the big boy in the room right now in the IDE space is Microsoft's VS Code. It is based on open source technology, but it is closed source; they just release it for free. There are parts of it that are open source, and there are parts of it that are closed because they're Microsoft's, you know, secret sauce. It's another example of Microsoft's embrace-and-extend persona.

57:47
I would say so. With that background and that foundation, the Eclipse Theia, I'm not sure, it's T-H-E-I-A, pronounce it as you wish, IDE has come out of beta, and it is challenging Visual Studio Code. It's a very interesting project, because it is compatible. It uses the same Monaco editor, which is what powers VS Code, because that is an open source editor they're both sharing. It supports the same Language Server Protocol, LSP, and the same Debug Adapter Protocol, DAP, that provide the IntelliSense code completions, the error checking, the linting that I mentioned, things like that, so it can support the same extensions. Now, obviously, Microsoft has their own Visual Studio Code marketplace, which is closed and specific to just Visual Studio, so all of their cool toys are in there. But there is an Open VSX registry for VS Code compatible editors, so it's extensions for VS Code compatible editors, and anything written to those LSP and/or DAP protocols can run on any VS Code compatible editor, which this Eclipse IDE is one of. So, unless you're using something like Microsoft's Copilot or something that is specific to Microsoft, it's actually a pretty simple way to move from one IDE to another, because they share those underpinnings.

59:43
Another interesting thing about this is that, because it is completely open source, some of the example use cases they're pushing it for are cases where you need to integrate an IDE into another project, whether that is company-specific.

01:00:06
So maybe you need to build an IDE for your programmers that has things specific to something that you as a company are doing, that you don't necessarily want out there. So instead of building a plugin for an existing IDE, you can actually take this open source IDE and just build off of it directly. They've got quite a bit of support out there. They've mentioned contributions from Ericsson, EclipseSource, STMicroelectronics, TypeFox and more, and the contributors and adopters of the platform include Broadcom, Arm, IBM, Red Hat, SAP, Samsung, Google, Gitpod and many others. So for something that's just coming out of beta, there is a huge community around it, and they call it a vibrant community, underpinned by a license that champions commercial use, and it sets the stage for a development environment that is not only powerful and flexible but also inclusive and forward-looking. So if you're tired of being stuck with Microsoft, but you don't want to have to destroy your entire toolchain, this is a very interesting-looking option.

01:01:27 - Jonathan Bennett (Host)
So, a couple of things. One, it is possible to use the VSCodium project, which is all of the open source parts of VS Code without any of the closed source stuff and without the Microsoft spyware, I mean, analytics telemetry, built into it. The non-optional telemetry. Yes, the non-optional telemetry. But a lot of us have just chosen to run VS Code, and that's sort of hypocritical, but it makes life easier.

01:01:58
But I will say I am glad that there are other options out there, and good for Eclipse. I will have to take a look at this and see how easy or difficult it is to get into and actually use for some of the projects I work on. It's cool. Maybe I should interview them. Maybe we should have them on FLOSS Weekly. I can ask them how to say the name of it. That's what I need.

01:02:26 - David Ruggles (Co-host)
That's a good plan, yeah. Yeah, make a note.

01:02:30 - Jonathan Bennett (Host)
All right. So, boy, we are really in the processor weeds this week, because I've got a story about NUMA emulation on the Raspberry Pi 5, and this fascinates me. So NUMA, by the way, stands for Non-Uniform Memory Access. NUMA is the idea that you have some RAM that is closer to one CPU core than another. This originally came from back in the day — I guess it's still a thing in some cases — where you have multiple CPUs physically on a motherboard, and then you have a bank of RAM for each of those CPUs. And so one of the things to keep in mind is your scheduler.

01:03:14
We talked about schedulers earlier. They move processes from one core to another — from one of these thread units to another, you could say. And this is what's happening inside your operating system: execution jumps to a thread, then back to the operating system, runs for a little while, then to a thread and back to the operating system again. That's what your computer is always doing — it's juggling, trying to keep all the plates in the air at the same time. And for reasons like keeping your thermal load spread out, the scheduler will move those processes around physically on the chip. Well, when you get to the point where you have two chips in a system — two processors — and a RAM bank for each of them, moving a process from one CPU to the other can cause problems. Namely, it is really slow, because in some cases you actually have to physically copy your process memory from one bank of RAM to the other — or, if not from physical bank to physical bank, you have to copy it from one cache to another cache. To put that simpler, moving to the wrong processor can cause cache misses, where you have to recache things. And so the fix people came up with was NUMA, non-uniform memory access. It's basically saying: okay, look, we're going to divide the system up into NUMA nodes, as it's usually put. Essentially that's saying these four cores are together because they all can access the same cache, and these other four cores are together because they all access this other bit of cache. On a NUMA-aware scheduler, the scheduler will keep processes on the same NUMA node as much as possible, to keep things running quickly, so the system doesn't have to wait to move memory around. All right, that's NUMA.
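A minimal sketch of poking at NUMA from the command line, assuming the numactl package is installed (the program name my_program is just a placeholder):

    # show the NUMA nodes, which CPUs belong to each, and per-node memory
    numactl --hardware

    # pin a process and its memory allocations to node 0, so the scheduler
    # never has to reach across nodes for its RAM
    numactl --cpunodebind=0 --membind=0 ./my_program

    # or spread allocations evenly across all nodes for bandwidth-heavy work
    numactl --interleave=all ./my_program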

01:05:24
Now, on the Raspberry Pi, someone figured out — and it was actually engineers from Igalia, and those guys and girls do some excellent work these days — that there is a quirk on the Raspberry Pi 5. It means that if you split the physical RAM into chunks, use an allocation policy of interleaving, and turn this on with NUMA emulation, you actually get, in some cases, up to an 18% uplift in system speed, which is just incredible. That's a lot for this little hack. And it's because the memory controller can use parallelism in moving memory around. So if you turn on this fake NUMA on the Pi 5, you get this rather impressive speed uplift, which is great. It's 100 lines of new code — an 18% speed bump in just 100 lines of new code is really impressive. It's not been merged yet. We'll see if it's too much of a hack or if it actually gets merged, but I was just impressed. I think it's really clever.
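For the curious, a sketch of what trying this might look like, assuming the patch set is applied to your kernel and exposes the same numa=fake= boot parameter the existing x86 NUMA emulation uses — the exact knob on the Pi could well differ:

    # append the fake-NUMA option to the kernel command line
    # (on Raspberry Pi OS that's /boot/firmware/cmdline.txt), e.g.:
    #   numa=fake=4

    # after a reboot, confirm the emulated nodes showed up
    numactl --hardware
    ls /sys/devices/system/node/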

01:06:43 - Jeff Massie (Co-host)
So that is really cool. And another analogy, a way to think about cache levels and memory: you're sitting at your desk and you want a drink of water. Well, level one cache is like, oh, your glass is right there. Level two cache is maybe, oh, it's over on the coffee table, so you've got to get up and go get it. Level three is it's in the kitchen, so now you've got to go all the way to the kitchen to get it. And the more you can just drink what's on the desk or the coffee table, the better, because it's close. Versus, oh, I've got to go to the kitchen — or, oh, I don't have this at all, I've got to go to the store. That's, you know, hitting your RAM, so it takes a lot longer.

01:07:19 - Jonathan Bennett (Host)
And yeah, you could think of NUMA as being: I only want to drink from my glass, or only from the glasses that are on this desk — not the glasses on the desk in the room next door. Right? Yeah. Yeah, David — every level of memory adds delays.

01:07:40 - Jeff Massie (Co-host)
Sorry, David.

01:07:41 - David Ruggles (Co-host)
I was just going to say, using that scenario, prefetching is where some part of the CPU says: hmm, you might want milk, water, or tea, so I'm going to go ahead and put those on the table, so you don't have to go to the kitchen to get them.

01:07:59 - Jonathan Bennett (Host)
Yep, yep, caching is interesting. So I will say that, looking at the actual kernel mailing list thread about these little hacky patches, Greg Kroah-Hartman answered and was not exceptionally enthusiastic about this approach. He said: why don't you just properly do this in your bootloader or your device tree and not do fake stuff like this at all? Also, you're now asking me to maintain these new files — that's not something I'm okay with. So let's just say this approach will have to get reworked before it lands in the kernel.

01:08:35 - Jeff Massie (Co-host)
Well, I think it makes a good proof of concept. It shows that the potential is there, it shows that there's performance to be had. Okay, now go and actually do it the right way. Yeah.

01:08:46 - Jonathan Bennett (Host)
And you know, who knows — rather than do it with fake NUMA, there may eventually be another subsystem, or a subsystem modified, similar to the scheduler — maybe you do it in the scheduler itself, I'm not sure — that makes the kernel aware that, hey, there is parallelization that can be taken advantage of. So rather than calling it NUMA, rather than doing it the fake way, maybe at some point there will be a scheduler patch to make the scheduler itself aware that this will speed things up. That would be the right way to do it.

01:09:23 - Jeff Massie (Co-host)
And, using your analogy, a cache miss is when you get brought milk and it's like, oh, I'm lactose intolerant. Yes, yes.

01:09:33 - Jonathan Bennett (Host)
That's when the waitress gets your order wrong and brings you Coke when you asked for Pepsi. Yeah, oh, fun stuff. All right — oh no, I have a second Raspberry Pi story. I was doing a twofer there and I forgot. This one's quick, though. Raspberry Pi Connect, so far, has been remote desktop for the Raspberry Pi. They have now added remote shell access to it, and the remote shell access goes all the way back to the Raspberry Pi 1 and all of the Lite versions. Now, personally, I'm just going to use a VPN, but if this is something that makes sense for you, and you either don't want to or can't easily set up a VPN, you can now get remote access into any Raspberry Pi and get command line access into it, which is cool — being able to SSH into things, or, in this case, use rpi-connect to get into things like that.
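As a rough sketch of getting started — the package and subcommand names below match Raspberry Pi's documentation as I understand it, but treat them as assumptions and check the blog post:

    # the -lite package is shell-only, no screen sharing
    sudo apt install rpi-connect-lite

    # link this Pi to your Raspberry Pi ID
    rpi-connect signin

    # enable remote shell access and check the daemon's state
    rpi-connect shell on
    rpi-connect status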

01:10:38 - Jeff Massie (Co-host)
That's just cool. It's a win, it's great that they're doing it. So they had remote desktop before they had a remote shell? Isn't that kind of backwards? It was only a month or two between them, but okay — the text interface is usually the first thing, I would think.

01:10:57 - Jonathan Bennett (Host)
So apparently they built the VPN infrastructure and, for whatever reason, the actual remote desktop was the thing that came up working first. I don't know why that ended up being the case, but that's where we are. Still, it's going to be useful for people that, like I said, either don't want to or can't set up a VPN, to be able to get into things. And is it VPN or VNC? Well, the remote desktop is actually Windows Remote Desktop Protocol — it's RDP. And the remote shell, I don't think it's actually SSH. I think it uses their own binary, rpi-connect, but on the back end it's some sort of VPN. Let me put it this way: I would be surprised if it's not built on top of WireGuard, just because that is the solution these days.

01:11:55
So, in the comments — I have a link to the raspberrypi.com article about it, the blog post — the top comment is: hey, it would be great if it did this. And one of the Raspberry Pi guys says: we have a huge list of capabilities that we think would be useful, and all the things you just mentioned are on the list. So maybe, hopefully, eventually. All right, those are the stories; that is what we wanted to cover. Let's talk about some command line tips. I think we're all command line tips this week — command line purity — and finally, we're going to go to Jeff first, and he is going to take off the mask.

01:12:41 - Jeff Massie (Co-host)
I am umask. So umask is my command line tip for the week. It's short for user file-creation mode mask, and it sets the default permissions for the files a user creates. umask is most useful when you have several people sharing an environment and creating new files; that way, each person creates files that have the same permissions — like, for example, everybody can read them. Now, it acts a lot like chmod, but it's different, because chmod affects a specific file or directory.

01:13:17
With umask, once you've set it, it applies those permissions to everything you create from then on, so you're never going back to fix files or directories. So umask is kind of the ongoing permissions, where chmod fixes permissions after the fact. Now, the way it specifies permissions works a lot like chmod, so I won't go over all the read and write and execute bits and all that — it basically works the same way. I will say, though, it's per login session. So if you would like your umask setting to last longer than your current session, you can put it in your .bash_profile or your .bashrc file: add the umask command with the mask that you want, and then every file and directory you create will have that same mask applied.
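A quick sketch of how the mask plays out in practice (the 022 default is common but distro-dependent):

    # print the current mask in octal
    umask

    # with a mask of 022: new files get 666 & ~022 = 644, new dirs 777 & ~022 = 755
    umask 022
    touch shared.txt && mkdir shared-dir
    ls -ld shared.txt shared-dir

    # a stricter mask for private work: files come out 600, directories 700
    umask 077

    # make it persistent across sessions (assuming bash)
    echo 'umask 022' >> ~/.bashrc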

01:14:13
Take a look at the article in the show notes; it goes into much more detail about using umask, and it shows a lot of tables with the bits. So if you have a problem remembering — how do I set the read bit, which one is that — it's got it all charted out and listed for you. If you're an old hand and you already know chmod, it's going to be very easy. Look at the man page — there aren't a ton of switches, because it works pretty much the same way. Now, this tip is a little more specialized, and it's not something that gets used day to day by most people, but I file it under one of those oh-my-gosh-I-really-needed-this, saved-my-bacon commands. Yep.

01:15:01 - Jonathan Bennett (Host)
Very cool. All right, and David.

01:15:04 - David Ruggles (Co-host)
All right. So we have mentioned the shebang line in files before, but I realized, by going back and looking at our carefully curated list of command line tips put together by Ken, that it doesn't look like we've ever actually dove into it. So I wanted to take a few minutes and do that now. I won't give my specific age, but I was a teenager at the end of the 90s, so every time I say shebang it makes me think of the Ricky Martin song She Bangs — but that's beside the point. The shebang line starts with the 'sh', which is the hash symbol, and then the 'bang', which is an exclamation mark.

01:15:55
So it's hash — or pound sign, or what are the kids calling it these days, a hashtag — and then the exclamation mark, followed by an interpreter — and I do have this code in the show notes — and then an optional argument, and I'll get into that in a little bit. Basically, if you put that at the beginning of a file, make the file executable, and execute it, that line is used by the system to determine how to interpret the file. Now, if it's a binary file, obviously you wouldn't have an interpreter in there, but for anything that's interpreted — basically, if you can type an interpreter like Python, or any other interpreted language, then a space and this file on your command line — adding a shebang line to the top lets you run the file directly, without that extra step. It's pretty interesting. You do need to specify exactly where the interpreter is. If you do not specify a shebang line, then the file will be interpreted by your current shell, so you can do simple scripts — well, actually it can get very complicated — but you can write a script that does exactly the same thing as typing those commands in directly, without using a shebang line. However, it's still recommended that you specify a shebang line, because you don't necessarily know which shell your script will be executed under, and there would be differences between those. So even in cases where it might work without it, it's still better to have it than not have it.

01:17:46
The other interesting thing that you can run into is portability. Take Python, for example. If you wanted to add a shebang line for Python, you could put hash sign, exclamation mark, /bin/python, and then it would run. But what if you copied that Python program to another machine where Python wasn't located in that location? It would fail. So you can actually specify an interpreter that you then pass the real interpreter to as an argument. The one that is most often used is /usr/bin/env, which consults your environment. So you can do /usr/bin/env python, and env will find — not necessarily any particular copy, but — the available Python interpreter for that system. And so now you're no longer tied to a specific location of Python.

01:18:50
Maybe on one system it was compiled from source and it's in a different location.

01:18:56
Another one was installed using the built-in repos. So you can use /usr/bin/env and then whatever you actually want to interpret your file with, to build some portability into the scripts that you write. But the other thing you need to do is tell the operating system that the file is executable. I won't dive into chmod — I'm pretty sure it's been covered in the past — but you do a chmod +x to set the execute bit on the file after adding the shebang line. Now, there's a Wikipedia article on it, which I didn't link to, but it goes into quite a bit of depth if you're interested in delving into it. One of the key reasons we have a shebang line in Unix and Linux — versus, like, Windows, where it depends on the extension — is they did not want to have to manage, and potentially overload, a large list tying special names or bits to how a file gets interpreted. So this is extremely flexible, and it's file-specific.
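Pulling those pieces together, a minimal sketch — the file name hello.sh is made up, and env is used for portability as described above:

    # create a tiny script with a portable shebang line
    cat > hello.sh <<'EOF'
    #!/usr/bin/env bash
    # env searches PATH, so this works wherever bash happens to live
    echo "hello from a shebang script"
    EOF

    # set the execute bit, then run the file directly
    chmod +x hello.sh
    ./hello.sh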

01:20:15 - Jonathan Bennett (Host)
So I just learned something in poking around about this: the shebang is actually a printable magic number. The way that things get executed in Linux and Unix is that the first couple — two, three, four — bytes are the magic byte string, and I think you can use the file command to check what those are. So the magic number of shebang is 0x23, 0x21, which is the pound sign and the exclamation mark, and that is the magic number that says: this is an executable script. Which is where the whole shebang came from. That's pretty cool.
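You can see those two bytes for yourself — a quick sketch, assuming xxd and the file command are installed:

    # 0x23 0x21 are the ASCII codes for '#' and '!'
    printf '#!/bin/sh\n' | xxd
    # 00000000: 2321 2f62 696e 2f73 680a                 #!/bin/sh.

    # file(1) reads the same leading bytes to classify a script
    file hello.sh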

01:21:00 - David Ruggles (Co-host)
And then, on top of that, most programming languages treat the hash sign as a comment, specifically because they don't want to break shebang.

01:21:10 - Jonathan Bennett (Host)
Yep, yep. The other thing that is real fun about this is the absolutely cursed list of languages you can use shebang with. You can write system scripts in PHP. You can write system scripts in Ruby. I think there's a way to write system scripts in C, if you really want to, and run them with a shebang. It's special. There's probably a way with the BF language, or the Whitespace language — you shouldn't, but you could if you really wanted to.

01:21:43 - David Ruggles (Co-host)
You can also run into some interesting challenges nesting shebangs — running a script that has a shebang by passing it to another script that has a shebang, which itself has a shebang. I didn't dive into the details of it, so that's as far as I can go without talking out the side of my mouth, but it was just interesting that, A, it supports it — it doesn't necessarily break — but, B, what it does do is —

01:22:11 - Jonathan Bennett (Host)
— there was just some weird behavior, yeah. Um, AE4KO is telling me that there's a file in /etc that defines the magic numbers. Actually, I didn't know that. I need to go look into that — I thought it was baked into the system.

01:22:27 - Jeff Massie (Co-host)
I'm still hung up on David was a teenager in the late 90s. Man, I'm getting old here. Bring Ken back — we've got to bring up the average age of the show here.

01:22:38 - David Ruggles (Co-host)
Okay, fine, you know I said I was going to talk about my age, but my claim to fame is I was born almost exactly a month before IBM released their PC in 81.

01:22:49 - Jonathan Bennett (Host)
I see, I see, all right, well, I've got a command line tip. This is something I just discovered and I already really like it. It's going to save me hard drive space, if nothing else. So it is the command archive mount all one word, and it is a fuse utility file system and user space and it just does a fuse mount of whatever is in your archive.

01:23:19
So if you've got an example.tar.gz, you can just run archivemount, space, example.tar.gz, space, and then a directory like ./mnt. It doesn't fully extract everything, but you can change directory into ./mnt and the files are there just like they were real files. You can move them around; I'm sure you can use dd. And so one of the things this is really going to be nice for: say you need to dd an image onto an SD card or something. Before this — and I know there are other ways to do this — but generally speaking, before this you would extract the image, which gives you another, bigger file, and then you would have to dd that over, so you're using up twice as much space on your hard drive as you need to. Well, now you can just do an archivemount — it's a FUSE mount — and you can go in there, and you've got the image you need to dd over, without actually using all that hard drive space. So I mean, that's cool. I find that really neat.
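A minimal sketch of that workflow — the archive and device names are made up, so double-check the target device before running dd:

    # mount the archive through FUSE; nothing gets extracted to disk
    mkdir -p ./mnt
    archivemount example.tar.gz ./mnt

    # browse it like a normal directory
    ls -l ./mnt

    # dd an image straight out of the mounted archive onto an SD card
    sudo dd if=./mnt/example.img of=/dev/sdX bs=4M status=progress

    # unmount when finished
    fusermount -u ./mnt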

01:24:23
Um, archivemount — I'm going to use it. I've already used it. It's fun.

01:24:28 - Jeff Massie (Co-host)
Nice yeah.

01:24:30 - Jonathan Bennett (Host)
All right, it has been a good show. It's been great. I'm going to let David go first.

01:24:46 - David Ruggles (Co-host)
Okay, I have a link in the show notes, and I did not put it as a tip, I put it as an ending note, because it is time-sensitive. But I'm sure if you watch any Twitch, or probably any technology podcast of any sort, Humble Bundles show up occasionally — and there is one out there right now. I think there's eight days left on it, and it is Linux for Seasoned Admins by O'Reilly. So pay what you want, help charity. It's a total of 15 books, but there are three different levels: you can do 5, 10, or 15 book bundles. The minimum for the 15 is $25, and of course you are encouraged to pay more than that. I took advantage of this myself, and so I just wanted to throw it out there. If you do any sort of admin work, they've got books covering Git, covering Ansible, I think Kubernetes, Python, some C. There's just — well, 15 O'Reilly books, and O'Reilly has always been known as a really good source of technical wisdom. So I would say, go take a look at it.

01:26:01 - Jeff Massie (Co-host)
But do it this week.

01:26:03 - David Ruggles (Co-host)
So if you wind up listening to this a year from now, I'm sorry. All right, very cool. And Jeff?

01:26:11 - Jeff Massie (Co-host)
Well, no coffee, no fancy things like that, but I do have a poem: Wind catches lily, scattering petals to the wind — segmentation fault. Have a great week, everybody.

01:26:30 - Jonathan Bennett (Host)
I wondered where that was going to go. It sounded like a very traditional haiku — there had to be a tech side to it — and there it was; it came out of nowhere and smacked us across the face. I feel that one too. All right.

01:26:46
So I do want to point out a couple of things real quick. First off, we've got the club. The club! It's down that way, down that way. Scan the QR code, join the club, support Twit, make the things happen, keep us on the air. That gets you into the members-only Discord, it gets you the ad-free versions of all the shows, and it helps to support the network — and we sure appreciate folks doing that.

01:27:11
If you want to find me, you'll find my work over on Hackaday. The security column goes live every Friday morning. And then we've got Floss Weekly, and — at least for July and August — we are moving the recording of Floss Weekly to Tuesday at 9:30am Pacific time, 11:30 my time. That's so that I don't have to do two big projects every week in two days; that'll let me do them in three days instead, and hopefully I can get to bed before three in the morning on those nights again, because three in the morning gets to be a pain. But yeah, that's what I'm up to. So follow me over on Hackaday, and don't forget to support the club at Club Twit. We sure appreciate it, and we will see everybody next time on the Untitled Linux Show.
 
