Transcripts

Untitled Linux Show 193 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.


00:00 - Jonathan Bennett (Host)
Hey, this week we're talking about the new AMD and NVIDIA GPU releases and which ones make sense on Linux. Then we're talking about AMD's EntrySign vulnerability, a really interesting study in using the wrong cryptography. And then Mono is back, Linux is officially coming to Android, what? And oh yes, goodbye Skype. It's been fun. You don't want to miss it, so stay tuned.

00:27 - Leo  (Announcement)
Podcasts you love, from people you trust. This is TWiT.

00:36 - Jonathan Bennett (Host)
This is the Untitled Linux Show, episode 193, recorded Saturday, March 8th: Unrolled my Froot Loops. Hey folks, it is Saturday and you know what that means. It's time to get geeky with Linux and open source, all kinds of fun stuff. It's not just me, but it's not the whole group either. We've got Mr Jeff Massie with us today. Hey Jeff, how you doing? I'm good, I'm good. My wife asked me before the show. She said, with just the two of you, does that mean it's going to be a shorter show than normal? I said not necessarily.

01:06 - Jeff Massie (Co-host)
And to Jonathan's wife, I mean, come on, you've seen me for a while now. You know I can be pretty verbose. Yes, yes, we'll be fine.

01:17 - Jonathan Bennett (Host)
Oh yes, I'm not worried about that. I'm not worried about going too short. That is not going to be the problem. So there was a thing that happened this week that we knew was coming, and it's sort of a thing that we're both very interested in, and that is AMD released, or yeah, I think the actual release, they announced. We have benchmarks. Is there any actual hardware on the ground yet?

01:37 - Jeff Massie (Co-host)
Yes, and there were a couple of things, actually technically four things.

01:45 - Jonathan Bennett (Host)
Well, take it away and tell us about all of them.

01:47 - Jeff Massie (Co-host)
Yeah, yeah. So it's been an exciting week for graphics card releases, with multiple cards from both NVIDIA and AMD launching just a couple of days apart. So first up we have the GeForce RTX 5070 and the RTX 5070 Ti. Michael Larabel over at Phoronix received a 5070 card from NVIDIA, but he did not get the 5070 Ti. So in the benchmarks we're going to be talking about the data he generated. There's no 5070 Ti data.

02:18
As with previous releases, he didn't have a game-ready driver, so the article linked in the show notes on the 5070 is focused solely on compute workloads. These include tasks like Blender, V-Ray 6.0, which is a graphical rendering program, SHOC, or Scalable Heterogeneous Computing, some financial and fluid workloads, and a few synthetic benchmarks in there. What did Michael find? When it came to power, the RTX 5070 consumed roughly the same amount as the previous generation 4070 and 4070 Super graphics cards. As for temperature, it didn't run quite as cool as those cards; it was hovering around the RTX 2070 Super and the RTX 2080 Super, while the average temperature was more similar to modern cards like the 4000 series 4070s. But during the workloads the 5070 had a much larger variation in temperature than some of the newer NVIDIA cards. That's why it was higher on the temperature ranking: while the average wasn't too bad, it had much higher swings than previous generation cards. If you look at the geometric mean of all the test results, the 5070 was just slightly faster than the RTX 4070 Super in computational tasks, but came in a bit behind the RTX 3090 and RTX 4070 Ti Super. Now I'd be remiss if I didn't comment.

03:44
For those who may not follow hardware closely, the overall view of NVIDIA's 5000 series is, simply put, don't buy them. These cards are considered overpriced. Some have melting connector issues. They have chips missing part of their capabilities. This entire launch has largely been considered a paper launch due to the rarity of the cards. Some major retailers have had only single-digit quantities to sell; some had zero. So right now most reviewers, or at least the ones I've checked, are very negative about the 5000 series, and the consensus is wait for the AMD cards. Well, that was easy, because just a couple of days later AMD has entered the chat.

04:29
In the second article linked in the show notes, Michael takes a look at the gaming performance of AMD's cards. Now, he comments that he was able to do pre-launch testing with upstream code in the kernel and Mesa, with no need for patches that haven't been merged upstream yet. So basically everything was already in there. He didn't have to pull from different places and merge things in and try to make it all work. It was there, which is much better than previous launches. For anyone considering an RX 9070 card, AMD recommends the 6.12 or newer kernel, and as they usually say, the newer your kernel and Mesa version, the better.

05:11
Now Michael compared both current and previous generations of AMD cards, along with several NVIDIA cards, across a range of games like Counter-Strike 2, Grand Theft Auto V, Hitman 3, Cyberpunk 2077, Dirt Rally 2, and others. Notably, the RTX 5070 wasn't part of this comparison because of the driver, but the 5080 and 5090 were. Now, if you look at the geometric mean of all the tests, the RX 9070 came in just above an RTX 3090 and a little below an RTX 4070 Ti Super. The 9070 XT performs slightly below the RX 7900 XT but above the RTX 4070 Ti Super. Power-wise, the RX 9070 averaged 205 watts with a peak of 251 watts, while the RX 9070 XT averaged 264 watts with a peak of 311 watts. In terms of temperature, of all the cards Michael tested, the RX 9070 was the coolest, with the RX 9070 XT just a few slots up. Basically, the coolers on these cards are working really well.

06:28
The third link that I have in the show notes covers the RX 9070 and RX 9070 XT in compute tasks, and here the 5070 is included. Now, in compute the AMD cards fell behind NVIDIA. The RX 9070 and 9070 XT were about mid-pack compared to existing AMD cards. Michael also noted that he couldn't get Blender working with the HIP backend. Now, if you remember, HIP is part of ROCm, the compute stack that AMD is working on, and he's still waiting to see how ROCm support will work out for these new cards. So he mentions in the article, expect more benchmarks in the future as the compute stack is refined and we get better support for these new cards. So right now they don't have the full ROCm and HIP support for the 9070 and the 9070 XT. The 7000 series cards do have that support, so keep that in mind. It's not everything missing support, just the new ones that were launched. Now, I do want to mention that AMD is not above playing the same games NVIDIA has.

07:36
I've already mentioned some of the NVIDIA issues, but you'll notice that AMD was initially touted for their manufacturer's suggested retail price of $599 for the RX 9070 XT. The non-XT was $550. It now looks like that was just a launch day price, and it's hard to find that deal. Most of the cards are now priced between roughly $650 and $750. So while they came out sounding like a great value, it looks like the rug was pulled out from under everyone.

08:09
AMD has also had a bit of its own snake oil situation, where cards seem somewhat hard to find even though there's supposedly been a much larger stock. Now, the demand supposedly has been so great that they're selling out really, really fast. In all fairness, many retailers did say they had 10 times or more the cards available than they did for the NVIDIA releases. But 10 times zero is still a pretty low number, so keep that in mind. But supposedly they've had months to work on this, and I don't know. Just before I started doing the show here, I checked the Now In Stock website, and it lists everything as out of stock, and it hits Amazon, Best Buy, Newegg, B&H Photo. Now, it just checks US retailers, so I cannot comment for anybody listening overseas.

09:11
So if you need a new video card right now, I'm sorry to say it's not a good time. If you can wait, it's better to let the fervor cool down, let the scalpers back off. Better prices, or at least availability, should come along in the future. You should be able to at least have a choice of brands and models in maybe three, four months, or maybe six months. As always, check out the articles linked in the show notes for details on all the tests. So if you're really data-driven, everything will be right at your fingertips. You'll have all the information. But here's hoping to see more competition in the future.

09:56 - Jonathan Bennett (Host)
Yeah, a couple of thoughts come to mind. One, it looks like this is actually going to be a decent card. This is specifically the XT version, right, the 9070 XT. If you can get it for the $599, that actually seems like a pretty reasonable deal. Now, obviously it does not seem to be available right now for that price, so we're just going to have to all wait if we really want it. I just would not suggest going out and spending much more than that.

10:24 - Jeff Massie (Co-host)
It is a solid performer, though. I mean, it's gotten really good reviews. Everybody loved it. Part of what they loved, though, was the price. Right, right. But I don't know. There still might be some $599s trickling in, but realistically, if you're going to budget and you need a card, you better plan on probably saving up like 700 bucks or so just to be able to get one. It's still a good card. It's still doing great.

10:56
There's still more of them than NVIDIA. I mean NVIDIA, this time around, let's just say with the normal hardware reviewers, I was surprised at the amount of just blatant, unbleeped profanity towards NVIDIA that I heard. The reviewers were really upset, and people are loving the AMD cards. Now, the thing to keep in mind, too, is the AMD cards have more VRAM, and NVIDIA got really slammed for the amount of VRAM they had in there. Like their 5070: certain games wouldn't play at 4K, it would not do it, though you could argue that the 5070 isn't supposed to be a 4K card.

11:58 - Jonathan Bennett (Host)
Were there any 8000 series cards from AMD? No, they just skipped over the 8000.

12:05 - Jeff Massie (Co-host)
Well, I mean, unless they had some mobile thing that I missed, but not a discrete graphics card. They went from seven to nine. Yeah, that's weird.

12:17 - Jonathan Bennett (Host)
That's really odd. Yeah, I was thinking about that, and I'm like, oh well, that's like three generations. Wait a second, it's only two generations. They didn't make a card in between there.

12:30 - Jeff Massie (Co-host)
Right. At least with the processors, when they skipped, there was some mobile in there, and I think they kind of went, well, okay, we don't want to confuse some of our mobile CPUs with the main desktop ones. But the GPUs, as far as I know, they just said, we're going to rank up the number so we can be cooler than NVIDIA.

12:50 - Jonathan Bennett (Host)
Yeah, that's probably about what happened. Let's see, there was one more thing. Oh, it's a 9070. So when you look at these comparisons, one of the things that stuck out to me is the previous generation 7900 was winning in a lot of these competitions. And then it struck me, the 7900 is more of a flagship card, whereas this one is more of a kind of mid-range, low-end 4K card, which is also interesting.

13:18 - Jeff Massie (Co-host)
Do we anticipate a flagship in the 9000 series, or do we think that's going to wait for the next generation? There's some speculation on that, because there are some rumors that AMD might have a flagship coming. Probably not a 5090-level flagship, but something a little more 5080 level. But it's pretty shaky, so I don't know. It could be one that they tried to build and it didn't perform like they wanted, or maybe low yields, or maybe it was just a rumor that won't appear, because they're trying to go after the mid-range market and not chase the high end. But I'm curious how much market share they can maybe win back with some of this. I do want to comment, too.

14:12
Keys512 in the Discord says that Overclockers UK had thousands of one model alone, and they showed the picture stacked in the warehouse. That actually isn't quite correct. There were a lot of cards, and the ones you could see were the same make and model, but they had others in back that were not the same, and there was some talk that maybe it wasn't a full cube of cards. I mean, they had a lot of them. Now, to be fair, there was a lot there that you could visually see, but it was kind of debunked a little bit as far as only one brand; there were multiple brands involved, you just could see one, and the middle and backside of the stack might not have been completely full. But you could probably see a couple hundred, at least on the front. So it is definitely more than what NVIDIA had.

15:13 - Jonathan Bennett (Host)
Yeah, interesting, interesting. So what do we think for using this for Linux? Obviously the performance is going to get better over time, right, because it's the initial release. Certainly there's going to be things where people find, oh hey, we weren't doing this, or we were doing this five times inside the driver and we only had to do it twice. That sort of thing always happens. The performance will get better. So would you say at this point, particularly the XT version, if you can find it for MSRP, is a buy for Linux use?

15:47 - Jeff Massie (Co-host)
Yeah, oh, totally, totally. The one exception might be compute, if you're heavy into compute. Now, it'll still do compute, it's just not as fast. So if you're not rendering a lot of stuff all the time or doing heavy-duty computational workloads, it's going to be just fine. It's just going to take you a little longer.

16:04
But if you're a gamer, right now I would totally say I'm AMD all the way. I would not go NVIDIA. Their driver stack is there, I mean, it's in the current kernel. They're not pulling stuff in ahead; they've had that extra time to really bake stuff in. And some of the reviewers were liking FSR 4 better than DLSS. So the frame generation on AMD was considered better in some cases, and they only generate one frame in between versus NVIDIA doing four frames. And some of that would be like, oh, it looks beautiful, but then it can add some lag in there. Now, some of that's going to depend on how sensitive you are to that. But right now, if I was going to buy a new card, I would totally buy a 9070 XT, and honestly I would even go a little over the MSRP, just because I think that's kind of what it's going to be, those little bit higher prices. But they're still a better value than the NVIDIA cards, and I think with the more memory in them, they're going to be future-proof for longer.

17:15
Most of them, with a few third-party exceptions, take the regular 8-pin connector that we've all used for years and have had no problems with. They don't melt. Now, is it melting on NVIDIA?

17:33
You know, there's some stories out there. They are, and now I don't think it's the connector, because certain sites have gone through and measured the temperature, and certain wires on that mini connector, the 12-pin, 12-volt one, whatever they call it, are carrying a lot more current than others. So it's not a balanced load. So there's some possible issues there. I see no reason to get an NVIDIA card, with the exception of, like I said, heavy-duty compute, or if you can't sleep at night because the money is just piled under your mattress too hard and you're just like, God, I've got to get rid of some of this, maybe a 5090. But good luck finding it. Yeah, and never buy from a scalper. I don't care how impatient you are, don't encourage the scalpers, just walk away. It's not worth it.

18:34 - Jonathan Bennett (Host)
One quick note on compute, and that is that there is one case where compute for AMD makes sense, and that is if you really don't want to install the proprietary drivers from NVIDIA.

18:45 - Jeff Massie (Co-host)
Oh, that is true.

18:47 - Jonathan Bennett (Host)
There is a way to do compute now without having to install anything proprietary, and for some of us that matters, that means a lot. So AMD is making that possible with HIP and ROCm.

18:58 - Jeff Massie (Co-host)
AMD and HIP, we've had stories this past year about how AMD is investing a lot into that. They've been getting a lot of programmers going on it. AMD is pretty much all in on this, because it enables them on the professional side as well. So they're very interested in making sure that this compute stack is going to work, and it's going to help bridge the gap with CUDA.

19:23 - Jonathan Bennett (Host)
Yep, all right. Well, let's talk about, interestingly, something else that AMD has done. This is Linux adjacent, but I figured our audience would be very interested in it, and that is EntrySign. This is a vulnerability in AMD processors that was actually discovered by researchers at Google, Tavis Ormandy being one of them, and I've just sort of learned over the years that anytime Tavis's name is attached to something, it's gonna be just wonderfully done, well-written and really impressive, and this is absolutely the same. So EntrySign. It's all about doing microcode updates for AMD processors, particularly the Zen line of processors. That's all the Ryzen and all of the EPYC processors. So what is microcode?

20:12
We have to step back and say that Intel has not made an actual native x86 processor since the 90s. AMD has not made an actual x86 processor since like 2004. What they actually make are RISC processors, reduced instruction set computer processors, and then they have this little tiny shim of firmware on top of that, which is microcode, that essentially emulates the x86 and x86-64 instruction set, right? So that is sort of the background to this. We're not literally running x64 assembly directly on the real CPU. It's got this microcode in the middle, and so that microcode is sort of sitting in god mode on your processor, and it makes everything work. Because it is software, essentially, it does need to get updated from time to time, and both AMD and Intel will push out updates for their microcode. In the Linux world, the Linux kernel will actually load those microcode updates for you during boot, which is pretty cool. And, as you might imagine, AMD and Intel are particularly interested in making sure that you only run legitimate microcode on your processor. They don't want people to be able to run their own custom microcode, for security reasons if nothing else, like to be able to do secure encrypted virtualization and some of those things.

21:46
And AMD has this really interesting method of making sure that you're running signed microcode. The way this works is, inside the microcode, inside the binary blob, a public key is included, and then the blob itself is signed with the private key that corresponds to that public key. And if the signature matches that public key, you know it was legitimate. But then you might ask, well, how does your processor know that that public key is the one it cares about, right? How do we know that this is actually signed by AMD and not just signed by some random key? Well, the scheme that AMD has used is they take that public key and they hash it. The public key itself is 2048 bits, it's an RSA public key, and they hash it down to a 128-bit value. On one hand, you might look at that and go, oh my goodness, you're losing so many bits when you do that, you're reducing the security so much. Well, because it's RSA, that is not exactly the same as being 2048 bits of, let's say, an AES key. Bits are not necessarily one-to-one as to how much security they give you. So it's generally considered that taking a 2048-bit RSA key and hashing it down to a 128-bit value, so long as you're doing the hashing securely, you do not lose any security.
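To make that check concrete, here's a tiny Python sketch of the identity step described above. This is not AMD's actual implementation: SHA-256 stands in for their AES-CMAC construction, and the key bytes and names are made up for illustration.

```python
import hashlib

def key_fingerprint(pubkey_bytes):
    # the CPU compares a 128-bit hash of the embedded public key against
    # a value fused in at manufacture (SHA-256 here is only a stand-in
    # for AMD's actual AES-CMAC-based construction)
    return hashlib.sha256(pubkey_bytes).digest()[:16]

VENDOR_KEY = b"\x01" * 256                 # pretend 2048-bit RSA public key
FUSED_HASH = key_fingerprint(VENDOR_KEY)   # "burned into" the CPU

def cpu_trusts_key(embedded_key):
    # a microcode blob carries its own public key; the CPU only trusts it
    # if it hashes to the fused value, and only then checks the blob's
    # signature against that key (signature math omitted in this sketch)
    return key_fingerprint(embedded_key) == FUSED_HASH

print(cpu_trusts_key(VENDOR_KEY))      # the real key passes
print(cpu_trusts_key(b"\x02" * 256))   # any other key fails
```

The whole attack described in this segment comes down to beating that one fingerprint comparison.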

23:21
All right, now we have to talk about hashing functions to understand this. A good hashing function is, well, it's one-way. It takes a whole bunch of data on one side, you run all that data through the hash, and then you get a small value on the other side. Every time you run the same data, you should get the same output, and it should be essentially impossible to engineer a scenario where you put two different inputs in and get the same output. There shouldn't be any way to reverse the process. There shouldn't be any way to game the process. And, in fact, if anyone ever discovers what's called a hash collision, where two different inputs give you the same output, then that hash algorithm is basically busted, broken. You should not use it anymore. It's done. For what we call a cryptographic hash, for this sort of use case, those are the assurances that you want it to give you.
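Those properties are easy to see with a standard cryptographic hash. A quick Python demonstration using SHA-256, with arbitrary example inputs:

```python
import hashlib

a = hashlib.sha256(b"microcode update v1").hexdigest()
b = hashlib.sha256(b"microcode update v1").hexdigest()
c = hashlib.sha256(b"microcode update v2").hexdigest()

print(a == b)   # True: the same input always gives the same digest
print(a == c)   # False: a one-character change gives a totally different digest
```

Finding two inputs where that second comparison comes out True would be a collision, and for SHA-256 nobody has ever produced one.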

24:33
AMD used a version of AES. AES is the Advanced Encryption Standard. That's the encryption primitive that the US government has signed off on, and lots and lots of people around the world have looked at it and come to the conclusion that, yes, what AES is doing is secure. But there's like five or six or seven different variations of AES, because you do different things with it, and one of these is the cipher-based message authentication code, AES-CMAC. All right.

25:11
So what this does is, you give it an input and you give it a key, and then it gives you an output hash. And the assurance, the thing that it sort of guarantees, is that if an attacker does not know the key, then it's going to be impossible to change that input message and get the same output hash. Well, AMD used this to hash that public key. The problem is that for that hashing step to happen on your CPU, the key for this hash also has to be burned into the CPU. The researchers from Google were able to reverse engineer that, pull that key out and figure out what it is. It so happens that it was the example key from NIST, the National Institute of Standards and Technology. Interestingly enough, that shouldn't matter for what they're doing, but it kind of does.

26:17
Remember, I mentioned briefly this idea of what assurance does it give you. Well, again, the assurance for AES-CMAC is, if you don't know the key, you can't do collisions. But because of the way AES-CMAC works, if you do know the key, if you know the input and the key, you can do the calculation all the way through, and you can sort of pause it partway through, see the input that's going to be used next, and just sit there and twiddle that input until it gives you the output that you want.
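That "run it backwards" trick can be shown with a toy model in Python. This is not AES-CMAC itself: it's a CBC-MAC-style chain built on a trivially invertible toy "block cipher", with made-up key and message values, just to show the structural point that an attacker who knows the key can steer the final block to hit any target tag.

```python
P = (1 << 61) - 1        # prime modulus for the toy invertible "block cipher"
KEY = 123456789          # the MAC key -- which the attacker here knows

def enc(k, x):
    # toy stand-in for an AES block encryption (invertible when you know k)
    return (x * k) % P

def dec(k, y):
    return (y * pow(k, -1, P)) % P

def toy_cbc_mac(key, blocks):
    # CBC-MAC style chaining, the general shape CMAC is built on
    state = 0
    for b in blocks:
        state = enc(key, state ^ b)
    return state

target_tag = toy_cbc_mac(KEY, [11, 22, 33])   # tag of an honest message

# knowing the key, pick an arbitrary prefix, then run the chain backwards
# from the target tag to compute the one last block that lands on it
prefix = [444, 555]
state = 0
for b in prefix:
    state = enc(KEY, state ^ b)
forged = prefix + [dec(KEY, target_tag) ^ state]

print(toy_cbc_mac(KEY, forged) == target_tag)   # True: different message, same tag
```

That is exactly the assurance AES-CMAC never promised to give once the key is public, which is why it was the wrong primitive to use as a hash here.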

26:51
And so, because of this hashing algorithm that was used, because it's not really what it's intended for, it's possible to generate an RSA public key that will match your target hash when it's run through this hashing algorithm. And for those of you that are security nerds, you're thinking about this and going, wait, wait, wait, you can sort of pseudo-randomly generate an RSA public key, but that doesn't give you anything, because you still don't have a private key that goes along with it. And that was my thought process too. It's like, wait a second, this idea of randomly generating a public key just doesn't even make sense. The way you generate an RSA key is, you generate two very large primes and you multiply those together, and that product is essentially your public key. That means that a good, secure RSA public key is the product of two large primes.

28:13
But if you're just generating this number randomly, it may not be the product of two large primes. It may be the product of multiple primes, and if that's the case, you can actually factor it, which means that from a bad public key you can generate a legitimate private key, which means that you can then sign your microcode update with this bad public key and bad private key. Because you can get back to the private key, and because it's a public key that you generated specifically to match under this not-quite-right hashing algorithm, you can generate valid AMD microcode updates. And the thing that really is scary about that is, potentially, if you're on a hypervisor, you could use that to break the encrypted virtualization stuff. For most of us, for the vast majority of us, we don't have to care about this. It's not going to affect us. But there are some enterprise use cases where this is a big deal. But the way that the Google researchers got there was so fascinating to me, and I thought hopefully everybody else would enjoy hearing the story too.
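Here's that attack shape in miniature Python, with tiny primes standing in for the real 2048-bit numbers. A modulus with several small factors gets factored by trial division, a working private exponent falls out of the factorization, and a forged "signature" then verifies:

```python
from math import gcd

# a "public key" modulus chosen only so its hash would match -- effectively
# a random number, which is very unlikely to be a product of two LARGE primes
n = 1009 * 2003 * 3001 * 5003    # toy scale; the real thing is 2048 bits
e = 65537

# trial division recovers the small factors easily
factors, m, f = [], n, 2
while f * f <= m:
    while m % f == 0:
        factors.append(f)
        m //= f
    f += 1
if m > 1:
    factors.append(m)
print(factors)   # [1009, 2003, 3001, 5003]

# with the factorization known, derive a working private exponent
phi = 1
for p in factors:
    phi *= p - 1
assert gcd(e, phi) == 1      # holds for these particular primes
d = pow(e, -1, phi)          # modular inverse (Python 3.8+)

# ...and "sign" an arbitrary digest so that it verifies under n
digest = 123456789
sig = pow(digest, d, n)
print(pow(sig, e, n) == digest)   # True: the forged signature verifies
```

With a real 2048-bit product of two large primes, that trial-division step is hopeless; with a number you conjured just to satisfy a hash check, it often isn't.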

29:33 - Jeff Massie (Co-host)
Yeah, that's kind of wild. I was thinking, some of the hash stuff sounds just like a Bloom filter to me, where you take a big amount of data and you hash it so you can try to see if the pattern you're looking for is in there. But yeah, how they got through all of that, I mean, that is impressive.

29:56 - Jonathan Bennett (Host)
I think probably what happened is they were trying to understand how AMD's microcode updates worked, and they went, wait a second, they're using this CMAC thing. That's not the right cryptographic hash. I bet we can break this. Yeah, it's very cool.

30:13 - Jeff Massie (Co-host)
Run your own personal microcode updates. What could possibly go wrong?

30:20 - Jonathan Bennett (Host)
Yeah, I mean, there's the whole potential, and this would be extremely complicated to do, I don't know that there would be many people that are really up to this, but you could feasibly do things like core unlocks using a microcode update, right? So like, if you were sold an AMD processor that was a six-core, but maybe it's actually the same die as the eight-core, this might be the sort of thing where you could go in and do a core unlock. So AMD wants to get rid of it for that reason.

30:45
But more than that, there's the ability to go in and flash. So if you can get to a server, you can take the hard drive out and put your own hard drive in, boot it and potentially do a microcode update, and then you take that hard drive out and put the other hard drive back in. Well, if the old hard drive had something like system encryption using the TPM, then if you're inside the microcode, you can break into that and bypass all of that stuff. So there are some legitimate, big security problems. And again, it's not something that most of us regular people use, but inside the data center and enterprises, that is definitely a thing. That is a thing that the government is trying to require certain businesses to use. It's kind of a pain, but that is something that they're trying to roll out.

31:43 - Jeff Massie (Co-host)
Yeah, they're not going to care about my Minecraft game.

31:45 - Jonathan Bennett (Host)
They're going after the big bucks, yes, yes, or political targets, yeah.

31:47 - Jeff Massie (Co-host)
All right, let's talk about Ubuntu and the O3 optimizations. Yeah, well, we're well into the release cycle for Ubuntu 25.04, which will be out in about another month. Canonical engineer Matthew Clemenzu, sorry if I mispronounced that, has posted a status update in the Ubuntu Discourse on the things they've landed on, and they've updated some versions of programs. For example, Ubuntu 25.04 will use glibc 2.41. systemd is going to 257.2, OpenSSL to 3.4.1. They are going to continue using GCC 14 by default, and they'll be going with Python 3.13 along with OpenJDK Java 21. Also, there's Golang 1.24, .NET version 9, and LLVM 20.

32:57
Now, one of the big changes coming is they've decided not to use the -O3 compiler optimization level by default. We've covered this before, where they were testing it, originally thinking it would help boost performance. And when I say this, it's not just the kernel; they were going to compile the entire distribution with that -O3 flag. What they found, however, is that while some workloads did see improvements, overall the system performance slightly declined and binary sizes actually increased. So compiling the distribution with the higher optimization didn't really bring much benefit, except for very specific use cases. Now, in case anyone's forgotten, the -O3 optimization level has always been a bit of a divisive topic. In past stories I've mentioned I personally have tested compiling kernels with different optimization levels. I don't remember the exact size differences that the O3 level produced, but it didn't really give any performance benefit in my benchmarking, other than in very specific situations. Nothing you'd notice in day-to-day use; basically it's a small blip on a benchmarking chart. I think it's good that Canonical is looking at the data rather than just going for higher optimizations to seem flashy. They're doing what they feel is best for the users, and at least they're taking a very data-driven approach. Also, I should note that they've been experimenting with LLVM Clang-based builds as an alternative to GCC, but they found about 12% of the packages wouldn't build, so converting to LLVM Clang by default wouldn't be trivial. There's still work to be done to make it a true drop-in replacement, so we're going to have to wait for that option a bit longer.

34:49
Ubuntu 25.04 is also doing a lot more work for ARM64 processors, specifically around the Qualcomm Snapdragon X1. They're still committed to having a single Ubuntu ARM64 ISO that works across new ARM laptops and ARM64 servers. A direct quote from Matthew: We remain committed to our goal to keep Ubuntu beginner-friendly and accessible for everyone. So we're breaking with today's ARM64 default of device-specific installers and images. Instead, we will provide a single official Ubuntu ARM64 desktop ISO that just works, be it on an Ampere-powered workstation, a Snapdragon laptop or even a virtual machine on your Apple Silicon Mac. So they're really pushing hard for that. Now, there are several other decisions and program versions mentioned in the article linked in the show notes, along with the actual Ubuntu Discourse post. So take a look at that, get all the details, see the versions of everything. There's a lot of stuff I skipped over, but take a look, and I'll say, personally, I'm looking forward to the 25.04 release.

36:02 - Jonathan Bennett (Host)
I think it's going to be a good one. Yeah, I'm trying to remember what they call that. It's like run-ready, ready-to-run... it's where you've got UEFI doing boot for ARM64, and it's really become popular, and there's a term that I just cannot remember exactly what it is. It'll come to me or somebody will tell me, but anyway, it is really the new thing in actually making ARM and ARM64 ready to go. Because it used to be... SystemReady, Arm SystemReady, that's what it is! It used to be so terrible, because you would have to manipulate some device tree and find the right one, and there's a decent chance that it wasn't written yet, and so you had to mess with device tree variables yourself. I've spent far too many hours of my life trying to get obscure ARM and ARM64 devices to run. It's terrible.

37:00 - Jeff Massie (Co-host)
SystemReady makes it so much better. Yeah, I think it does, and you just worry less about what you have, especially if you have multiple ARM64 devices and one USB to rule them all. Yeah.

37:19 - Jonathan Bennett (Host)
The deal with -O3: I'm not terribly surprised, because we've seen other distros look into this, and they all came to essentially the same conclusion. It doesn't make everything enough faster, it makes some things slower, and it's just sort of all over the place. It's not worth it, which is interesting.

37:36 - Jeff Massie (Co-host)
Yeah, at least in my experience and what I've read, -O3 was kind of always somewhat experimental anyway, and it just never really...

37:49 - Jonathan Bennett (Host)
Yeah, it never proved itself on anything, really, other than maybe a very specific thing you were doing. I wonder, honestly, I wonder if you could do profiling and then do like a partial -O3 based on the profiling. So what does -O3 do? Well, I don't know specifically, but I can talk in general terms about what compiler optimizations do. It will do things like take little functions and just make them inline, so that it can cut out a couple of instructions, because otherwise it would jump and then jump back, and so you can eliminate the jumps by making that function inline. Or it'll do things like unrolling loops. You might have a for loop, and there are actual CPU instructions for a for loop, where you increment a counter and then you check the counter. But if you know ahead of time how many times you're going to go through that for loop, you can do what's called unrolling the loop, which essentially just means you cut out all the stuff related to the for loop and then you paste the instructions that are left that many times. Those are the sorts of things that your compiler optimizations will do.
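To make the unrolling idea concrete, here's a minimal sketch; the file and function names are made up for illustration. The second function is, mechanically, what the compiler's unrolling pass produces from the first when the trip count is known at compile time:

```shell
# A hand-unrolled loop next to the plain version, to show what the
# compiler's unrolling pass does mechanically. Names are illustrative.
cat > unroll.c <<'EOF'
int sum_plain(const int *a) {
    int s = 0;
    for (int i = 0; i < 8; i++)   /* increment + compare on every pass */
        s += a[i];
    return s;
}
int sum_unrolled(const int *a) {
    /* trip count known at compile time: drop the loop bookkeeping
       and paste the body eight times */
    return a[0] + a[1] + a[2] + a[3] + a[4] + a[5] + a[6] + a[7];
}
int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    return sum_plain(a) == sum_unrolled(a) ? 0 : 1;  /* 0 = same answer */
}
EOF
gcc -O0 -o unroll unroll.c && ./unroll && echo "same result"
# At higher optimization levels GCC will do this transformation itself;
# compare the generated assembly with:
#   gcc -O3 -funroll-loops -S unroll.c
```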

39:08
The problem is that some of those optimizations also add instructions. And so, in general usage, will the program go through this loop one time, two times, 15 times, a thousand times? The right answer for what the optimization should do is different depending upon how many times it's going to go through it, and maybe how much memory it's going to access when it does so, or, you know, three or four other things. Compiler optimization is really, really hard.

39:41
It's the reason why the FFmpeg guys write so much assembly code by hand, because compiler optimization, as cool as it is, is still not as good as somebody that knows what they're doing writing assembly. I wonder, and I hate to be this guy, I wonder if you could put some AI in the compiler and go, tell me the parts that are going to be most useful to build with -O3, and see if that would do anything. Or the other side of that is, compile it, then profile it, then figure out where the hot parts are, you know, where it's going to be spending a lot of time, and just do the -O3 on those parts and see what would happen. But the current approach of just doing -O3 on everything doesn't really seem to help very much.

40:27 - Jeff Massie (Co-host)
I think you could totally do that, because if you took a distribution and said, okay, figure this out, it could just kind of bisect it. You know, it could just keep doing different parts, looking at the results, do different parts and look at it, and figure it out, because it has a very closed loop of change, result, change, result. It could do it. I don't know if it's worth the pain, though. You know, okay, we spent six months doing this, 24/7, and it came out with, oh, we can get two and a half percent.

41:06 - Jonathan Bennett (Host)
We spent three million dollars on Amazon compute, and we got the thing two percent faster. Yeah, for some programs that might be worth it, but probably not all of them.

41:15 - Jeff Massie (Co-host)
Yeah, there's an ROI in there somewhere. Yes, yes. All right.

41:21 - Jonathan Bennett (Host)
Well, let's talk about Mono and Wine. So you may remember, or you may not remember, because it's been a long time since Mono had its last release. Mono is the Linux version of legacy .NET. .NET now is open source and runs everywhere, and runs quite well on Linux, and it is now possible to be an open source .NET developer, as crazy as that is. But in the bad old days, .NET was a painful framework that was Windows-only, and it was required for doing Linux games, among other things. And there was a project to try to port .NET to Linux. That project was Mono, and Mono has sort of fallen; for anything other than use in gaming, Mono has really fallen out of favor. So much so that the Mono project, and we covered this when it happened, I believe, was donated to Wine, and so Wine is now running Mono, and Mono had not had a release in like five years. Well, now that it's part of Wine, we have the Mono 6.14 release. It's out, the first major release in like five years. Interestingly, because Wine already has a couple of other Mono things, they are calling it Framework Mono. So Mono is now Framework Mono, and the release is out. Very, very interesting. Honestly, getting this release out is probably going to help a lot of things start working better in Wine, things that use the old .NET Framework. The announcement says that they made the name change to distinguish it from MonoVM and Wine Mono, which are different projects. So we now have Framework Mono.

43:27
And then the other thing I wanted to mention with this: in Wine, we also got a Wine Staging release. Wine Staging is sort of the experimental branch of Wine. It's where all of the patches go that they otherwise would not accept, but they'll put them in here just to be able to get some more testing done on them. And, interestingly, some distros actually just pull directly from it. Fedora, for instance, for the longest time, and I assume it's still the case, when Fedora packages up Wine, they actually just package up Wine Staging.

43:55
So 10.3 is out, and it's got 347 experimental or testing patches on top of the upstream Wine. A bunch of things actually got upstreamed, and so that is much fewer than we've had in the past. Michael Larabel is talking about this, and the OLEAUT32 and the SetupAPI code finally got upstreamed into mainline Wine, which is also very interesting. The interesting thing about 10.3 is that it has a fix for a bug report from 2010, a 15-year-old bug, and so now you can get your fix playing Rise of Nations: Rise of Legends, which was a Microsoft Game Studios game from 2006. It now works. Woo-hoo! Being a little sarcastic, but it is always fun to see the old stuff getting fixed, and new stuff getting fixed too.

45:00 - Jeff Massie (Co-host)
Just so everybody's aware, if you go to install Wine, you have Wine, which is your stable branch; they patch bugs and whatnot. Then you have Development, where they run a lot of the experimental stuff and put newer code in. And then on top of that, for patches that don't necessarily meet Wine's criteria yet, they're not quite there, they're still testing them, you have Staging. So Staging is actually the most experimental branch, and a lot of that eventually gets into Development and finally into Wine. I just want everybody to be aware that if you're like, I want the most cutting edge, that's Staging. If I want to step back from the edge a little bit, that would be Development. And if I need something solid...

45:48 - Jonathan Bennett (Host)
It's Wine. So there is actually another step you can take, and that is Proton, which is Valve's fork of Wine.

45:59
And then you also have things like Glorious Eggroll. That is another developer that's sort of independent, and he has a version of Wine. What he actually does is take the Wine Staging patches and the Proton patches and put them together on the latest version of Wine, and there are actually some things that'll work inside of Glorious Eggroll that won't work anywhere else, and so that's pretty interesting to know about too. So when trying to get Windows games to run on Linux, sometimes there are so many different versions of Wine that you can try. And the nice thing is there are so many of us out there also trying to get the same games to run that you can go to places like WineDB and ProtonDB and the conversation page on, like, the Steam page, and someone will probably say, hey, I got this to work with, you know, Glorious Eggroll version yada yada yada, and here are the things that I installed with Winetricks to make it work. It is sometimes so esoteric and niche, and you feel like a wizard walking the paths of, you know, the dark ages, trying to get some of these Windows programs to run on Linux.

47:13 - Jeff Massie (Co-host)
It's fun, it's infuriating, it's an experience. Oh yeah. But the nice thing is, if you're trying to make a modern game run, it's more likely to work than an old game, because the old ones were the ones that had a lot of custom libraries, a lot of special, unique things, and the newer ones tend to follow the standards a lot better and have fewer custom libraries or custom DLLs. For the most part, that is often the case with modern games.

47:47 - Jonathan Bennett (Host)
What we really see, though, is things like the anti-cheat will throw you off. That's, yeah, the PunkBusters, and that's where you run into problems.

47:56 - Jeff Massie (Co-host)
But if you don't have anti-cheat, there's a good chance it's just going to work out of the box. Yep, yep, absolutely.

48:04 - Jonathan Bennett (Host)
Rachel in the chat says the last time they looked at WineDB, Baldur's Gate 3 was listed as garbage. Yes, things like that happen. Hopefully, but how long ago?

48:14 - Jeff Massie (Co-host)
And did you look at Glorious Eggroll?

48:16 - Jonathan Bennett (Host)
Yeah, go look at ProtonDB, there might be a way. We just sent him down a rabbit hole of research. Hours and hours.

48:24 - Jeff Massie (Co-host)
I'm so sorry. Enjoy! Or, you know, honestly, on ProtonDB you can filter on your graphics card, so you can say, I only want to look at AMD, or I only want to look at NVIDIA, and then look. And then you're like, oh, paste this into the program start options, because it'll force some mode, and it's like, oh, there you go, it works now. Yeah, yeah.

48:47 - Leo  (Announcement)
Yeah.

48:48 - Jonathan Bennett (Host)
All right. So, oh, there is some interesting news. We sometimes cover things like Clonezilla when it's just an update and it's like, okay, that's nice, it's an update. And I was looking at the update notes here and, no, there's something actually fairly important in Clonezilla 3.2.1.

49:04 - Jeff Massie (Co-host)
There are probably a few things.

49:07 - Jonathan Bennett (Host)
One very much in particular.

49:09 - Jeff Massie (Co-host)
You're probably thinking the same thing I am. So Clonezilla Live 3.2.1, the disk cloning and imaging tool, has been released with Linux kernel 6.12 LTS. Now, this release might spark some debate, cause some kerfuffle, because it removes support for 32-bit systems. The reason for this change is an upstream modification in Debian's repositories. So if you're running a 32-bit system, you need to use an earlier version of Clonezilla; it's not going to work with this one.

49:48
Version 3.2.0 was released about four and a half months ago, and with this version 3.2.1, the kernel went, like I said, from 6.11 to 6.12 LTS. This improves hardware support, but it also rebases to the Debian Sid repository as of March 3rd, 2025. Now, the Sid repositories are where the removal of 32-bit came in. So this was not a Clonezilla decision; it's because they pull so much out of the Debian repositories.

50:29
Debian made this decision, but everybody needs to be aware of it. One big simplification they did is merging the LZ4 and LZ4MT boot parameters, with one now being the default. This change makes it easier for users to customize boot parameters, plus it adds the LZ4-related boot parameters in there automatically. So if you're interested in the technical details, check out the link in the show notes, where you can find the official release with the list of now-available variables for LZ4. I'm not going to bore everybody by reading them all out, but they have a specific list of what is supported now. You're going to say something, Jonathan?

51:23 - Jonathan Bennett (Host)
I thought you were done. No, continue, I got more.

51:26 - Jeff Massie (Co-host)
Go. Okay. For system checks, Memtest86+ has been updated to version 7.20. If you're using Clonezilla Live in expert mode, there's also now a feature that allows users to copy Clonezilla-related log files to the Clonezilla Live USB drive. A new file, ocslog-usb, is used for this purpose. The checksum mechanism for Clonezilla images has also been improved. It now reads once and passes the data to multiple checksum programs. Plus, there's a bug fix that corrects the total number of chosen checksum methods. So it's good to get that bug fixed.

52:10
Several other programs included in the live image have been updated as well, and you can download the new image right now from the official website, which is also linked in the article in the show notes. Remember, this release is only for 64-bit systems. Once downloaded, you'll load it onto a USB drive and boot from there. That's how it was designed to work, running off the USB drive. And yes, I know I've mentioned this a couple of times already, but definitely check out the article in the show notes for all the details, because there's some more technical stuff.

52:43
I left some out, one, because I didn't fully understand it, and two, nobody wants to hear me rattle off, you know, archaic libraries and versions of those. I really recommend taking a look at Clonezilla, though, because it is a handy tool to have around for a variety of useful Linux tasks. It's one of those toolbox things where you don't need it every day, but, gosh, it's really handy to have it there when you do need it.

53:12 - Jonathan Bennett (Host)
Yes, I'm looking at the Clonezilla download page, actually, and they say all versions of Clonezilla Live support machines with legacy BIOS. But if you want to use Clonezilla on a machine that is 32-bit only, you now have to make sure to get the 3.2.0 release or earlier, not 3.2.1. And that's going to be the big thing to know about. I'm sure there are some people that still have machines running around that are 32-bit and not 64-bit, and maybe that's the sort of machine that you really need to be able to go and back up with Clonezilla. So definitely something to know about, because it could catch you out otherwise.

53:55 - Jeff Massie (Co-host)
Very true. Though, if you do have a 32-bit machine, you are not going to need any of the new hardware support this version contains. That's true. Because it's probably 10 years old, maybe a little more.

54:10 - Jonathan Bennett (Host)
Yeah, I'm trying to remember when the x86-64 first came out. It's been longer than 10 years ago, hasn't it?

54:17 - Jeff Massie (Co-host)
Yeah, but I thought there was like a Pentium or Celeron that hung around for a while. It was one of those where most everything was 64-bit for quite a while, but there were like one or two chips that kind of hung around.

54:35 - Jonathan Bennett (Host)
Yeah, that's very possible. Like maybe the chips they put in the netbooks, like the Intel Atom; maybe it was 32-bit only, something like that.

54:40 - Jeff Massie (Co-host)
Yeah, it was something like that that was the last 32-bit x86 chip that finally died. If you ask when most things converted to 64-bit, it was probably 15, maybe 20 years ago now. It's been quite a while.

55:03 - Jonathan Bennett (Host)
I'm trying to find when it actually happened, but I don't see it. The Wikipedia article is failing me, not giving me an easy date.

55:12 - Jeff Massie (Co-host)
Oh well, I just remember from previous conversations we've had, and there was some, oh, don't forget about XYZ, because it hung around till aught-five or whatever.

55:26 - Jonathan Bennett (Host)
It was that for quite a while. Looks like 2003, 2004 is sort of when 64-bit came along, so we're talking about 20 years ago now. Yeah, that feels about right. All right. Oh, there's a cool thing for Android that just got announced that I saw. So this is something that...

55:54
I have done things like this with Termux before, but apparently Google is going to officially roll out Debian Linux, and they're doing it on Pixel phones. In fact, some Pixel phones already have this if you're on the latest release. Hmm, I need to go bug my wife to get her to update her Pixel phone to the latest Android release and then go play with it and see if this is in there. At the moment you have to go through developer tools. So, you know, go look up your build number and tap on that multiple times until Android finally asks you, hey, do you want to be a developer? Yes, I want to be a developer. Then go to the developer options.

56:27
On some of these phones, there will now be an option to turn on the native Linux Terminal application, and this is essentially what we've had with, like, Crouton on Chrome OS devices for the longest time.

56:44
This is running a virtual machine to get a full Linux install inside of a VM.

56:53
It's unclear whether this is going to give, like, full video support. So, for example, in Chrome OS you can run applications under kind of a virtual X11 and get them to actually display on the screen, so you can run full-on applications. It's not entirely clear whether this is going to do that or not, but there's a lot that this will let you do, and in theory it should be a little bit more powerful than Termux; well, it should be more powerful than Termux on a non-rooted phone. Now, if you're running Termux on a rooted phone, then the sky's the limit, because you own the system. But I would expect that you get to do a few more things with the Linux Terminal app than just Termux on an unrooted phone, and this is also the sort of thing that I could see Google continue to work on and make more and more powerful. So our Android phones are getting just a little bit more Linux-y, and that is fun. I look forward to playing with this.

57:59 - Jeff Massie (Co-host)
I thought it was very cool. I almost did this story, because I'm like, oh, that's cool. It is cool. But I thought I was competing for other stuff, and I was pretty sure you were going to pick it up, Jonathan.

58:14 - Jonathan Bennett (Host)
Yeah, so I see here in the article it does say that it currently lacks support for GUI applications, but the rumor is that that's coming in Android 16. So that'll be when this gets really interesting. Now, where does that leave Chrome OS?

58:32 - Jeff Massie (Co-host)
Yeah, I mean, and maybe they're going to step even closer to Linux, just because then there's less development they have to do. You know, just put their shiny wrapper around it and make it easy.

58:48 - Jonathan Bennett (Host)
Yeah, it's very intriguing, very interesting to see. It has always been kind of weird that they have Chrome OS and Android; they sort of solve the same problem, and they do it in two very different ways, because Chrome OS is not much more than just native Linux with a desktop environment. It's really not much more than that. And Android is its whole...

59:12 - Jeff Massie (Co-host)
Thing. And how special do you have to have Android? As phones keep getting more memory and stronger processors, they're becoming closer and closer to, you know, little desktops, weak laptops. I mean, a lot of the stuff in Android...

59:33 - Jonathan Bennett (Host)
So Android obviously is the Linux kernel. It's the Linux kernel with a bunch of patches on top of it. A lot of those patches have been for things like power management, to try to squeeze little bits of more performance out of those systems. And a lot of those patches have actually landed in the upstream kernel. So you could see a future, and in fact, you can do this.

59:58
Now, you can actually run, I don't know if it's the latest, like I don't know if you can run Android 15 this way, but you can run Android on a vanilla Linux kernel. I've seen people doing it on some of the OnePlus phones, which is really pretty interesting. I don't know if that's ever going to be the default, because the hardware manufacturers love having custom patches in the kernel. I don't know why, but it's like candy to them. But we're getting closer and closer to that not necessarily being a thing that we need.

01:00:34 - Jeff Massie (Co-host)
Well, I'll be honest, I see some companies where it wasn't invented here, so it's definitely not good enough; we've got to reinvent the wheel because our wheel is rounder. Even though at the end of the day, it might be like, well, there's no difference in this, why are you doing it? Yeah, but it's not ours; we had to do ours. There are some companies that think that way. Yep, yep, that's true, that's true.

01:00:57 - Jonathan Bennett (Host)
But regardless of all of that, I'm excited to see the official terminal app, and I look forward to playing with it. Oh, it's going to be a blast. Yeah, it's going to be fun. Let's see. Oh, Jeff has a story about a language that isn't Rust.

01:01:13 - Jeff Massie (Co-host)
We are not.

01:01:14 - Jonathan Bennett (Host)
We are not necessarily Rust fanatics around here, but we do follow it, and we do find it interesting.

01:01:20 - Jeff Massie (Co-host)
But Jeff's just going to break with tradition and talk about something else. We've covered a lot of ground on Rust, both things it can do and challenges it faces in the kernel. But today we're not going to talk about Rust. Instead, we're going to dive into something a bit more niche, maybe a lot more niche, and this is going to be a topic for the engineers and scientists in the audience. The two articles linked in the show notes are about Morpho. Probably most people are like, what? It's an open-source, programmable environment that allows researchers and engineers to tackle shape optimization problems for soft materials. It's designed to be easy to use and, best of all, it's free. So if you work with soft materials, this could really benefit your research or problems that need solving.

01:02:17
Now, the first article is a link to the GitHub page for the Morpho language. They highlight some key features, like how it's familiar because it uses syntax similar to other C-family languages. They also say it's fast, running as effectively as other well-implemented dynamic languages. It's class-based, which means you can take a highly object-oriented approach to it, and it promotes code reusability. And I know object-oriented, in programmer circles, some love it, some hate it, but it is what it is; you don't have to use it. And it's extensible, making it simple to add packages in both Morpho and C or other compiled languages. On the GitHub page you'll find a wealth of resources: links to documentation, developer guides, Slack communities, tutorials, YouTube videos. So there's plenty of support to help you get started. It also includes instructions on installation, whether you prefer Homebrew or packages for different distributions, or, if you want, you can install it from source. And if you're on a non-Linux OS, I know it's out there, they support it through the Windows Subsystem for Linux, or WSL. So you're still running Linux. Yeah, we still win.

01:03:48
Now, I won't get too deep into this, but the second article touches on how modern modeling packages are primarily designed for hard materials and aren't really optimized for soft materials. Traditional software is built to calculate and optimize structures made from materials that don't deform much. With Morpho, you can solve shape optimization problems for much softer materials. It can also handle hard materials as well, so you can use it in a mixed environment; don't think that it can't do hard materials.

01:04:29
Now, an example I'd use: if you're working with soft material modeling, say you have a stent used in a cardiovascular procedure, which is basically a metal mesh surrounded by soft tissue in the heart and arteries. Morpho can help you predict how that stent will perform, because it can handle the stiffness of the mesh and the softness of the heart tissue. Or maybe you're trying to optimize space and minimize packing materials for shipping; it could help there as well. So there's a vast amount of possibilities with this programming language. If this sounds interesting to you, and I'm sure there are at least a few of you out there going, hey, I could really use this, take a look at the article in the show notes.

01:05:17
There's a lot more detail there. And if you want to dive deep into soft material modeling, don't worry, you don't need a ton of complex training to solve these problems; you can solve very hard problems with just a bit of training. They said they'd seen undergrad students pick this up within a couple of weeks and solve extremely complex problems. So it's built so you can kind of jump in and take off. I just thought I'd share it with the audience, and if anybody ends up using Morpho, or if you're using it now, I would love to hear about it in the Discord. So happy modeling.

01:05:56 - Jonathan Bennett (Host)
Yeah, this is fun. I wonder if there's a future where you take this and you plug it into Blender and you use it for 3D rendering, for doing simulations.

01:06:08 - Jeff Massie (Co-host)
Oh, yeah, I could.

01:06:11 - Jonathan Bennett (Host)
That would be cool cool.

01:06:12 - Jeff Massie (Co-host)
Well, it's pretty extensible and seems like you could plug it into various things. So yeah, it might be possible to help make some of your soft materials in Blender more lifelike.

01:06:27 - Jonathan Bennett (Host)
That's the cool thing about taking something niche like this and open sourcing it. You kind of give it to the world; you have no idea what people are going to use it for. Right? Like, one of the use cases they talked about is working with heart disease and modeling stents, the things they put into people's arteries to open the arteries back up so a patient doesn't have a heart attack. And then we just talked about making prettier pictures in Blender with it. It's just crazy, when you make something open source, the radically different directions people can take it. It's always fascinating to me to see that.

01:07:05 - Jeff Massie (Co-host)
And where it's a language environment, I mean, it plugs in about anywhere. You're not limited to just a GUI interface from a traditional modeling program. Where can you put a language? About anywhere, and that's where it'll fit in. You're just going to have to have a C-type communication with it.

01:07:29 - Jonathan Bennett (Host)
All right. Well, we're going to talk about something for just a minute that doesn't fit in anywhere any longer, and for those of us here at TWiT, that's kind of sad, because we're talking about Skype. You may not have thought about Skype at all for a very long time, but especially on shows like FLOSS Weekly, Skype used to be the solution for doing interviews, because for years and years it was about the only platform that had decent audio. Microsoft, of course, purchased Skype, and they have finally done the deed: Microsoft is killing Skype. Now, it's not gone yet, but May 5 of this year is when Skype will be no more. There have been some calls already to open source it, release the sources. I don't know if that's going to happen or not. Microsoft might; they actually entertain those sorts of ideas a little more now than they used to. Open source is no longer cancer to Microsoft. All that said, Skype is finally going away.

01:08:32
Don't be too sad, because there are lots and lots of both open and closed source solutions out there, from Jitsi to VDO.Ninja to, of course, the closed source ones that I don't ever use and can't remember what they're called. I mean, goodness, Microsoft Teams. Microsoft Teams is the big one from Microsoft, yeah. And then you've got, you know, Google Meet, Apple FaceTime. There are so many of them out there doing the thing that Skype did; particularly after COVID, so many different places wanted to do the same thing. But Skype will always have kind of a soft place in my heart, because that's what we used for so long to make things work. And you have certain things like Zoom and Discord...

01:09:21 - Jeff Massie (Co-host)
You can say kind of filled in some of that niche too.

01:09:23 - Jonathan Bennett (Host)
That Venn diagram had a lot of overlap. I think Zoom was the big one I was trying to come up with and failing. But yeah, there's a lot of them doing the same things now.

01:09:34 - Jeff Massie (Co-host)
At one time I could rattle them off, because just for work I had, you know, Teams, Zoom, Meet Me.

01:09:42 - Jonathan Bennett (Host)
Hey, Meet Me is another one.

01:09:43 - Jeff Massie (Co-host)
Yeah, I'm trying. I used to be able to rattle off about five of them during COVID. Every company had their own, and now it's kind of settled down. Teams and Zoom seem to have, at least in the business world, taken over a lot of it. Yeah, yeah.

01:09:58 - Jonathan Bennett (Host)
Yeah, so I figured we would briefly touch on this. Skype had a decent Linux client there for a while. I remember the first couple of times I did FLOSS Weekly, I had this crazy setup where I was running Linux, and then I had Windows in a virtual machine, and I had my camera connected through with a USB pass-through, and I was running Skype inside the VM, because something about it wouldn't work under Linux. I don't remember what it was.

01:10:34 - Jeff Massie (Co-host)
It was so wonky, it was great. But remember, we can go back even further, because it kind of took over where TeamSpeak was used, and that was a little clunky. Yeah, yep. Fun times. Kids today have no idea how easy they've got it.

01:10:47 - Jonathan Bennett (Host)
It's true, it's true. All right, shall we cover some command line tips?

01:10:54 - Jeff Massie (Co-host)
Yeah, for all my pontification and ramblings today, this one's going to be probably the shortest tip on the show that we've ever had: mesg, M-E-S-G. So today's command line tip is about the mesg command, and it's used to control whether other users can send you messages in your terminal or not. Super easy to use: if you type mesg by itself, it will show you the current terminal setting, whether you can or can't receive messages. If you want to block messages, just type mesg, space, n, and if you want to allow them again, or initially allow them, because maybe it defaults to off, type mesg, space, y. That's really all the program does, but next week I'll introduce another tool that ties into this.
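For reference, the usage Jeff describes looks like this in a terminal (the y/n state applies only to the current tty):

```shell
# Show whether other users may write to this terminal ("is y" or "is n")
mesg

# Block write/wall messages from other users
mesg n

# Allow them again
mesg y
```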

01:11:46 - Jonathan Bennett (Host)
Hopefully lets you do something. Yes, yes, that'll be fun. I look forward to that. All right, I have a command line tip, and this is something I have been using a lot this week. My command line tip is virsh, and that is the shell command, the command line interface for libvirtd, the virtualization system that I know Fedora and CentOS and RHEL make use of. You can use it in Debian too; I don't know that it's the only option in Debian, but virt-manager and all of that, it's my preference for doing virtualization.

01:12:25
And so if you use virt-manager to set things up, you might wonder, well, is there a command line tool that does the same thing? And that is virsh, and here's the thing that I have been doing a lot with it. I got a phone call a couple of months ago that said my co-location provider, where I've got a couple of servers set up, "we're not going to do co-location anymore for small businesses like you." Fine. So I had to find a different provider. I moved one server over and left the other server there with the majority of the live customers on it, and then I did live migration. This is crazy: I did live migrations over the internet to move the VMs over, and I thought it'd be very interesting to talk about how that works. You've got to have SSH; you've got to be able to SSH in as root from one machine to the other. And the command starts with virsh migrate --live, because you want to do it live. What it's going to do is leave the old machine, the machine in the old place, running, move everything over, and then do another snapshot comparison. If there are any differences, it moves those over, and finally it gets to the point where those two things are the same. It freezes the old one and then kicks on the new one, and it'll be bit-for-bit the same. You don't have any actual downtime; the virtual machine stays up. And then at that point, depending upon which of these options you use, it drops the old one altogether.

01:14:00
So it's virsh migrate --live, then --persistent. That means where we put it, I want it to be there even after a reboot. --undefinesource, that's the one that says I want the old one to go away. --copy-storage-all, all that's saying is I want you to move the disk image over too. Now, if you're doing something like booting off of a NAS, you may not have to do that, but in my case I'm not.

01:14:21
So I do --verbose, of course, because I want to see more information about what's going on. Then you give it the virtual machine name, and then the sort of magic thing at the end is qemu+ssh://, then the IP address that you're moving it to, then /system. And yes, it took me quite a while to come up with this entire bit of magic to make this actually work. One thing that you might be interested in, and I had to use this trick a few times, is to leave out the --undefinesource. If you do that, it will leave the VM paused on the old machine and running live on the new machine, and I did that a couple of times for things where I didn't care about state. So, like my VPN provider, for instance: I have a virtual machine that just does VPNs. I wanted it to be accessible at the old IP address, but I wanted it to be ready to go with the new IP address. So once we made a change, we could just click the button and it would come up and run. So that one's pretty interesting.
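Put together, the full invocation Jonathan is describing looks something like this; the VM name "myvm" and the destination address are placeholders:

```shell
# Live-migrate "myvm" from this host to the destination over SSH.
# Run on the source host; requires root SSH access to the destination.
virsh migrate --live --persistent --undefinesource \
    --copy-storage-all --verbose \
    myvm qemu+ssh://203.0.113.10/system

# Leaving out --undefinesource keeps the VM defined (and paused) on the
# source, handy as a warm standby at the old address.
```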

01:15:26
One other thing to be aware of is, if you don't have the same CPU between your two servers, you may have to go inside of virt-manager and specify an emulated CPU model.

01:15:41
Now, that's not doing emulation. What it does is just mask off certain CPU capabilities. So, for example, AVX-512: you may have one CPU that can use AVX-512 and another one that can't. You have to go in and pick a CPU model that both CPUs support to be able to do the migration back and forth; otherwise it's going to complain that it's not a compatible CPU. And so mine, I had to do that, because the old machine was an Opteron and the new machine was an EPYC, and they were just different enough that I couldn't just do CPU pass-through. So it's one more thing to be aware of when you go to do this. But all that said, if you can put two machines side by side and do migrations from one to the other, it's extremely useful for being able to do reboots and come back up from problems and all of that good stuff. It was very nice to have those two servers sitting there side by side for years.

01:16:46 - Jeff Massie (Co-host)
That's pretty awesome. And just a little more on the CPU stuff. We've talked about this in the past, but in general, CPUs are grouped into families based on the features they support, what commands, and as Jonathan said, AVX-512 is one of them. And was it SSO commands or SSE?

01:17:08 - Jonathan Bennett (Host)
Or SSE, that's what it is. Yeah, the different SSE versions.

01:17:11 - Jeff Massie (Co-host)
Yeah, so they'll take options like that and group them into specific families: okay, this is tier two, tier three, tier four, and you can have a general grouping of, okay, here's what is supported in this group.

01:17:30 - Jonathan Bennett (Host)
Yep, yep. You can actually go a step further with that inside of the virtual machine manager. I won't be able to show it easily, but I'll pull it up. You can either tell it to just copy, what's called a host pass-through, but in virt-manager you've got choices. You can say pretend like this is a 486, or pretend Conroe, Dhyana, EPYC, EPYC-Rome, IvyBridge, kvm32, kvm64, Nehalem, Opteron G1, Opteron G2, G3, Penryn, Pentium, Pentium 2, Pentium 3.

01:18:06
There's a bunch of different options here that you can choose, and obviously you're going to get the best performance from just doing a host pass-through: everything that my real CPU can do, let the virtual machine do that. But if you run into a bug, or you need to be able to transfer between two machines, you kind of have to set that to something that both of them support, and the fun thing there is that you may have some trial and error to figure out what exactly that is. Keys512 says, can't you use -cpu host and then the CPU model? Yes.

01:18:41
So I believe that is doing the same thing as setting it inside virt-manager. But if you're going to do a live migration, you have to have that done before you go to migrate, because your virtual CPU, you've got Linux running on it. You can't pull out valid instructions and say you're not allowed to use these instructions anymore; Linux in the virtual machine just will not let you do that. It has to be done ahead of time. So you have to go in there and set it up ahead of time: essentially that same idea, use the CPU model that both of these machines support. That way you can actually do a live migration. I'm telling you these things that took me weeks and weeks to figure out, to be able to actually do this in the real world.
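One way to do that ahead of time from the command line, as a rough sketch; "Opteron_G3" and "myvm" are placeholders, and the exact model list depends on your hypervisor:

```shell
# List the named CPU models this hypervisor knows about
virsh cpu-models x86_64

# Then edit the domain and pin a baseline model both hosts support,
# replacing the <cpu> element with something like:
#
#   <cpu mode='custom' match='exact'>
#     <model fallback='forbid'>Opteron_G3</model>
#   </cpu>
virsh edit myvm
```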

01:19:33
Learned them the hard way, a lot of trial and error and profanity. I try to avoid the profanity, but there was very much trial and error. Fun stuff. All right, I think we've just about hit the bottom of the show, and I will let Jeff plug whatever he wants to. He may ask for coffee, he may have a haiku; it is his time.

01:19:54 - Jeff Massie (Co-host)
I don't have anything for coffee. I do have a poem, but it's not a haiku. I will slightly plug and say, look at the extras section for the TWiT.tv subscribers, if you're in the Club. There's an Untitled Silicon Show where we talk about some of my experience in the fabrication industry. Not real long, just a little blip, so take a look at that. Other than that, I'm going to leave you with a poem; it's kind of more of a limerick: A sysadmin named Beth equates Win 11 to meth. When you let it inside, critical process died, and suddenly, blue screen of death. Have a great week, everybody.

01:20:46 - Jonathan Bennett (Host)
Bravo, that's great. Thank you, man, for being here. I appreciate it so much. Thank you. All right, if you want to find more of my stuff, of course you've got Hackaday. One of my stories was taken directly from my security column there; it goes live every Friday. We've got FLOSS Weekly; that happens over at Hackaday now. It records on Tuesdays, goes live on Wednesdays. You can watch for that too. And that is about it for today. We appreciate everybody that watched and listened, those that got us live and those that get us on the download, and we will see you next week with more shenanigans, I'm sure, on next week's Untitled Linux Show.

 

 
