

I always just booted the old kernel when I ran into the issue, but it was less than ideal, which is why I would prefer to run a stable distro in this case.
Also, isn’t ElementaryOS a stable distro anyway due to being Ubuntu-based?
I’ve never run an installfest, but I’ve been to my university’s Linux Users Group installfests, and here’s what they did:
Also, I’d recommend bringing extra USB peripherals in case the internal devices need a little work: some extra mice, keyboards, and Ethernet adapters. You hopefully won’t need any of them, but they’ll certainly make life easier if you do.
As for time, I’d imagine doing the basic install and ironing out some (not all) of the kinks takes less time than it takes a group to stat D&D characters, if that’s a helpful comparison for you.
It’s pre-T2, so it should be very easy to install a Linux distro on it. The only bit of misery you’re going to encounter, as others have said, is the Broadcom drivers. Except for a select few distros, you’ll probably need a USB Ethernet adapter for installing the operating system and adding the drivers.
Also, I’d rather put my hand in a circular saw than try running a rolling release on this laptop, because the driver uses DKMS, meaning kernel updates sometimes break it.
I only know this because the desktop I’m typing this on has a Broadcom Wi-Fi card from when I used to run a bare-metal Hackintosh on this machine. I’ve since moved to a nice house with an Ethernet port in every room; also, I just use macOS in a VM these days anyway.
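If it helps, getting the wl driver going on a Debian-based distro looks roughly like this - a sketch assuming a BCM43xx chip that the broadcom-sta package covers (check your lspci output against the package description first) and that non-free sources are enabled:

```
# Identify the Broadcom chip
lspci -nn | grep -i broadcom

# Install the DKMS-built wl driver (needs non-free/contrib on Debian)
sudo apt update
sudo apt install broadcom-sta-dkms

# Unload the conflicting in-kernel drivers and load wl
sudo modprobe -r b43 b43legacy bcma brcmsmac ssb
sudo modprobe wl
```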
As others have said, OCLP is a thing and a well-oiled machine from what I hear, but also, the oath I have made to the Church of Linuxology demands that I at least recommend Linux.
As said by @iii@mander.xyz, bog standard Debian Stable.
You really don’t want a rolling release distro for something like this - major software updates might change the behavior of your software, break your configs, etcetera. Stable distros do as much as they can to make sure that software behaves the same, only porting security fixes.
This way, you don’t really have to touch it except for routine updates, which have a nearly nonexistent chance of going wrong (and stuff like unattended-upgrades can make them automatic), and the occasional major upgrade.
You can go several years without a major upgrade just fine - Debian versions are supported for 5 years, and we’re only a few days from getting Trixie, which will last into 2030. New versions come out every two years, and it’s not that hard to upgrade between consecutive ones; I don’t think sitting down on a weekend every two years is that bad.
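For reference, turning on unattended-upgrades is about two commands - a sketch assuming stock Debian; the dpkg-reconfigure step writes the config shown in the comments:

```
# Enable automatic security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# That writes /etc/apt/apt.conf.d/20auto-upgrades, roughly:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```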
I kind of hate Ubuntu, but it’s pretty based in this case due to its really long support window. This might be a really great case for Rocky Linux, though, as it also gets 10 years of support.
Luckily, I can probably live with using mine a few more years. Mine’s an early AM4 system with a Ryzen 5 2600 in it. My CPU performance isn’t a huge bottleneck (although I’d like a couple more cores for faster compilation).
Really, it’s my graphics card. The 580’s fine for some basic gaming, but it sort of got left in the dust with ROCm support - it’s kind-of-sort-of supported, but not well enough for Blender to work with it.
I think the ROCm situation on consumer GPUs has improved enough now that as long as I buy a newer card, I should be fine. Debian support’s improved a lot as well - for many GPUs, it should just be a matter of `sudo apt install hipcc` now. However, Debian’s ROCm packages are still a few versions behind even in experimental and don’t support the latest AMD cards, but I suspect that getting it packaged was the hard part, and that once Trixie releases, Forky/Testing will catch up in a few months.
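If anyone wants to try it, here’s a quick sanity check after installing - a sketch assuming Debian’s ROCm packages; the exact rocminfo output format may differ:

```
# Install the HIP compiler and the ROCm device-query tool
sudo apt install hipcc rocminfo

# If ROCm sees the GPU, this should print its gfx target (e.g. gfx1030)
rocminfo | grep -i gfx
```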
I didn’t even know there were still cases bundled with power supplies! But yes, throughout the history of PC building, bundled power supplies have generally been low wattage regardless of brand. The power supply probably isn’t even broken - I’m guessing the PC was upgraded to an RX 580, which was more power hungry than the original graphics card, and the power supply just wasn’t designed for it.
Just a tip - next time you build or upgrade a PC, use this tool to estimate what power supply you need: https://www.newegg.com/tools/power-supply-calculator
You can get a 700 watt PSU that should work in the $50-70 range, although honestly, it might be worth going a bit bigger so you can cannibalize it for a future build when the time comes - even the RX 580, which is newer than your CPU, is getting a bit old, and I hope to replace mine if I build a new PC in 2028.
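To give a rough sense of the math (illustrative numbers - check your actual parts and the calculator above):

```
# Rough power-budget estimate, not a substitute for the calculator:
#   Ryzen 5 2600 ~65 W + RX 580 ~185 W + board/RAM/drives/fans ~75 W ≈ 325 W
#   PSUs are most efficient around 50-60% load, so ~325 W / 0.55 ≈ 600 W
#   A 650-700 W unit covers transient spikes and leaves room for upgrades
```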
Just to clarify, this almost certainly won’t be better on Mint, for several reasons. One, Pop!_OS and Mint are both based on Ubuntu, so they’d likely run into a lot of the same issues. I also have an RX 580, and while I haven’t used either of these distros on that machine, I have run Debian Testing on it for several years; since both distros descend from Debian, I’ve run similar package versions and would likely have noticed years ago if there were a major bug affecting my GPU.
As said by @Mordikan@kbin.earth below, I would be inclined to check the power supply, and maybe even make sure the PCIe card is properly seated.
I’ve been running with an RX 580 on my desktop with Debian Testing for three years, and I’ve had no problems like this.
I’m running with a 750W power supply, so I’m inclined to agree that the OP should pop open their PC case and check their wattage. Assuming this is an ATX box, it’s probably just a matter of removing two screws, sliding off the side panel, and reading the wattage off the PSU’s label. If it’s a reasonable wattage and the card is still giving issues, then try the aforementioned undervolting.
I think that’s true, but permissions might come into play and really cause pain; it’s probably best to just reinstall.
On a more serious note, as others have said, you’ll probably burn through these weird storage limitations quickly.
Also, what do you mean by “sensitive matters” on Mint? Because almost any way you spin it, I feel like it’s not a great idea:
Also, as I said in another comment here, please upgrade that drive before you put a lot of data on it. If you don’t, and you run out of storage later (a near-certainty on 256GB), you’ll have to go through the effort of getting everything copied, which may mean equipment purchases and several hours of your time, when you could just do it now while your important files still fit on a flash drive. Save yourself the future trouble.
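If it does come to copying later, the job itself looks something like this - a sketch assuming the new drive is already partitioned and mounted at /mnt/new (a hypothetical mount point):

```
# Copy the home directory over, preserving permissions, ACLs, and xattrs
sudo rsync -aHAX --info=progress2 /home/ /mnt/new/home/
```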
Anyhow, I wish you happy Linux usage.
This is less like buying a bigger car and more like upgrading the stereo in the car - 256GB in 2025 is somewhat akin to having only AM radio, and I’ve found it gets annoying real fast when doing anything serious.
I would hesitate to put anything smaller than 1 TB in something that’s supposed to be a daily driver.
Assuming she hasn’t bought it yet, please research that Yoga first. It might work fine, but it could also end up being a miserable experience.
You can check https://linux-hardware.org/ for the model or a similar one.
No, I mean TinyCore literally would run out of RAM during boot.
Like others have said, Debian probably isn’t a bad idea.
I feel like it would be kind of stupid to run a full-on desktop environment, though, even if it’s technically possible - I think this is a good use case for IceWM.
Also, at worst, you might have a really low-power server.
I think less than 64MB is difficult these days - a few years ago, I was backing up a laptop with 48MB of RAM, and to get a minimal Linux terminal running on it, I had to create a custom Buildroot image and throw it on a CD. TinyCore was too much for it.
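For the curious, the Buildroot workflow is roughly this - a sketch from memory; the actual kconfig choices for a machine that old, and burning the result to CD, are left out:

```
# Build a minimal kernel + BusyBox rootfs with Buildroot
git clone https://gitlab.com/buildroot.org/buildroot.git
cd buildroot
make menuconfig   # pick the target arch, BusyBox-only userspace, no X
make              # images end up in output/images/
```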
A more apt comparison would be using the Windows guest to remote into the Linux host via X11 forwarding, waypipe, VNC, RDP, etcetera, which conveys your feeling of weirdness while being a closer approximation of what this really does.
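Concretely, the simplest version of that - a sketch assuming an SSH server with X11 forwarding enabled on the Linux host and an X server available on the Windows side (e.g. VcXsrv); user and linux-host are placeholders:

```
# From the Windows guest: GUI apps started in this session render in the guest
ssh -X user@linux-host
xclock &   # any X11 app works here; this one's just a quick test
```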
It’ll definitely be a difficult undertaking, but I plan on really trying to have a 5.25” bay when I build another PC.
That probably won’t be for a couple more years, though. I’m on a Ryzen 5 2600 and RX 580, and I really don’t do that much intense gaming; a GPU upgrade is tempting, though, so I can actually use ROCm for some casual Blender Cycles renders. I hope the already dismal supply of those 5.25” cases doesn’t dwindle even more.
Besides the corrections others have said, I really can’t think of any reason people would intentionally use legacy BIOS on a machine with UEFI for a new install.
Like, I could get doing it for an old install - I know someone who installed Windows 7 in 2015 on their then-new desktop build and later upgraded to 10, but is stuck on legacy BIOS with that machine for now because 7 would only run that way.
I could see something similarly janky happening to someone in the Linux world, who then decides not to address it for “if it ain’t broke, don’t fix it” reasons - but certainly not for no reason.
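Incidentally, if you’re not sure which mode an existing install boots in, it’s a one-liner:

```
# /sys/firmware/efi exists only when the kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"
```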
I just ripped the Blu-Ray drive from my father’s PC since he wasn’t using it.
Since my machine doesn’t have 5.25” bays, I just have SATA cables dangling out the side of the case. I’ve probably ripped more CDs than Blu-Rays, though.
Someone else brought up virt-manager here, which is my preference; if you’ve ever used VirtualBox, you’ll probably be fine with virt-manager. I like virt-manager for using GTK3, as I’m on Xfce. I wouldn’t be surprised if both applications have similar settings, as it seems they’re both libvirt front ends.
Also, Distrobox, while a different sort of thing, is great for what OP mentioned in that last paragraph. I usually just use the command line, but there seems to be an unofficial GUI out there.
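For reference, the command-line flow is short - a sketch assuming distrobox and podman (or docker) are installed; the image tag is just an example:

```
# Create a container, enter it, and (from inside it) export an app to the host
distrobox create --name dev --image ubuntu:24.04
distrobox enter dev
distrobox-export --app firefox   # run inside the container; adds a host menu entry
```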