Honestly, I’ll just send it back at this point. I have kernel panics that point to at least two of the cores being bad, which would explain the sporadic nature of the errors. It would also explain why memtest ran fine: it only uses the first core by default. Too bad I didn’t think of that while running memtest, because it does let you select cores explicitly.
Welp, no change. I’m guessing the motherboard firmware already contained the latest microcode. Oh well, it was worth a try, thank you.
It’s a pain in the butt to swap CPUs one more time, but that may pale in comparison to trying to convince the shop that a core is bad when the faults are intermittent. 🤪
This sounds like my best shot, thank you.
I’ve installed the `amd-ucode` package. It already adds `microcode` to the `HOOKS` array in `/etc/mkinitcpio.conf` and runs `mkinitcpio -P`, but I’ve moved `microcode` before `autodetect` so it bundles code for all CPUs, not just the current one (to have it ready when I swap), and re-ran `mkinitcpio -P`. Also had to re-run `grub-mkconfig -o /boot/grub/grub.cfg`.

I’ve seen the message “Early uncompressed CPIO image generation successful” pass by, `lsinitcpio --early /boot/initramfs-6.12-x86_64.img | grep micro` shows `kernel/x86/microcode/AuthenticAMD.bin`, there’s a `/boot/amd-ucode.img`, and an `initrd` parameter for it in `grub.cfg`. I’ve also confirmed that `/usr/lib/firmware/amd-ucode/README` lists an update for that new CPU (and for the current one, speaking of which).

Now from what I understand all I have to do is reboot and the early stage will apply the update?
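For anyone landing here later, the whole sequence boils down to something like this (a sketch of what I did on Manjaro with GRUB; the initramfs file name matches my kernel and the `HOOKS` line shown is just illustrative):

```sh
# Install AMD microcode updates; on Manjaro this added 'microcode' to HOOKS
# in /etc/mkinitcpio.conf and regenerated the initramfs on its own
sudo pacman -S amd-ucode

# Edit /etc/mkinitcpio.conf and move 'microcode' before 'autodetect' so the
# image bundles microcode for all CPUs, not just the one currently installed:
#   HOOKS=(base udev microcode autodetect modconf block filesystems keyboard fsck)

# Regenerate all initramfs images and the GRUB config
sudo mkinitcpio -P
sudo grub-mkconfig -o /boot/grub/grub.cfg

# Check that the microcode made it into the early CPIO image
lsinitcpio --early /boot/initramfs-6.12-x86_64.img | grep micro
```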
Any idea what it looks like when it applies the microcode? Will it appear in `dmesg` after boot or is it something that happens too early in the boot process?
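I figure the check after reboot will be something like this (the revision values below are placeholders, not from my system):

```sh
# Grep the kernel log once booted; early loading still leaves a trace
dmesg | grep -i microcode
# Expected shape of the output (placeholder revisions):
#   microcode: Current revision: 0x08701021
#   microcode: Updated early from: 0x08701013
```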
BIOS is up to date, CPU model explicitly listed as supported, memtest ran fine, not using XMP profiles.
All hardware is the same, I’m trying to upgrade from a Ryzen 3100 so everything should be compatible. Both old and new CPU have a 65W TDP.
I’m on Manjaro, everything is up to date, kernel is 6.12.17.
Memory runs at 2133 MHz, same as for the other CPU. I usually don’t tweak BIOS much if at all from the default settings, just change the boot drive and stuff like “don’t show full logo at startup”.
I’ve added some voltage readings to the post and answered some of the other comments here.
Everything is up to date as far as I can tell; I did Windows too.
memtest ran fine for a couple of hours, but the CPU stress test hung partway through, while CPU temp was around 75C.
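If anyone wants to reproduce, this is the kind of load I mean (a sketch assuming `stress-ng` and `lm_sensors` are installed; the exact tool I used may have differed):

```sh
# Load every core for 30 minutes
stress-ng --cpu "$(nproc)" --timeout 30m

# Meanwhile, in a second terminal, watch the temperatures
watch -n 2 sensors
```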
Yep, it’s explicitly listed in the supported list and BIOS is up to date.
RAM is indeed at 2133 MHz and the cooling is great: got a tower cooler (Scythe Kotetsu Mark II), idle temps are in the low 30s C, stress temp was 76C.
Motherboard is a Gigabyte B450 Aorus M. It’s fully updated and support for this particular CPU is explicitly listed in a past revision of the mobo firmware.
Manual doesn’t list any specific CPU settings but their website says stepping A0, and that’s what the defaults were set to. Also I got “core speed: 400 MHz”, “multiplier: x 4.0 (14-36)”.

> even some normal batch cpus might sometimes require a bit more (or less) juice or a system tweak
What does that involve? I wouldn’t know where to begin changing voltages or other parameters. I suspect I shouldn’t just faff about in the BIOS and hope for the best. :/
lemmyvore@feddit.nl to Linux@lemmy.ml • Understanding Linux and choosing your first Linux distro, v2.0 (English)
1 · 1 year ago

Wow, basically everything you wrote about Manjaro was wrong:
- It doesn’t need constant maintenance, and it doesn’t break. The whole point of it is to be a stable variation of Arch.
- It doesn’t have a highly irregular update schedule, it’s quite regular — every two weeks. There are also updates for outstanding security issues which can come faster (as needed). Occasionally very large updates can take longer in order to weed out all the issues, such as the recent example with Plasma 6 — those are announced in advance.
- The AUR doesn’t “expect” anything; it’s a dumping ground where anybody can put anything. I successfully run about 100 AUR packages on Manjaro without any issues, but nobody can guarantee anything when it comes to the AUR. It’s officially unsupported on Arch and every Arch-based distro. If you want to call it dangerous, that’s fine (if a bit hyperbolic), but don’t blame that on Manjaro; it just shows that you don’t understand how the AUR works.
lemmyvore@feddit.nl to Linux@lemmy.ml • RustDesk (FOSS easy to use TeamViewer alternative) has experimental Wayland support that needs testing! (English)
1 · 2 years ago

Only the Tailscale pairing server is proprietary, but there’s a FOSS self-hostable alternative called Headscale.
The Tailscale clients are FOSS.
There isn’t much of a guide: you install the Tailscale clients and make an account on their website. After you enroll your devices to the account with a code, they’ll be able to access each other via private IPs on an encrypted network based on WireGuard.

You can connect among devices with unsecured protocols like VNC because they’ll be inside the encrypted network. And this works with any app and any protocol, not just remote desktop — you can use Syncthing, access files, access any services you want securely, etc.
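A minimal sketch of that flow, assuming a systemd-based distro (the package name varies by distro; the `tailscale` subcommands are the stock ones, and the peer IP below is a placeholder):

```sh
# On each device: install Tailscale and start the daemon
sudo pacman -S tailscale              # or your distro's equivalent
sudo systemctl enable --now tailscaled

# Log the device into your account (prints an enrollment URL)
sudo tailscale up

# List peers and their private tailnet IPs (100.x.y.z range)
tailscale status

# Any protocol now works over the encrypted network, e.g. SSH to a peer
ssh user@100.101.102.103
```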
lemmyvore@feddit.nl to Linux@lemmy.ml • Audacity 3.5 Released with Cloud Saving, Beat Detection, Pitch Shifting, and More (English)
1 · 2 years ago

Can it also record from Pulse?
lemmyvore@feddit.nl to Linux@lemmy.ml • Audacity 3.5 Released with Cloud Saving, Beat Detection, Pitch Shifting, and More (English)
1 · 2 years ago

Does Audacity still only work with ALSA? Wish they’d use at least pulse if not pipewire…
Perhaps we could suggest other things for OP to try before we suggest they rip out their GPU. I don’t know, a basic problem-solving approach: using the Nouveau or generic VESA driver to rule out the proprietary Nvidia driver, or a different screen-sharing method to rule out RDP, which is a proprietary Windows protocol and so may not work perfectly from Linux or with an unusual hardware configuration.
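For example, a rough sketch of how to rule out the proprietary driver for a single boot, without uninstalling anything (module names assume the standard Nvidia driver packages):

```sh
# One-off test from the GRUB menu: press 'e' on the boot entry, append these
# parameters to the 'linux' line, then boot with Ctrl+X. This blacklists the
# proprietary modules for this boot only and lets Nouveau take over:
#   module_blacklist=nvidia,nvidia_drm,nvidia_modeset,nvidia_uvm nouveau.modeset=1

# Afterwards, confirm which driver is actually in use:
lspci -k | grep -A3 -i vga
```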
Things like desktop automation, screen sharing, screen recording, remote desktop etc. are incredibly broken, with no hope in sight because the core design of Wayland simply didn’t account for them(!?), apparently.
Add to that the decision to push everything downstream into compositors, which led to widespread feature fragmentation and duplicated effort.
Add to that antagonizing the largest graphics chipset manufacturer (by usage among Linux desktop users) for no good reason. Nvidia has never had an incentive to cater to the Linux desktop, so Linux desktop users sending them bad vibes is… neither here nor there. It certainly won’t make them move faster.
Add to that the million little bugs that crop up when you try to use Wayland with any of the desktop apps whose developers haven’t drunk the Kool-Aid and aren’t dedicating outstanding effort to catching up with Wayland – which is most of them.
I cannot use Wayland.
I’m an average Linux desktop user, who has an Nvidia card, has no need for Wayland “security”, doesn’t have multiple monitors with different refresh rates, uses desktop automation, screen sharing, screen recording, remote desktop on a daily basis, and uses lots of apps which don’t work perfectly with Wayland.
…how and why would I subject myself to it? I’d have to be a masochist.