

It has an LTS kernel available, not a separate LTS version of the distro. Running the LTS kernel does not make everything else on the system LTS, which is very different from how LTS distros work.


It is not just about your PC's hardware. I much prefer running the latest software because I regularly use features added in the latest version of the tools I use. I would hate to wait six months to a year before I can use new features that make my life easier. That might not apply to every bit of software I run, but it covers enough core tools that I would notice.


I was not trying to brush away the differences between GPLv2 and v3. My point was just that I don't think a more permissive license on Coreutils would have caused every company to steal the code, get everyone using their version, and force out the GPLed one. But a more restrictive license (say, one that infects other binaries on the system) would have meant fewer companies using it, and thus fewer distros and fewer users overall.
But for other projects the balance is different, and a more permissive license would cause issues. There are some projects for which even GPLv2 or v3 is too permissive.


sudo is not GPLv3. It is not even GPLv2. It is under an old license that is about as permissive as the MIT license, and that has never caused it any big problems. I don't think coreutils being GPL has really done anything to force companies to contribute back to it. It is mostly fixed in its function and does not leave much room for a company to take and modify it to the point where others would favor the closed version over the open one. And what it provides is fairly trivial functionality overall; if someone did want to take part of it, it would not be terribly hard to rewrite from scratch.
GNU Coreutils is not the only implementation of those POSIX utilities - just the most popular one. FreeBSD has its own, there is busybox, the Rust ports, and loads of other rewrites of the same functionality to various degrees. None of that really matters, though, as they don't add much if any value over what coreutils provides - there is just not much more value left to add to these utilities now.
And it is not like the GPL license of coreutils affects other binaries on the system. So if you don't need to modify it and it does not infect other things, there is little point in trying to take it over or use an alternative.
macOS does not use a later version because they cannot. But they also don't care enough to even try to maintain their own.
The GPL is important for other larger/more complex pieces of software. But for coreutils/sudo, IMO, it does not matter nearly as much as people think it does.


Core Android and ChromeOS are not FOSS because the kernel's GPL forces them to be. You can use the Linux kernel without having to make everything that runs on it GPL as well; things that run on the kernel are not derivative works of the kernel. Those projects are FOSS because Google, at the time, thought making them FOSS would give them an advantage.
If you add too many restrictions to a license, it does not force companies to give their stuff away for free; it just means they won't use your project, which can drastically stunt its growth. If Linux had started with a more restrictive license, all that would have happened is that no one would have heard of it today, because companies would have created something else they could use.


There is no one-size-fits-all safest option. Details matter, and each project needs to read the licenses and decide which suits its needs best.
MIT is probably the safest option for a company creating a library wrapping their service where there is no real value in others taking that code. Or for simpler libraries that are fairly easy to reproduce so the need to steal the code is low. Or you just don’t care what others do with the code.
The GPL is probably safest for a hobby project that does not care about companies and just wants anyone using the project to not bake it into a product they distribute. But it also means companies likely won't want to use your project if it is a library.
The LGPL might be a good option for library code if you want companies to use, and contribute back to, some complex library that is hard to reproduce in isolation.
Other licenses are needed if you want to prevent hosted services from using your project without contributing back.
Different licenses exist for different reasons and it all depends on what you want for your project.


I don't think there is a good license for that. The ones MongoDB used turned the open source community against them. But that is not really my point. I just mean that some projects using MIT won't suddenly mean every company starts stealing and closing that software. Some things, like coreutils and sudo, just don't have the commercial value to make that worth the effort. So there is no real need to worry about these two projects, IMO. Other projects are a different story altogether. Each project needs to make its own decision on which license best suits it. The GPL is not the one and only license worth using.


Coreutils has little commercial value that anyone could capture by creating a proprietary fork of it. There is little value that can be added to make a fork worthwhile. The same goes for sudo - which has had a permissive license from the start, and in all that time no one has cared enough to fork it for profit.
I'm not saying that is true of every project. But at the same time, even GPL software has issues with large companies profiting off it and not contributing back, since unless you are distributing binaries the GPL does not really force you to do anything. See MongoDB and their move to even more restrictive licenses.
The GPL is not the only thing that stops companies from taking open software. Nor does it fully protect against that.
Nor does everything need to be GPL. It makes sense for some projects and less sense for others - especially libraries, as it basically stops any company from using them for anything, which is not what you want from a library.


You can profit from GPL software. The only restriction is that if you distribute it, you also need to distribute your modifications under the GPL.
The GPL also does nothing for software as a service, since it is never distributed.
The GPL even explicitly allows selling GPL software. This is effectively what Red Hat does; they just need to distribute the source to those they sell it to.
I personally hate global menu bars. They do not work with focus-follows-mouse. The way menus currently work is fine for me, and I would not want to lose that to what is, IMO, a much worse system. Any global menu implementation would need to be possible to disable, and better, be off by default. And I would rather see effort go into developing other features - though mostly because I would never use this one.
The * in your commands is expanded by the shell before tar ever sees it, and glob expansion does not match hidden files.
So when you write admin/* the shell expands it to all the non-hidden files inside admin, which does not include admin/.htaccess. tar is never told to archive that file, only the other non-hidden files and folders. It will still archive hidden files and folders nested deeper, though.
In the second example, * expands to admin and the other dirs which are not hidden at that level. tar can then open those dirs and recursively archive all files and folders inside them, including the hidden ones.
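If you actually want the top-level hidden files included, the simplest fix is to pass the directory itself, or ., rather than a glob (backup.tar is a placeholder name here; adjust the flags to match your real command):
$ tar -cf backup.tar admin                 # tar recurses into admin, hidden files and all
$ (cd admin && tar -cf ../backup.tar .)    # or archive . which the shell never expands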
You can see what commands actually get executed after any shell expansions if you run set -x first. Then set +x to turn that off again.
Here is an example using ls:
$ set -x; ls -A foo/*; ls -A *; set +x
+ ls --color=tty -A foo/baz
foo/baz
+ ls --color=tty -A foo
.bar baz
+ set +x


You should have a live USB of the distro you want to use, and make sure you have backups of all the data you care about. Then the easiest, quickest, least error-prone way is to wipe the whole drive and reinstall the distro from the live USB. Installers typically have an option to wipe the drive and install from scratch. Then just restore your data from your backups.
You could also, after creating backups, delete the Windows partitions from a live USB environment and resize the Linux ones - being careful not to delete the EFI partition, as that is where the boot loader lives. You can optionally delete the Windows boot loader from the EFI partition as well. If done right you should still be able to boot into your Linux system afterwards, though when messing with partitions like this, especially when you don't know what you are doing, it is easy to break the boot setup. That can be fixed from a live environment, and there are many guides out there on how to do it.
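As a rough sketch of that manual route (the partition numbers here are invented - check yours with lsblk -f first, and note resizepart only helps when the freed space sits directly after the Linux partition):
$ sudo parted /dev/sda rm 3               # delete the Windows partition (example number)
$ sudo parted /dev/sda resizepart 2 100%  # grow the Linux partition into the freed space
$ sudo resize2fs /dev/sda2                # grow the filesystem too (assumes ext4)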
You can always just reinstall the system again if you mess things up and cannot figure out how to fix them - so always prep for that case by backing up everything you care about first.
--asdeps also works when installing something, to immediately mark it as a dep. That can be useful for non-dep packages you only need temporarily, as they will be removed the next time you purge unused deps.
Clean orphaned dependencies:
sudo pacman -Rs $(pacman -Qtdq)
In addition to this - or rather, before it - you can run pacman -D --asdeps package_name to mark a package as a dep. If it is no longer required by something else, it will be removed by the command above. This can be useful for things that are really deps but that you installed manually at some point for some reason.
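Putting those together, a sketch of the flow (somepkg is just a placeholder):
$ sudo pacman -S --asdeps somepkg    # install something already marked as a dep
$ sudo pacman -D --asdeps somepkg    # or mark an already-installed package as one
$ sudo pacman -Rs $(pacman -Qtdq)    # then sweep out everything now orphaned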
And remember that you can recover from anything, even removing base packages or the bootloader, with a live CD and chroot, or by pointing pacman at a different root with its --root /mnt flag.
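A minimal recovery sketch from a live environment, assuming your root partition is /dev/sda2 (adjust to your layout):
$ mount /dev/sda2 /mnt           # mount the broken install
$ pacman --root /mnt -S linux    # reinstall a package directly into it
$ arch-chroot /mnt               # or chroot in and repair as usual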
Otherwise if your system still boots it is just a matter of following the install instructions for whatever is not working like you did the first time.
I would not worry about virtual memory usage. Virtual memory can include memory-mapped files and does not indicate actual RAM usage - only the address space the program has opened at some point. There is little point in worrying about it.
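If you want to compare the two, RSS is the closer measure of actual RAM use (1234 is a placeholder PID):
$ ps -o pid,vsz,rss,comm -p 1234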
IMO the best thing is to just start using it; you will pick things up fairly quickly that way. Puzzles don't often ingrain different ways to do things, and they tend to focus on weird or niche tricks that don't come up that often. They can be a nice supplement to, but not a substitute for, using it in real-world use cases.
I do also find it helpful to read the shortcut keys on their site to get a feel for what is available. You won’t remember everything but it can be useful to know what is possible. Then when you hit a problem you may remember reading about something that can help and go look it up again.


Just don't format the drive when installing a new distro. Btrfs or not, you can delete the old system folders manually first if needed, though I believe some if not all distros will delete them for you (at least Ubuntu used to do this last I tried).
It does not matter whether /home is on a separate partition or not; installers won't touch it if it already exists, except to create a new user if needed. Remember, all the installers do is optionally format the drives, mount them, then install files onto them. If you skip the formatting and do the partitioning manually (or use an existing partition layout), the installer will still mount and write to the same places regardless of whether they are separate partitions. So a separate partition does not add any extra protection for your home files at all.
But regardless of what you do, you should ALWAYS back up your home data anyway. Even with separate partitions or subvolumes, the installer can touch or delete anything it wants, and you can easily click the wrong button or accidentally wipe things. At most, preserving your home saves you from having to restore from a backup; it should not be done instead of a backup.
There is no problem with having home on a different disk. But why do you want swap on the slower disk? Swap would benefit from being on the faster disk, same as all the system binaries.
Personally I would put as much as possible on the faster disk and mount the slower one somewhere speed matters less, like photos/videos in your home dir.
/boot can be anywhere, though. If you are getting a GRUB error, that suggests the UEFI firmware is finding GRUB's first stage but GRUB is having issues after that. Personally I don't use GRUB anymore; systemd-boot is far simpler, as it does not need to deal with legacy MBR booting.
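For reference, setting systemd-boot up is roughly this (assuming the ESP is mounted at /boot; the entry file below is an example for Arch - adjust the paths and root= to your system):
$ sudo bootctl install
$ cat /boot/loader/entries/arch.conf
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/sda2 rw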
My point is that the different levels of "just working" are subjective, not objective. I personally have spent far more time fixing bugs or just reinstalling Ubuntu systems than I have, over the same period, on Arch systems. So many of my Ubuntu installs just ended up breaking after a while, whereas I have had the same Arch install on some systems for 5+ years now. I could never get an Ubuntu system to last more than a year.
Everyone has different stories about the different OSs. It is all subjective.
I would say a rolling distro update has a higher chance of breaking something; each one might bring in a new major version of something with breaking changes. But that breakage is typically easier to fix and less of a problem.
Point release distros tend to bundle up all their breakages between major versions, so they break loads of things at once. And that, IMO, can be more of a hassle than dealing with them one at a time as they come out.
I tended to find I needed to reinstall point release distros instead of upgrading them, as it was less hassle. Which is still more disruptive than fixing small issues over time as they crop up.