Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • Pacman just does a lot less work than apt, which keeps things simpler and more straightforward.

    Pacman is as close as it gets to just untar’ing the package to your system. It does have some install scripts but they do the bare minimum needed.
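
    Just to illustrate, a package really is just a compressed tarball you can peek into with plain tar (the package file name here is only an example):

    # the package file name is just an example; any .pkg.tar.zst from the cache works
    tar -tf zstd-1.5.7-1-x86_64.pkg.tar.zst | head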

    Comparatively, Debian does a whole lot more under the hood. It’s got a whole configuration management system that generates config files and such, all of which can go wrong, especially if you overwrote them. Debian just assumes apt can log into your MySQL database, for example, to update your tables after upgrading MySQL. If any of that goes wrong, the package is considered to have failed to install and you get stuck in a weird dependency hell. Pacman does nothing and assumes nothing; its only job is to put the files in the right place. If you want a service to start, you start it. If you want to run post-upgrade steps, you’ve got to do it yourself.

    Thus you can yank an Arch system 5 years into the future and if your configs are still valid or default, it just works. It’s technically doable with apt too but just so much more fragile. My Debian updates always fail because NGINX isn’t happy, Apache isn’t happy, MySQL isn’t happy, and that just results in apt getting real unhappy and stuck. And AFAIK there’s no easy way to gaslight it into thinking the package installed fine either.



  • I also wanted to emphasize that working with virtual disks is very much the same as working with real ones. The same well-known utilities for copying partitions work perfectly fine. Same cgdisk/parted and dd dance you’d do otherwise.

    Technically if you install the arch-install-scripts package on your host, you can even install ArchLinux into a VM exactly as if you were in archiso with the comfort of your desktop environment and browser. Straight up pacstrap it directly into the virtual disk.
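
    A minimal sketch of what that looks like, assuming the image is already attached with qemu-nbd and partitioned, with the root partition at /dev/nbd0p2 (device names are placeholders):

    # format and mount the virtual root partition, then bootstrap Arch straight into it
    sudo mkfs.ext4 /dev/nbd0p2
    sudo mount /dev/nbd0p2 /mnt
    sudo pacstrap /mnt base linux linux-firmware
    genfstab -U /mnt | sudo tee -a /mnt/etc/fstab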

    Even crazier, NBD (Network Block Device) is generic, so it’s not even limited to disk images. You can forward a whole ass drive from another computer over WiFi and do what you need on it, even pass it to a VM and boot it up.
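
    Rough sketch of that, assuming nbd tools on both ends (the IP, port and device are placeholders, and the exact flags vary between nbd versions):

    # on the machine that physically has the drive
    sudo nbd-server 10809 /dev/sdb

    # on the machine doing the work
    sudo modprobe nbd
    sudo nbd-client 192.168.1.50 10809 /dev/nbd0

    # /dev/nbd0 is now that remote drive; you can even hand it straight to a VM
    qemu-system-x86_64 -m 4G -drive file=/dev/nbd0,format=raw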

    With enough fuckery you could even wrap the partition in a fake partition table and boot the VM off the actual partition and make it bootable by both the host and the VM at the same time.


  • What you’re trying to do is called a P2V (Physical to Virtual). You want to copy the partition directly, as going through a file share via Linux will definitely strip some metadata Windows wants on those files.

    First, make a disk image that’s big enough to hold the whole partition and 1-2 GB extra for the ESP:

    qemu-img create -f qcow2 YourDiskImageName.qcow2 300G
    

    Then you can make the image behave like a real disk using qemu-nbd:

    sudo modprobe nbd
    sudo qemu-nbd -c /dev/nbd0 YourDiskImageName.qcow2
    

    At this point, the disk image behaves like any other disk at /dev/nbd0.

    From there, create a partition table. You can use cgdisk or parted, or even the GParted GUI will work on it.
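
    For example with parted (the layout and sizes here are just placeholders, adjust them to your partitions):

    sudo parted /dev/nbd0 -- mklabel gpt
    sudo parted /dev/nbd0 -- mkpart ESP fat32 1MiB 1GiB
    sudo parted /dev/nbd0 -- set 1 esp on
    sudo parted /dev/nbd0 -- mkpart windows ntfs 1GiB 100%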

    And finally, copy the partition over with dd:

    sudo dd if=/dev/sdb3 of=/dev/nbd0p2 bs=4M status=progress
    

    You can copy the ESP/boot partition as well so the bootloader works.
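
    Same idea as above, assuming the ESP is /dev/sdb1 on the source disk (adjust the device names to yours):

    sudo dd if=/dev/sdb1 of=/dev/nbd0p1 bs=4M status=progress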

    Finally, once you’re done with the disk image, unload it:

    sudo qemu-nbd -d /dev/nbd0
    



  • It’s hard to give concrete advice without knowing the specs or the software you want to run on this, but for tiny Linux systems there’s Buildroot, which lets you compile just the bare minimum you need and not use a distro at all (unless you count Buildroot as a distro). This is what OpenWRT uses to build all the router firmware images, among other things.

    For something that would go in a car that seems pretty ideal to me. Skip initializing things you won’t use, make something that boots to GUI in 3 seconds. When you want to update the software you flash it as a new firmware image, no on-device installing or anything.

    Depending on what you run, ideally you’d skip Xorg/Wayland and use the framebuffer directly. But if you need to run a more standard environment, that’s what things like Cage are designed for. Single app, always full screen. It’s called a kiosk environment.
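
    A minimal example of the kiosk approach, assuming Cage is installed and the app is a Chromium-based browser (the URL is made up and the exact browser flags may vary):

    # launch a single app fullscreen under the Cage Wayland compositor
    cage -- chromium --kiosk https://dashboard.example.com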


  • Proton is Wine but tweaked for the sole purpose of running games, so it bundles together a bunch of extra stuff needed to make games run well.

    Usually there’s also a long list of per-game tweaks and changes to make sure each game runs, and it’s all preconfigured so you press play in your launcher and it works. No need to change settings whenever you want to play a game.

    You can still use regular Wine but you’ll have to set up a bunch of stuff yourself, and eventually you run into a game that needs a different version of something that breaks another game, so you get into prefix management and it’s a mess. Or, oh, this game runs better when we pretend to be Windows 7 but this one works best with Windows 10. Proton just does it all for you: every game gets its own space with all the correct settings from the get-go, and you just launch into the game and play.
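
    To give an idea of the manual route, this is roughly the prefix juggling Proton handles for you (the paths and game names are made up):

    # each game gets its own prefix so their settings and Windows versions can't conflict
    WINEPREFIX=~/prefixes/old-game winecfg          # set the Windows version to 7 here
    WINEPREFIX=~/prefixes/old-game wine OldGame.exe

    WINEPREFIX=~/prefixes/new-game winecfg          # this one stays on Windows 10
    WINEPREFIX=~/prefixes/new-game wine NewGame.exe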


  • Honestly, if a VPN doesn’t support Linux at least through manual connection settings, run away. All reputable and even the sketchier VPN providers support Linux, because that’s what the privacy crowd uses; not supporting it implies privacy users aren’t even the target audience at all. It’s a red flag. It’s not a VPN for privacy or for getting another country’s Netflix.

    I’d trust Norton about as much as my ISP, so unless you use public WiFi somewhat often, it doesn’t add much value, just the downsides of captchas everywhere. They’re probably analyzing the traffic to map out malware campaigns and such, which would make sense but isn’t very private.

    The business model of antivirus companies is fear, and they sell the solution to that fear. They have a VPN because people assume VPN means more security, of course they’ll sell you one. At best they block known malware domains and IPs, which is utterly useless on Linux anyway.

    If you want a VPN get a real VPN.


  • It ran fairly well for me out of the box. I think it’s similar to trying to run Windows 98/2000/XP on modern VM software: it gets utterly confused and needs a very specific hardware configuration to boot. Modern VMs run this well in big part because of paravirtualized hardware.

    I think what made Ubuntu so good is a combination of being based on Debian and being there at the right time, when Linux software was getting generally better. When I tried Mandrake it was too early for Wine to run any sort of game, and video codecs were lacking. When I tried Linux again with Ubuntu, there was now VirtualBox, computers were fast enough to run it reasonably, and graphics drivers were more usable. Compiz was popping off to show that Xorg could now do compositing like macOS and Vista.

    Mandrake was good but limited by what Linux could do back then. Enjoyed it quite a bit, but 9-year-old me ran back to XP for the games. When I tried Ubuntu I was a bit older and more interested in programming, and WoW ran great in Wine, so I managed to stick with it and have been on Linux since.





  • It’s going to depend on how the access is set up. It could be set up such that the only way into that network is via that browser thing.

    You can always connect to yourself from the Windows machine and tunnel SSH over that, but it’s likely you’ll hit a firewall or possibly even a TLS MitM box.
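
    For the tunneling route, the usual trick is a reverse tunnel from the Windows box back to a machine you control, then hopping over it (the hostnames, port and internal target are all placeholders):

    # run on the Windows machine (the OpenSSH client ships with Windows 10+):
    # expose an internal host's SSH on port 2222 of your own server
    ssh -R 2222:some-internal-host:22 you@your-server.example.com

    # then from your server, go through the tunnel
    ssh -p 2222 user@localhost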

    Virtual desktops like that are usually used for security; otherwise it would be way cheaper and easier to just VPN your workstation in. Everything about this feels like a regulated or certified secure environment, like payment processing/bank/government stuff.


  • You’ll first want to lock down the laptop using the TPM so it only boots kernels signed by you, and also encrypt the drive with the TPM as the locking key so the key is only ever available to a kernel you signed. From there you’ll probably want to use dm-verity to also verify the integrity of the system, at least during the boot process.
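
    On a systemd-based setup, the disk side of that could look something like this sketch (the device path is a placeholder, and which PCRs to bind against depends on what you measure):

    # bind a LUKS2 partition to the TPM so it only unlocks under your signed boot chain
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2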

    Then, on top of that, once the machine is online and still authorized to access that data, it downloads a key from a server under your control to unlock the rest of the drive (as another partition). And log those accesses, of course.
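
    That second stage can be as simple as something like this in a boot script (the URL, device and names are all hypothetical):

    # fetch the unlock key from your server and open the data partition with it
    curl -fsS https://keys.example.com/laptop-42 \
        | sudo cryptsetup open /dev/nvme0n1p3 companydata --key-file=-
    sudo mount /dev/mapper/companydata /srv/data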

    Then, when you want to revoke access to it, all you have to do is stop replying with the key whenever requested. That puts a ton of hurdles in the way of accessing the data once the server stops handing out the key. They would have to pry the key out of the TPM just to unlock the first stage, and even then only to see how it works and how they might obtain the data key. They could still manage to copy the data out while the system is fully unlocked and still trusted, which you can make a lot harder by preventing access to external drives or network shares. But they have physical access, so they kind of have the last word if they really really really want to exfiltrate data.

    This is the best you can do because it’s passive: you stop supplying the unlock key, so the data stays locked and encrypted with no key, and the best they can do is format the laptop and sell it or use it for themselves. Any sort of active command system is pretty easy to counter: just don’t get the machine online if you suspect the kill signal is coming, so it never arrives and the data never gets wiped. You want that system to be wiped by default unless your server decides it’s not.



  • It’s still going but I think a good chunk of the FOSS community avoids it. Distros that still ship it disable the telemetry.

    Definitely feels like a desperate attempt to monetize it, and the enshittification that typically follows.

    As far as I know it’s still fine to use if your distro disables the telemetry, which is what most people had issues with. It’s still under the same license in the end, which is probably why they’re now pivoting to cloud features: those they can make proprietary. I’m sure cloud-based AI plugins are next.