I’m gonna build my first ever server with an old laptop I have. I bought a 240 GB SSD to install in it and plan to use Manjaro. I’ll add some HDDs too, 4 TB in total, not counting the SSD; I want to leave that just for the system. I have a Jellyfin server on my desktop and want to move it to this laptop. I also want to make a self-hosted Spotify alternative and a Google Drive replacement. But since these are my first steps I have little to no idea of what I’m doing or how to do it. Any tips for a first timer? Mistakes to avoid and all that, preferably without buying anything else.

  • lemmyvore@feddit.nl
    5 months ago
    1. Don’t do unattended upgrades. Neither host nor containers. Do blind or automated updates if you want but check up on them and be ready to roll back if something is wrong.
    2. People should use what distro they know best. A rolling distro they know how to handle is much better than a non-rolling one they don’t.
    3. You can run a command to update docker containers too. And there’s this thing called cron that can run commands periodically. But maybe you should re-read point 1.
    4. Docker doesn’t “bypass” the firewall. It manages the rules so that the ports you publish to the host actually work, because there’s no point in mapping blocked ports. You want to add and remove firewall rules by hand every time a container starts or stops, and look up container interfaces yourself? Be my guest.
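    To illustrate point 3: the “command on a schedule” could be a crontab entry like the sketch below (the path, schedule, and log location are placeholders). Note that this is exactly the kind of unattended update point 1 warns about, so at minimum log the output and review it:

        # crontab -e: every Sunday at 04:00, pull newer images and
        # recreate any containers whose image changed, keeping a log
        # so you can check what actually got updated.
        0 4 * * 0  cd /srv/mystack && docker compose pull && docker compose up -d >> /var/log/compose-update.log 2>&1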
    • moonpiedumplings@programming.dev
      5 months ago

      Don’t do unattended upgrades. Neither host nor containers. Do blind or automated updates if you want but check up on them and be ready to roll back if something is wrong.

      Those issues are mostly a rolling-release problem. On stable distros, maintainers patch over breaking changes, test those patches, and only then roll out updates.

      Debian, and many other distros support it officially: https://wiki.debian.org/UnattendedUpgrades. It’s not just a cronjob running “apt install”, but an actual process, including automated checks. You can configure it to not upgrade specific packages, or stick to security updates.
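      For a sketch of what that configuration looks like: on Debian these options live in /etc/apt/apt.conf.d/50unattended-upgrades (the origin pattern below is the stock security-only entry; "docker-ce" is just a hypothetical example of a package you might hold back):

          Unattended-Upgrade::Origins-Pattern {
                  "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
          };
          Unattended-Upgrade::Package-Blacklist {
                  "docker-ce";
          };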

      As for containers, it is trivial to roll back to a previous image version, which is why unattended upgrades are OK there. Although if data or configuration is corrupted by a bug, you would probably have to restore from backup (something I should have suggested in my initial reply).

      It should be noted that unattended upgrade doesn’t always mean “upgrade to the latest version”. For docker/podman containers, you can pin them to a stable release, and then it will do unattended upgrades within that release, preventing any major breaking changes.
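      For example, in a compose file (postgres is just an illustrative image; tag schemes vary by project):

          services:
            db:
              # Re-pulling this tag picks up patch and security rebuilds
              # of Postgres 16, but never jumps to a new major version.
              image: postgres:16
              # By contrast, "postgres:latest" can silently cross major
              # versions on the next pull.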

      Similarly, on many distros, you can configure them to only do the minimum security updates, while leaving other packages untouched.

      People should use what distro they know best. A rolling distro they know how to handle is much better than a non-rolling one they don’t.

      I don’t really feel like reinstalling the bootloader over ssh, to a machine that doesn’t have a monitor, but you do you. There are real significant differences between stable and rolling release distros, that make a stable release more suited for a server, especially one you don’t want to baby remotely.

      I use arch. But the only reason I can afford to baby a rolling release distro is because I have two laptops (both running arch). I can feel confident that if one breaks, I can use the other. All my data is replicated to each laptop, and backed up to a remote server running syncthing, so I can even reinstall and not lose anything. But I still panicked when I saw that message suggesting that I should reinstall grub.

      That remote server? Ubuntu with unattended upgrades, by the way. Most VPS providers will give you a linux distro image with unattended security upgrades enabled because it removes a footgun from the customer. On Contabo with Rocky 9, it even seems to do automatic reboots. This ensures that their customers don’t have insecure, outdated binaries or libraries.

      Docker doesn’t “bypass” the firewall. It manages the rules so that the ports you publish to the host actually work, because there’s no point in mapping blocked ports. You want to add and remove firewall rules by hand every time a container starts or stops, and look up container interfaces yourself? Be my guest.

      Docker is a way for me to run services on my server. Literally every other service application respects the firewall. Sometimes I want services exposed on my home network but not on a public wifi, something docker isn’t capable of doing but the firewall is. Sometimes I want to configure a service while keeping it running. Or test it locally. Or just use it locally.

      It’s only docker where you have to deal with something like this:

      ---
      services:
        webtop:
          image: lscr.io/linuxserver/webtop:latest
          container_name: webtop
          security_opt:
            - seccomp:unconfined #optional
          environment:
            - PUID=1000
            - PGID=1000
            - TZ=Etc/UTC
            - SUBFOLDER=/ #optional
            - TITLE=Webtop #optional
          volumes:
            - /path/to/data:/config
            - /var/run/docker.sock:/var/run/docker.sock #optional
          ports:
            - 3000:3000
            - 3001:3001
          restart: unless-stopped
      

      Originally from here, edited for brevity.

      Resulting in exposed services. Feel free to look at shodan or zoomeye, search engines for internet-connected devices, for exposed instances of this service. This service is highly dangerous to expose, as it gives people a way into your system via the docker socket.

      Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

      No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

      On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.
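      For what it’s worth, if you do stay on docker, one partial mitigation (not a fix for the underlying behavior) is to bind published ports to loopback, so the container is reachable from the host or a local reverse proxy but docker never opens it to other machines:

          ports:
            - "127.0.0.1:3000:3000"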