• 0 Posts
  • 58 Comments
Joined 3 months ago
Cake day: April 5th, 2024


  • Not without good logs or debugging tools.

    You need to know what to observe. You are not going to get the information you are looking for directly from zfs or even system logs.

    What I suggest stands. You have to understand the behavior of the USB controller. That information is acquired from researching USB itself.

    Now, if you intend to utilize something like a USB enclosure, you would indeed be better off with something like ext4. However, keep in mind that this effect is not directly a file-system issue; it’s an issue with how USB controllers interact with file systems.

    That has been my experience from researching this matter. ZFS is simply more sensitive.

    In my experience, even for motherboards that have port limitations it’s possible to take advantage of PCIe lanes and install an HBA with an onboard SATA controller. They also make PCIe cards that will accept NVMe drives.

    Good luck with your experimentation and research.


  • This takes a degree of understanding of what you are doing and why it fails.

    I’ve done some research on this myself and the answer is the USB controller. Specifically, the way the USB controller “shares” bandwidth, which is not how a SATA controller or a PCIe lane handles it.

    ZFS expects direct control of the disk to operate correctly and anything that gets in between the file system and the disk is a problem.

    In the case of USB, let’s say you have two USB-to-NVMe adapters plugged into the same system in a basic ZFS mirror. ZFS will expect to mirror operations between these devices but will be interrupted by the USB controller constantly sharing bandwidth between them.

    A better but still bad solution would be something like a USB-to-SATA enclosure. In this situation, if you installed a couple of disks in a mirror on the enclosure, they would be using a single USB port and the controller would at least keep the data on one lane instead of constantly switching.

    Regardless, if you want to dive deeper you will need to do some reading on USB controllers and bandwidth sharing.

    If you want a stable system, give ZFS direct access to your disks; if you do not, accept that ZFS operations will degrade over time.



  • That doesn’t make any sense to me. It can be installed directly from pacman. It may be something silly like your user missing from the docker group. Have you done something like the below for docker?

    1. Sync the package databases and update the system:

    sudo pacman -Syu

    2. Install the Docker package:

    sudo pacman -S docker

    3. Enable and start the Docker service:
    sudo systemctl enable docker.service
    sudo systemctl start docker.service
    
    4. Add your user to the docker group to run Docker commands without sudo:

    sudo usermod -aG docker $USER

    5. Log out and log back in for the group changes to take effect.

      Verify that Docker is installed correctly by running:

    docker --version

    If you get the above working, docker compose is just

    sudo pacman -S docker-compose
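
    If all of that works, a quick sanity check might look like this (hello-world pulls a tiny public test image, so it needs network access):

    docker run hello-world
    docker-compose --version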



  • Bookmark this if you utilize zfs at all. It will serve you well.

    https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

    You will be amazed at ZFS performance in Proxmox due to all the tuning that is possible. Even if you are bringing an existing ZFS pool, keep in mind it’s easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy-fast SSD as the Proxmox disk for the rpool.

    It needs to be 12 (4K sectors) if it’s a modern-day spinner, and that’s probably a good setting for most SSDs. Do not go over 12 if it’s a spinning disk.
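
    If you ever create a pool by hand instead of through the installer, the same setting looks something like this (the pool name and devices are placeholders):

    # ashift=12 means 2^12 = 4K physical sectors
    zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
    # ashift is permanent per vdev, so verify it
    zpool get ashift tank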

    Beyond that, assuming you have an existing ZFS pool, you can directly import it into Proxmox with a single import command.
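
    Assuming that existing pool were named mediapool (a placeholder), the import plus registering it as Proxmox storage might look like:

    zpool import mediapool
    # optional: let proxmox use it for vm disks and container volumes
    pvesm add zfspool mediapool-storage --pool mediapool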

    In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

    You should consider tweaking a couple things to really improve performance via the guide I linked.

    Proxmox VMs/zvols live in their own dataset. Before you start getting too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default the record size for all datasets is 128k. qcow2, raw, and even zvols will benefit from a record size of 64k because it tends to improve the underlying filesystem performance of things like ext4, XFS, even UFS. Imo it’s silly to create VM filesystems like btrfs if your VM is sitting on top of a CoW filesystem.
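
    As a sketch, with rpool/data standing in for whatever dataset holds your VM disks:

    zfs set recordsize=64k rpool/data
    # note: zvols use volblocksize instead, which is fixed at creation time
    zfs get recordsize rpool/data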

    Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer zstd is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly it’s just a good default to specify for the entire pool. You can select other compression for datasets with more static data.

    If you have a media dataset full of files like music, vids, and pics, setting a record size of 1M will heavily improve disk I/O.
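
    Both tweaks together, with tank and its child datasets as placeholder names:

    # fast default for everything in the pool
    zfs set compression=lz4 tank
    # heavier compression suits mostly-static data
    zfs set compression=zstd tank/archive
    # big records suit large sequential media files
    zfs set recordsize=1M tank/media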

    In Proxmox, ZFS will default to grabbing half of your memory for the ARC. Make sure you change that after install. It’s a file that defines zfs_arc_max in plain bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define zfs_arc_min.
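
    On Proxmox that file is /etc/modprobe.d/zfs.conf. For example, capping the ARC at 8 GiB with a 2 GiB floor (the numbers are just an illustration):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592
    options zfs zfs_arc_min=2147483648

    Run update-initramfs -u and reboot for it to take effect.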

    Some other huge improvements? If you are using an SSD for your Proxmox install I highly recommend you install log2ram on your hypervisor. It will stop all those constant log writes on your SSD, syncing them to disk on a timer and at shutdown/reboot. It’s also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
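
    The tmpfs part is just two fstab lines (the sizes are examples; tune them to your RAM):

    # /etc/fstab
    tmpfs /tmp     tmpfs defaults,noatime,nosuid,nodev,size=2G 0 0
    tmpfs /var/tmp tmpfs defaults,noatime,nosuid,nodev,size=1G 0 0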

    So many knobs to turn. I hope you have fun playing with this.


  • I agree with this. The only VM I have that has multiple interfaces is an OPNsense router VM heavily optimized for KVM to reach 10gb speeds.

    One of the interfaces beyond WAN and LAN links to a Proxmox services bridge. It’s a Proxmox bridge I gave to a container, set up as just a gateway in OPNsense. It points traffic destined for services directly at the container IP. It keeps the service traffic on the bridge instead of having to hit the physical network.
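
    For reference, an internal-only bridge like that in /etc/network/interfaces might look like this (the name and address here are made up):

    # no physical port attached, so this traffic never leaves the host
    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0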


  • I like using docker networks, but that’s me. I create one for every service, and it’s easy to target the gateway. Just make sure DNS is correct for your hostnames.
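
    A per-service network is a one-liner; the names here are hypothetical, with Jellyfin’s official image as the example service:

    docker network create media-net
    docker run -d --name jellyfin --network media-net jellyfin/jellyfin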

    Lately I’ve been optimizing remote services for reverse proxy passthru. Did you know that it can break streams momentarily and make your proxy work a little harder if your host names don’t match outside and in?

    So, in other words: if you want full passthru of a TCP or UDP stream to your server, without the proxy breaking it and then opening a new stream, you have to make sure the internal network and external network are using the same FQDN for the service you are targeting.

    It actually can break passthru via SNI if they don’t use the same hostname, causing a slight delay. That kinda matters for things like streaming video, especially if you are using a reverse proxy and the service supports QUIC or HTTP/2.

    So a reverse proxy entry that simply passes the stream without breaking and resending it might look like…

    Obviously you would need to get the HTTP port working on Jellyfin and have IPv6 working with internal DNS in this example.

    server {
        listen 443 ssl;
        listen [::]:443 ssl;  # Listen on IPv6 address
    
        server_name jellyfin.example.net;
    
        ssl_certificate /path/to/ssl_certificate.crt;
        ssl_certificate_key /path/to/ssl_certificate.key;
    
        location / {
            proxy_pass https://jellyfin.example.net:8920;  # Use FQDN
            ...
        }
    }
    

  • Yup, you can. In fact you likely should, and you will probably find disk I/O improving dramatically compared to your original plan. It’s better in my opinion to let the hypervisor manage disk operations. That means it should also share files with SMB and NFS, especially if you are already considering NAS-type operations.

    Since proxmox supports zfs out of the box along with btrfs and even XFS you have a myriad of options. You combine that with cockpit and you have a nice management interface.

    I went the ZFS route because I’m familiar with it and I appreciate its native sharing options built into the filesystem. It’s cool to have the option to create a new dataset off the pool and directly pass it into a new LXC container.
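
    Those native sharing options are just dataset properties, e.g. with placeholder names (sharenfs needs an NFS server installed, and sharesmb needs Samba):

    zfs create tank/shared
    zfs set sharenfs=on tank/shared
    zfs set sharesmb=on tank/shared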


  • It depends on your needs. It’s entirely possible to just format a bunch of disks as XFS and set up some mount points you hand to a union filesystem like mergerfs or whatever. Then you would just hand that to Proxmox directly as a storage location. Management can absolutely vary depending on how you do this.
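
    A mergerfs pool like that is usually a single fstab line joining the per-disk mounts (the paths and options here are illustrative):

    # /etc/fstab
    /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0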

    At its heart it’s just Debian so it has all those abilities of Debian. The web UI is more tuned to vm/lxc management operations. I don’t really like the default lvm/ext4 but they do that to give access to snapshots.

    I personally just imported an existing ZFS pool into Proxmox and configured it to my liking. I discovered options like directly passing datasets into LXC containers with LXC options like lxc.mount.entry.
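
    As a sketch, with a made-up container ID and dataset, that option goes straight into the container’s config file:

    # /etc/pve/lxc/101.conf: bind-mount a host dataset into the container
    # (the target path is relative to the container rootfs)
    lxc.mount.entry: /tank/media mnt/media none bind,create=dir 0 0

    The more officially supported route for the same thing is a Proxmox mount point, e.g. pct set 101 -mp0 /tank/media,mp=/mnt/media.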

    I recently finished optimizing my Proxmox for performance in regards to disk I/O. It’s modified with things like log2ram, tmpfs in fstab for /tmp and /var/tmp, TCP congestion control set to cubic, a virtual OPNsense heavily modified for 10gb performance, and a bunch of ZFS media datasets migrated to one media dataset and optimized for performance. Just so many tweaks and knobs to turn in Proxmox that can increase performance. Folks even mention Docker; I’ve got it contained in an LXC. My active RAM usage for all my services is down to 7 gigs, with disk I/O bouncing between 0.9 and 8%. That’s crazy, but it just works.


  • Have you considered the increase in disk I/O, and that hypervisors prefer to be in control of all hardware? Including disks…

    If you are set on Proxmox, consider that it can directly share your data itself. This can be made easy with Cockpit and the ZFS plugin; the plugin helps if you have existing pools. Both can be installed directly on Proxmox and present a separate web UI with different options for system management.
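
    Cockpit itself is a straight apt install on the Debian base; the ZFS plugin (e.g. 45Drives’ cockpit-zfs-manager) is a separate install from its own project page:

    apt install cockpit
    systemctl enable --now cockpit.socket
    # the web UI then listens on port 9090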

    The safe things to use here are the file-sharing and pool-management operations. Basically, use the Proxmox web UI for everything it permits first.

    Either way have fun.


  • It’s the production-vs-development issue. My advice is the old tech advice: “If it’s not broken, don’t try to fix it.”

    Modified here into: keep a separate Proxmox development environment. Btw, Proxmox is perfect for this with VM and container snapshots.

    When you get a VM or container into a more production-ready state, then you can attempt migrations. That way the users don’t kill you :)
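
    Snapshot and rollback are one-liners in Proxmox (the IDs and snapshot names here are examples):

    # containers
    pct snapshot 101 pre-upgrade
    pct rollback 101 pre-upgrade
    # vms
    qm snapshot 100 pre-upgrade
    qm rollback 100 pre-upgrade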



  • Music playlists are different from Plex. You can create them, import them, or generate an instant list.

    4K is seamless and performs better imo. You can use transcoding or not if you have files the way you want them. If you do, you can select on a per-user basis who gets to transcode.

    You can set bandwidth limits.

    I’ve seen a feature to allow multiple users to stream the same movie so you can, I guess, watch at the same time. I use npm, and often a couple peeps might watch a movie at the same time without using this feature, and it works fine.

    I use the client app on Android and a Firestick atm. I think I just downloaded it, but you can sideload too if you want. The media server app is available for various OSes, so technically you could set it up on whatever you want. Just check your app store.

    https://jellyfin.org/downloads/clients/

    It can plug into homebrew or m3u playlists for live TV if that is your thing. It has plugins for NextPVR and TVHeadend if you utilize those for over-the-air, or if you already have an m3u set up in those PVR services. Those are great btw and available as docker containers.

    It always defaulted to whatever my files are encoded in. It absolutely can transcode to support other clients, and you decide the preferences. I did notice, since most of my files are h.264 with a few h.265, that it sometimes helped to turn off transcoding because the client supported h.265 natively; Jellyfin was transcoding h.265 mkv to something like MP4. Anyway, a quirk.

    Login is pretty simple. Users can change their passwords. It can generate codes to approve a new device if you are already logged into an app on your phone, like 6 temporary digits. You can also set up PINs, or whatever they call them, under Users.