I’m in the process of setting up a proper backup solution; however, over the years I’ve accumulated a few copy-pasted home directories from different systems as a quick and dirty solution. Now I have to pay down my technical debt and remove the duplicates. I’m looking for a deduplication tool that will:

  • accept a destination directory
  • delete the source locations after the operation
  • if two files’ content is the same, delete the redundant copy
  • if the files’ content is different, move the file and rename it to avoid a name collision

I tried doing this in Nautilus, but it only looks at file names, not file content. E.g. if two photos have the same content but different names, it will still create a redundant copy.
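For what it’s worth, the core of the requested behaviour fits in a short shell sketch. This is my own hypothetical illustration (the function name `dedup` and the choice of SHA-256 are mine, not from any existing tool), so try it on throwaway data first:

```shell
# Hypothetical sketch of the requested behaviour, not a vetted tool --
# dedup SRC DEST deletes files in SRC whose content already exists in
# DEST, and moves the rest into DEST, renaming on name collisions.
dedup() {
    src=$1
    dest=$2
    sums=$(mktemp)
    # Content hashes of everything already in the destination.
    find "$dest" -type f -exec sha256sum {} + | cut -d' ' -f1 | sort > "$sums"
    find "$src" -type f | while IFS= read -r f; do
        sum=$(sha256sum "$f" | cut -d' ' -f1)
        if grep -qx "$sum" "$sums"; then
            rm "$f"                        # identical content already archived
        else
            base=$(basename "$f")
            target="$dest/$base"
            n=1
            while [ -e "$target" ]; do     # same name, different content
                target="$dest/$base.$n"
                n=$((n + 1))
            done
            mv "$f" "$target"
            echo "$sum" >> "$sums"         # later source files must see it
        fi
    done
    rm -f "$sums"
}
```

E.g. `dedup ~/old-home-copy ~/archive` would empty the old copy into the archive without ever keeping two files with identical bytes.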

Edit: Some comments suggested duperemove, which uses btrfs’ deduplication feature. That replaces identical file content with references to the same on-disk location. This is not what I intend; I want to remove the redundant files completely.

Edit 2: Another quite cool solution is to use hardlinks: replace all occurrences of the same data with a hardlink, then traverse the redundant directories and delete whatever is a hardlink. The remaining files will be unique. I’m not going for this myself, as I don’t trust myself to write a bug-free implementation.
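A sketch of that two-step idea, assuming util-linux’s `hardlink` (or `rdfind -makehardlinks true`) handles the risky first step; the deletion pass then only relies on `find`’s `-links` test:

```shell
# Step 1 (existing, well-tested tools do this part): replace identical
# content across both trees with hardlinks, e.g.
#     hardlink /archive /old-copies
# Step 2: in the redundant tree, any file whose link count is above 1
# still has its data reachable through another path, so it can go.
drop_linked_copies() {
    find "$1" -type f -links +1 -delete
}
```

After `drop_linked_copies /old-copies`, whatever remains there is content that exists nowhere else.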

  • utopiah@lemmy.ml · 5 months ago

    I don’t actually know, but I bet that’s relatively costly, so I would at least try to be mindful of efficiency, e.g.:

    • use find to start only with large files, e.g. > 1 GB (depends on your own threshold)
    • look for a “cheap” way to find duplicates, e.g. exact same size (far from perfect, yet I bet it’s sufficient in most cases)
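    Those two bullets can be combined into one pipeline (a sketch assuming GNU find/awk; the 1 GB threshold is just the example figure above):

```shell
# Print files over 1 GiB that share an exact byte size with another file:
# cheap duplicate candidates for a closer (e.g. hash-based) second pass.
size_candidates() {
    find "$1" -type f -size +1G -printf '%s\t%p\n' | sort -n |
      awk -F'\t' '$1 == prev { if (!dup) print prevline; print; dup = 1; next }
                  { prev = $1; prevline = $0; dup = 0 }'
}
```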

    then, after trying a couple of times:

    • find a “better” way to detect duplicates, e.g. SHA-1 (quite expensive)
    • lower the threshold to include more files, e.g. > 0.1 GB

    and possibly heuristics, e.g.:

    • directories where all filenames are identical, maybe based on locate/updatedb, which is most likely already indexing your entire filesystem
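    The “better but expensive” pass could then hash only the surviving candidates rather than the whole tree. A sketch, assuming GNU coreutils (`sha1sum`, `uniq -D`):

```shell
# Read candidate paths (one per line) on stdin and print only the lines
# whose SHA-1 collides with another's; -w40 compares just the 40-char hash.
sha1_dupes() {
    xargs -d '\n' sha1sum | sort | uniq -D -w40
}
```

    E.g. pipe the output of the cheap size pass into `sha1_dupes` so only same-size files get hashed.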

    Why do I suggest all this rather than a tool? Because I bet a lot of decisions have to be made manually.

      • paris@lemmy.blahaj.zone · 4 months ago (edited)

        I was using Radarr/Sonarr to download files via qBittorrent and then hardlink them into an organized directory for Jellyfin, but I had set up my container volume mappings incorrectly, so it was only copying the files over, not hardlinking them. When I realized this, I fixed the volume mappings and used fclones to deduplicate the existing files, and it was amazing. It did exactly what I needed it to, and it did it fast. Highly recommend fclones.

        I’ve used it on Windows as well, but I’ve had much more trouble there, since I like to write the output to a file first to double-check it before catting it back into fclones to actually deduplicate the files it found. I think running everything as admin works, but I don’t remember.
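        The review-then-apply workflow described above maps onto fclones’ `group` and `remove` subcommands. A sketch only (check `fclones --help` on your version; the wrapper names and paths here are my own placeholders):

```shell
# Write fclones' duplicate report to a file, inspect it by hand,
# then feed it back to delete the redundant copies.
review_dupes() { fclones group "$1" > dupes.txt; }  # e.g. review_dupes ~/media
apply_dupes()  { fclones remove < dupes.txt; }      # run after checking dupes.txt
```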

    • utopiah@lemmy.ml · 5 months ago (edited)

      FWIW, I just did a quick test with rmlint, and as a user I would definitely not trust an automated tool to remove files on my filesystem. If it’s a proper data filesystem, basically a database, sure; but otherwise there is plenty of legitimate duplication, e.g. ./node_modules, so the risk of breaking things is relatively high. IMHO it’s better to learn why there are duplicates on a case-by-case basis, but again, I don’t know your specific use case, so maybe it’d fit.

      PS: I imagine it’d be good for a content library, e.g. ebooks, ROMs, movies, etc.

    • utopiah@lemmy.ml · 5 months ago

      If you use rmlint, as others suggested, here is how to check the paths of the dupes from its JSON report:

      jq -c '.[] | select(.type == "duplicate_file").path' rmlint.json