About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, plus the performance on my hardware is abysmal. I get only around 50-100 MB/s vs. the several hundred I would get with btrfs.

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or having unexplainable errors. That is sad to hear, as btrfs has had plenty of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering a switch, or already using btrfs?

  • poVoq@slrpnk.net · 2 hours ago

    I have been using btrfs on RAID 1 for a few years now with no major issues.

    It’s a bit annoying that a system with a degraded RAID doesn’t boot up without manual intervention, though.

    Also, I’m not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.

  • tripflag@lemmy.world · 9 hours ago

    Not Proxmox-specific, but I’ve been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it has bugged out were due to bad hardware, and having the filesystem shout at me to make me aware of that was fantastic.

    The only place I don’t use btrfs is my NAS data drives (since I want raidz2, and btrfs RAID 5 is hella shady), but the NAS rootfs is btrfs.

  • SendMePhotos@lemmy.world · 11 hours ago

    I run it now because I wanted to try it. I haven’t had any issues. A friend recommended it as a stable option.

      • horse_battery_staple@lemmy.world · 4 hours ago

        Are you backing up files from the FS or are you backing up the snapshots? I had a corrupted journal from a power outage that borked my install. I could not get to the snapshots on boot, so I booted into a live disk and recovered the snapshot that way. It would’ve taken hours to restore from a standard backup; restoring the snapshot took minutes.

        If you’re not backing up BTRFS snapshots and are just backing up files, you’re better off just using ext4.

        https://github.com/digint/btrbk
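
        For reference, a minimal sketch of what snapshot-level backup looks like with the plain btrfs tools (btrbk automates this pattern); the paths and snapshot names below are just placeholders:

          # take a read-only snapshot, then send it to the backup disk
          btrfs subvolume snapshot -r / /.snapshots/root.new
          btrfs send /.snapshots/root.new | btrfs receive /mnt/backup

          # later runs can send only the difference against the previous snapshot
          btrfs send -p /.snapshots/root.old /.snapshots/root.new | btrfs receive /mnt/backup

        Restoring is the same send/receive in the other direction, which is why it only takes minutes.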

  • catloaf@lemm.ee · 10 hours ago

    Meh. I run Proxmox and other boot drives on ext4, and data drives on XFS. I don’t have any need for the additional features in btrfs. Shrinking would be nice, so maybe someday I’ll use ext4 for data too.

    I started with ZFS instead of RAID, but I found I spent way too much time trying to manage RAM and tune it, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.

    You can benchmark them if you care about performance. You can find plenty of discussion by googling “ext vs xfs vs btrfs” or whichever ones you’re considering. They haven’t changed that much in the past few years.
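
    If you want actual numbers, something like this fio run works the same on any of them (the file path and sizes here are just examples, not a recommendation):

      # 60-second mixed random read/write test with direct I/O
      fio --name=randrw --filename=/mnt/test/fio.bin --size=8G \
          --bs=4k --rw=randrw --direct=1 --ioengine=libaio \
          --runtime=60 --time_based --group_reporting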

    • WhyJiffie@sh.itjust.works · 10 hours ago

      but I found I spent way too much time trying to manage RAM and tuning it,

      I spent none, and it works fine. What was your issue?

      • catloaf@lemm.ee · 9 hours ago

        I have four 6 TB data drives and 32 GB of RAM. When I set them up with ZFS, it claimed quite a few GB of RAM for its cache. I tried allocating some of the other NVMe drive as cache and tried to reduce RAM usage to reasonable levels, but like I said, I found I was spending a lot of time fiddling, instead of just configuring RAID and having it running fine in much less time.
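
        The sort of thing I was fiddling with looks roughly like this (the 4 GiB cap, pool name, and cache device are only examples):

          # cap the ARC at 4 GiB (value is in bytes), then rebuild the initramfs
          echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
          update-initramfs -u

          # give part of an NVMe drive to a pool named "tank" as an L2ARC cache device
          zpool add tank cache /dev/nvme0n1p4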

  • exu@feditown.com · 5 hours ago

    Did you set the correct block size for your disk? Especially modern SSDs like to pretend they have 512B sectors for some compatibility reason, while the hardware can only do 4k sectors. Make sure to set ashift=12.

    Proxmox also uses a very small volblocksize by default. This mostly applies to RAIDZ, but try using a higher value like 64k. (The default on Proxmox is 8k, or 16k on newer versions.)

    https://discourse.practicalzfs.com/t/psa-raidz2-proxmox-efficiency-performance/1694
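
    Roughly what that looks like in practice (pool and disk names are only examples, and ashift can only be set when a pool is created, not changed afterwards):

      # check what an existing pool was created with
      zpool get ashift rpool

      # force 4K sectors when creating a new pool
      zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

      # larger volblocksize for new Proxmox zvols, via /etc/pve/storage.cfg:
      # zfspool: local-zfs
      #         pool rpool/data
      #         blocksize 64k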

  • Brownian Motion@lemmy.world · 4 hours ago

    My setup is different from yours, but not totally different. I run ESXi 8, and I started to use BTRFS on some of my VMs.

    I had a power failure that lasted longer than the UPS could handle. Most of the system shut down safely, but a few VMs did not. All of the EXT4 VMs were easily recovered (including another one that was XFS). Two of the BTRFS systems crashed into a non-recoverable state.

    There was nothing I could do to fix them; they were just toast. I had no choice but to recover from backups. This made me highly aware that BTRFS is still not a reliable FS.

    I am migrating everything from BTRFS to something more stable and reliable like EXT4. It’s simply not worth the headache.

  • Suzune@ani.social · 9 hours ago

    The question is: how do you get such bad performance with ZFS?

    I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.

    The fourth run (obviously cached) gave me over 3.8 GB/s.

    • Possibly linux@lemmy.zipOP · 9 hours ago

      I have never heard of anyone getting those speeds without dedicated high-end hardware.

      Also, the write will always be your bottleneck.

        • Possibly linux@lemmy.zipOP · 6 hours ago

          How much RAM, and what is the drive size?

          I suspect this could also be an issue with SSDs. I have seen a lot of posts around describing similar performance on SSDs.

      • stuner@lemmy.world · 7 hours ago

        I’m seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU, but the load from ZFS is minimal. Reading/writing a 50 GB file generated from /dev/urandom (larger than the cache) gives me:

        • 169 MB/s write
        • 254 MB/s read
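
        Something like this should reproduce the test (the /tank path and the 50 GB size are just examples):

          # write a 50 GB file of random data, then read it back;
          # the file is larger than RAM, so the read is mostly uncached
          dd if=/dev/urandom of=/tank/testfile bs=1M count=51200 status=progress
          dd if=/tank/testfile of=/dev/null bs=1M status=progress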

        What’s your setup?

        • Possibly linux@lemmy.zipOP · 6 hours ago

          Maybe I am CPU-bottlenecked. I have a mix of i5-8500 and i7-6700K machines.

          The drives are a mix, but I get almost the same performance across machines.

            • stuner@lemmy.world · 6 hours ago

            It’s possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.

      • Suzune@ani.social · 6 hours ago

        This is an old PC (Intel i7-3770K) with 2 HDDs (16 TB) attached to the onboard SATA3 controller, 16 GB of RAM, and 1 SSD (120 GB). Nothing special. And it’s quite busy, because it’s my home server with a VM and containers.

  • cmnybo@discuss.tchncs.de · 9 hours ago

    Don’t use btrfs if you need RAID 5 or 6.

    The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

    https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

    • lurklurk@lemmy.world · 4 hours ago

      Or run the RAID 5 or 6 separately, with hardware RAID or mdadm.

      Even for simple mirroring there’s an argument to be made for running it separately from btrfs using mdadm. You do lose the benefit of btrfs being able to automatically pick the valid copy on localised corruption, but the admin tools are easier to use and more proven in the case of a full disk failure, and if you run an encrypted block device you need to encrypt half as much stuff.
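
      A rough sketch of that layout (device names and the mount point are placeholders, and the LUKS step is optional):

        # mirror two disks with mdadm, then put btrfs (optionally via LUKS) on top
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        cryptsetup luksFormat /dev/md0      # encrypt once instead of per disk
        cryptsetup open /dev/md0 data
        mkfs.btrfs /dev/mapper/data
        mount /dev/mapper/data /mnt/data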

  • Moonrise2473@feddit.it · 8 hours ago

    One day I had a power outage and I wasn’t able to mount the btrfs system disk anymore. I could mount it from another Linux install, but I wasn’t able to boot from it anymore. I was very pissed; I lost a whole day of work.

    • Possibly linux@lemmy.zipOP · 8 hours ago

      What’s up is ZFS. It is solid, but the architecture is very dated at this point.

      There are about a hundred different settings I could try changing, but at some point it’s easier to go with btrfs, where it works out of the box.

      • prenatal_confusion@feddit.org · 6 hours ago

        Since most people with decently simple setups don’t have the problem you describe, something is likely up with your setup.

        Yes, it’s old, and yes, it’s complicated, but it doesn’t have to be in order to get decent performance.