Proxmox: ext4 vs XFS

Originally I was going to use EXT4 on KVM until I ran across Proxmox (and ZFS).

 
While RAID 5 and 6 can be compared to RAID-Z1 and RAID-Z2, ZFS computes and verifies the parity itself rather than relying on a RAID controller.

The directory backend is configured similarly to directory storage. I have set up Proxmox VE on a Dell R720, and a few things stand out. The default compression for ZFS in this version is lz4. During installation you select your country, time zone and keyboard layout. On a fresh install of Proxmox with BTRFS, containers are created by default on a loop device formatted as ext4 instead of on a BTRFS subvolume, even when the disk is configured using the BTRFS storage backend.

I am trying to decide between using XFS or EXT4 inside KVM VMs. The honest answer is to run a benchmark that resembles your workload and compare XFS vs ext4, both with and without GlusterFS. To add a directory as storage, open Storage, click the Add dropdown and select Directory. For ID give the storage a name, for Directory enter the path to your mount point, then select what you will be using it for. Otherwise you would have to partition and format the disk yourself using the CLI.

If you want to run a supported configuration, using a proven enterprise storage technology with data-integrity checks and auto-repair capabilities, ZFS is the right choice. XFS supports larger file sizes and file systems; users should weigh their own requirements. Running ZFS on top of hardware RAID is discouraged, but it shouldn't lead to any more data loss than using something like ext4. ZFS is a filesystem and volume manager combined. Below is a very short guide detailing how to remove the local-lvm area while using XFS. Con: rumor has it that ext4 is slower than ext3 (remember the fsync data-loss saga). Run through the steps in the official instructions for making a USB installer. I understand Proxmox 6 now has SSD TRIM support on ZFS, so that might help. Both Btrfs and ZFS offer built-in RAID support, but their implementations differ. XFS is very opinionated as filesystems go.
For really big data you'd probably end up looking at shared storage, which by default means GFS2 on RHEL 7, except that for Hadoop you'd use HDFS or GlusterFS. ZFS and Btrfs bring all kinds of advanced features, but unless you intend to use those features, and know how to use them, they are useless. On XFS I see the same value as the disk size. The benchmarks were run via the Phoronix Test Suite. ext4 has all kinds of nice features (like extents and subsecond timestamps) which ext3 does not have. This article has a nice summary of ZFS's features. You also get full ZFS integration in PVE, so you can use native snapshots with ZFS, but not with XFS. Select your disk (e.g. /dev/sdb) from the Disk drop-down box, and then select the filesystem (e.g. ext4 or xfs).

In general practice XFS is used for large file systems, not for /, /boot and /var. Some applications are picky about the filesystem: Dropbox, for example, is hard-coded to use ext4 and will refuse to work on ZFS and BTRFS. The operating system of our servers always runs on a RAID-1 (either hardware or software RAID) for redundancy reasons. XFS was more fragile in the past, but that issue seems to be fixed. Use XFS as the filesystem inside the VM. The XFS PMDA ships as part of the pcp package and is enabled by default on installation. WARNING: anything on your soon-to-be server machine is going to be deleted, so make sure you have all the important data off of it. If you're working on an XFS filesystem, you need to use xfs_growfs instead of resize2fs. ext2 and ext3 can claim maturity, but they come with the smallest set of features compared to newer filesystems. That XFS performs best on fast storage and better hardware allowing more parallelism was my conclusion too. There's nothing wrong with ext4 on a qcow2 image: you get practically the same performance as traditional ZFS, with the added bonus of being able to make snapshots. Backups can be started via the GUI or via the vzdump command line tool. Rather than hardware RAID 5, you're better off using a plain SAS controller (HBA) and then letting ZFS do RAIDZ.
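As a minimal sketch of that distinction: which grow tool you need depends on the filesystem type. The `grow_cmd` helper below is hypothetical and only prints the command it would run, so nothing is touched:

```shell
# Hypothetical helper: print the right grow command for a filesystem type.
# ext* filesystems are grown with resize2fs (which takes the block device);
# XFS is grown online with xfs_growfs (which takes the mount point).
grow_cmd() {
    fstype="$1"; target="$2"
    case "$fstype" in
        ext2|ext3|ext4) echo "resize2fs $target" ;;
        xfs)            echo "xfs_growfs $target" ;;
        *)              echo "don't know how to grow $fstype" >&2; return 1 ;;
    esac
}

grow_cmd ext4 /dev/sdb1   # prints: resize2fs /dev/sdb1
grow_cmd xfs /newstorage  # prints: xfs_growfs /newstorage
```

Note the asymmetry: resize2fs operates on the device node, while xfs_growfs operates on the mounted filesystem, which is why XFS must be mounted to grow.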
CoW on top of CoW should be avoided: ZFS on top of ZFS, qcow2 on top of ZFS, btrfs on top of ZFS, and so on. Proxmox actually creates the datastore in an LVM volume group, so you're good there. To reclaim the space, unmount and delete the lvm-thin volume. SnapRAID says that if the disk size is below 16 TB there are no limitations; above 16 TB the parity drive has to be XFS, because the parity is a single file and EXT4 has a 16 TB file-size limit. Replication uses snapshots to minimize the traffic sent over the network. ZFS gives you snapshots, flexible subvolumes, and zvols for VMs, and if you have something with a large ZFS disk you can use it for easy backups with native send/receive. Performance: ext4 performs better in everyday tasks and is faster for small file writes. Unmount the filesystem by using the umount command: # umount /newstorage. ZFS file systems scale into the exbibyte range.

What about using XFS for the boot disk during the initial install, instead of the default ext4? For a smaller, single-SSD server it may well be better than ext4. (Install Proxmox on the NVMe, or on another SATA SSD.) LVM thin pools instead allocate blocks only when they are written. Step 4: resize the / partition to fill all the space. Key point: ZFS stands for Zettabyte File System. With the noatime option, the access timestamps on the filesystem are not updated. If no server is specified, the default is the local host (localhost). I also have a separate ZFS pool for either additional storage or VMs running on ZFS (for snapshots). As in: Proxmox OS on hardware RAID-1, six disks on ZFS (RAIDZ1), plus two SSDs in a ZFS RAID-1 mirror. I ran mkfs.xfs /dev/zvol/zdata/myvol, mounted it and sent in a 2 MB/s stream via pv again. This is a significant difference: the ext4 file system relies on journaling, while Btrfs is built on copy-on-write (CoW).
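The noatime option goes into the mount-options column of /etc/fstab; a sketch, where the device path and mount point are placeholders:

```text
# /etc/fstab: mount an ext4 data volume without access-time updates
/dev/pve/data   /var/lib/vz   ext4   defaults,noatime   0   2
```

The same option works for XFS entries; it simply suppresses the read-triggered metadata writes.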
The only realistic benchmark is the one done on a real application in real conditions. They deploy mdadm, LVM and ext4 or btrfs (though btrfs only in single-drive mode; they use LVM and mdadm to span the volume). Prior to EXT4, EXT3 was the default file system in many distributions. The Proxmox default is EXT4 with LVM-thin, which is what we will be using. When you start with a single drive, adding a few more later is bound to happen. Three identical nodes, each with a 256 GB NVMe and a 256 GB SATA disk. To enable and start the PMDA service on the host machine after the pcp and pcp-gui packages are installed, use: # systemctl enable pmcd.service

One comparison used ESXi and Proxmox hypervisors on identical hardware, with the same VM parameters and the same guest OS (Linux Ubuntu 20.x). Select "I agree" on the EULA. ZFS may consume a lot of RAM if you enable the deduplication feature, but I think that makes sense only for backup servers and similar storage scenarios, not for casual users or gamers. Add the storage space to Proxmox. In the future, Linux distributions may gradually shift towards Btrfs. Two commands are needed to perform this task: # growpart /dev/sda 1

Still, I exclusively use XFS where there is no diverse media under the system (SATA/SAS only, or SSD only), and I have had no real problems for decades, since it's simple and it's fast. We are evaluating ZFS for our future Proxmox VE installations over the currently used LVM. Profile both ZFS and ext4 to see how performance works out on your system in your use case; for some workloads a small data-loss window (e.g. on power failure) could be acceptable. When installing Proxmox on each node, since I only had a single boot disk, I installed with the defaults and formatted with ext4. Note that using inode32 does not affect inodes that are already allocated with 64-bit numbers.
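The two-step grow above can be sketched as a dry run. The disk, partition number and mount point below are placeholder assumptions, and both real commands require root, so the sketch only prints the plan:

```shell
# Dry-run sketch: print the plan for growing partition 1 of /dev/sda
# and then the XFS filesystem mounted on / to fill the new space.
DISK=/dev/sda
PARTNUM=1
MOUNTPOINT=/

plan_grow() {
    echo "growpart $DISK $PARTNUM"   # step 1: extend the partition table entry
    echo "xfs_growfs $MOUNTPOINT"    # step 2: grow XFS to fill the partition
}

plan_grow
```

On an ext4 root the second step would be resize2fs against the partition device instead.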
I created new NVMe-backed and SATA-backed virtual disks and made sure discard=on and ssd=1 were set for both in the disk settings on Proxmox. ZFS is a terrific filesystem. Create a zvol and use it as your VM disk. Then, once Proxmox is installed, you can create a thin LVM pool encompassing the entire SSD. Everything on the ZFS volume freely shares space, so for example you don't need to statically decide how much space Proxmox's root FS requires; it can grow or shrink as needed. With the -D option, replace new-size with the desired new size of the file system, specified in file system blocks. (You can also use RAW or something else, but this removes a lot of the benefits of things like thin provisioning.) Inside your VM, use a standard filesystem like EXT4, XFS or NTFS. The following command creates an ext4 filesystem and passes the --add-datastore parameter, in order to automatically create a datastore on the disk. This pool is used for files not larger than 10 GB: many small files, Time Machine backups, movies, books and music. Correspondingly, I/O utilization is clearly lower on XFS than on ext4, but CPU usage is higher; below roughly 5000 QPS/TPS, ext4 and XFS show no noticeable difference. In the table you will see "EFI" on your new drive under the Usage column. Move/migrate the disks from node 1 to node 3. From the documentation: the choice of a storage type will determine the format of the hard disk image. If this works, you're good to go. I've used BTRFS successfully on a single-drive Proxmox host and in VMs. Otherwise it will result in low IO performance. I have been looking at ways to optimize my node for the best performance. This is the same GUID regardless of the filesystem type, which makes sense, since the GUID is supposed to indicate what is stored on the partition. Don't worry about errors or failure; I back up to an external hard drive daily. For large sequential reads and writes XFS is a little bit better. NTFS and ReFS are good choices, however not on Linux; those are great in a native Windows environment. Without knowing how exactly you set it up, it is hard to judge.
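As a hedged sketch of that Proxmox Backup Server step (the disk name `sdb` and datastore name `store1` are hypothetical placeholders), the block below only prints the command rather than running it, since the real thing must run as root on a PBS host:

```shell
# Sketch: create an ext4 filesystem on disk sdb and register it as a
# PBS datastore named "store1" in one step.
DISK=sdb
DATASTORE=store1
CMD="proxmox-backup-manager disk fs create $DATASTORE --disk $DISK --filesystem ext4 --add-datastore true"
echo "$CMD"
```

Swapping `--filesystem ext4` for `xfs` gives the XFS variant of the same one-step datastore creation.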
And XFS? I'd still choose ZFS. While ZFS has more overhead, it also has a bunch of performance enhancements, like compression and the ARC, which often cancel out that overhead. The Proxmox Backup Server installer partitions the local disk(s) with ext4, xfs or ZFS, and installs the operating system. With ext4 there are indeed options to modify the block size, using the -b option of mke2fs. Then I selected the Hardware tab, selected the Hard Disk, and clicked Resize. Depending on the space in question, I typically end up using both ext4 (on LVM/mdadm) and ZFS (directly over raw disks). Maybe I am wrong, but in my case I can see more RAM usage on XFS compared with ext4 (two VMs with the same load/IO and services). If you are okay with losing the VMs, and maybe the whole system if a disk fails, you can use both disks without a mirrored RAID. Select the VM or container, and click the Snapshots tab. The problem here is that overlay2 only supports EXT4 and XFS as backing filesystems, not ZFS. Btrfs is a filesystem that has logical volume management capabilities. XFS has also been recommended by many for MySQL/MariaDB for some time. Even if you don't get the advantages that come from multi-disk systems, you do get the luxury of ZFS snapshots and replication. This section highlights the differences when using or administering an XFS file system. ZFS is faster than ext4 and is a great filesystem candidate for boot partitions; I would go with ZFS and not look back. I want to convert that file system. XFS mount parameters: it depends on the underlying hardware. Then I manually set up Proxmox and afterwards created an LV as lvm-thin with the unused storage of the volume group.
Proxmox filesystems for beginners: EXT4 and ZFS; and the best Linux filesystem for an Ethereum node: EXT4 vs XFS vs BTRFS vs ZFS. Those are all fine, but for a single disk I would rather suggest BTRFS, because it's one of the only filesystems that you can extend to other drives later without having to move all the data away and reformat. Compared to classic RAID-1, modern filesystems have other advantages: classic RAID-1 mirrors the whole device, blindly. The XFS run, on the other hand, takes around 11 to 13 hours. But Proxmox won't do that anyway. Some ZFS features do use a fair bit of RAM (like automatic deduplication), but those are features that most other filesystems lack entirely. Proxmox VE ships a Linux kernel with KVM and LXC support, plus a complete toolset for administering virtual machines, containers, the host system, clusters and all necessary resources. The edge of running QubesOS is that it can run the best filesystem for each task at hand. So what is the optimal configuration? I assume keeping VMs/LXC on the 512 GB SSD is the optimal setup. Inside your VM, use a standard filesystem like EXT4, XFS or NTFS. So I think you should have no strong preference, except to consider what you are familiar with and what is best documented. The pvesr command line tool manages the Proxmox VE storage replication framework. No LVM, for simplicity of RAID recovery. For a single disk, both are good options. While the XFS file system is mounted, use the xfs_growfs utility to increase its size. Comparing XFS and ext4 performance, one camp says XFS is absolutely better than EXT4 in just about every way. Quota journaling avoids the need for lengthy quota-consistency checks after a crash. And to avoid the OOM killer, make sure to limit the ZFS memory allocation in Proxmox, so that your ZFS pool doesn't kill VMs by stealing their allocated RAM.
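A common way to do that is to cap the ZFS ARC via a module parameter. The sketch below computes the value for an assumed 8 GiB cap (the size is a placeholder you should tune to your host); the commented lines show where it would be persisted on a real Proxmox box:

```shell
# Compute an ARC cap of 8 GiB in bytes and print the modprobe option line.
ARC_MAX_GIB=8
ARC_MAX_BYTES=$((ARC_MAX_GIB * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=$ARC_MAX_BYTES"

# As root you would persist it and refresh the initramfs:
#   echo "options zfs zfs_arc_max=$ARC_MAX_BYTES" > /etc/modprobe.d/zfs.conf
#   update-initramfs -u
```

A reboot (or reloading the zfs module) applies the new limit; without a cap, the ARC will happily grow to about half of RAM by default.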
Also, you won't be able to allocate 100% of your physical RAM to VMs, because ZFS needs memory of its own. This can be an advantage if you know and want to build everything from scratch, or not. Regarding boot drives: use enterprise-grade SSDs, do not use low-budget consumer-grade equipment. All four mainline file systems were tested off Linux 5.14 Git in their default, out-of-the-box configuration. I install Proxmox Backup Server with ext4 inside a Proxmox VM. The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway. If I were doing that today, I would do a bake-off of OverlayFS against the alternatives. ZFS and LVM are storage management solutions, each with unique benefits. Additionally, ZFS works really well with different-sized disks and pool expansion, from what I've read. This was our test setup; I cannot give any benchmarks, as the servers are already in production. I am using Proxmox 7 with an M.2 NVMe SSD (1 TB Samsung 970 Evo Plus). Another ext4 advantage is the ability to shrink the filesystem. A directory is file-level storage, so you can store any content type: virtual disk images, containers, templates, ISO images or backup files. Select the Directory type. XFS is really nice and reliable. Now, the storage entries merely track things. Hope that answers your question. I use XFS for the array and BTRFS for the cache, as BTRFS is the only option if you have multiple drives in the cache pool.

The Ext4 file system is the successor of Ext3 and is the mainstream Linux file system; after years of development it is one of the most stable file systems available. But honestly, compared with other Linux file systems, it is not the best: in the XFS vs Ext4 comparison, XFS is superior in several respects. You either copy everything twice or not. XFS is a robust and mature 64-bit journaling file system that supports very large files and file systems on a single host. I run an LXC container with Fedora 27. Starting with Proxmox VE 7.0, BTRFS is introduced as an optional selection for the root file system. This matters when the file system is larger than 2 TiB with 512-byte inodes.
Storages which present block devices (LVM, ZFS, Ceph) will require the raw disk image format, whereas file-based storages (ext4, NFS, CIFS, GlusterFS) will let you choose either the raw disk image format or the QEMU image format (qcow2). I need to shrink a Proxmox KVM raw volume with LVM and XFS. Also, for the Proxmox host, should it be EXT4 or ZFS? And should I use the Proxmox host drive as an SSD cache as well? Via the Phoronix Test Suite, here is a look at 4-HDD RAID performance with Btrfs, EXT4 and XFS, using consumer HDDs and an AMD Ryzen APU setup that could work out as a low-power NAS system for anyone else who may be interested. I am looking for advice on how that should be set up, from a storage perspective and a VM/container perspective. I have a high-end consumer unit (i9-13900K, 64 GB DDR5 RAM, 4 TB WD SN850X NVMe); I know it's total overkill, but I want something that can resync new clients quickly, since I like to tinker.

Key takeaway: ZFS and BTRFS are two popular file systems for storing data, both of which offer advanced features such as copy-on-write, snapshots, RAID configurations and built-in compression algorithms. For LXC, Proxmox uses ZFS subvols, but ZFS subvols cannot be formatted with a different filesystem. This is why XFS might be a great candidate for an SSD. RAID was basically developed to allow one to combine many inexpensive and small disks into an array, in order to realize redundancy goals. Run the command and post the output here. The Proxmox installer handles it well and can install XFS from the start. The only case where XFS is slower is when creating and deleting a lot of small files. EXT4? I know nothing about this file system. For this reason I do not use XFS. You can then configure quota enforcement using a mount option. Use it in Proxmox.
Also, the disk we are testing has contained one of the three filesystems: ext4, xfs or btrfs. At the same time, XFS often required a kernel compile, so it got less attention from end users. I created the ZFS volume for the Docker LXC, formatted it (tried both ext4 and xfs) and then mounted it to a directory, setting permissions on files and directories. XFS scales much better on modern multi-threaded workloads. Enter the username as root@pam and the root user's password, then enter the datastore name that we created earlier. There are results for the "single file" O_DIRECT case (sysbench fileio, 16 KiB block size, random write workload): ext4 with 1 thread reaches 87 MiB/sec. Or use software RAID. The root volume (Proxmox/Debian OS) requires very little space and will be formatted ext4. If the LVM has no space left, or is not using thin provisioning, then it's stuck. I just got my first home server thanks to a generous redditor, and I'm intending to run Proxmox on it. I'd like to install Proxmox as the hypervisor and run some form of NAS software (TrueNAS or something) and Plex.

Starting with Proxmox VE 3.4, ZFS was introduced as an optional file system. ZFS TRIM support is in the pre-release stage now, and I don't see you writing enough data to it in that time to trash the drive. BTRFS distinguishes metadata from data, so it's possible to keep only the metadata redundant ("dup" is the default BTRFS behaviour for metadata on HDDs). What's the right way to do this in Proxmox (maybe ZFS subvolumes)? However, ext4 has a maximum block size of 4 KiB. Ext4 seems better suited for lower-spec configurations, although it will work just fine on faster ones as well, and performance-wise it is still better than btrfs in most cases. In case somebody is looking to do the same as I was, here is the solution: before you start, make sure to log in to the PVE web GUI and delete local-lvm from Datacenter -> Storage.
Copy-on-write (CoW): ZFS is a copy-on-write filesystem and works quite differently from a classic filesystem like FAT32 or NTFS. My Optiplex micro home server has no RAID now, or in the foreseeable future (it's micro, no free slots). ext4 is not the most cutting-edge file system, but that's good: it means ext4 is rock-solid and stable. The compression ratio of gzip and zstd is a bit higher, while the write speed of lz4 and zstd is a bit higher. What should I pay attention to regarding the filesystems inside my VMs? What we mean is that we need something like resize2fs (ext4) to enlarge or shrink on the fly, without being required to use another filesystem to store a dump for the resizing. Trim/Discard: if your storage supports thin provisioning (see the storage chapter in the Proxmox VE guide), you can activate the Discard option on a drive. We use high-end Intel SSDs for the journal. Snapshots are also missing on plain XFS. This is addressed in a knowledge-base article; the main consideration for you will be the support levels available: ext4 is supported up to 50 TB, XFS up to 500 TB. Is there a way to convert the filesystem to EXT4? There are tools like fstransform, but I didn't test them. Create a VM inside Proxmox and use qcow2 as the VM HDD. With the integrated web-based user interface you can manage VMs and containers, and set up high availability for clusters. You can add other datasets or pools created manually to Proxmox under Datacenter -> Storage -> Add -> ZFS; by the way, the file that gets edited to make that change is /etc/pve/storage.cfg. Let's go through the different features of the two filesystems.
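For reference, a ZFS pool entry in /etc/pve/storage.cfg looks roughly like the following sketch; the storage ID `tank-vm` and pool name `tank/vm` are hypothetical:

```text
zfspool: tank-vm
        pool tank/vm
        content images,rootdir
        sparse 1
```

`content images,rootdir` lets the storage hold both VM disks and container root filesystems, and `sparse 1` makes zvols thin-provisioned.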
Like I said before, it's about using the right tool for the job, and XFS would be my preferred Linux file system in those particular instances. We assume the USB HDD is already formatted, connected to PVE, and a Directory storage has been created/mounted on PVE. ext4 is just a filesystem; it has no volume management capabilities. sdb holds Proxmox and the rest of the disks are in a raidz zpool named Asgard. As PBS can also check data integrity at the software level, I would use ext4 on a single SSD. Note that when adding a directory as a BTRFS storage which is not itself also the mount point, it is highly recommended to specify the actual mount point via the is_mountpoint option. All benchmarks concentrate on ext4 vs btrfs vs xfs right now: xfs with 4 threads reaches 97 MiB/sec in the same sysbench test. Select Datacenter, Storage, then Add. I hope that's a typo, because XFS offers zero data-integrity protection. When dealing with multi-disk configurations and RAID, the ZFS file system on Linux can begin to outperform EXT4, at least in some configurations. ZFS vs EXT4 for the host OS, and other HDD decisions: storage definitions tell PVE where it can put disk images of virtual machines, where ISO files or container templates for VM/CT creation may be, which storage may be used for backups, and so on. The sdb drive device gave about 2 MB/s. You will need a ZIL device. But there are allocation-group differences: ext4 has a user-configurable group size from 1K to 64K blocks. Even if I'm not running Proxmox, it's my preferred storage setup. ext4 can claim historical stability, while the consumer advantage of btrfs is snapshots (the ease of subvolumes is nice too, rather than having to partition).
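The is_mountpoint hint lands in /etc/pve/storage.cfg as well; a sketch for a BTRFS filesystem mounted at /mnt/data2, where the path and storage ID are placeholders:

```text
btrfs: data2
        path /mnt/data2/pve-storage
        content rootdir,images
        is_mountpoint /mnt/data2
```

With is_mountpoint set, PVE refuses to use the storage while the filesystem is not actually mounted there, which prevents it from silently filling the root disk.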
But on this one the docs are clear: "Don't use the linux filesystem btrfs on the host for the image files." Given that, EXT4 is the best fit for SOHO (small office/home office) use. Watching LearnLinuxTV's Proxmox course, he mentions that ZFS offers more features and better performance as the host OS filesystem, but also uses a lot of RAM. Note that XFS quotas are not a remountable option. Ext4 has a more robust fsck and runs faster on low-powered systems. Regarding the storage setup, I have looked at the following options: a hardware RAID with battery-backed write cache (BBU), or no RAID at all for ZFS; basically, the second option is the one ZFS wants. If this were ext4, resizing the volumes would have solved the problem. For a while, MySQL (not MariaDB) had performance issues on XFS with default settings, but even that is a thing of the past. Unless you're doing something crazy, ext4 or btrfs would both be fine. Proxmox is installed, using ZFS on your NVMe. Although swap on the SD card isn't ideal, putting more RAM in the system is far more efficient than chasing faster OS/boot drives. Create a directory to mount it to. EDIT: I have tested ZFS and Proxmox Backup Server for quite a while (both hardware and VMs), and ZFS's deduplication and compression bring next to zero gains there. EXT4 is just a file system, as NTFS is; it doesn't really do anything special for a NAS and would require either hardware or software to add some flavour. For example, if a BTRFS file system is mounted at /mnt/data2, its PVE storage entry should declare that mount point. Both ext4 and XFS should be able to handle it. I've got a SansDigital EliteRAID storage unit that is currently set to on-device RAID 5 and is using USB passthrough to a Windows Server VM.
Fourth: besides all the above points, yes, ZFS can have slightly worse performance in these cases compared to simpler file systems like ext4 or xfs. Some say ext4 is slow. Select the local-lvm and click the Remove button. As pointed out in the comments, deduplication does not make sense here, as Proxmox stores backups in binary chunks (mostly of 4 MiB) and does the deduplication itself. fstrim shows something useful with ext4, like "X GB was trimmed". This is not ZFS. Navigate to Datacenter -> Storage and click the Add button. The only realistic benchmark is the one done on a real application in real conditions. ZFS is supported by Proxmox itself. LVM-thin is preferable for this task, because it offers efficient support for snapshots and clones. To me it looks worth trying a conversion of EXT4 to XFS, but you obviously need either a full backup, or snapshots in the case of virtual machines (even Azure Linux VMs let you snapshot the OS disk). Running on an X570 server board with a Ryzen 5900X and 128 GB of ECC RAM. So yes, you can do it, but it's not recommended and could potentially cause data loss. While the XFS file system is mounted, grow it with: # xfs_growfs file-system -D new-size. And this lvm-thin I register in Proxmox and use for my LXC containers. EXT4 is a very low-hassle, normal journaled filesystem. Results were the same, plus or minus 10%. Whether it is done in a hardware controller or in ZFS is a secondary question. I have a RHEL 7 box at work with a completely misconfigured partition scheme on XFS. If this were ext4, resizing the volumes would have solved the problem. The installer creates a standard logical volume called "data", which is mounted at /var/lib/vz.
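Since -D takes the new size in filesystem blocks rather than bytes, the target has to be converted first. A sketch assuming the common 4 KiB XFS block size and a hypothetical 50 GiB target on the /newstorage mount:

```shell
# Convert a 50 GiB target size into 4 KiB filesystem blocks for xfs_growfs -D.
TARGET_GIB=50
BLOCK_SIZE=4096
NEW_SIZE_BLOCKS=$(( TARGET_GIB * 1024 * 1024 * 1024 / BLOCK_SIZE ))
echo "xfs_growfs /newstorage -D $NEW_SIZE_BLOCKS"

# The real block size is reported by xfs_info /newstorage (look for bsize=).
```

Without -D, xfs_growfs simply grows the filesystem to fill the whole underlying device, which is usually what you want anyway.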
On that basis, XFS is the better fit: the Linux block size is generally 4K, so XFS looks like the natural choice, although with MySQL and a larger page size ext4 is also fine, and XFS shows a tendency to get slower as the block size grows. The BTRFS RAID itself is not difficult at all to create or manage, but up until now OMV does not support BTRFS RAID creation or management through the web GUI, so you have to use the terminal. Thanks! I installed Proxmox with pretty much the default options on my Hetzner server (ZFS, RAID-1 over two SSDs, I believe). Btrfs is still developmental and has some deficiencies that need to be worked out, but it has made a fair amount of progress. Shrinking is no problem for ext4 or btrfs. The filesystems perform differently for some specific workloads, like creating or deleting tens of thousands of files and folders. How to use a single disk with Proxmox: yes, it works, even after serial crashing. For the comparison I chose two established journaling filesystems, EXT4 and XFS; two modern copy-on-write systems that also feature inline compression, ZFS and BTRFS; and, as a relative benchmark for the achievable compression, SquashFS with LZMA.