
So you know how Linux labels SCSI disks sda, sdb, sdc, sdd, and so on, and if you manage to get more than 26 disks attached to a system, the names roll over from sdz to sdaa.

I just watched a customer list the disks on a server, and it goes up to sdahs.

So I guess that answers the question of what happens once you go past 702 disks on Linux: it rolls over from sdzz to sdaaa. 🤯
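For anyone who wants to check the math: the names are bijective base 26, where there's no "zero" letter, which is why sdz rolls to sdaa and sdzz to sdaaa. Here's a quick Python sketch of the mapping; disk_name and disk_index are made-up helper names, not kernel code (the kernel's own version lives in sd_format_disk_name() in drivers/scsi/sd.c, if memory serves):

def disk_name(index: int) -> str:
    # 1-based index -> name: 1 -> "sda", 27 -> "sdaa", 703 -> "sdaaa"
    suffix = ""
    while index > 0:
        index -= 1                                  # bijective base 26: no zero digit
        suffix = chr(ord("a") + index % 26) + suffix
        index //= 26
    return "sd" + suffix

def disk_index(name: str) -> int:
    # Inverse: "sda" -> 1, "sdzz" -> 702, "sdaaa" -> 703
    index = 0
    for ch in name[2:]:                             # strip the "sd" prefix
        index = index * 26 + (ord(ch) - ord("a") + 1)
    return index

assert disk_name(702) == "sdzz" and disk_name(703) == "sdaaa"
print(disk_index("sdahs"))  # 903 -- so that server is somewhere past 900 disks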


@kwf that makes sense, also that's got to be a Fibre Channel array

@warthog9 RoCEv2.

Because the proper place for hard drives is mounted directly to your NIC so they can be used by other servers

@warthog9 One of these network-attached NVMe drives with the dual 25G NICs built in MUST have enough horsepower to run nginx and rsync daemons locally...

🤔 just saying...

@warthog9 We should write a module for dnf that uses S3 APIs to pull updates from buckets.

@kwf also, wait... No, that's a terrible idea...

How can we get one?

@warthog9 That is what I've been thinking too! We need to make more friends at storage companies.

@gme @kwf @warthog9 I do BTRFS RAID. Yeah, I'm that guy. It works better than ZFS at least, but then anything does.
Anyway, my point is that a BTRFS array is only one line in fstab. Pretty disappointing, huh?

@calchan @gme @warthog9 ZFS doesn't even need a line in fstab. It mounts on its own.

@kwf @gme @warthog9 Which proves that even the worst technology can't be bad at everything
