Context for newbies: Linux refers to network adapters (wifi cards, ethernet cards, etc.) by so-called “interfaces”. For the longest time, interface names were assigned based on the type of device and the order in which the system discovered it. So eth0, eth1, wlan0, and wwan0 are all possible interface names. This, however, can be an issue: “the order in which the system discovered it” is not deterministic, which means hardware can switch interface names across reboots. This is a real problem for things like servers that rely on interface names staying the same.
The solution to this issue is to assign custom names based on the MAC address. The MAC address is hardcoded into the network adapter and will not change. (There are other ways to do this as well, such as setting udev rules.)
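A minimal sketch of MAC-based renaming using a systemd .link file (the file name, the MAC address, and the name lan0 are placeholders I made up for illustration):

```ini
# /etc/systemd/network/10-persistent-lan.link (hypothetical file name)

# Match the adapter by its hardcoded MAC address...
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

# ...and always give it the same name, regardless of discovery order.
[Link]
Name=lan0
```

The udev-rule route mentioned above works the same way: match the device on ATTR{address} and set NAME=.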
Redhat, however, found this solution too simple and instead devised their own scheme for assigning network interface names. It fails to solve the problem it was created to solve while making interface names much harder to type and remember.
To disable predictable interface naming and switch back to the old scheme, add net.ifnames=0 and biosdevname=0 to your boot parameters.
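On a GRUB-based system that looks roughly like this (a sketch; the “...” stands for whatever is already on your command line, and the regenerate step varies by distro):

```sh
# /etc/default/grub — append both parameters to the kernel command line
GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

# Then regenerate the GRUB config, e.g.:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora/RHEL
sudo update-grub                              # Debian/Ubuntu
```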
The template for this meme is called “stop doing math”.
You’re not wrong. But generally the idiocy is in response to berserkness elsewhere, and madness follows…
I have to disagree with you there. Systemd sucks ass, and so does RPM.
Careful. Jeff’s format gives us really great advantages from atomic packaging that we don’t have elsewhere. THAT, at least, was a great thing.
Lennart’s Cancer, though, can die in a fire.
Atomic updates are amazing. But the package manager is slow as hell. SuSE managed to make zypper much faster using the same package format.
The only thing that’s slow is dnf’s repository check and some migration scripts in certain Fedora packages. If that’s the price I need to pay to get seamless updates and upgrades across major versions for nearly a decade, then I can live with that.
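And for what it’s worth, the repository check can be tuned. A sketch (the interval and package name are just examples):

```sh
# Skip the metadata refresh for a one-off command (uses the local cache):
sudo dnf --cacheonly install some-package

# Or make dnf refresh metadata less often, in /etc/dnf/dnf.conf:
#   [main]
#   metadata_expire=6h
```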
I’ll grant you that; I haven’t used dnf so can’t speak to its performance.
I’m with our binary friend; the systems they try to replace tend to be time-tested, reliable, and simple (if not necessarily immediately obvious) to manage. I can’t think of a single instance where a Redhat-ism is better than, or even equivalent to, what we already have. In each case it’s been a pretty transparent attempt to move from Embrace to Extend, and that never ends well for the users.
I don’t know if it would be accurate to call it a Redhat-ism, but btrfs is pretty amazing. Transparent compression? Copy-on-write? Yes please! I’ve been using it for so long now that it’s spoiled me lol. Whenever I’m on an ext4 system I have to keep reminding myself that copying a huge file or directory will… you know… actually copy it instead of just making reflinks
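If anyone wants to try both features, a quick sketch (device and file names are placeholders):

```sh
# Mount a btrfs volume with transparent zstd compression
sudo mount -o compress=zstd /dev/sdX1 /mnt/data

# Copy-on-write copy: returns almost instantly, because the new file
# shares extents with the original until either copy is modified
cp --reflink=always huge.img huge-copy.img
```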
I’ve never actually tried BTRFS; there were a few too many “it loses all your data” bugs in the early days, and I was already using ZFS by then anyway. ZFS has more than its fair share of problems, but I’m pretty confident my data is safe, and it has the same upsides as BTRFS. I’m looking forward to seeing how BCachefs works now that it’s in the kernel, and I really want to compare all three under real workloads.
Ooh, I’ve never heard of bcachefs, sounds exciting! I see it supports encryption natively, which btrfs doesn’t. Pretty cool!
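Going off the bcachefs docs (I haven’t run this myself, so treat it as a sketch; the device path is a placeholder), setting up native encryption looks roughly like:

```sh
# Create an encrypted bcachefs filesystem (prompts for a passphrase)
bcachefs format --encrypted /dev/sdX

# Unlock it (loads the key) before mounting
bcachefs unlock /dev/sdX
mount -t bcachefs /dev/sdX /mnt
```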
Personally I’ve never had any issues with btrfs, but I did start using it only a couple years ago, when it was already stable. Makes sense that you’d stick with zfs tho, if that’s what you’re used to.