
Context

When I was configuring wireless interface support for my Wi-Fi card, I used the following page on the Linux Wireless documentation website.

It says that I should enable Qualcomm Technologies 802.11ax chipset support and Atheros ath11k PCI support as <M>. I did that and installed all the needed firmware blobs from linux-firmware into /lib/firmware. It worked like a charm after a reboot: the wireless interface successfully appeared.

However, I started to wonder about the difference between modular support and built-in support. All I found on the Net about the subject was no more than something like "built-in stuff loads automatically, and modular stuff can be loaded/unloaded from the kernel at runtime". Maybe I'm bad at searching.

Anyway, later I decided to try switching the support for the ath11k driver and the 802.11ax chipset from modular (<M>) to built-in (<*>). And this is where things failed: the kernel was not able to find the needed firmware blobs:

# dmesg | grep ath
<...>
mhi mhi0: Direct firmware load for ath11k/WCN6855/hw2.0/amss.bin failed with error -2
<...>

I wonder if it has something to do with these firmware blobs being proprietary.

Questions

  1. What is the "real" difference between modular and built-in support?
  2. Why can't the kernel find the needed blobs? (They were found successfully when the support was modular.)
Comments:

  • Aaron D. Marasco: It's been some time and I may be wrong, so I'll only make this a comment. I believe kernel modules are loaded at a later time, after more of the filesystems are mounted, so the supporting files (firmware) can then be found. I think you need to bake the firmware into the initramfs for it to be accessible to the kernel at first boot. That's one reason modules are easier to deal with and recommended.
  • Also, IIRC, for a built-in driver you cannot use a modprobe.d conf file to set driver options/parameters when needed. Instead you'd need to edit the kernel command line in your bootloader entry.
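The second comment's point can be made concrete with a minimal sketch of the two mechanisms. The parameter name below is only an illustration; run modinfo -p ath11k on your system to list the real ones.

```shell
# Sketch: the same driver option expressed both ways.
# 'frame_mode' is an example parameter name, not necessarily real.
driver=ath11k
param=frame_mode
value=2

# Modular driver: an options line in a /etc/modprobe.d/*.conf file
modprobe_line="options $driver $param=$value"
echo "$modprobe_line"

# Built-in driver: the equivalent token on the kernel command line
# (set via the bootloader entry, e.g. GRUB_CMDLINE_LINUX)
cmdline_token="$driver.$param=$value"
echo "$cmdline_token"
```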

2 Answers


What you found ("built-in stuff loads automatically and modular stuff can be loaded/unloaded from the kernel at runtime") is pretty much it.

The only difference between compiled-in drivers and drivers compiled as modules is that modules can be loaded (or removed) after the kernel boots. Compiled-in drivers are part of the kernel image: they activate at boot if the hardware they're designed for is present (although some drivers also need access to matching firmware files), and they can't be unloaded.

Other than that, the drivers are the same, the hardware they work with is the same, their requirements (e.g. firmware blobs) are the same.

On a practical level, there's little or no visible difference to the user of the system. Most modules (especially those essential for booting the OS) are typically loaded at boot time, shortly after the kernel itself has booted and the initrd has been mounted.

As @AaronD.Marasco said in his comment: with a compiled-in driver, you need to make sure that any firmware files required are available in the initrd used to boot up any given kernel.

This is also true for any drivers compiled as modules that are required to boot the system successfully: some drivers (e.g. the one providing the / filesystem) ALSO need to be installed in the initrd so that they're available at boot time.

For example, if your rootfs is ZFS or NFS then the fs drivers for those (and any hardware drivers and firmware they may need, e.g. a NIC driver and its firmware and tools for setting up the network interface, or hardware drivers for SATA or SAS/SCSI interfaces, etc) need to be on the initrd if they're compiled as modules.

It's fairly common to have things like SATA or SAS and common filesystems like ext4 compiled in, while uncommon filesystems like ZFS and NIC drivers are compiled as modules (since you typically only need NIC drivers for the network cards you actually have installed).

In short: regardless of whether a driver is compiled-in or compiled as a module, if it is essential to successfully complete the boot process then the module's .ko file AND any firmware it requires AND any relevant module options files (e.g. /etc/modprobe.d/zfs.conf) need to be on the initrd.
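To make the firmware requirement concrete: error -2 in the dmesg output above is -ENOENT ("no such file or directory"), i.e. the loader could not find the blob at its expected path. Here is a scratch-directory sketch of the layout the kernel's firmware loader expects, with the path taken from the question's dmesg:

```shell
# Recreate the expected firmware layout in a scratch directory.
# On a real system this tree lives under /lib/firmware, and for a
# built-in driver the same tree must also exist inside the initrd.
root=$(mktemp -d)
fwdir="$root/lib/firmware/ath11k/WCN6855/hw2.0"
mkdir -p "$fwdir"
: > "$fwdir/amss.bin"   # placeholder for the real (proprietary) blob

# dmesg's "failed with error -2" (-ENOENT) means this path was
# missing wherever the kernel looked at load time.
ls "$fwdir"
```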

This is usually handled semi-automatically when you install a new kernel or install/upgrade a dkms-packaged driver module, by copying everything (or at least everything required by drivers in use - this is configurable in /etc/initramfs-tools/initramfs.conf) from /lib/firmware to the initrd.

You can manually update the initramfs at any time by running (as root) update-initramfs -u -k <kernel-version> or update-initramfs -u -k all.
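A dry-run sketch tying these steps together (the run helper only prints the commands; drop it to execute them for real, as root). lsinitramfs is the Debian/Ubuntu tool for listing an initrd's contents:

```shell
# Dry run: print, rather than execute, the rebuild-and-verify steps.
run() { echo "+ $*"; }

# Rebuild the initrd for the running kernel...
run update-initramfs -u -k "$(uname -r)"
# ...then confirm the firmware actually made it in:
run "lsinitramfs /boot/initrd.img-$(uname -r) | grep ath11k"
```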

However, some distros use dracut to create and manage init ramdisks. I'm not very familiar with that, having never seen any compelling need to switch from the older initrd tools, so I can't give you an exact update command but it can also create and update initrds.


The other answer is essentially correct, but here are some finer points:

Pros for modules:

  • Moving drivers not needed at boot into modules removes the need for them to be in the kernel image, allowing it to be smaller and speeding up boot time
  • Sometimes when drivers crash, if they are a module, they can be reset by unloading them and reloading them (but this doesn't always work) or loading them with different parameters without rebooting
  • Drivers that are modules don't have to be loaded into memory at all, whereas built-in drivers can't be released even if they are not needed
  • If a driver isn't needed to boot the system, as a module, it can be loaded late in the boot process, allowing firmware to be loaded from the regular filesystem, allowing the initrd to be smaller, and speeding the boot process
  • If a driver is undergoing rapid development, its source code can be distributed separately from the kernel in a faster update stream
  • If a device's firmware is updated and the driver is a module, the initrd doesn't need to be rebuilt to update the firmware, speeding up updates
  • Building only common and critical drivers into the kernel and making the rest modules allows a small number of vendor built kernel binaries to be distributed for many machines, while still giving support to all machines that need the remaining modularized drivers.
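The "reset by unloading and reloading" bullet above, as a dry-run sketch (the run helper only prints; drop it to execute for real, as root; the parameter shown is hypothetical, check modinfo -p for real ones):

```shell
# Dry run: print the unload/reload cycle instead of executing it.
run() { echo "+ $*"; }

mod=ath11k
run modprobe -r "$mod"              # unload (fails if the device is busy)
run modprobe "$mod" some_param=1    # reload, optionally with new options
```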

Disadvantages for modules instead of being built in:

  • If a driver is needed at boot and it is a module, it has to be embedded in the initrd which can increase time for building the initrd
  • Modules distributed as source code will need to be recompiled on every kernel update, which can slow down updates significantly

Pros (and neutral advantages) for drivers built into the kernel:

  • Typically kernels are compiled by the vendor and distributed as binaries, so drivers always needed at boot won't need to be added to the initrd, saving a bit of time during updates
  • Drivers needed at boot probably can't be unloaded even if they were modules, so being a module offers no advantage there
  • If a driver is needed at boot, it will be the same size no matter if it is built into the kernel or added to the initrd, and the firmware will need to be in the initrd either way as well
  • Some drivers can't be made into modules because they are too critical to even put into the initrd and have to be built into the kernel

Disadvantages of built in drivers:

  • can't be unloaded
  • distributed with and frozen to the vendor kernel, so they can't be updated without building a custom kernel or waiting for vendor updates
  • makes the kernel image larger, which may fill up small /boot partitions, and makes booting slower if the driver is unneeded

The biggest visible differences to the user between a modular driver and a built in driver are:

  • if it can be unloaded at all
  • at what time it is loaded (early in the boot process, later, or after booting)
  • how it gets updated (with the kernel, in the modules directory, or added to the initrd at update)
  • where it gets its firmware (from the firmware directory or from initrd)
  • how to adjust its load-time parameters (the kernel command line, modprobe config files, or the modprobe command line)
  • if it is installed with the kernel or if it has its own separate package (which might need to be manually installed if it isn't autodetected during OS install)
  • Drivers not built into the kernel have to be triggered to load, either by udev, device-detection probes, config files, or manual loading. Sometimes this is more automatic than at other times, and the user may notice.
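One way to check which case applies on a given system: built-in drivers are listed in modules.builtin under /lib/modules/$(uname -r)/, while loaded modules show up in lsmod. A self-contained sketch using a scratch copy of modules.builtin:

```shell
# Scratch stand-in for /lib/modules/$(uname -r)/modules.builtin,
# which lists one .ko path per built-in driver.
moddir=$(mktemp -d)
printf '%s\n' kernel/fs/ext4/ext4.ko > "$moddir/modules.builtin"

is_builtin() { grep -q "/$1.ko\$" "$moddir/modules.builtin"; }

is_builtin ext4   && echo "ext4: built in"
is_builtin ath11k || echo "ath11k: not built in (check lsmod)"
```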

Note that even if they are modules, some drivers can't be unloaded either because the device doesn't allow it, the device is in use and can't be freed without rebooting, or complicated dependency issues with other drivers. (And a crashed driver may leave a device in a state that can't be fixed by reinitializing the driver, so that doesn't always work either.)

Some modules have load-time parameters, but many modules also expose them in /proc or /sys, allowing them to be adjusted after the driver is loaded.
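Those runtime knobs live under /sys/module/<name>/parameters/. A sketch of that interface using a scratch tree in place of the real /sys (the parameter name is again only an illustration):

```shell
# Mock of the sysfs layout: /sys/module/<driver>/parameters/<param>.
# (Real parameters may be read-only; writable ones can be echoed to.)
sys=$(mktemp -d)
mkdir -p "$sys/module/ath11k/parameters"
echo 0 > "$sys/module/ath11k/parameters/frame_mode"

# Read the current value, then change it "after load":
cat "$sys/module/ath11k/parameters/frame_mode"
echo 2 > "$sys/module/ath11k/parameters/frame_mode"
cat "$sys/module/ath11k/parameters/frame_mode"
```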
