We Built a Linux Distribution from Scratch — Here's What Happened

There's a certain kind of Linux user who isn't satisfied running someone else's system. Not because the available distributions are bad — many are excellent — but because the question "how does this actually work?" never stops nagging. If that sounds familiar, this post is for you.

Sable Linux is a custom Linux distribution we're building from the ground up, targeting advanced users who want a system built for serious work: security research, penetration testing, AI and LLM workflows, gaming, and virtualization. It runs on modern high-end hardware — in our case an Intel Core Ultra 5 245K with both integrated Intel Arc graphics and a discrete AMD Radeon RDNA4 GPU — and it boots from a USB SSD, which turned out to be far more interesting than we expected.

The foundation is Linux From Scratch 12.4-systemd. If you're not familiar with LFS, the concept is straightforward and the execution is anything but: you build a complete, bootable Linux system entirely from source code, one package at a time, using only a host Linux system as scaffolding. No package manager. No installer. No shortcuts. By the time you're done, every binary on the system passed through a compiler you configured yourself, linked against libraries you built yourself, running on a kernel you assembled option by option.

We're documenting this process publicly for a few reasons. First, because the problems we ran into — and there were several genuinely difficult ones — aren't well documented anywhere. Second, because we think there's an audience of technically capable people who want to understand their systems at this level but don't know where to start. And third, because Sable Linux is heading somewhere: a full security and AI research platform, eventually distributable, with an installer and a proper identity. This is the origin story.

This post covers the complete LFS build: from bootstrapping the cross-toolchain through the moment we got an independent boot prompt on a system we built from nothing. It's technical. It gets into kernel configuration, initramfs internals, GRUB EFI installation, and GPU firmware loading. If you want the surface-level version, this isn't it. If you want to understand what actually happened — including the things that went wrong and how we fixed them — read on.

Cross-Toolchain and Temporary Environment

Before you can build a Linux system from scratch, you face a fundamental problem: you need a compiler to build software, but the compiler itself is software that needs to be built. And the compiler on your host system — whatever Linux distribution you're running — makes assumptions about libraries, paths, and system interfaces that won't match your new system. If you use it directly, you'll end up with a system subtly contaminated by your host's configuration. LFS solves this with a cross-compilation toolchain: a compiler built specifically to produce binaries for the new system, isolated from the host entirely.

This is Chapters 5 and 6 of LFS, and it sets the tone for everything that follows. You're not clicking through an installer. You're building binutils so you have a linker, then building a minimal GCC so you have a compiler, then using that compiler to rebuild binutils and GCC properly, then using those to build the rest of the temporary toolchain. It's deliberately circular in a way that makes sense once you understand why — each pass produces tools that are more independent of the host than the last.

Our host system was Ubuntu 24.04.4 running on an Intel Core Ultra 5 245K with 30GB of RAM and an NVMe drive encrypted with LUKS. That last detail mattered more than expected. LFS typically assumes you're building to a dedicated partition, but our encrypted setup made that awkward — we couldn't easily create a new unencrypted partition without restructuring the entire drive. The solution was a loop device: a file on the encrypted filesystem that the kernel treats as a block device, mountable and partitionable like any drive. It worked cleanly, though it added a step to every session — remounting the loop device and re-entering the chroot environment after any reboot or interruption.
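A sketch of the loop-device approach (file name, size, and mount point here are illustrative, not our exact values):

```shell
# A sparse file costs almost no disk space until blocks are written.
truncate -s 60G lfs.img

# Attaching, formatting, and mounting require root, so they are shown
# commented out here:
#   sudo losetup -fP --show lfs.img   # prints the device, e.g. /dev/loop0
#   sudo mkfs.ext4 /dev/loop0
#   sudo mount /dev/loop0 /mnt/lfs

# Sparse means apparent size and actual usage diverge:
ls -lh lfs.img   # apparent size: 60G
du -h lfs.img    # blocks actually allocated: effectively none yet
```

Because the backing file lives on the LUKS-backed filesystem, the build inherits the host's encryption at rest for free.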

We established two practices early that paid dividends throughout the entire build. The first was a two-terminal workflow: one terminal living inside the chroot build environment, one on the host for git operations. Every completed package got a git commit with a timestamp and brief notes. By the end of the build we had a detailed audit trail of exactly what was built, in what order, and when — invaluable when something broke and we needed to understand the state of the system at any given point. The second practice was keeping the LFS 13.0 development documentation alongside the 12.4 book we were primarily following. When a package behaved unexpectedly, checking whether 13.0 had updated instructions for it often revealed exactly what had changed and why.
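The commit-per-package discipline is small enough to script. A sketch of the idea (the function name, message format, and throwaway repository are illustrative, not from our actual build log):

```shell
# Commit the whole tree after each completed package, with a UTC
# timestamp in the message so the audit trail orders cleanly.
commit_package() {
    git add -A
    git commit -q -m "build: $1 ($(date -u +%Y-%m-%dT%H:%M:%SZ))"
}

# Demonstrated against a throwaway repository:
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email builder@example.com
git config user.name builder
echo "pass 1 complete" > notes.txt
commit_package "binutils-pass1"
git log --oneline
```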

Chapter 7 moves into the chroot environment itself — you've built enough temporary tools to enter the new system's filesystem and build the remaining scaffolding from inside it. This is the first moment the new system feels real. You're no longer building toward something; you're building inside it. Six packages complete this phase: Gettext, Bison, Perl, Python, Texinfo, and Util-linux. Each one is a prerequisite for something that comes next. None of them are exciting on their own. Together they constitute the platform everything else depends on.

No dramatic failures here. The cross-toolchain phase is well-documented and the LFS book is precise about what to do. The discipline required is patience — following instructions exactly, resisting the urge to improvise, verifying each step before proceeding to the next. The drama comes later.

86 Packages, One at a Time

Chapter 8 of LFS is where the actual operating system gets built. Not the scaffolding, not the temporary tools — the real system that will boot and run. It's 86 packages, ranging from fundamental libraries like Glibc and Zlib through the full GCC compiler suite, Bash, Perl, Python, OpenSSL, and finally systemd as the init system. On a 14-core Intel Core Ultra 5 with parallel make, the whole chapter takes several hours. Some packages take minutes. GCC takes the better part of an hour on its own.

The experience is meditative in a way that's hard to describe to someone who hasn't done it. Each package follows the same basic rhythm: extract the source, configure the build, compile, run the test suite if the book requires it, install. Then commit to git and move to the next one. After the first dozen or so you develop a feel for the process — you start recognizing the configure output patterns, understanding what the options mean, noticing when something looks different from what the book describes. You stop being a person following instructions and start being a person who understands what's happening.
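That rhythm is regular enough to write down once (a generic sketch; every package varies the configure options, and the book is explicit about when tests run):

```shell
# One iteration of the Chapter 8 loop, parameterized by tarball name.
cat > build-one.sh <<'EOF'
#!/bin/sh
set -e
tar -xf "$1"                # e.g. some-package-1.0.tar.xz
cd "${1%.tar.*}"            # strip .tar.xz to get the source directory
./configure --prefix=/usr
make -j"$(nproc)"
make check                  # only when the book calls for it
make install
EOF
chmod +x build-one.sh
```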

The GCC 15 Problem

Our host system was running GCC 15, which introduced a significant behavioral change: it defaults to C23 mode rather than the previous C17. This sounds like a minor version bump, but C23 removes several language constructs that were common in older codebases — implicit function declarations chief among them. A surprising number of packages in the LFS build were written expecting pre-C23 compiler behavior, and GCC 15 rejected them outright.

The fix was straightforward once we understood the pattern: add -std=c11 to the compiler flags for affected packages, explicitly telling GCC to use an older language standard. The frustrating part was that the error messages weren't always obvious about the root cause — you'd see a cryptic compilation failure and have to recognize it as a C23 compatibility issue rather than a genuine bug in the package. Once we'd seen it a few times the pattern became easy to spot, and we documented it in our build notes for future reference. Anyone building LFS on a modern host with GCC 15 will hit this. Now you know why.

The Stripping Disaster

Late in Chapter 8, after all packages are installed, LFS instructs you to strip debug symbols from the installed binaries and libraries. This is a housekeeping step — debug symbols are useful during development but add significant size to the final system. The LFS 12.4 stripping script automates the process, iterating through the installed files and running the strip utility on each one.

What the LFS 12.4 script doesn't account for is a subtle race condition: strip itself is a binary that depends on shared libraries, and if the script attempts to strip one of those libraries while strip is actively using it, the result is corruption. That's exactly what happened. The script hit libbfd-2.45.so — a library used by the binutils toolchain including strip itself — and corrupted it mid-operation. The corruption cascaded immediately to ld-linux-x86-64.so.2, the ELF dynamic linker. Every executable on the system depends on the dynamic linker to load shared libraries at runtime. With it corrupted, the system was effectively bricked — nothing would run.

The recovery process was tense. The corrupted dynamic linker meant we couldn't execute anything inside the chroot environment, so we had to work entirely from the host system. The key asset was the backup image we'd created before the stripping operation — a complete snapshot of the Chapter 8 system state saved with dd to an external drive. We restored ld-linux-x86-64.so.2 directly from that image, which got the chroot environment functional again. Then we rebuilt libbfd using the host system's toolchain, operating outside the chroot to avoid the same dependency problem. Finally we implemented the corrected stripping approach from LFS 13.0, which copies libraries to temporary locations before stripping rather than operating on them in place — eliminating the race condition entirely.
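The copy-before-strip pattern is easy to sketch (our illustration of the idea, not the actual LFS 13.0 script):

```shell
# Strip a copy, then replace the original in one step. install unlinks
# the destination and creates a fresh inode, so a running process (strip
# included) keeps its mapping of the old file instead of seeing it
# rewritten underneath it.
safe_strip() {
    tmp=$(mktemp)
    cp "$1" "$tmp"
    strip --strip-unneeded "$tmp"
    install -m 755 "$tmp" "$1"
    rm -f "$tmp"
}

# Demonstrated on a throwaway binary built with debug symbols:
printf 'int main(void){return 0;}\n' > demo.c
gcc -g -o demo demo.c
size_before=$(wc -c < demo)
safe_strip demo
./demo   # still runs after stripping
```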

The experience reinforced a principle that should be obvious but apparently needs to be learned by doing: always take a backup before any operation that modifies files in place across the entire system. We had taken one. It saved the build. If we hadn't, we would have been starting Chapter 8 over from the beginning.

The Result

After the stripping drama the rest of Chapter 8 completed without incident. The final system measured 3.5GB — lean, complete, containing everything needed to boot and operate as a minimal Linux system. No graphical interface, no package manager, no user applications. Just a kernel's worth of userspace: shells, compilers, libraries, init system, and the tools needed to build more.

Standing at the end of Chapter 8 with a clean system and a full git log of every package that went into it is a genuinely satisfying moment. The system doesn't do much yet. But it exists because of decisions you made and commands you ran, and you understand every layer of it in a way that simply isn't possible when you install a distribution someone else built.


Making It Bootable: Kernel, Initramfs, and First Boot

There's a meaningful difference between a Linux system and a bootable Linux system. At the end of Chapter 8 we had the former — a complete userspace sitting in a directory on our host machine's filesystem, waiting. Chapters 9 through 11 close that gap: system configuration, kernel compilation, bootloader installation, and the moment of truth where you find out if any of it actually works.

Hardware Decisions

Our original target drive was an old USB 2.0 hard drive — available, expendable, seemingly adequate for development purposes. It wasn't. USB 2.0 throughput made compilation unbearably slow when building inside the target environment, and the drive itself proved unreliable under sustained write loads. We abandoned it partway through and extracted a 500GB SSD from a spare laptop, connected it via USB 3.0. The difference was immediately apparent — throughput went from crawling to genuinely fast, and the drive handled extended build sessions without complaint.

The new drive got a clean partition layout: 512MB EFI partition for the UEFI bootloader, 2GB for /boot, and the remainder for root. Simple, standard, and as it turned out, the source of interesting complications later. We transferred the complete Chapter 8 system to the new drive using rsync and continued from there.

The Kernel

Kernel configuration is where a from-scratch build diverges most dramatically from installing a distribution. A distribution kernel is configured for broad hardware compatibility — thousands of options enabled, most as modules, covering hardware the maintainers have never touched. Building your own kernel means making decisions about every significant option, and on bleeding-edge hardware those decisions matter.

Our hardware pushed us into several non-obvious configuration requirements. The Intel Core Ultra 5 245K uses x2APIC for interrupt routing, which requires IRQ_REMAP to be enabled — without it the kernel falls back to a compatibility mode that causes subtle performance problems. The USB SSD root filesystem requires USB_UAS (USB Attached SCSI), a protocol that treats USB storage devices as SCSI targets for better performance; without it the kernel can't reliably find the root partition during boot. The Intel Arc integrated graphics required its firmware loading infrastructure configured correctly, and the AMD discrete GPU needed its own driver considerations.

We built kernel 6.16.1 with these requirements in mind, enabling critical options as built-in rather than modules where early boot access was needed. The kernel compiled cleanly in under ten minutes across 14 cores.
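As .config fragments, the options called out above look like this (a partial list, built in rather than modular; confirm exact names in menuconfig for your kernel version):

```
# x2APIC interrupt routing needs interrupt remapping
CONFIG_X86_X2APIC=y
CONFIG_IRQ_REMAP=y
# USB Attached SCSI, so the USB SSD root device is visible at boot
CONFIG_USB_UAS=y
# Intel graphics built in (which is why firmware must ride in the initramfs)
CONFIG_DRM_I915=y
```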

The Initramfs Problem

Modern Linux boot requires an initramfs — a small temporary filesystem that the kernel unpacks into memory and uses to perform early boot tasks before the real root filesystem is mounted. On a USB drive, this is especially important: the kernel needs USB subsystem initialization to complete before it can find the root partition, and that takes time. Without an initramfs to manage this waiting period, the kernel tries to mount root immediately, finds nothing, and panics.

The standard approach on Ubuntu is mkinitramfs, a tool that generates initramfs images from the installed system. We tried it. It failed, and not in a straightforward way. Ubuntu's initramfs tooling is deeply integrated with its assumption that systems use LUKS disk encryption — our host system does, and that assumption was baked into every template and hook the tool used. The generated initramfs included LUKS-specific initialization code, tried to prompt for encryption passwords that didn't exist on our unencrypted USB drive, and produced a system that either hung silently or panicked before outputting anything useful.

The decision to abandon Ubuntu's tools entirely and build the initramfs from scratch was the right call, though it wasn't obvious at the time. What an initramfs actually needs to do is straightforward: mount the virtual filesystems, initialize devices, find the root partition, mount it, and hand control to the real init process. Ours needed to do one additional thing — wait for the USB subsystem to finish initializing before trying to find the root partition, since USB enumeration takes a moment after kernel startup.

We built a minimal initramfs using busybox — a single binary that implements dozens of standard Unix utilities in a fraction of the space. The init script was under 30 lines: mount proc, sysfs, and devtmpfs; run mdev to populate /dev; wait up to 30 seconds for the root partition to appear; mount it; execute switch_root to hand off to systemd. The entire initramfs weighed a few megabytes. It contained exactly what it needed and nothing else.
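In outline, that init script looks like this (a reconstruction of the shape, not the verbatim script; note the hardcoded device name, which comes back to bite us later):

```shell
cat > init <<'EOF'
#!/bin/sh
# Virtual filesystems the early userspace needs.
mount -t proc     proc     /proc
mount -t sysfs    sysfs    /sys
mount -t devtmpfs devtmpfs /dev

# Populate /dev from discovered devices.
mdev -s

# USB enumeration takes time: poll for the root partition, up to 30s.
i=0
while [ ! -b /dev/sda3 ] && [ "$i" -lt 30 ]; do
    sleep 1
    i=$((i + 1))
done

[ -b /dev/sda3 ] || { echo "root device not found"; exec sh; }

mkdir -p /mnt/root
mount -o ro /dev/sda3 /mnt/root
exec switch_root /mnt/root /sbin/init
EOF
chmod +x init
```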

First Boot

The first successful boot of a system you've built from nothing is a specific kind of moment. You've been working toward it for weeks. You've seen kernel panics and blank screens and GRUB rescue prompts. You've rebooted into the host system and back more times than you can count. And then the login prompt appears.

Sable Linux 1.0
Kernel 6.16.1 on an x86_64 (tty1)

SableLinux login:

Network came up immediately — 1Gbps full duplex, confirmed by the kernel message scrolling past during boot. systemd initialized cleanly. The system was minimal and root-only, with no graphical environment and no user applications beyond what LFS provides. But it was real, it was ours, and it worked.

We logged in as root, verified the basics — filesystem mounted correctly, systemd units running, network reachable — and committed the milestone to git before touching anything else.

Post-Build Stabilization: All the Things That Work But Don't Work Right

Getting a system to boot for the first time and getting it to boot reliably are two different problems. The first boot of SableLinux required manual intervention at the GRUB prompt, ran without GPU drivers, and had a hardcoded device path in the initramfs that would cause silent failures whenever USB enumeration didn't go exactly as expected. None of these were acceptable for a system heading toward public release. This section is the story of fixing them, in roughly the order we encountered them.

The GRUB Labyrinth

The symptom was consistent: power on the machine with the SableLinux USB SSD connected, watch the firmware hand off to the bootloader, and land at a raw GRUB command prompt instead of a boot menu. Every time. The workaround was typing configfile (hd0,gpt2)/grub/grub.cfg at the prompt to manually load the boot configuration. It worked, but it was not independence — it was a system that required a human to type the same command on every boot.

Diagnosing the root cause required peeling back several layers. The first layer was the EFI boot entry situation. Running efibootmgr from the host system revealed that SableLinux had no named boot entry in the firmware at all — it existed only as a generic "UEFI OS" fallback, third in the boot order behind Ubuntu and Windows. The firmware wasn't even trying to boot SableLinux by default. We registered a proper named entry pointing to the SableLinux EFI partition, moved it to first in boot order, and confirmed the change.
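The registration, roughly as run from the host (disk, partition number, label, and loader path reflect our final layout; boot-order entry numbers vary per machine, so that line is left as a comment). Saved as a script here so the flags are on record:

```shell
cat > register-sable.sh <<'EOF'
#!/bin/sh
# Create a named UEFI boot entry for SableLinux.
efibootmgr --create \
    --disk /dev/sda --part 1 \
    --label "SableLinux" \
    --loader '\EFI\SableLinux\grubx64.efi'

# Then move it to the front, e.g.:
#   efibootmgr --bootorder 0003,0000,0001
EOF
chmod +x register-sable.sh
```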

That fixed the boot priority but not the GRUB prompt problem. Digging deeper, we examined the EFI partition contents and found two issues. First, the grub.cfg file in the EFI partition was a complete duplicate of the boot configuration rather than a chainloader — it was supposed to find the /boot partition and load the real configuration from there, but instead it was trying to be the real configuration itself, while also failing to properly load the search_fs_uuid module needed to find the boot partition by UUID. When the module wasn't available, GRUB dropped to a prompt.

Second, and more fundamentally, the GRUB EFI installation was incomplete. The x86_64-efi GRUB modules were present in /boot/grub/ but the critical kernel.img file — required by grub-install to assemble a working EFI bootloader — was missing entirely. Only the i386-pc (legacy BIOS) version existed. GRUB had been set up for BIOS boot during the initial build, and the EFI files present were either manually copied or came from somewhere else without the complete module set. Running grub-install --target=x86_64-efi from inside the chroot environment failed through a chain of errors — wrong directory, missing files, efibootmgr not installed — each one requiring a specific workaround. We copied the complete x86_64-efi module set from the Ubuntu host, ran grub-install --no-nvram to skip the EFI variable registration (handled separately via efibootmgr), and ended up with a proper /EFI/SableLinux/grubx64.efi installation with a clean chainloader configuration in the EFI partition.

We replaced the EFI grub.cfg with a proper chainloader:

insmod part_gpt
insmod ext2
insmod search_fs_uuid
search --no-floppy --set=root --fs-uuid 13816e16-93ea-4e55-9b82-cfbb7946b7a0
set prefix=($root)/grub
configfile $prefix/grub.cfg

Six lines of grub.cfg. Weeks of work to understand why they were necessary.

The Initramfs Device Name Problem

While debugging the GRUB situation we noticed something alarming in the original initramfs init script: the root partition was hardcoded as /dev/sda3. On a USB drive, device names are not guaranteed. If another USB device is connected at boot — an external drive, a USB hub, anything that enumerates before the SSD — the kernel assigns device names in enumeration order, and the SSD becomes /dev/sdb. The initramfs would wait 30 seconds, fail to find /dev/sda3, print a failure message, and drop to a shell.

The fix was UUID-based root detection. Rather than looking for a specific device name, the new initramfs uses findfs UUID=<root-uuid> to locate the root partition regardless of what the kernel decided to call it. We also added proper mdev -s device settling before the search, ensuring all devices have had time to register before we start looking. The rebuilt initramfs also incorporated the Intel Arc firmware files — more on that shortly. The result was a boot process that works correctly whether the SSD is /dev/sda, /dev/sdb, or anything else.
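The revised detection logic, sketched (the UUID is a placeholder; busybox provides both mdev and findfs):

```shell
cat > init-root-detect.sh <<'EOF'
#!/bin/sh
# Let device registration settle before searching.
mdev -s

ROOT_UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # placeholder

# Poll by filesystem UUID; enumeration order no longer matters.
i=0
root_dev=""
while [ "$i" -lt 30 ]; do
    root_dev=$(findfs "UUID=$ROOT_UUID" 2>/dev/null) && break
    sleep 1
    i=$((i + 1))
done

[ -n "$root_dev" ] || { echo "root filesystem not found" >&2; exec sh; }
mount -o ro "$root_dev" /mnt/root
EOF
chmod +x init-root-detect.sh
```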

The GPU Situation

SableLinux targets a machine with two GPUs: an Intel Arc integrated GPU on the Core Ultra 5 245K (Meteor Lake) and a discrete AMD Radeon PowerColor card based on the RDNA4 architecture. Getting both working from a minimal LFS base required solving three separate problems.

The monitor cable. We mention this because it wasted more time than we'd like to admit. During early boot testing the system appeared to hang after the EFI stub loaded the kernel — no output, no response, just silence. The actual problem was that the monitor was plugged into the motherboard output (Intel Arc), but once the kernel took over display initialization it was sending output to the discrete AMD card. Moving the monitor cable to the AMD card's output immediately revealed that the system had been booting successfully all along. Check your cables before assuming a kernel panic.

Intel Arc firmware. The i915 driver for Intel Arc Meteor Lake requires several firmware blobs to initialize properly: mtl_dmc.bin, mtl_guc_70.bin, mtl_huc_gsc.bin, mtl_gsc_1.bin, and mtl_dmc_ver2_10.bin. These files exist on the Ubuntu host in /lib/firmware/i915/ but are stored in .zst compressed format. Our kernel was not built with CONFIG_FW_LOADER_COMPRESS_ZSTD, so it couldn't decompress them. Additionally, because i915 is compiled into the kernel rather than loaded as a module, it attempts to load firmware before the root filesystem is mounted — meaning the firmware files need to be in the initramfs, not just in /lib/firmware/ on the root partition.

We decompressed all five firmware files using zstd -d on the host, added them to the initramfs build script, and rebuilt the initramfs. The result on next boot was immediate and satisfying — the cascade of ERROR messages in dmesg was replaced by clean initialization output:

Finished loading DMC firmware i915/mtl_dmc.bin (v2.21)
GT0: GuC firmware i915/mtl_guc_70.bin version 70.36.0
GT1: HuC firmware i915/mtl_huc_gsc.bin version 8.5.4
Initialized i915 1.6.0 for 0000:00:02.0 on minor 0

No more "wedged GPU." Intel Arc fully operational.
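The decompression step itself is simple. It is demonstrated here on a stand-in file, since the real blobs live in /lib/firmware/i915/ on the host:

```shell
# Stand-in for one compressed firmware file.
printf 'not real firmware' > mtl_dmc.bin
zstd -q mtl_dmc.bin -o mtl_dmc.bin.zst
rm mtl_dmc.bin

# What we ran per file: decompress so a kernel built without
# CONFIG_FW_LOADER_COMPRESS_ZSTD can load the blob from the initramfs.
zstd -d -q mtl_dmc.bin.zst -o mtl_dmc.bin
```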

AMD RDNA4. The discrete AMD card presented a different kind of problem. It wasn't producing errors — it was producing nothing at all. It didn't appear in dmesg beyond a basic PCI enumeration entry. The reason turned out to be embarrassingly simple: CONFIG_DRM_AMDGPU was not set in the kernel configuration. There was no AMD GPU driver in the kernel whatsoever. All the firmware preparation we'd done was irrelevant without a driver to use it.

We enabled CONFIG_DRM_AMDGPU=m — as a module rather than built-in, which avoids the same firmware timing problem we encountered with i915, since modules load after the root filesystem is mounted and firmware is accessible — rebuilt the kernel, installed the new modules and kernel image, and rebooted. The AMD card initialized cleanly on the next boot, firmware loaded without issue, display output available from both GPUs.

System Stabilization

With boot independence and GPU support resolved, the remaining stabilization work was methodical rather than dramatic. Locale generation (en_US.UTF-8 via localedef), timezone configuration, hostname set to SableLinux, journal configured for persistent storage with a 500MB size limit. Network verified — systemd-networkd handling DHCP on all Ethernet interfaces, systemd-resolved providing DNS with Cloudflare and Quad9 as fallback resolvers, connectivity confirmed at 1Gbps.
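Written out as a script, the stabilization pass amounts to a handful of commands (values from the build; the timezone line is left generic, and the journald drop-in path is our choice of convention):

```shell
cat > stabilize.sh <<'EOF'
#!/bin/sh
set -e
# Compile the locale from source definitions.
localedef -i en_US -f UTF-8 en_US.UTF-8

# Hostname.
echo SableLinux > /etc/hostname

# Timezone (zone left generic):
#   ln -sf /usr/share/zoneinfo/<your-zone> /etc/localtime

# Persistent journal capped at 500MB.
mkdir -p /var/log/journal /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/sable.conf <<'CONF'
[Journal]
Storage=persistent
SystemMaxUse=500M
CONF
EOF
chmod +x stabilize.sh
```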

The /etc/os-release file got its final form:

NAME="SableLinux"
PRETTY_NAME="Sable Linux 1.0"
VERSION="1.0"
HOME_URL="https://sablelinux.dev"

A small thing. A real thing.

User setup completed the phase — root password confirmed, a non-root user created. sudo and SSH are BLFS packages, not yet installed. The system is minimal by design. Everything added from here is intentional.

System Identity and What Comes Next

The Moment It Becomes Real

There's a specific moment in this process where the system stops feeling like a build project and starts feeling like an operating system. It's not the first boot — that's exciting but raw, a proof of concept more than a product. It's not the GPU initialization, satisfying as that is. It's something smaller.

It's the first time you run cat /etc/os-release and see:

NAME="SableLinux"
PRETTY_NAME="Sable Linux 1.0"
HOME_URL="https://sablelinux.dev"

That's it. That's when it clicks. You built this. It has a name. It knows its own name. Every layer underneath that two-line output — the kernel, the dynamic linker, the shell, the filesystem, the init system — you put there deliberately, one decision at a time. No distribution did this for you. No installer made these choices on your behalf. This is yours in a way that no installed system ever is.

We committed that state to git, took fresh partition backups of all three partitions, and closed the LFS phase of the project.

What LFS Actually Teaches You

People sometimes ask whether building LFS is worth the time investment given that you end up with a system roughly equivalent to a minimal Debian or Arch install — fewer packages, less polish, more rough edges. The question misses the point entirely.

LFS isn't about the destination. It's about what happens to your mental model of Linux along the way. By the time you finish, you understand things that are genuinely difficult to learn any other way:

You understand the boot sequence not as abstraction but as a chain of specific programs handing control to the next specific program, each with dependencies that must be satisfied in order. You understand why the initramfs exists and what it actually does. You understand why firmware loading is separate from driver loading and why the timing matters. You understand what a dynamic linker does and why corrupting it bricks a system instantly. You understand why cross-compilation exists and what problem it solves.

More practically: you understand your system well enough to fix it when it breaks. Not by searching for someone else's solution to your specific error message, but by reasoning from first principles about what should be happening and what isn't. That skill transfers to every Linux system you'll ever touch.

Where SableLinux Goes From Here

The LFS base is a foundation, not a destination. A minimal LFS system is capable but spartan — it has no package manager, no graphical environment, no network tools beyond basic connectivity, no user-facing applications of any kind. Building it into a functional security research and AI platform requires the Beyond Linux From Scratch phase, and that work is already underway.

The immediate priorities are infrastructure: sudo and OpenSSH first, because working without privilege escalation and remote access is unnecessarily painful. Then the certificate infrastructure and download tools needed to pull packages reliably. Then Mesa with full AMD and Intel GPU support, enabling hardware-accelerated graphics as the foundation for everything graphical that follows.

From there the build expands in several directions simultaneously. The security and penetration testing stack — network analysis tools, exploitation frameworks, wireless tools — forms the core of SableLinux's identity as a research platform. The AI and LLM stack requires Python, the scientific computing libraries, and ROCm for AMD GPU compute acceleration, enabling the system to run local language models with hardware acceleration on the RDNA4 card. Gaming support means Vulkan, Steam, Wine, and controller infrastructure. Virtualization means QEMU/KVM and libvirt for running guest systems.

Each of these represents weeks of BLFS work. The dependency chains are deep — Mesa alone has dozens of dependencies, several of which have their own significant dependency trees. The LFS discipline of building from source, understanding what you're installing and why, continues throughout.

The longer-term vision for SableLinux is a distributable system — not just a personal build, but something with an installer, a proper release process, and documentation sufficient for other technically capable users to deploy and build on. The sable-install.sh script that will eventually handle automated installation on arbitrary target hardware is already sketched out in concept. The domain is registered. The GitHub repository is public.

We built the foundation. Now we build the system.


A Word About Backups — Or: How We Avoided Losing Everything Twice

If there is one piece of advice worth extracting from this entire build process and applying to any future project of similar complexity, it is this: take a backup before any operation that modifies files in place across the entire system, and take another one every time you reach a state worth preserving.

We learned the first half of that lesson the hard way during the Chapter 8 stripping disaster. We had taken a backup. It saved us. If we hadn't, weeks of work would have been gone. We learned the second half gradually, each time we reached a milestone and realized we'd be devastated to have to rebuild to that point from scratch.

Here's the complete backup history for SableLinux and exactly how each one was taken.


Backup 1 — Post Chapter 8 (Pre-Stripping)

When: Immediately after all Chapter 8 packages were installed, before running the binary stripping script.

Why: The stripping operation modifies files in place across the entire system. If anything goes wrong — and as documented above, something did go wrong — you need a clean restore point from before the operation.

What we had at this point: A loop device containing the complete LFS system, mounted at /mnt/lfs on the Ubuntu host. The loop device itself was a file on the encrypted NVMe drive.

The procedure:

# Unmount everything cleanly first
umount -R /mnt/lfs

# Create compressed image of the entire loop device
dd if=/dev/loop0 bs=4M status=progress | gzip > /mnt/external/lfs-ch8-complete.img.gz

Result: A 4.7GB compressed image containing the complete post-Chapter 8 system state. This became the ultimate fallback — the oldest restore point, still worth keeping even after subsequent backups superseded it for most purposes.

The lesson it taught: When the stripping script corrupted the dynamic linker, this image was the difference between a one-hour recovery and starting Chapter 8 over entirely. Take the backup. Always.


The Migration Backup

When: After migrating from the USB 2.0 hard drive to the 500GB USB 3.0 SSD, before beginning Chapter 9.

Why: The migration itself — partitioning the new drive, copying the system via rsync, configuring the new partition layout — was a point of potential failure. Having a verified working state on the new hardware before proceeding was essential.

At this point the system lived on three partitions on the USB SSD rather than a loop device, so the backup strategy shifted from a single loop device image to per-partition images. We used dd for the EFI and boot partitions (small enough that full-partition imaging is fast) and introduced partclone for the root partition.

Why partclone instead of dd for root: dd reads every block of a partition regardless of whether it contains data. Our root partition was 463GB, with only 17GB of blocks actually in use. dd would read and compress 463GB of mostly empty blocks — taking 40+ minutes and producing a needlessly large image. partclone reads only the blocks in use, completing the same backup in under 4 minutes and producing a proportionally smaller image.
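The back-of-envelope math behind that estimate, assuming a sustained USB 3 read rate of roughly 200 MB/s (an assumed figure, not a measurement):

```bash
part_gb=463    # total partition size
used_gb=17     # blocks actually in use, per partclone
rate_mb=200    # assumed sustained read rate in MB/s

# dd must read every block; partclone reads only the used ones
full_min=$(( part_gb * 1024 / rate_mb / 60 ))
used_sec=$(( used_gb * 1024 / rate_mb ))
echo "dd, all blocks:        ~${full_min} minutes"
echo "partclone, used only:  ~${used_sec} seconds"
```

The observed 3m42s is slower than the raw-read estimate because gzip sits in the pipeline, but the order-of-magnitude gap is the point.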

The procedure:

```bash
# Verify nothing is mounted
findmnt | grep sable

# EFI partition — dd is fine, 512MB
sudo dd if=/dev/sda1 bs=4M status=progress | gzip > sable-efi.img.gz

# Boot partition — dd is fine, 2GB
sudo dd if=/dev/sda2 bs=4M status=progress | gzip > sable-boot.img.gz

# Root partition — partclone for efficiency
sudo apt install partclone -y
sudo partclone.ext4 -c -s /dev/sda3 | gzip > sable-root.img.gz
```

partclone output confirmed:

```
File system:  EXTFS
Device size:  497.4 GB = 121441025 Blocks
Space in use:  17.0 GB = 4144619 Blocks
Free Space:   480.4 GB = 117296406 Blocks
Total Time: 00:03:42, Ave. Rate: 4.59GB/min, 100.00% completed!
```

3 minutes 42 seconds versus an estimated 40+ minutes with dd. The resulting image was 2.9GB compressed — representing 17GB of actual data on a 463GB partition.


Backup 2 — Post Stabilization (Current State)

When: After achieving independent boot, full GPU support, and system stabilization — immediately before beginning BLFS.

Why: This represents the most significant milestone since Chapter 8. Independent boot, Intel Arc firmware, AMD RDNA4 driver, locale, hostname, journal configuration, user setup — all of it done and verified. This is the state we'd want to return to if early BLFS work destabilizes the system.

The procedure: Identical to the migration backup above. Unmount everything cleanly first — backing up a mounted filesystem risks producing an inconsistent image.

```bash
# Unmount the full stack
sudo umount /mnt/sable/dev/pts
sudo umount /mnt/sable/dev
sudo umount /mnt/sable/proc
sudo umount /mnt/sable/sys
sudo umount /mnt/sable/run
sudo umount /mnt/sable/boot/efi
sudo umount /mnt/sable/boot
sudo umount /mnt/sable

# Verify
findmnt | grep sable

# Then backup
cd /mnt/two/backups/sable-system

sudo dd if=/dev/sda1 bs=4M status=progress | gzip > sable-efi.img.gz
sudo dd if=/dev/sda2 bs=4M status=progress | gzip > sable-boot.img.gz
sudo partclone.ext4 -c -s /dev/sda3 | gzip > sable-root.img.gz
```

Final backup inventory:

```
lfs-ch8-complete.img.gz   4.7G   — Full loop device, post-Ch8, oldest fallback
sable-efi.img.gz           12M   — EFI partition, current
sable-boot.img.gz         151M   — /boot partition, current
sable-root.img.gz         2.9G   — Root partition, current (partclone)
```

Total backup storage: approximately 7.6GB for a complete system with two restore points.


Restore Procedures

Restoring EFI or boot partitions (dd images):

```bash
gzip -dc sable-efi.img.gz | sudo dd of=/dev/sdX1 bs=4M status=progress
gzip -dc sable-boot.img.gz | sudo dd of=/dev/sdX2 bs=4M status=progress
```

Restoring root partition (partclone image):

```bash
gzip -dc sable-root.img.gz | sudo partclone.restore -o /dev/sdX3
```
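After a root restore it's worth letting e2fsck confirm the filesystem before attempting to boot. It behaves identically on an image file and on a partition, so here's a demonstration on a small scratch image (no root privileges needed for a plain file; on the real system the target would be the restored partition, e.g. `sudo e2fsck -fn /dev/sdX3`):

```bash
# Build a tiny ext4 filesystem inside a regular file (scratch demo)
img=/tmp/restore-check.img
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
mkfs.ext4 -q -F "$img"

# -f forces a full check even if the fs is marked clean; -n keeps it read-only
e2fsck -fn "$img" && echo "filesystem clean"
```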

Important after any restore: If the drive has been reformatted or the UUIDs have changed, update /etc/fstab and both grub.cfg files with the new UUIDs before attempting to boot. The system uses UUIDs everywhere — in fstab, in /boot/grub/grub.cfg, in /boot/efi/EFI/SableLinux/grub.cfg, and in the initramfs init script. All four must match the actual partition UUIDs or the system will not boot.

```bash
# Check actual UUIDs after restore
blkid /dev/sdX1 /dev/sdX2 /dev/sdX3
```
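The mechanical fix is a find-and-replace of the stale UUID in each of those files. Demonstrated here on a throwaway stand-in for fstab, with placeholder UUIDs (the real targets are /etc/fstab, both grub.cfg files, and the initramfs init script, with UUIDs taken from blkid output):

```bash
# Placeholder UUIDs: substitute the values blkid reported before and after the restore
old=11111111-aaaa-bbbb-cccc-222222222222
new=33333333-dddd-eeee-ffff-444444444444

# Throwaway stand-in for /etc/fstab
printf 'UUID=%s / ext4 defaults 1 1\n' "$old" > /tmp/fstab.demo

# GNU sed -i rewrites the file in place; repeat once per config file, then re-grep to confirm
sed -i "s/$old/$new/g" /tmp/fstab.demo
grep -c "$new" /tmp/fstab.demo
```

Grepping for the old UUID afterward across all four files is the cheap way to prove nothing was missed.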

The Backup Philosophy Going Forward

BLFS introduces a new complication: the system is no longer minimal. As packages accumulate — Mesa, X11 or Wayland, security tools, language runtimes — the root partition grows and the time cost of a full restore increases. The backup strategy should evolve accordingly.

Our approach going forward: take a fresh partclone image of the root partition at the completion of each major BLFS milestone — after the display stack is working, after the security toolchain is in place, after the AI stack is functional. Keep the two most recent milestone backups plus the post-LFS baseline. That gives three restore points at any time without consuming unreasonable storage.
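A rotation like that is simple enough to script. A sketch with illustrative milestone names, where `touch` stands in for the real partclone-plus-gzip pipeline and the dates make the ordering explicit:

```bash
backup_dir=/tmp/sable-milestones
mkdir -p "$backup_dir"

# Stand-in for: sudo partclone.ext4 -c -s /dev/sda3 | gzip > root-<milestone>.img.gz
take_milestone() {
    touch -d "$2" "$backup_dir/root-$1.img.gz"
}
take_milestone display-stack  "2025-01-01"
take_milestone security-tools "2025-02-01"
take_milestone ai-stack       "2025-03-01"

# Keep the two newest milestone images; the post-LFS baseline
# lives elsewhere and is never rotated
ls -1t "$backup_dir"/root-*.img.gz | tail -n +3 | xargs -r rm --
ls -1 "$backup_dir"
```

Run after each milestone backup, this leaves exactly the two most recent images in place and deletes the rest.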

The cost of a backup is minutes. The cost of not having one when you need it is days. Take the backup.


SableLinux is developed openly at github.com/black-vajra/sablelinux. Follow the project's progress at bordercybergroup.com. The distribution targets a public release within one year.