But right now, you're watching this video because one, God put at least one eyeball on your head. And two, because an operating system decided you could. It might be macOS if you're rich, Windows if you've given up on life, or Linux if you're part of the 3% of based gigachads out there. Either way, your CPU is juggling something like 400 other programs, Chrome is eating 20 GB of RAM for no reason, and somehow your cursor still moves when you wiggle the mouse. This is not normal. It's a miracle performed thousands of times per second by the most underappreciated software ever written, the operating system. The first one, GM-NAA I/O, was shipped in 1956 at General Motors because an engineer decided humanity had better things to do than hand-feed punch cards into a two-story IBM mainframe. It could run exactly one program at a time, had no memory protection, no users, no files, and it still crashed less than Windows Millennium Edition. It's now 70 years later, and in today's video, you'll finally understand how the bootloader, processes, scheduling, threads, system calls, virtual memory, interrupts, privilege rings, I/O, index nodes, and more all work together in perfect harmony by figuring out exactly what happens in your computer from the moment you press the power button to the moment you rage quit and shut it down.
Stage one, the bootloader. You press the power button, electricity hits the motherboard, and the CPU wakes up in the most primitive state possible. At this point, there's no memory management or even a concept of files. It's just a single core executing instructions at a hard-coded address burned into the firmware. On modern machines, that firmware is UEFI; on ancient ones, it was called BIOS. The firmware's job is to wake up just enough hardware to find a disk, then hand off to a bootloader. On Linux, it's called GRUB, the grand unified bootloader. On Mac, it's called iBoot, and on Windows, it's the Windows Boot Manager. But the job of the bootloader is simple: find the kernel on disk and load it into RAM. That's the handoff. And at this point, the CPU is running kernel code with full hardware privileges. But everything interesting about your computer, like files, processes, windows, and so on, doesn't even exist yet. The kernel has to build it all from scratch over the next few seconds.

Before we go any further, though, we need to understand stage two, privilege rings. Your CPU is protected by multiple privilege levels. On x86, there are four, but basically only two matter: ring zero for the kernel, which can do basically anything, and ring three for user space, which can run applications but needs to ask permission for everything else. Currently, the kernel is running C code in ring zero with no guardrails. One wrong pointer and the entire machine catches on fire. That's why kernel developers drink. But if we didn't have this ring separation enforced by the CPU itself, every program would be able to read every other program's memory and crash your entire system. With the privilege rings in place, a buggy program can usually only crash itself.

But now we enter stage three, where the kernel tells the biggest lie in computing: virtual memory. Here's how the scam works. When a program requests a memory address, that address doesn't exist. It's a fake virtual address that gets translated into a real physical address by a piece of hardware called the MMU, the memory management unit, which itself uses a
data structure called a page table that the kernel is building right now. Memory is handed out in chunks called pages, typically four kilobytes each. But what's really interesting is that each process gets its own page table, and that means two applications can run side by side without screwing each other over. Your browser can't read your password manager's memory and vice versa. They live in parallel universes that only the kernel can see between. Pretty cool. The MMU also caches recent translations in a tiny structure called the TLB, the translation lookaside buffer, a translation meaning a virtual address mapped to a physical address. When a program touches a page that isn't in RAM, the MMU raises a page fault, which wakes the kernel up to load the page from disk and resume the program like nothing happened.

And now that we have these memory lies in place, it's time for stage four, the file system. Your disk at the lowest level is a long line of numbered blocks, and a file system is the software that lies about that and presents you with nice files and folders instead. The kernel mounts the file system, but the files themselves are tracked as something called index nodes, or inodes. An index node isn't actually the file itself. Instead, it contains metadata like size, permissions, and timestamps, and most importantly, pointers to the actual data blocks on disk. But it's also important to notice what's not in the index node: the file name. You see, file names live in directories, which are themselves just special files mapping names to index node numbers. This is why you can have multiple file names pointing to the same file. Now, file systems come in a variety of different personalities, like ext4, NTFS, and APFS, just to name a few. But what's cool about modern file systems is that they use a feature called journaling, which writes your intentions down before writing the data. And that means if you accidentally yank the power cord mid-write, you don't end up with a bunch of corrupted garbage, most of the time.
stage five, device drivers and interrupts. But first, we need to quickly talk about Railway, who was cool enough to sponsor this 11-minute video on esoteric operating system knowledge. It's an all-in-one intelligent cloud provider that lets you deploy anything in a few clicks. So, instead of drowning in YAML, you can just connect your repo and Railway will read your code and set up the right config for you automatically. Or you can use the new Railway CLI with their official agent skills and let Codex, Claude Code, or one of your favorite AI agent minions do all the dirty work instead.
Developers love how easy it is to spin up any service you want, and how Railway only charges you for the resources you actually use, not what you provision, which can save over 65% on cloud costs. Sign up for free today at the link below and you'll get $20 in free credits when you upgrade.

Pardon the interruption, but now it's time to talk about device drivers and interrupts. At this point, we have memory and a file system, so the kernel starts loading device drivers, which are specialized bits of code that translate generic kernel requests into the specific hieroglyphics a given chip understands. Each piece of external hardware, like your GPU, Wi-Fi card, and keyboard, gets a driver loaded from disk and registered with the kernel. Drivers actually run in kernel mode, which means one buggy driver can crash the entire operating system. The Windows blue screen of death often comes from graphics drivers, and a couple of years ago, a bad driver released by the cybersecurity company CrowdStrike almost took down the entire global economy. But once the drivers are loaded, the kernel enables something called interrupts. Like, have you ever wondered how the keyboard tells the operating system a key was pressed? Well, the OS doesn't sit there in an infinite loop asking. Instead, the keyboard fires an interrupt, which is an electrical signal that yanks the CPU out
of whatever it's doing and jumps to an interrupt handler in the kernel. Interrupts are the magic that lets your computer react instantly to input. When you move the mouse, an interrupt fires and the cursor moves. When your Wi-Fi card receives data from the internet, an interrupt fires and the network stack wakes up so applications can use it. Basically, the entire machine is driven by tiny electrical screams from hardware saying, "Hey, something happened. Deal with it."

But now that all the drivers are loaded, we can move on to stage six, PID 1, the first process. The kernel is now fully operational, but it's lonely. So it creates the first user space program, PID 1. On Linux, that's usually systemd. A process is just a running program, but creating one means the kernel needs to allocate memory, load an executable from disk, set up the virtual address space and page table we talked about earlier, and finally add an entry to a giant data structure called the process table. This is how the kernel keeps score, and every process gets a PID, or process ID. PID 1 is special because it's the ancestor of every other process on the machine. If PID 1 dies, the kernel panics and the whole system goes down. But a key thing to understand here is that PID 1 runs in ring three, in user space. And that means from this moment forward, everything running on your machine needs to ask the
kernel for permission. And that's where stage 7 comes in: system calls, which might be the single most important API in computing that you've never actually written by hand. When a process wants to read a file, it can't just reach into the disk. The CPU will physically refuse. Instead, it has to make a system call: it puts arguments into specific registers, triggers a special instruction, and the CPU switches from ring three back to ring zero. This boundary is the only reason your computer is secure. Like, if you've ever programmed in C, you might think you're some kind of hardcore low-level engineer, but when you use a function like printf in your C code, you may not realize it's actually making a write system call under the hood. On Linux, there are around 400 different system calls, and these are the actual API of your computer. Everything else is just a library built on top. Two of the most important system calls are fork and exec, which are used to create new processes in user space. But by the time you actually see the desktop, you'll have dozens or even hundreds of running processes.

And that's where stage 8 comes in: the scheduler. But wait a minute, your computer only has eight CPU cores. So how does it actually manage hundreds of processes at the same time? You can think of the scheduler like an air traffic controller at a busy airport. The processes are the airplanes, and the scheduler decides who gets to
land on the runway, which is the CPU in this case. Kernel developers have tried many different architectures to deal with this challenge, but modern Linux now uses a technique called earliest eligible virtual deadline first, which sounds like a boss in Elden Ring, but it uses a variety of rules to ensure that every process gets its fair share of CPU time. But then we run into another problem: some applications want to do multiple things at once without the overhead of multiple processes. And that's where stage 9 comes in: threads. A thread shares the same memory and file descriptors as its siblings, but has its own stack and its own program counter. And now suddenly, one program can do two different things in parallel. But it's also like having a loaded gun pointed at your foot. The threads share memory, and writing to the same variable at the same time can produce race conditions. Modern programming languages try to prevent you from shooting yourself in the foot, like Go's goroutines and channels, or the Rust borrow checker, which refuses to compile threaded code that could race. Threads are awesome, but two entirely different applications can't just share memory like that. So what do we do when they need to communicate with each other? Well, that brings us to stage 10: IPC, interprocess communication.
Imagine I open up a terminal in Linux and want to read a file with cat. That gives me some output, but now I want to search for a specific term in that output. The easiest solution is to use a pipe, which was invented in 1973 and is still undefeated. It lets us combine cat and grep, which are two totally separate processes. The operating system creates a pipe so the output of one becomes the input of the other. No shared memory, just a stream of bytes flowing between them. Pretty cool. And there are other IPC techniques, like sockets and message queues, but the main idea is the same: they let two processes communicate safely. Congratulations, you now understand how your operating system works at a low level.

But now it's time to rage quit. Let's find out what happens in stage 11 when we hit the shutdown button. PID 1 sends a signal called SIGTERM to every process, which is a polite way of asking them to stop what they're doing. A well-behaved process will save its state and quit. But after a timeout, another signal gets sent, called SIGKILL. This is game over. The file system flushes its journals and unmounts, drivers release their hardware, the kernel syncs memory to disk, interrupts are disabled, and finally the CPU comes to a halt. The firmware cuts power and your screen goes black. You're finally free to go outside and touch some grass. Thanks for watching, and I will see you in the next one.