In DirectX 11 and classic DirectX 12, the CPU had to record every single GPU task in a massive linear list. If a game needed to calculate shadows, then physics, then lighting, the CPU had to sit there, line by line, building that list.
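To make the old model concrete, here is a minimal sketch of that CPU-side recording in plain D3D12. The pipeline objects, root signature, and group counts are placeholders invented for illustration; the point is that every pass exists only because the CPU wrote a Dispatch for it, in order, with barriers in between.

```cpp
#include <d3d12.h>

// Hypothetical per-pass pipeline objects; in a real engine these are
// created at load time from compiled compute shaders.
struct FramePasses {
    ID3D12RootSignature* rootSig;
    ID3D12PipelineState* shadowsPSO;
    ID3D12PipelineState* physicsPSO;
    ID3D12PipelineState* lightingPSO;
};

// The CPU records every GPU task, one line at a time, into a linear list.
void RecordFrame(ID3D12GraphicsCommandList* cmdList, const FramePasses& p,
                 UINT shadowGroups, UINT physicsGroups, UINT lightingGroups)
{
    cmdList->SetComputeRootSignature(p.rootSig);

    // UAV barrier reused between passes so each pass sees the previous results.
    D3D12_RESOURCE_BARRIER uav = {};
    uav.Type = D3D12_RESOURCE_BARRIER_TYPE_UAV;
    uav.UAV.pResource = nullptr; // barrier on all UAV accesses

    cmdList->SetPipelineState(p.shadowsPSO);   // 1. shadows
    cmdList->Dispatch(shadowGroups, 1, 1);
    cmdList->ResourceBarrier(1, &uav);

    cmdList->SetPipelineState(p.physicsPSO);   // 2. physics
    cmdList->Dispatch(physicsGroups, 1, 1);
    cmdList->ResourceBarrier(1, &uav);

    cmdList->SetPipelineState(p.lightingPSO);  // 3. lighting
    cmdList->Dispatch(lightingGroups, 1, 1);
}
```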
The GPU finally learned to manage itself. Developers just have to learn to let go.
We have reached a point where CPUs aren't getting much faster; they are just getting more cores. Work Graphs finally admit that the GPU is the star of the show. By letting the GPU manage itself, Microsoft has effectively removed the traffic cop from the intersection.
The terror comes from memory. Because the GPU can now generate effectively unbounded work (a particle system that explodes into a million more particles), developers can no longer rely on static buffers. Microsoft solved this with backing memory: a safety net where excess work spills over into system memory without crashing the driver.
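A rough sketch of how that safety net is wired up on the host side, assuming the Work Graphs interfaces shipped with the DirectX 12 Agility SDK (ID3D12WorkGraphProperties, D3D12_SET_PROGRAM_DESC and friends); the AllocateGpuBuffer helper is hypothetical, standing in for whatever default-heap allocation path an engine already has. The app asks the state object how much backing memory the graph wants, allocates a buffer of at least that size, and hands it over when binding the program:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical helper: creates a default-heap buffer usable as scratch
// memory for the work graph scheduler.
ComPtr<ID3D12Resource> AllocateGpuBuffer(ID3D12Device* device, UINT64 sizeInBytes);

// Query the graph's memory needs and bind a backing buffer for it.
void BindWorkGraph(ID3D12Device* device,
                   ID3D12GraphicsCommandList10* cmdList,
                   ID3D12StateObject* workGraphSO,
                   const wchar_t* programName)
{
    ComPtr<ID3D12WorkGraphProperties> props;
    workGraphSO->QueryInterface(IID_PPV_ARGS(&props));

    // How much scratch space does the scheduler want for in-flight records?
    D3D12_WORK_GRAPH_MEMORY_REQUIREMENTS memReqs = {};
    UINT graphIndex = props->GetWorkGraphIndex(programName);
    props->GetWorkGraphMemoryRequirements(graphIndex, &memReqs);

    ComPtr<ID3D12Resource> backing =
        AllocateGpuBuffer(device, memReqs.MaxSizeInBytes);

    ComPtr<ID3D12StateObjectProperties1> identifiers;
    workGraphSO->QueryInterface(IID_PPV_ARGS(&identifiers));

    D3D12_SET_PROGRAM_DESC program = {};
    program.Type = D3D12_PROGRAM_TYPE_WORK_GRAPH;
    program.WorkGraph.ProgramIdentifier = identifiers->GetProgramIdentifier(programName);
    program.WorkGraph.Flags = D3D12_SET_WORK_GRAPH_FLAG_INITIALIZE;
    program.WorkGraph.BackingMemory = { backing->GetGPUVirtualAddress(),
                                        memReqs.MaxSizeInBytes };
    cmdList->SetProgram(&program);
}
```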
With Work Graphs, the GPU launches a "Node." That node processes the work. If it needs more work (a second bounce, a third bounce, a particle effect that spawns more particles), it spawns a child node right there on the silicon.
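On the host side, all the CPU does now is feed the entry node a few seed records and step back; the fan-out described above (child nodes, extra bounces, spawned particles) happens entirely inside the node shaders. A minimal sketch, again assuming the Agility SDK Work Graphs API, with a made-up ExplosionRecord standing in for whatever input record the entry node actually declares:

```cpp
#include <d3d12.h>

// Illustrative record type; in practice it must match the input record
// declared by the graph's entry node in HLSL.
struct ExplosionRecord {
    UINT  particleCount;
    float position[3];
};

// Kick off the entry node from the CPU once; any further fan-out
// is generated by the nodes themselves on the GPU.
void LaunchGraph(ID3D12GraphicsCommandList10* cmdList)
{
    ExplosionRecord seed = { 1024, { 0.0f, 0.0f, 0.0f } };

    D3D12_DISPATCH_GRAPH_DESC dispatch = {};
    dispatch.Mode = D3D12_DISPATCH_MODE_NODE_CPU_INPUT;
    dispatch.NodeCPUInput.EntrypointIndex     = 0; // index of the root node
    dispatch.NodeCPUInput.NumRecords          = 1;
    dispatch.NodeCPUInput.pRecords            = &seed;
    dispatch.NodeCPUInput.RecordStrideInBytes = sizeof(ExplosionRecord);

    cmdList->DispatchGraph(&dispatch);
}
```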
For decades, programming a graphics card has felt like managing a chaotic restaurant kitchen. The CPU (the head chef) had to shout every single instruction: chop the onions, boil the water, plate the steak. If the kitchen fell behind, the chef had to stop everything to micro-manage the cleanup.