Hey, great read as always. The LLM part on code transformations is super interesting. What if this could lead to self-optimizing compilers adapting to specific hardware in real time?
Great question, Daniel. The ‘code the transforms’ approach hints at that path: if transforms are explicit and auditable, a compiler can adapt them per-device using runtime/profile signals while checking that semantics are preserved. We dug into the enabling pieces—LLVM’s modular backends, ORC JIT, PGO/sample-based feedback, and where MLIR fits—in our Deep Engineering feature with Quentin Colombet: "Deconstructing Codegen: How LLVM’s Modular Backends Enable Portable, Maintainable Optimization." You can read it here: https://deepengineering.substack.com/p/deep-engineering-11-quentin-colombet
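If you want to poke at the runtime side yourself before reading the piece, here is a minimal sketch of the building block it discusses: loading IR into ORC's LLJIT so code generation targets whatever host the process is actually running on. It assumes a reasonably recent LLVM (roughly 15 or later, where lookup returns an ExecutorAddr) and a hypothetical kernel.ll file exporting an int kernel(int) function; both names are placeholders for illustration, not anything from the article.

```cpp
// jit_sketch.cpp -- minimal ORC LLJIT sketch (assumes LLVM ~15+).
// "kernel.ll" and the function name "kernel" are hypothetical placeholders.
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;
using namespace llvm::orc;

int main() {
  // Register the native target so the JIT can emit code for this machine.
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  // Parse a module from a hypothetical IR file.
  auto Ctx = std::make_unique<LLVMContext>();
  SMDiagnostic Err;
  auto M = parseIRFile("kernel.ll", Err, *Ctx);
  if (!M) {
    Err.print("jit_sketch", errs());
    return 1;
  }

  // detectHost() picks up the running CPU and its features, so the same IR
  // gets codegen'd differently on different machines -- the per-device part.
  auto JTMB = cantFail(JITTargetMachineBuilder::detectHost());
  auto JIT = cantFail(
      LLJITBuilder().setJITTargetMachineBuilder(std::move(JTMB)).create());

  // Hand the module to the JIT, then look up and call the function.
  cantFail(JIT->addIRModule(ThreadSafeModule(std::move(M), std::move(Ctx))));
  auto Sym = cantFail(JIT->lookup("kernel"));
  auto *Kernel = Sym.toPtr<int(int)>();
  errs() << "kernel(21) = " << Kernel(21) << "\n";
  return 0;
}
```

Build it against your LLVM install (e.g. with the flags from llvm-config); the feature article goes into how profile feedback and MLIR layers sit on top of this kind of runtime pipeline.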