Heterogeneous Compute Physics
"The structure of computation shapes the space of possible algorithms. When hardware was uniform, computation could be abstracted. When hardware is heterogeneous, computation becomes an object of scientific study."
General Diffusion trains foundational AI models that learn the behavior of heterogeneous hardware — predicting performance, partitioning workloads, generating optimized kernels, and controlling infrastructure autonomously. This research establishes a new scientific discipline: Heterogeneous Compute Physics.
Foundational Models
Hardware Profiler
HP1
Systems
Graph Neural Network that learns behavioral fingerprints from execution traces. Predicts performance across CPUs, GPUs, ASICs, and FPGAs, including hardware it has never seen.
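A minimal sketch of the idea in NumPy: ops from an execution trace become graph nodes, a few rounds of message passing produce a graph-level fingerprint, and a hardware descriptor conditions a latency estimate. The layer sizes, features, and the gnn_fingerprint / predict_latency helpers are illustrative assumptions, not HP1's actual architecture.

```python
# Illustrative only; weights are random and untrained.
import numpy as np

rng = np.random.default_rng(0)

def gnn_fingerprint(node_feats, adj, rounds=2, dim=16):
    """Embed an execution-trace graph: nodes are traced ops, edges are dependencies."""
    n, f = node_feats.shape
    w_in = rng.normal(scale=0.1, size=(f, dim))
    w_msg = rng.normal(scale=0.1, size=(dim, dim))
    h = np.tanh(node_feats @ w_in)
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    for _ in range(rounds):
        # Mean-aggregate neighbor messages, transform, add residual.
        h = np.tanh((adj @ h) / deg @ w_msg + h)
    return h.mean(axis=0)                           # graph-level fingerprint

def predict_latency(fingerprint, hw_descriptor):
    """Condition the fingerprint on a hardware descriptor and regress a latency estimate."""
    x = np.concatenate([fingerprint, hw_descriptor])
    w_out = rng.normal(scale=0.1, size=x.shape[0])
    return float(np.exp(x @ w_out))                 # latencies are positive

# Toy trace: 4 ops with 5 profiled features each, plus a dependency graph.
feats = rng.normal(size=(4, 5))
adj = np.array([[0,1,0,0],[0,0,1,1],[0,0,0,1],[0,0,0,0]], dtype=float)
hw = np.array([1.8, 900.0, 40.0])                   # e.g. GHz, GB/s, MB cache
print(predict_latency(gnn_fingerprint(feats, adj), hw))
```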
Graph Partition Intelligence
GP1
Compilers
Reinforcement learning agent with a Graph Transformer architecture. Discovers optimal workload-to-substrate assignments for static, dynamic, and agentic workloads.
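The sketch below frames the assignment problem this card describes: a per-op cost model over substrates, a makespan objective with transfer penalties, and exhaustive search standing in for the learned policy's argmax. The op list, cost table, and TRANSFER_MS value are hypothetical.

```python
# Toy partitioning objective; exhaustive search stands in for GP1's policy.
import itertools

OPS = ["embed", "attn", "mlp", "decode"]
COST = {                      # hypothetical milliseconds per op per device
    "cpu":  {"embed": 4.0, "attn": 9.0, "mlp": 7.0, "decode": 5.0},
    "gpu":  {"embed": 1.0, "attn": 2.0, "mlp": 1.5, "decode": 2.5},
    "fpga": {"embed": 2.0, "attn": 6.0, "mlp": 3.0, "decode": 1.0},
}
TRANSFER_MS = 1.5             # paid whenever consecutive ops change device

def makespan(assignment):
    """Serial chain model: sum compute costs plus transfer penalties."""
    total = sum(COST[dev][op] for op, dev in zip(OPS, assignment))
    total += TRANSFER_MS * sum(a != b for a, b in zip(assignment, assignment[1:]))
    return total

best = min(itertools.product(COST, repeat=len(OPS)), key=makespan)
print(best, makespan(best))
```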
Compute Model (World Model)
CM1
Theory
Transformer-based latent dynamics model predicting system state across three timescales: millisecond-scale hardware events, second-scale migrations, and minute-scale thermal dynamics.
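A toy latent-dynamics rollout, assuming linear per-timescale transitions in place of CM1's Transformer: one latent advances every millisecond step, one every second, and one every minute, then all three decode to a predicted observation. Dimensions and update rates are assumptions.

```python
# Illustrative only; random linear dynamics stand in for the learned model.
import numpy as np

rng = np.random.default_rng(1)
OBS, LATENT = 8, 4

enc = rng.normal(scale=0.3, size=(OBS, LATENT))      # observation -> latent
dec = rng.normal(scale=0.3, size=(LATENT * 3, OBS))  # latents -> observation
A_ms, A_s, A_min = (np.eye(LATENT) + rng.normal(scale=0.05, size=(LATENT, LATENT))
                    for _ in range(3))

def rollout(obs, steps_ms):
    """Advance three latent states at their own rates:
    every step (ms), every 1000 steps (s), every 60000 steps (min)."""
    z_ms = z_s = z_min = np.tanh(obs @ enc)
    for t in range(1, steps_ms + 1):
        z_ms = np.tanh(z_ms @ A_ms)                   # hardware events
        if t % 1_000 == 0:
            z_s = np.tanh(z_s @ A_s)                  # migration dynamics
        if t % 60_000 == 0:
            z_min = np.tanh(z_min @ A_min)            # thermal dynamics
    return np.concatenate([z_ms, z_s, z_min]) @ dec   # predicted observation

print(rollout(rng.normal(size=OBS), steps_ms=2_000))
```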
Policy Optimization
PO1
Systems
Model-based safe RL using CM1's world model. Learns execution policies that optimize resource allocation subject to hard safety constraints.
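A shielded action-selection sketch under assumed models: candidate actions are scored with a stand-in world model, actions predicted to violate a hard thermal limit are filtered out, and the remainder is maximized for throughput. The world_model, TEMP_LIMIT_C, and action space here are hypothetical, not PO1 itself.

```python
# Illustrative constraint-filtered action selection.
import numpy as np

rng = np.random.default_rng(2)
TEMP_LIMIT_C = 85.0

def world_model(state, action):
    """Stand-in for CM1: predict (throughput, peak_temp_C) for an action."""
    freq, batch = action
    throughput = batch * freq * (1.0 - 0.05 * state["utilization"])
    peak_temp = state["temp_c"] + 4.0 * freq + 0.5 * batch
    return throughput, peak_temp

def safe_argmax(state, candidates):
    """Keep only actions predicted to satisfy the hard constraint, then pick
    the best; fall back to the most conservative action if none qualify."""
    feasible = [(world_model(state, a)[0], a) for a in candidates
                if world_model(state, a)[1] <= TEMP_LIMIT_C]
    if not feasible:
        return min(candidates)        # lowest frequency and batch size
    return max(feasible)[1]

state = {"temp_c": 62.0, "utilization": 0.4}
candidates = [(f, b) for f in (1.0, 2.0, 3.0) for b in (8, 16, 32)]
print(safe_argmax(state, candidates))
```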
Compute Generation
CG1
Compilers
LLM pretrained on kernel code, fine-tuned with RLHF from hardware execution feedback. Every generated kernel undergoes formal semantic equivalence verification.
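A sketch of the acceptance gate, with differential testing against a reference kernel standing in for the formal semantic equivalence verification described above (random testing is only a weak stand-in for a formal proof). The softmax kernels, tolerance, and trial count are illustrative.

```python
# Illustrative acceptance gate: reject a generated kernel that diverges
# from a trusted reference on any sampled input.
import numpy as np

rng = np.random.default_rng(3)

def reference_softmax(x):
    """Trusted reference kernel."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def candidate_softmax(x):
    """Hypothetical model-generated kernel under test."""
    shifted = x - x.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def accept(candidate, reference, trials=100, shape=(4, 64), atol=1e-6):
    """Reject the candidate on the first input where outputs diverge."""
    for _ in range(trials):
        x = rng.normal(scale=5.0, size=shape)
        if not np.allclose(candidate(x), reference(x), atol=atol):
            return False
    return True

print(accept(candidate_softmax, reference_softmax))
```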
Runtime Safety
RS1
Infrastructure
Circuit breaker architecture nested within PO1. Continuous constraint enforcement, runtime validation against reference implementations, and automatic escalation on violation.
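One possible shape for such a circuit breaker, sketched in plain Python: every optimized-path result is validated against a reference path, and repeated violations trip the breaker into an escalated state that serves only the reference. The class name, threshold, and fallback behavior are assumptions about how RS1-style enforcement could be wired in.

```python
# Illustrative circuit breaker wrapping an optimized execution path.
class CircuitBreaker:
    """Validate each optimized result against a reference path;
    trip open and escalate after repeated violations."""

    def __init__(self, optimized, reference, validate, max_violations=3):
        self.optimized = optimized
        self.reference = reference
        self.validate = validate          # (result, reference_result) -> bool
        self.max_violations = max_violations
        self.violations = 0
        self.open = False                 # open = escalated, bypass optimized path

    def run(self, *args):
        expected = self.reference(*args)
        if self.open:
            return expected               # escalation: serve only the safe path
        result = self.optimized(*args)
        if self.validate(result, expected):
            return result
        self.violations += 1
        if self.violations >= self.max_violations:
            self.open = True              # automatic escalation on violation
        return expected                   # never surface an invalid result

# Usage with a deliberately buggy optimized path.
breaker = CircuitBreaker(
    optimized=lambda x: x * 2 + 1,        # wrong on purpose
    reference=lambda x: x * 2,
    validate=lambda got, want: got == want,
)
print([breaker.run(i) for i in range(5)], breaker.open)
```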