Our clients have been asking for this. The same organizations using Ornn to hedge GPU compute exposure are managing billions in memory procurement with no derivatives market to turn to. Modern AI workloads are increasingly memory-bound. Training large language models demands not just GPU compute but massive amounts of high-bandwidth memory (HBM), and the servers housing those chips need terabytes of system memory. The AI memory wall is real, and procurement teams are feeling it.
The supply side makes volatility inevitable. The memory market is dominated by three manufacturers: Samsung, SK Hynix, and Micron. With so few producers, capacity decisions made years in advance collide with unpredictable demand from AI buildouts, consumer electronics cycles, and data center expansion, and small mismatches between the two produce outsized price swings.
Memory and GPU prices often move together during AI demand spikes, but they respond to different supply constraints. Offering derivatives on both allows participants to trade the spread, hedge more precisely, and construct positions that match their actual infrastructure exposure. This is the natural next step in building out the full financial stack for compute infrastructure.
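To make the spread idea concrete, here is a minimal sketch of the mark-to-market P&L on a long-memory / short-GPU position. Every number in it (prices, lot sizes, quantities) is a hypothetical illustration, not an Ornn contract term:

```python
from dataclasses import dataclass

@dataclass
class FuturesLeg:
    """One leg of a spread: positive qty = long, negative qty = short."""
    entry_price: float   # price per unit at trade time
    lot_size: float      # units per contract
    qty: int             # number of contracts (signed)

    def pnl(self, mark_price: float) -> float:
        """Mark-to-market P&L of this leg at the given price."""
        return (mark_price - self.entry_price) * self.lot_size * self.qty

# Hypothetical spread: long memory futures, short GPU-hour futures.
memory_leg = FuturesLeg(entry_price=3.10, lot_size=10_000, qty=5)   # $/GB, 10k-GB lots
gpu_leg = FuturesLeg(entry_price=2.40, lot_size=1_000, qty=-20)     # $/GPU-hr, 1k-hr lots

# If memory rallies while GPU-hours stay flat, the spread gains:
spread_pnl = memory_leg.pnl(3.40) + gpu_leg.pnl(2.40)
print(round(spread_pnl, 2))  # 15000.0
```

The point of the two-legged structure is that a participant exposed to memory prices but already hedged on compute can isolate exactly the leg that matches their infrastructure.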
We're not launching into a vacuum. Ornn's existing network of market makers, institutional traders, and infrastructure operators gives us immediate liquidity in new products. The relationships we've built as the central venue for compute derivatives translate directly into memory. The participants are the same, the hedging needs are adjacent, and the trust is already established.
Contract specifications follow the same principles that made our GPU products successful: monthly settlements, cash-settled against transparent spot pricing, and lot sizes calibrated to real procurement volumes. Whether you're a hyperscaler locking in next quarter's memory costs or a fund expressing a view on the semiconductor cycle, the product is built for serious participants.
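The cash-settlement mechanics above are simple to sketch: at expiry, the long receives (or pays) the difference between the settlement index and the traded price, scaled by lot size, and no physical memory changes hands. The figures below are invented for illustration and are not actual contract terms:

```python
def cash_settlement(entry_price: float,
                    settlement_index: float,
                    lot_size: float,
                    contracts: int) -> float:
    """Cash transfer to the long at expiry.

    Positive means the long is paid; negative means the long pays.
    No physical delivery occurs in a cash-settled contract.
    """
    return (settlement_index - entry_price) * lot_size * contracts

# Hypothetical hyperscaler hedge: lock in next quarter's memory cost by going long.
# If the spot index rises, the futures gain offsets the higher procurement price.
hedge_gain = cash_settlement(entry_price=3.00, settlement_index=3.50,
                             lot_size=10_000, contracts=8)
print(hedge_gain)  # 40000.0
```

The same position run against a falling index produces a symmetric negative payoff, which is the cost of certainty the hedger accepts in exchange for a locked-in procurement price.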