Meridian — Governed Agent Runtime
Live host demo · Loom-backed workloads

See the runtime, then see what runs on top of it.

This page is not the Loom landing page and not the whole product thesis. It shows how Loom + Kernel look on the live host, then shows the first-party Meridian workflows that currently prove the runtime under real pressure.

Live host snapshot

These charts read directly from the live proof routes on this host. They stay here so the runtime picture remains visible while you scroll through the first-party workflow examples.
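As a hedged sketch of what reading such a route could look like: the payload shape, field names, and the three dashboard lines below are illustrative assumptions, not Loom's documented proof-route API.

```python
import json

# Hypothetical payload a live proof route might return.
# Every field name here is an assumption for illustration only.
SAMPLE_PAYLOAD = json.dumps({
    "runtime": {"footprint_mb": 48, "workers": 3},
    "queue": {"depth": 12, "oldest_s": 4},
    "proof": {"boundary": "intact"},
})

def render_snapshot(raw: str) -> list[str]:
    """Turn a raw proof-route payload into the three dashboard lines."""
    data = json.loads(raw)
    return [
        f"Runtime footprint now: {data['runtime']['footprint_mb']} MB "
        f"across {data['runtime']['workers']} workers",
        f"Queue pressure now: depth {data['queue']['depth']}, "
        f"oldest item {data['queue']['oldest_s']}s",
        f"Proof boundary now: {data['proof']['boundary']}",
    ]

for line in render_snapshot(SAMPLE_PAYLOAD):
    print(line)
```

The point of the sketch is only that the charts are backed by plain, inspectable routes rather than a private dashboard feed.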

Runtime

Footprint now

[live chart: runtime footprint]

Queue

Pressure now

[live chart: queue state]

Proof

Boundary now

[live chart: proof status]

What a Loom-backed signal stream looks like

The workflow example here is intentionally commercial because it pressures the runtime in a useful way. But treat it as a first-party app on Loom, not as Loom's identity.

tracked rivals: 3

OpenAI, Anthropic, Google.

watched themes: 8

Models, pricing, policy, deprecations, and runtime risk.

operator state: governed

The workflow stays inside Loom + Kernel instead of loose docs and chat.

OpenAI

Pricing shifts, new models, forced migrations, and API deprecations become inputs to a governed signal stream instead of random bookmarks.

Anthropic

Context changes, Claude launches, and safety posture changes become cited inputs instead of rumor-driven updates.

Google

Gemini pricing, preview shutdowns, and Vertex changes land as governed runtime artifacts rather than scattered notes.

What the alert slice looks like

This is a shortened reference slice. The point is to show how a first-party Loom workload turns signal into operator-ready shape without turning this page into a wall of prose.

Compressed alert

1. OpenAI pricing shape changed across regions.
2. Anthropic context-per-dollar improved.
3. Google deprecations raised migration risk.

Operator response

1. Rework provider cost models by region.
2. Check preview dependency risk now.
3. Update GTM talk tracks with the pricing shifts.
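The pairing above can be pictured as a minimal record type. Everything in this sketch, the class name, the fields, and the positional alert-to-response pairing, is an illustrative assumption rather than the workload's real artifact schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AlertSlice:
    """A compressed alert paired with the operator response it triggers."""
    alert: str
    response: str

# Paired positionally for illustration, mirroring the two lists above.
SLICE = [
    AlertSlice("OpenAI pricing shape changed across regions.",
               "Rework provider cost models by region."),
    AlertSlice("Anthropic context-per-dollar improved.",
               "Check preview dependency risk now."),
    AlertSlice("Google deprecations raised migration risk.",
               "Update GTM talk tracks with the pricing shifts."),
]

for item in SLICE:
    print(f"- {item.alert} -> {item.response}")
```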

Reuse the same governed stream twice

One governed stream becomes two different first-party outputs: a calmer leadership brief and a sharper GTM asset. That reuse matters because it proves Loom can back real operator workflows instead of isolated toy prompts.

Weekly brief

1. OpenAI pricing shape changed.
2. Google preview shutdown cadence remained risky.
3. Competition kept shifting toward price and deployment trust.

Battlecard slice

1. Recent move: GPT-5.4 plus migration churn.
2. Pricing read: regional surcharge plus batch discount.
3. Buyer question: have you modeled the regional impact yet?

One governed stream, two reusable outputs. That is why the demo still matters even after Loom became the product front door: it proves the runtime can carry real, repeated workflow pressure.
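A minimal sketch of that reuse claim: one list of governed signals feeding two renderers. The function names and the idea of recasting each signal as a talking point are assumptions for illustration, not Loom's actual output pipeline.

```python
# One governed stream of signals (wording taken from the brief above).
STREAM = [
    "OpenAI pricing shape changed.",
    "Google preview shutdown cadence remained risky.",
    "Competition kept shifting toward price and deployment trust.",
]

def weekly_brief(signals: list[str]) -> str:
    """Calmer leadership framing: numbered, verbatim signals."""
    return "\n".join(f"{i}. {s}" for i, s in enumerate(signals, 1))

def battlecard_slice(signals: list[str]) -> str:
    """Sharper GTM framing: the same signals recast as talking points."""
    return "\n".join(f"* Talking point: {s}" for s in signals)

print(weekly_brief(STREAM))
print(battlecard_slice(STREAM))
```

The design point is that the stream is the single source of truth; each output is a cheap view over it, so neither drifts from the other.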

What this host proves cleanly

Will claim

1. Loom + Kernel are active on this host.
2. First-party workflows can run on that runtime and produce operator-facing output.
3. The proof routes and boundary remain inspectable while those workflows run.

Will not fake

1. Every future hosted deployment mode.
2. Universal delivery channel maturity.
3. Automatic proof of broad commercial scale.

If you want the runtime itself, go to /loom. If you want the truth line for all claims, read the boundary note.

Want the runtime instead of the example?

Install Loom · Compare Loom vs the field · See the current first-party pilot path