Meridian — Constitutional OS
Six platform primitives · first managed vertical

Meridian — Governed Competitor Intelligence on a Constitutional Kernel

This demo shows the first managed vertical running on Meridian's constitutional kernel, Commitment-aware platform layer, and live Loom runtime boundary. It is not a claim that every future deployment mode is already live.

What this demo proves: Meridian can govern a real workflow through the kernel's five primitives, compose Commitment as a sixth platform primitive, and execute through Loom.

Launch posture: the public install story for the local runtime is a single command:

curl -fsSL https://raw.githubusercontent.com/mapleleaflatte03/meridian-loom/main/scripts/install.sh | bash

What this demo does not prove: broad cross-runtime adapter maturity or live multi-institution routing. Those remain separate architecture programs and must stay bounded by the live host truth line.

Step 1

Set Your Competitor Watchlist

You tell Meridian which competitors to track and what topics matter. This workflow runs as Meridian's first managed vertical: a governed institution operating registered agents against monitored competitor sources. The point is governed work, not just a cool multi-agent story.

Watchlist: Acme Corp (sample customer)
Competitor | Topics | Sources Monitored | Status
OpenAI | pricing, models, API changes | 3 (blog, changelog, pricing page) | Active
Anthropic | pricing, models, safety | 2 (blog, model docs) | Active
Google (Gemini) | pricing, deprecations, Vertex AI | 3 (DeepMind blog, pricing page, deprecation page) | Active

Watchlists are configured per customer. Meridian's pipeline automatically prioritizes sources for tracked competitors.
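A watchlist like the one above is just structured configuration. The sketch below shows one plausible shape for it; the class and field names are illustrative assumptions, not Meridian's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class WatchlistEntry:
    # Hypothetical schema mirroring the columns of the sample watchlist.
    competitor: str
    topics: list[str]
    sources: list[str]
    status: str = "active"

acme_watchlist = [
    WatchlistEntry("OpenAI", ["pricing", "models", "API changes"],
                   ["blog", "changelog", "pricing page"]),
    WatchlistEntry("Anthropic", ["pricing", "models", "safety"],
                   ["blog", "model docs"]),
    WatchlistEntry("Google (Gemini)", ["pricing", "deprecations", "Vertex AI"],
                   ["DeepMind blog", "pricing page", "deprecation page"]),
]

# The pipeline prioritizes sources belonging to active watchlist entries.
priority_sources = {e.competitor: e.sources
                    for e in acme_watchlist if e.status == "active"}
```

Per-customer configuration would then reduce to maintaining one such list per account.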

Step 2

Receive Daily Competitor Alerts

When Meridian is running for a team, it produces a cited intelligence alert covering the most important changes across tracked competitors. Below is a qualified March 2026 pipeline output shown as a reference example.

Competitor Intelligence Alert — March 17, 2026

Covering: OpenAI, Anthropic, Google (Gemini)

Pricing OpenAI introduces regional surcharges and batch discounts for GPT-5.4
OpenAI's pricing page now shows a 10% regional processing surcharge on all GPT-5.4 models and a 50% Batch API discount on both input and output costs. Standard pricing applies under 270K context tokens. This is the first time OpenAI has added geography-based pricing to its API, creating new cost dynamics for customers choosing between providers. Teams running high-volume batch workloads may see significant savings; teams in surcharge regions face a material cost increase.
Source: openai.com/api/pricing (fetched 2026-03-17)
Product Anthropic holds pricing steady while expanding Claude to 1M-token context
Anthropic shipped Claude Opus 4.6 (Feb 5) and Sonnet 4.6 (Feb 17) without raising prices. Opus stays at $5/$25 per million tokens; Sonnet stays at $3/$15. Both models gained a 1M-token context window in beta. Opus 4.6 shows state-of-the-art results on agentic coding benchmarks (Terminal-Bench 2.0) and long-context reliability. For competitive positioning: Anthropic is expanding capability without price increases, making the "context window per dollar" comparison increasingly favorable.
Source: anthropic.com/news/claude-opus-4-6 (2026-02-05), anthropic.com/news/claude-sonnet-4-6 (2026-02-17)
Deprecation Google hard-deprecates Gemini 3 Pro Preview
Google shut down gemini-3-pro-preview on March 9, 2026 and recommends migrating to Gemini 3.1 Pro Preview. This is a fast deprecation cycle — the preview model was live for less than 8 weeks. Teams building on Gemini preview models need active migration tracking. Google's deprecation cadence is notably faster than OpenAI's or Anthropic's.
Source: ai.google.dev/gemini-api/docs/pricing (2026-03-09 shutdown date)
Pricing Google positions Gemini Flash as the low-cost high-volume option
Gemini 2.5 Flash is priced at $0.30/1M input and $2.50/1M output tokens with a 1M-token context window and 50% Batch API discount. This undercuts both OpenAI and Anthropic on raw token cost for high-volume workloads. Google is clearly competing on price for the inference-heavy segment.
Source: ai.google.dev/gemini-api/docs/pricing (fetched 2026-03-17)
Product OpenAI accelerates release cadence with GPT-5.4
OpenAI introduced GPT-5.4 on March 5 as its most capable model, then migrated existing GPT-5.1 conversations to GPT-5.3/5.4 by March 11. This indicates a monthly release cadence that operators need to track. GPT-5.4 emphasizes coding, computer use, tool search, and a 1M-token context window — converging with Anthropic on context size.
Source: openai.com/index/introducing-gpt-5-4 (2026-03-05), help.openai.com release notes (2026-03-11)

Action Items

Review OpenAI's regional surcharges to assess cost impact on your deployment regions.
Update internal pricing comparisons: Anthropic and Google are both more competitive per token than 30 days ago.
If using Gemini preview models, check deprecation timelines now. Google's cycle is fast.
Track OpenAI's monthly release cadence — plan for model migration as a recurring task, not an exception.
Risk to Watch: Provider pricing is diverging. OpenAI is adding surcharges, Google is competing on price, and Anthropic is holding steady. Teams using multiple providers need updated cost models, and Gemini's fast deprecation cycle adds migration risk for preview-model adopters.

Above: qualified March 17, 2026 pipeline output used as a reference example. The real brief (605 words, 5 findings, 5 sources, QA PASS) is delivered as formatted text when Meridian runs for a team. This demo shows the same category of output in a visual layout.
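The surcharge and discount figures in the alert above reduce to simple per-token arithmetic. A minimal sketch, using only the numbers the findings cite; how real provider billing compounds a surcharge with a batch discount is an assumption here:

```python
def monthly_cost(input_mtok, output_mtok, in_price, out_price,
                 surcharge=0.0, batch_discount=0.0):
    """USD cost for a month of usage, volumes in millions of tokens.

    surcharge and batch_discount are fractions (0.10 = 10%); applying
    them multiplicatively is an illustrative simplification.
    """
    base = input_mtok * in_price + output_mtok * out_price
    return base * (1 + surcharge) * (1 - batch_discount)

# Gemini 2.5 Flash, as cited: $0.30 / $2.50 per 1M tokens, 50% batch discount.
flash = monthly_cost(100, 20, 0.30, 2.50, batch_discount=0.50)
print(f"Gemini Flash, 100M in / 20M out, batched: ${flash:.2f}")  # prints $40.00
```

The same function with `surcharge=0.10` models the regional-surcharge case, which is why region-aware cost comparison now matters.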

Step 3

Weekly Intelligence Brief

At the end of each week, Meridian curates the most important competitive moves into a single brief. Below is a representative weekly summary.

Weekly Competitive Brief — Week of March 11–17, 2026

Competitors tracked: OpenAI, Anthropic, Google

Top Competitive Moves This Week

1. OpenAI introduces geography-based API pricing (first in industry)
GPT-5.4 now carries a 10% regional surcharge. Combined with the 50% batch discount, OpenAI is segmenting its pricing more aggressively than before. This is a strategic shift from flat global pricing.
2. Google deprecates Gemini 3 Pro Preview after less than 8 weeks
Fastest deprecation cycle among major providers. Signals Google's willingness to iterate quickly at the cost of API stability. Migration burden falls on developers.
3. Context window convergence at 1M tokens across all three providers
OpenAI (GPT-5.4), Anthropic (Opus/Sonnet 4.6), and Google (Gemini Flash) all now offer 1M-token context. Context window is no longer a differentiator — pricing, reliability, and tooling are.
4. Anthropic maintains price stability while competitors adjust
No pricing changes from Anthropic since initial launch. In a week where OpenAI added surcharges and Google pushed aggressive Flash pricing, Anthropic's stability is itself a competitive signal.

Takeaway

The LLM provider market is shifting from capability competition to pricing and deployment model competition. Teams should update their vendor comparison matrices and revisit total cost of ownership by region.

Step 4

Battlecard: On-Demand Competitor Snapshot

Request a battlecard for any tracked competitor. Structured for sales enablement, exec briefings, or product planning.

Battlecard: OpenAI

Generated March 18, 2026 | Data from pipeline findings | HIGH CONFIDENCE (5 findings, 3 from last 7 days)
Recent Moves
Launched GPT-5.4 on March 5 with 1M-token context, coding focus, and tool-search capabilities
Migrated existing GPT-5.1 users to GPT-5.3/5.4 by March 11 (fast forced migration)
Introduced regional processing surcharges (10%) — first geography-based API pricing in the industry
Added 50% Batch API discount for high-volume workloads
Pricing Intelligence
GPT-5.4: standard pricing under 270K context; 10% surcharge for regional processing
Batch API: 50% reduction on both input and output — significant for high-volume users
Pricing is now more complex than Anthropic's flat model; comparison requires region-aware calculation
Product Updates
GPT-5.4 positioned as coding + agentic work model (competing directly with Claude Opus 4.6)
1M-token context matches Anthropic and Google offerings
Monthly release cadence (5.1 -> 5.3 -> 5.4 in 6 weeks) shows accelerating iteration
Talking Points
OpenAI is the first provider to add regional surcharges — ask customers if they've modeled the impact
Batch discount is compelling for offline workloads but does not apply to real-time use cases
Fast model migration (5.1 -> 5.4) means customers face integration churn; position stability as a selling point
Context window parity at 1M tokens means the differentiator is now pricing, tooling, and reliability
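The battlecard header labels itself "HIGH CONFIDENCE (5 findings, 3 from last 7 days)". One plausible way such a label could be derived is sketched below; the thresholds and function name are guesses for illustration, not Meridian's actual scoring rule.

```python
from datetime import date, timedelta

def battlecard_confidence(finding_dates, today, recent_days=7,
                          min_findings=5, min_recent=3):
    # Illustrative rule: enough findings overall, enough of them recent.
    recent = [d for d in finding_dates
              if (today - d) <= timedelta(days=recent_days)]
    if len(finding_dates) >= min_findings and len(recent) >= min_recent:
        return "HIGH"
    return "MEDIUM" if len(finding_dates) >= min_findings else "LOW"

# Dates drawn from the findings cited in the battlecard above.
findings = [date(2026, 3, 5), date(2026, 3, 11),
            date(2026, 3, 17), date(2026, 3, 17), date(2026, 2, 17)]
print(battlecard_confidence(findings, today=date(2026, 3, 18)))  # prints HIGH
```

A battlecard generated from stale findings would drop to MEDIUM or LOW under this rule, which is the behavior a sales-enablement consumer would want surfaced.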
Step 5

Delivery

Meridian routes outputs into the channels a team actually uses. In this demo, competitor intelligence is the first reference workload running on top of the constitutional kernel, the Commitment-aware platform layer, and the live Loom runtime. The workload is the wedge; the platform boundary is the durable thing.

[Flow: Nightly Pipeline (30+ sources scanned, QA-verified) → Daily Alert (founder-led pilot now, automation later) → Your Team (PMMs, CI, PMs, Sales Enablement)]
Current Delivery Channels
Channel | Cadence | Status
Telegram (daily alert) | Founder-delivered during the current pilot | Manual pilot
Telegram Channel (preview) | Preview surface, posted explicitly when a brief is ready | Standby
MCP Protocol (developer API) | On demand | Live
Email | Daily / weekly | Planned
Slack | Daily / weekly | Planned
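Routing across channels in mixed states (live, pilot, planned) is a small gating problem. The sketch below mirrors the channel table's statuses; the dictionary keys and gating rule are illustrative assumptions, not Meridian's routing code.

```python
# Status values mirror the delivery channel table; names are hypothetical.
CHANNELS = {
    "telegram_daily":   {"cadence": "daily",     "status": "manual_pilot"},
    "telegram_channel": {"cadence": "on_ready",  "status": "standby"},
    "mcp":              {"cadence": "on_demand", "status": "live"},
    "email":            {"cadence": "daily",     "status": "planned"},
    "slack":            {"cadence": "daily",     "status": "planned"},
}

def deliverable_channels(channels=CHANNELS):
    # Only live or founder-operated channels receive output today;
    # "planned" and "standby" channels are excluded until they ship.
    return [name for name, c in channels.items()
            if c["status"] in ("live", "manual_pilot")]

print(deliverable_channels())  # the founder-delivered Telegram alert and MCP
```

Keeping "planned" channels in the table but out of the routing result is what keeps the delivery promise honest in code as well as in copy.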

How the Pipeline Works

Meridian is built to run governed workflows like this one on a nightly cadence:

Phase | What Happens | Output
1. Research | Fetch 30+ sources: provider blogs, changelogs, pricing pages, tech news, community feeds. Watchlist competitors get priority. | 8–12 sourced findings
2. Write | Draft the competitor intelligence alert from findings. Frame each finding as competitive intelligence with a source citation. | 400+ word alert
3. QA | Multi-agent verification: check source freshness, citation accuracy, minimum quality bar (200+ words, 5+ sources). | PASS / FAIL decision
4. Deliver | Route the approved alert through the currently honest delivery path: founder-led manual pilot now, automated channels only after Meridian reaches treasury-cleared automation through customer-backed phase progression. | Telegram / manual delivery log
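The four phases above can be sketched as one nightly run. This is a shape sketch only: the function names and stage wiring are assumptions, and the only concrete numbers are the QA thresholds quoted in the table (200+ words, 5+ sources).

```python
def qa_check(alert_text, sources, min_words=200, min_sources=5):
    # Minimum quality bar from the QA phase of the table above.
    words = len(alert_text.split())
    return "PASS" if words >= min_words and len(sources) >= min_sources else "FAIL"

def nightly_run(fetch, draft, deliver):
    """fetch/draft/deliver are injected stage callables (hypothetical names)."""
    findings = fetch()                      # 1. Research: sourced findings
    alert = draft(findings)                 # 2. Write: cited alert text
    sources = {f["source"] for f in findings}
    verdict = qa_check(alert, sources)      # 3. QA: PASS / FAIL decision
    if verdict == "PASS":
        deliver(alert)                      # 4. Deliver: current honest path only
    return verdict
```

Gating delivery on the QA verdict is the point: a FAIL run produces a log entry, not a customer-facing alert.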

This is the workflow Meridian is built to run every night. In the current treasury-gated phase, the honest customer path is a founder-led manual pilot rather than pretending this automation is live for every new customer today. The kernel matters even when the delivery promise stays narrow.

Start With A Real Pilot

If this output looks useful, start with a founder-led 7-day pilot. Same category of output, same source discipline, no fake automation story.

Request Manual Pilot · Support The Build · Email Sơn