Testnet Phase #3 (DRAFT/WIP)
Phase #3 advances Cortensor from “trust-layer hardening” into economic + agentic hardening — finalizing payments, staking utility, and reward automation, while continuing router iteration so agents can reliably delegate, validate, and reach consensus on results.
Building on Phase #2’s stable baseline, self-managed L3 traction, and expanded router surfaces, Phase #3 treats Testnet-0 + Testnet-1a as the primary proving grounds — with pre-mainnet environments used for mainnet-identical dry runs.
Phase #3 — Payments, Staking Utility & Incentive Mechanics
(Privacy & data mobility: optional / staged but strongly tested.)
Timeline: End Q1 → Early Q2 2026
Goal: Finalize economic flows and automate incentives while pushing agent-ready router surfaces.
Focus: Sustainability, pricing correctness, incentive fairness, and real-world data movement under load.
Contest Rules
Stake Requirement
All participants must stake 10,000 $COR per node (Pool 3) to qualify for Testnet Phase #3.
Nodes must remain staked and active throughout the testing window to receive rewards.
Node Uptime & Performance
Operators must:
Maintain continuous uptime and responsiveness to assigned sessions.
Support payment-aware sessions (SessionPayment metadata + fee attribution) without breaking telemetry.
Run nodes capable of handling redundant workflows (multi-run / multi-miner consensus requirements).
Ensure correct registration + telemetry sync in the primary environments:
Testnet-0 (Arbitrum Sepolia)
Testnet-1a (Self-managed COR L3)
Participation & Reporting
Participants are expected to:
Report anomalies in payments, quotas, rewards, and routing behavior — not just inference/validation bugs.
Help validate staking-to-use accounting and NodeReward automation under real network traffic.
Share structured logs + findings through Discord reporting channels.
Disqualification Criteria
Cortensor reserves the right to disqualify any operator for:
Misreporting uptime or falsifying telemetry data.
Exploiting staking, rewards, routing, or payment mechanisms.
Any conduct or manipulation that undermines fairness, sustainability, or network stability.
Rewards
Rewards follow the same structure as Phase #2, unless explicitly updated in the Phase #3 announcement.
Phase #3 emphasis shifts toward:
automating reward computation, and
validating the parameters used for level-based rewards and capacity signals (see Testing Focus).
Testing Focus
1) Payment Flow Refinement (SessionPayment “Production Shape”)
Phase #3 makes payments robust enough to scale:
Tune unit costs, rebates, and surcharge parameters.
Expand SessionPayment metadata for pricing transparency:
execution depth / validator depth
redundancy profile (miners, consensus method, threshold)
model class + route hints
Improve gas tracking and cost attribution (per session / per route / per validation path).
Validate fee correctness under real traffic:
predictable totals
stable accounting
clean failure handling.
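As a concrete illustration of "predictable totals" under tuned unit costs, rebates, and surcharges, here is a minimal fee-quote sketch. The field names and the integer/basis-point arithmetic are assumptions for illustration, not the on-chain SessionPayment schema:

```python
from dataclasses import dataclass

@dataclass
class SessionPaymentQuote:
    """Hypothetical pricing inputs; names are illustrative, not the deployed schema."""
    base_unit_cost: int   # cost per task, in the smallest COR unit
    tasks: int
    redundancy: int       # miners running each task
    surcharge_bps: int    # e.g. SLA surcharge, in basis points
    rebate_bps: int       # e.g. prepay rebate, in basis points

def quote_total(q: SessionPaymentQuote) -> int:
    """Integer-only total so accounting is reproducible on-chain and off."""
    gross = q.base_unit_cost * q.tasks * q.redundancy
    gross += gross * q.surcharge_bps // 10_000   # apply surcharge
    gross -= gross * q.rebate_bps // 10_000      # apply rebate
    return gross
```

Keeping all math in integers (no floats) is what makes totals predictable and accounting stable across clients, routers, and contracts.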
2) Dynamic Pricing Activation (SLA / Depth / Redundancy Aware)
Turn the pricing scaffolding into live economics:
Activate SessionPaymentTable with richer inputs:
SLA tier / reliability tier
model class (light vs heavy)
execution depth / tool depth
validator depth
redundancy level + consensus requirements.
Validate pricing is consistent, manipulation-resistant, and understandable.
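A pricing table of this shape can be sketched as a multiplier lookup over the inputs listed above. The multiplier values and the linear validator-depth term are placeholder assumptions for tuning, not live SessionPaymentTable parameters:

```python
# Illustrative multipliers only; real values come from SessionPaymentTable tuning.
PRICING = {
    "sla": {"best_effort": 1.0, "standard": 1.25, "premium": 2.0},
    "model_class": {"light": 1.0, "heavy": 4.0},
}

def price_units(base_units: int, sla: str, model_class: str,
                validator_depth: int, redundancy: int) -> int:
    """Price is monotone in the guarantees requested: stronger SLA, heavier
    model, more validators, or more redundancy can only raise the cost."""
    mult = PRICING["sla"][sla] * PRICING["model_class"][model_class]
    units = base_units * mult * redundancy * (1 + 0.1 * validator_depth)
    return int(round(units))
```

Monotonicity is one simple, checkable property that makes pricing both understandable and harder to game: no combination of "stronger" inputs should ever yield a cheaper session.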
3) Staking-to-Use (Stake Utility → Quotas → Auto-Deduction)
Primary economics feature for Phase #3:
Extend SessionPayment to support stake-based free inference quotas.
Enable auto-deduction from user stake balances for paid overages.
Define quota accounting + guardrails:
refresh cadence (daily / weekly)
usage buckets (by model class / redundancy)
abuse prevention (rate limits / per-address caps).
4) Node Reward Automation + Level Parameter Tuning
(Model Count as Capacity Signal)
Phase #3 moves rewards toward a predictable, automated system:
Implement automated reward computation linked to:
Node Level / NodeReputation
validator feedback (agreement rate, consistency, failure behavior)
supporting signals: throughput + uptime.
Shift “fairness” tuning to level reward parameter adjustments driven by capacity signals:
reward parameters incorporate model count / routing breadth (how many models a node supports).
ensure incentives reflect useful coverage + reliability, not just raw hardware advantage.
prevent dominance by a narrow set of high-end configs by rewarding breadth + consistency.
Automate reward computation + payout pipeline to reduce manual ops overhead.
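One way to express "model count as a capacity signal" is a capped breadth bonus in the reward formula, so breadth and consistency matter but a narrow high-end config cannot dominate. All weights here are assumptions for tuning discussions, not final level parameters:

```python
def node_reward(base: float, level: int, agreement_rate: float,
                uptime: float, model_count: int) -> float:
    """Illustrative reward formula: Node Level sets the tier, validator
    agreement and uptime scale it, and model count (routing breadth) adds
    a capped capacity bonus so breadth can't be farmed without limit."""
    breadth_bonus = 1.0 + 0.05 * min(model_count, 10)  # capped at +50%
    return base * level * agreement_rate * uptime * breadth_bonus
```

The cap is the fairness lever: past ten supported models, adding more yields no extra reward, which pushes operators toward useful coverage and reliability rather than raw hardware advantage.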
5) Router Surfaces (v3/v4) + Consensus/Redundancy Attributes (Agent-Ready)
Phase #3 pushes router endpoints closer to agent primitives:
Iterate on /delegate and /validate toward router v3:
richer policy hints (budget/risk/latency)
structured outputs + tool traces
retries/fallbacks under partial failure.
Add explicit redundancy + consensus attributes in requests:
number of miners / sessions
aggregation method (majority; future: weighted, median, etc.)
agreement thresholds
disagreement/arbitration rules.
Apply the same consensus model to factcheck and any new endpoints required by agent workflows.
(These features lay the groundwork for router v4: programmable trust + reusable validation artifacts.)
6) PyClaw – Router-Aligned Agent Runtime (Design & Validation)
PyClaw is a separate product layer on top of Cortensor — a local-first agent runtime aligned with router/session primitives:
Continue early PyClaw design so it lines up with:
/completion, /delegate, /validate, and factcheck surfaces.
session IDs, routing policies, redundancy attributes, and validation contracts.
Local-first design goals:
a control plane (CLI/chat/voice adapters) that routes commands to the right agent/session and keeps a global audit log.
a state-machine brain loop: interpret → plan → policy-gate → act → observe → reflect → improve.
tools + memory + strict context compaction for safety and efficiency.
Scaling intent:
start as single-host daemon; keep contracts ready for multi-agent / remote workers later.
Phase #3 emphasis is design + validation, so Phase #4 can begin deeper implementation work.
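The brain loop above can be validated as a small state machine before any deeper implementation. The phase names come from the design notes; the handler/veto mechanics are an illustrative sketch, not PyClaw's actual API:

```python
from enum import Enum, auto

class Phase(Enum):
    INTERPRET = auto()
    PLAN = auto()
    POLICY_GATE = auto()
    ACT = auto()
    OBSERVE = auto()
    REFLECT = auto()
    IMPROVE = auto()

# The loop order from the design notes; a real runtime would attach tools,
# memory, and context compaction as handlers on these phases.
LOOP = [Phase.INTERPRET, Phase.PLAN, Phase.POLICY_GATE,
        Phase.ACT, Phase.OBSERVE, Phase.REFLECT, Phase.IMPROVE]

def run_once(handlers: dict, state: dict) -> dict:
    """Drive one pass of the brain loop; the policy gate may veto the step,
    in which case act/observe/reflect/improve are skipped."""
    for phase in LOOP:
        state = handlers.get(phase, lambda s: s)(state)
        if phase is Phase.POLICY_GATE and state.get("blocked"):
            break
    return state
```

Putting the policy gate before act keeps every side-effecting step behind an explicit checkpoint, which is the safety property the "policy-gate" stage exists for.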
7) Bardiel Dashboard Iterations + Regular Test Jobs
Make router experimentation visible and repeatable:
Iterate Bardiel dashboard views for delegation/validation:
policy tiers, risk settings, verdict traces, consensus metadata.
Set up regular scheduled test jobs (smoke + regression) so the dashboard always has live examples.
Ensure the UX reflects v3 router schema changes without breaking existing views.
8) Pre-Mainnet Experiments (Mainnet-Identical Setup)
Phase #3 introduces realistic “go-live” dry runs:
Use pre-mainnet environments designed to be apples-to-apples with the mainnet config.
Validate:
real ops + reliability expectations
cost mechanics (RPC / sequencer / validator costs where applicable)
payment + staking + reward flows under realistic conditions
agent-facing router behaviors with mainnet-like constraints.
9) Self-Managed L3 as Primary Track
(Ops Hardening + Runbooks)
Phase #3 treats Testnet-1a (self-managed COR L3) as the forward path:
Continue L3 ops hardening with existing playbooks:
backups, snapshots, disk-space estimates
recovery-oriented runbooks and restart procedures.
Validate that ops procedures actually work under incidents, not just on paper.
10) ERC-8004 Agent Presence + Router MCP/Prototype Stack
Build on Phase #2’s ERC-8004 registrations:
Exercise Corgent + Bardiel in real workflows backed by the Router Node MCP/prototype stack.
Validate registry presence isn’t “just a listing”:
run practical delegation/validation flows.
verify reliability + cost behavior.
attempt pre-mainnet runs if environments are ready.
11) Gas Profiling Iteration (Economics + Optional Privacy)
Measure gas after payment / staking / reward changes.
Compare against Phase #2 baseline.
Continue trimming:
storage writes
log spam
unnecessary on-chain calls.
12) Privacy-Preserving Sessions (Optional / Staged)
Privacy stays opt-in until stable and cost-acceptable:
Implement V0 encryption for dedicated-node sessions:
router-issued, scope-bound payload_enc_key (session / session+task).
env-based allowlist (ENCRYPTION_ALLOWED_LIST) controlling who may fetch keys.
deterministic keys derived from a secret ENCRYPTION_SEED + scope.
Prepare V1 design for ephemeral/node-pool sessions using:
dynamic policy storage (router data module / contract), not only env.
Benchmark latency / failure modes and keep privacy optional until overhead is understood.
(See: Private / Encrypted Inference docs for full design details.)
13) Data Mobility & Large Dataset Stress Tests
Re-introduce data mobility as a first-class test axis, focused on large datasets:
Design data-mobility workloads that:
move or stream large artifacts across miners and storage layers (local CAS, IPFS, future Stratos/Filecoin or similar).
simulate realistic workloads: multi-GB logs, embeddings, checkpoints, or multi-part documents.
Stress-test:
router + session behavior when tasks operate on large artifacts.
artifact ingestion, fetch, and eviction behavior under load.
bandwidth and latency impacts when many nodes fetch the same large dataset.
Validate:
that artifact references remain stable even as data moves or is re-chunked.
that node operators can handle high I/O + storage churn without crashing router or miner processes.
that the future data-management roadmap (IPFS / L3-aware storage / off-chain backends) stays compatible with:
SessionPayment accounting
rewards (proof that data-heavy work was actually done)
privacy constraints where applicable.
These tests ensure Cortensor is ready for data-heavy AI workloads, not just small text-only inference.
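One way to keep artifact references stable while data moves or is re-chunked is to address artifacts by the hash of their full content and use per-chunk hashes only for transfer verification. This is a generic content-addressing sketch (the chunk size is illustrative), not Cortensor's CAS implementation:

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MiB transfer chunks (illustrative)

def artifact_ref(data: bytes) -> str:
    """Content-addressed ID over the full byte stream: the reference stays
    identical no matter which miner or storage layer holds the bytes, and
    no matter how the transfer is chunked."""
    return hashlib.sha256(data).hexdigest()

def chunk_manifest(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Per-chunk hashes so fetchers can verify multi-part transfers of
    large artifacts (logs, embeddings, checkpoints) piece by piece."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]
```

Separating the stable reference (whole-content hash) from the transfer layout (chunk manifest) is what lets eviction, re-chunking, or a move to IPFS/off-chain backends happen without invalidating references held by sessions, payments, or reward proofs.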
Key Outcomes
By the end of Phase #3, Cortensor should achieve:
Payments that are transparent, tunable, and resilient under real traffic.
Stake-to-use quotas working end-to-end with auto-deduction and abuse guardrails.
Automated rewards with tuned level parameters using model-count + routing breadth as capacity signals.
Router endpoints closer to agent primitives, supporting explicit redundancy + consensus attributes.
PyClaw design validated as a router-aligned agent runtime layer (foundation for Phase #4 implementation).
A repeatable testing loop via Bardiel dashboard + scheduled test jobs.
Pre-mainnet environments validated as mainnet-identical dry-run platforms.
Continued confidence in the self-managed L3 ops path via playbooks + incident-ready runbooks.
Expanded real-world use of ERC-8004 agents (Corgent + Bardiel) backed by Router MCP/prototype stack.
Proven handling of data mobility and large dataset workloads, feeding into long-term data-management roadmap.
Outcome Statement
Testnet Phase #3 is where Cortensor becomes a real economic network:
Phase #2 hardened trust.
Phase #3 makes execution payable, stake-usable, data-heavy capable, and incentivized fairly, while pushing the router toward agent-ready delegation, validation, consensus, and data mobility under mainnet-like conditions.
Join Cortensor Discord: https://discord.gg/cortensor