Dashboard

The Cortensor Dashboard is the central interface for observing and interacting with the decentralized Cortensor network. It serves simultaneously as a Mining Explorer, Node Performance Tracker, and User Interaction Layer, bridging miners, validators, developers, and end users within a unified environment.

The Dashboard provides real-time visibility into inference tasks, validator feedback, and node reliability, anchoring the operational backbone of Cortensor’s Proof of Inference (PoI) and Proof of Useful Work (PoUW) frameworks.


Purpose

The Dashboard functions as the operational and analytical control layer for the Cortensor ecosystem:

  • For miners: View live task states, precommit scores, uptime history, and validator feedback. Understand node reliability and performance tiers in real time.

  • For validators: Access quantitative and qualitative metrics for inference validation. Observe cross-node consistency and task quality through structured reports.

  • For developers: Create, monitor, and analyze decentralized inference sessions. Review payment flows, session performance, and node routing outcomes.

  • For users: Interact with Cortensor’s decentralized AI network transparently, verifying both the provenance and quality of every inference result.

The Dashboard is currently deployed across DevNet7, Testnet-0, and Testnet-1, providing the complete monitoring and validation interface for both L2 and L3 environments.


Key Features and Modules

1. Cognitive Tab (Mining Explorer)

Purpose: Tracks network-assigned tasks and miner activity, visualizing validation outcomes from the PoUW framework.

Integration: Connected to the Cognitive Module, which runs the network task loop for node assessment and baseline capability measurement.

Details:

  • Displays lifecycle states — Request → Create → Prepare → Precommit → Commit (see the sketch after this list)

  • Includes unsupervised network task games and rounds for ongoing node evaluation

  • Tracks precommit points, validator feedback, and Cognitive Level scoring

  • Integrates with QuantitativeStats (PoI) for cross-node output similarity analysis

  • Provides early visibility into QualitativeStats (PoUW) for usefulness scoring (in progress)
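
The lifecycle above maps naturally to a small state machine. The sketch below is purely illustrative: the state names follow the list above, but the transition logic is an assumption, not the actual Cognitive Module implementation.

```python
from enum import Enum, auto

class TaskState(Enum):
    """Lifecycle states surfaced in the Cognitive tab."""
    REQUEST = auto()
    CREATE = auto()
    PREPARE = auto()
    PRECOMMIT = auto()
    COMMIT = auto()

# Allowed forward transitions; a task that fails along the way simply never reaches COMMIT.
TRANSITIONS = {
    TaskState.REQUEST: TaskState.CREATE,
    TaskState.CREATE: TaskState.PREPARE,
    TaskState.PREPARE: TaskState.PRECOMMIT,
    TaskState.PRECOMMIT: TaskState.COMMIT,
}

def advance(state: TaskState) -> TaskState:
    """Move a task to its next lifecycle state, or raise if it is already committed."""
    if state not in TRANSITIONS:
        raise ValueError(f"{state.name} is a terminal state")
    return TRANSITIONS[state]
```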


2. NodeStats Tab (Node Performance Tracker)

Purpose: Monitors per-node behavior, reliability, and long-term consistency across network and user tasks.

Integration: Linked to NodeStats, NodeReputation, and Validator modules to provide composite performance insight.

Details:

  • Tracks uptime, heartbeat pings, task completion counters, and failure ratios

  • Displays time-series reliability graphs from NodeReputation

  • Consolidates Cognitive Level, validation feedback, and PoI results into unified node scoring (a scoring sketch follows this list)

  • Fully live on Testnet-1, continuously updated from Validator v2 data streams
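
As a rough illustration of how these signals could fold into a single number, the sketch below blends uptime, completion ratio, Cognitive Level, and validator feedback. The field names, weights, and 5-tier Cognitive Level scale are assumptions made for the example, not the actual NodeStats or NodeReputation schema.

```python
from dataclasses import dataclass

@dataclass
class NodeMetrics:
    # Illustrative fields only; the real NodeStats schema may differ.
    uptime_ratio: float       # 0.0-1.0, fraction of heartbeat pings answered
    tasks_completed: int
    tasks_failed: int
    cognitive_level: int      # capability tier from the Cognitive Module (assumed 1-5 here)
    validator_score: float    # 0.0-1.0 aggregate of validator feedback

def composite_score(m: NodeMetrics,
                    w_uptime: float = 0.3,
                    w_success: float = 0.3,
                    w_cognitive: float = 0.2,
                    w_validator: float = 0.2) -> float:
    """Blend reliability signals into a single 0-1 node score (weights are placeholders)."""
    total = m.tasks_completed + m.tasks_failed
    success_ratio = m.tasks_completed / total if total else 0.0
    cognitive_norm = m.cognitive_level / 5
    return (w_uptime * m.uptime_ratio
            + w_success * success_ratio
            + w_cognitive * cognitive_norm
            + w_validator * m.validator_score)
```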


3. Session Tab (User Interaction Layer)

Purpose: Provides the interface for developers and users to create, manage, and verify AI inference sessions.

Integration: Built upon Session, SessionQueue, SessionStats, and SessionPayment modules.

Details:

  • Create sessions with configurable accuracy, correctness, and model options (see the sketch after this list)

  • Execute decentralized inferences, with payments and deposits handled in $COR

  • Record outputs to IPFS or decentralized storage; retain metadata on-chain

  • Display validator feedback with quantitative and qualitative scoring per task

  • Integrates with the Smart Job Queue for efficient workload distribution
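
To make the session options concrete, here is a hypothetical sketch of a session configuration and the resulting record. Every name in it (the model identifier, thresholds, deposit field, and IPFS placeholder) is illustrative and does not reflect the actual Session or SessionPayment APIs.

```python
from dataclasses import dataclass, asdict

@dataclass
class SessionConfig:
    # Hypothetical parameters mirroring the options described above.
    model: str = "default-llm"      # placeholder model identifier
    min_accuracy: float = 0.9       # required cross-node agreement (PoI)
    min_correctness: float = 0.8    # required usefulness score (PoUW)
    redundancy: int = 3             # number of miner nodes asked to run the inference
    deposit_cor: float = 10.0       # $COR escrowed for the session

def create_session(prompt: str, config: SessionConfig) -> dict:
    """Sketch of a session record: outputs would be pinned to IPFS, metadata kept on-chain."""
    return {
        "prompt": prompt,
        "config": asdict(config),
        "output_storage": "ipfs://<output-cid>",  # placeholder until results are pinned
        "payment_token": "COR",
    }

session = create_session("Summarize the latest validator report.", SessionConfig())
```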


Active Systems and Enhancements

Node Reputation System

  • Aggregates node reliability, validation, and performance data.

  • Derived from NodeStats, Cognitive Level, and Validator feedback.

  • Drives reputation-based routing, staking tiering, and reward adjustments (sketched below).

  • Fully live on Testnet-1.
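
A minimal sketch of how a 0-1 reputation score could drive tiering and routing weight; the thresholds and multiplier below are purely illustrative and are not the live Testnet-1 parameters.

```python
def staking_tier(reputation: float) -> str:
    """Map a 0-1 reputation score to a tier label (thresholds are illustrative)."""
    if reputation >= 0.9:
        return "high"
    if reputation >= 0.7:
        return "standard"
    return "probation"

def routing_weight(reputation: float, base_weight: float = 1.0) -> float:
    """Higher-reputation nodes receive proportionally more session traffic."""
    return base_weight * (0.5 + reputation)  # low-reputation nodes stay routable, just less often
```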



Enhanced Developer & User Interface

  • Expanded insights for validator results, node reputation, and PoI/PoUW scoring.

  • Unified routing, staking, and payment views.

  • Streamlined across Testnet-0 (Arbitrum L2) and Testnet-1 (L3 COR Rollup).


Smart Job Queue (SessionQueue Module)

  • Dynamically routes workloads based on NodeSpec, Cognitive Level, and model capacity (see the routing sketch after this list).

  • Balances inference load between Cognitive and user sessions.

  • Improves throughput and prevents congestion under high request volume.

  • Fully operational on Testnet-1 router nodes.
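
The routing decision can be pictured as a capacity-aware selection over candidate nodes. The snapshot fields and load heuristic below are assumptions made for illustration, not the actual SessionQueue logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoutableNode:
    # Illustrative view of what a router node might know about each worker.
    node_id: str
    cognitive_level: int    # capability tier from the Cognitive Module
    max_concurrent: int     # model capacity reported via NodeSpec
    active_jobs: int        # current load across Cognitive rounds and user sessions

def pick_node(nodes: list[RoutableNode], required_level: int) -> Optional[RoutableNode]:
    """Choose the least-loaded node that meets the session's capability requirement."""
    eligible = [n for n in nodes
                if n.cognitive_level >= required_level and n.active_jobs < n.max_concurrent]
    if not eligible:
        return None  # no capacity right now; the job waits in the queue
    return min(eligible, key=lambda n: n.active_jobs / n.max_concurrent)
```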


Quantitative / Qualitative Validation Stats ⚙️

  • QuantitativeStats (PoI): Measures embedding distance and inference similarity across nodes (a similarity sketch follows this list). ✅ Foundation complete and live on Testnet-1.

  • QualitativeStats (PoUW): Uses LLM-verifier models to evaluate task usefulness and coherence. ⚙️ Foundation built; under active development for full rollout.

  • Together they form the foundation of Cortensor’s validator intelligence layer, powering reputation and scoring models.
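
The PoI side can be illustrated with plain cosine similarity over the embeddings of each node's output; low pairwise agreement flags outlier inferences. This is a conceptual sketch, not the production validation pipeline.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def poi_agreement(embeddings: list[list[float]]) -> float:
    """Average pairwise similarity across node outputs for one task."""
    pairs = [(i, j) for i in range(len(embeddings)) for j in range(i + 1, len(embeddings))]
    if not pairs:
        return 1.0
    return sum(cosine_similarity(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)
```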


Privacy-Preserving Sessions 🚧

  • Planned as the next milestone: confidential session execution that completes the loop of trust, execution, and confidentiality for decentralized AI.


Access

Each network serves a progressive role:

  • DevNet7 – Canary validation and UI iteration

  • Testnet-0 – L2 validation, session/payment integration

  • Testnet-1 – Full L3 architecture with validator-driven PoI/PoUW


Conclusion

The Cortensor Dashboard has matured into the live command center of the decentralized inference ecosystem. By integrating Cognitive, NodeStats, Session, and Validator modules, it provides real-time transparency into how AI inferences are executed, validated, and rewarded.

With Quantitative PoI live, Qualitative PoUW foundation built, and Node Reputation and Smart Job Queue fully active, Cortensor’s execution fabric is verifiable and production-ready. The next milestone is privacy-preserving sessions, completing the loop of trust, execution, and confidentiality for decentralized AI.

As the project advances, the Dashboard remains the public window into Cortensor’s mission: to make every inference verifiable, every node accountable, and every result private by design.


Status: Live across DevNet7, Testnet-0, and Testnet-1. Quantitative PoI operational; Qualitative PoUW foundation under active development; Privacy-Preserving Sessions next.
