
Session, Session Queue, Router, and Miner in Cortensor

Cortensor’s modular architecture is designed to facilitate efficient AI inference processing through a microservices-inspired structure, where each module acts as a mediator between two or more parties. This ensures scalability, reliability, and modular communication within the decentralized AI network.

Each module:

  • Stores only necessary datasets, reducing data redundancy.

  • Relies on other modules to fetch relevant data through function calls.

  • Facilitates seamless communication between different network components.

Examples of Module Interactions

  • Cognitive Module – Manages interactions between oracle nodes and miners to maintain network health.

  • Node Stats & Reputation Module – Tracks performance and reliability of miners by directly gathering real-time data from miners, while oracle nodes assist in coordination and timing measurement.

  • Node Pool Module – Maintains the state of available miner nodes, updating data through interactions with the node reputation module and node pool agents.
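
The store-only-what-you-need pattern above can be sketched in Python. All class and method names here are hypothetical illustrations, not actual Cortensor interfaces:

```python
# Hypothetical sketch: each module owns a minimal dataset and fetches the
# rest from a peer module via function calls (all names are illustrative).

class NodeReputationModule:
    """Tracks performance scores; the only module that stores them."""
    def __init__(self):
        self._scores = {}          # node_id -> reputation score

    def record(self, node_id, score):
        self._scores[node_id] = score

    def score_of(self, node_id):
        return self._scores.get(node_id, 0.0)

class NodePoolModule:
    """Tracks available miners; relies on the reputation module for scores."""
    def __init__(self, reputation: NodeReputationModule):
        self._reputation = reputation
        self._available = set()

    def register(self, node_id):
        self._available.add(node_id)

    def best_nodes(self, count):
        # Fetch reputation through the peer module instead of duplicating it.
        ranked = sorted(self._available,
                        key=self._reputation.score_of, reverse=True)
        return ranked[:count]

rep = NodePoolModule.__init__ and NodeReputationModule()
pool = NodePoolModule(rep)
for node, score in [("miner-a", 0.9), ("miner-b", 0.5), ("miner-c", 0.7)]:
    rep.record(node, score)
    pool.register(node)

print(pool.best_nodes(2))  # highest-reputation nodes first
```

Here the pool keeps only availability data and asks the reputation module for scores at call time, mirroring the "stores only necessary datasets" principle.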

Now, let’s dive into the Session and Session Queue modules, which are primarily responsible for serving user requests efficiently.


Session & Session Queue: Serving User Requests

The Session Module and Session Queue are central components in Cortensor that handle AI inference requests from users. They are designed to ensure optimal task assignment, execution, and data integrity.

Module Interactions

  • Router Node ↔ Client → Handles user interactions, relays requests, and returns responses.

  • Session Module ↔ Router Node → Manages user session creation and request handling.

  • Session Module ↔ Session Queue Module → Facilitates task queuing and miner assignment.

  • Session Queue Module ↔ Miners → Allocates jobs to miners, manages task execution, and ensures balanced workload distribution.

  • Session Queue Module ↔ Router Node → Receives inference results from miners and forwards them back to the Router Node for client delivery.
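
These interaction pairs form a relay chain from client to miner and back. The following minimal Python sketch traces that path; every function name is a hypothetical stand-in for the corresponding component:

```python
# Illustrative relay chain (hypothetical names): Router -> Session Module
# -> Session Queue -> Miner, with the response returned along the same path.
# Each hop appends itself to a trace so the mediation path is visible.

def relay(request):
    trace = []

    def miner(task):
        trace.append("miner")
        return f"result<{task}>"

    def session_queue(task):
        trace.append("session-queue")
        return miner(task)

    def session_module(task):
        trace.append("session-module")
        return session_queue(task)

    def router(task):
        trace.append("router")
        return session_module(task)

    return router(request), trace

result, path = relay("prompt")
print(result, path)
```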

User Request Flow in Cortensor

The Session Module plays a vital role in creating and managing AI inference sessions. Below is a high-level workflow showing how Cortensor handles a user request:

  1. Session Creation

    • A user initiates a session through a router node.

    • The router node forwards the session request to the Session Module.

  2. Node Selection & Reservation

    • The Session Module queries the Node Pool to find available ephemeral nodes.

    • Selected nodes are reserved in the Session Queue for the session.

  3. User Request Processing Begins

    • Once a session is fully assigned, it is ready to process inference requests.

  4. User Submits Inference Request

    • The user sends a request via the Router Node (either through REST API or directly via smart contract).

    • The Router Node relays the request to the Session Module.

  5. Task Assignment & Dispatch

    • The Session Module forwards the request to the Session Queue.

    • The Session Queue dispatches tasks to the miners assigned to that session.

  6. Event Emission for Miners

    • The Session Module emits task events, notifying miners in the session.

    • Miners listen to these events and begin working on the inference request.

  7. Miners Process & Relay Inference Data

    • Miners execute inference and stream results per token processed.

    • WebSockets relay the inference data to the Router Node.

    • The Router Node forwards the processed response to the Client SDK through REST Streaming or WebSockets.
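
The seven steps above can be condensed into a runnable sketch. Everything here is an illustrative model under assumed names; `SessionModule`, `SessionQueue`, `NodePool`, and `Miner` are not real Cortensor APIs:

```python
# Hypothetical end-to-end model of the request flow described above.

class NodePool:
    """Holds ephemeral nodes available for reservation."""
    def __init__(self, node_ids):
        self.free = list(node_ids)

    def take(self, count):
        taken, self.free = self.free[:count], self.free[count:]
        return taken

class SessionQueue:
    def __init__(self):
        self.assigned = {}        # session_id -> reserved miner ids

    def reserve(self, session_id, miner_ids):
        self.assigned[session_id] = list(miner_ids)

    def dispatch(self, session_id, prompt, miners_by_id):
        # Dispatch the task to every miner reserved for this session.
        return {m: miners_by_id[m].infer(prompt)
                for m in self.assigned[session_id]}

class SessionModule:
    def __init__(self, node_pool, queue):
        self.node_pool = node_pool
        self.queue = queue

    def create_session(self, session_id, node_count):
        miner_ids = self.node_pool.take(node_count)  # steps 1-2: reserve nodes
        self.queue.reserve(session_id, miner_ids)
        return miner_ids

class Miner:
    def __init__(self, node_id):
        self.node_id = node_id

    def infer(self, prompt):
        return f"{self.node_id}: result for {prompt!r}"

miners = {m.node_id: m for m in (Miner("m1"), Miner("m2"), Miner("m3"))}
pool = NodePool(miners)
queue = SessionQueue()
session = SessionModule(pool, queue)

reserved = session.create_session("s-1", 2)       # session creation + reservation
results = queue.dispatch("s-1", "hello", miners)  # steps 5-7: dispatch + inference
print(reserved, results)
```

In the real network the dispatch step is event-driven and results stream back per token over WebSockets, whereas this sketch returns them synchronously for brevity.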

Data Integrity & Validation

To ensure reliable inference data, miners commit their results in two steps, similar to the Cognitive Module’s state machine validation process.

  1. Precommit Stage

    • Miners generate a hash of the inference data and submit it.

    • This ensures commitment without exposing the actual inference result.

  2. Commit Stage

    • Miners submit the actual inference data to the Session Queue.

    • The Session Queue validates the inference data before marking it as finalized.
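
The two-stage commit can be modeled as a simple hash-commit/reveal scheme. This is a sketch of the idea, not the actual on-chain protocol:

```python
import hashlib

def precommit(inference_data: bytes) -> str:
    """Stage 1: commit to the result by publishing only its hash."""
    return hashlib.sha256(inference_data).hexdigest()

def commit_and_validate(inference_data: bytes, precommitted_hash: str) -> bool:
    """Stage 2: reveal the data; the validator checks it against the hash."""
    return hashlib.sha256(inference_data).hexdigest() == precommitted_hash

result = b"token stream from miner"
h = precommit(result)                           # miner publishes hash first
assert commit_and_validate(result, h)           # honest reveal passes
assert not commit_and_validate(b"tampered", h)  # altered data is rejected
```

Because the hash is published before the data, a miner cannot change its answer after seeing other miners' results, which is what makes the precommit a binding commitment.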


Node Pool & Ephemeral Node State

The Node Pool is responsible for managing ephemeral-state miners, i.e., nodes that are available for AI inference tasks.

  • Ephemeral State: A node enters the ephemeral state once it has passed Cognitive Module validation and is ready to serve inference tasks.

  • Session Assignment: When a session is created, the Session Module selects ephemeral nodes from the Node Pool.

  • Node Marking & Release:

    • When a node is assigned to a session, it is marked as reserved.

    • Once the session is completed, the node is re-tested by the Cognitive Module before being returned to the ephemeral pool.

This mechanism ensures only high-quality nodes serve user requests, maintaining performance integrity across the network.
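
One way to picture this lifecycle is as a small state machine. The states and transition rules below are an illustrative reading of the description above, not Cortensor's actual implementation:

```python
# Hypothetical node lifecycle: validated -> ephemeral -> reserved -> re-test.
from enum import Enum, auto

class NodeState(Enum):
    PENDING = auto()     # awaiting Cognitive Module validation
    EPHEMERAL = auto()   # validated, available in the Node Pool
    RESERVED = auto()    # assigned to an active session
    RETESTING = auto()   # session finished, being re-validated

# Allowed transitions, per the description above.
TRANSITIONS = {
    NodeState.PENDING:   {NodeState.EPHEMERAL},   # passed validation
    NodeState.EPHEMERAL: {NodeState.RESERVED},    # selected for a session
    NodeState.RESERVED:  {NodeState.RETESTING},   # session completed
    NodeState.RETESTING: {NodeState.EPHEMERAL,    # re-test passed
                          NodeState.PENDING},     # re-test failed
}

def advance(state: NodeState, target: NodeState) -> NodeState:
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target

# Walk one full cycle: validate, reserve, complete session, pass re-test.
s = NodeState.PENDING
for nxt in (NodeState.EPHEMERAL, NodeState.RESERVED,
            NodeState.RETESTING, NodeState.EPHEMERAL):
    s = advance(s, nxt)
print(s.name)  # EPHEMERAL
```

Encoding the rules as an explicit transition table makes illegal moves (e.g., a reserved node returning to the pool without re-testing) fail loudly instead of silently corrupting pool state.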


Current Design & Future Considerations

Current State

  • Session & Session Queue handle AI inference requests.

  • Ephemeral Nodes dynamically serve user requests with built-in integrity checks.

  • Router Nodes manage user interactions and relay responses efficiently.

Future Enhancements

  • TBA


Conclusion

The Session Module, Session Queue, Router, and Miner nodes form the backbone of Cortensor’s decentralized AI inference framework. Through modular interactions, real-time event processing, and a secure validation system, Cortensor ensures scalable, efficient, and reliable AI inference while maintaining network integrity.

