Web3 SDK Client & Session/Session Queue Interaction



This development preview video (2:30 min) demonstrates the interaction between distributed miners and the Cortensor Web3 SDK client. It highlights how the Session & Session Queue modules handle user input and distribute AI inference tasks across the decentralized network.

Key Modules Demonstrated

  • Web3 SDK Client – The interface through which users interact with Cortensor’s AI inference system.

  • Session & Session Queue Modules – Responsible for handling and assigning user input to network miners.

  • Cognitive Module – Ensures network health, SLA enforcement, and node classification.
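The relationship between the Session Queue and miners can be pictured as a minimal task buffer that pairs user input with available nodes. This is an illustrative sketch only; the class and method names are assumptions, not Cortensor's actual SDK API:

```python
from collections import deque

class SessionQueue:
    """Illustrative sketch: buffers user tasks and assigns them to free miners."""

    def __init__(self):
        self.pending = deque()   # tasks awaiting assignment
        self.miners = []         # registered miner identifiers

    def register_miner(self, miner_id):
        self.miners.append(miner_id)

    def submit(self, task):
        self.pending.append(task)

    def assign(self):
        """Pair each pending task with a miner (round-robin for simplicity)."""
        assignments = []
        while self.pending and self.miners:
            task = self.pending.popleft()
            miner = self.miners.pop(0)
            assignments.append((task, miner))
        return assignments

queue = SessionQueue()
queue.register_miner("miner-cpu-01")
queue.register_miner("miner-gpu-01")
queue.submit("Summarize this document")
print(queue.assign())  # → [('Summarize this document', 'miner-cpu-01')]
```

In the real network, assignment also factors in node availability, hardware class, and SLA status (the Cognitive Module's role); round-robin here is purely for readability.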

Demonstration Flow

  1. The user submits inference tasks via the Web3 SDK Client.

  2. The Session & Session Queue Modules dynamically route tasks across the network.

  3. Miners process tasks based on availability and hardware capabilities.

  4. The system collects and returns results from the miners to the user.
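The four steps above can be sketched end to end. The function and field names below are hypothetical placeholders for SDK calls, not Cortensor's actual interface:

```python
def submit_inference(prompt, miners):
    """Sketch of the request flow: route a prompt to an available miner,
    let it process the task, and return the result to the user."""
    # Step 2: the Session Queue routes the task to an available miner
    miner = next((m for m in miners if m["available"]), None)
    if miner is None:
        raise RuntimeError("no miners available")
    # Step 3: the miner processes the task (stubbed model call)
    result = f"[{miner['id']}] completed: {prompt}"
    # Step 4: the result is collected and returned
    return result

miners = [
    {"id": "miner-cpu", "available": False},
    {"id": "miner-gpu", "available": True},
]
print(submit_inference("Translate 'hello' to French", miners))
# → [miner-gpu] completed: Translate 'hello' to French
```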

What’s Shown in the Video?

The preview features three core components of the Cortensor network:

  • Contabo VPS2 US West – A miner running on an AMD CPU.

  • Vultr A16 Nvidia Instance – A miner utilizing GPU inference.

  • Contabo VPS2 US Central – A client instance running the Web3 SDK, sending inference requests to the network.

Development Status

Early-Stage Implementation

  • The current implementation is still in early development, with latency optimizations underway.

  • Performance will improve as we transition to the v1 codebase.

Upcoming Integrations

  • Web2 real-time response via WebSocket & REST Stream API.

  • Router Node & Node Pool support for improved scalability & efficiency.

  • Full v0 to v1 codebase migration to enhance performance & reliability.
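Once WebSocket streaming lands, clients will likely receive responses as incremental chunks rather than a single payload. The sketch below shows how such a stream might be assembled on the client side; the chunk format is an assumption for illustration, not Cortensor's actual wire protocol:

```python
import json

def assemble_stream(messages):
    """Concatenate streamed token chunks into a full response.
    Assumes each message is JSON like {"token": "...", "done": bool}."""
    parts = []
    for raw in messages:
        msg = json.loads(raw)
        parts.append(msg["token"])
        if msg["done"]:
            break
    return "".join(parts)

# Simulated chunks as they might arrive over a WebSocket connection
stream = [
    '{"token": "Hello", "done": false}',
    '{"token": ", ", "done": false}',
    '{"token": "world!", "done": true}',
]
print(assemble_stream(stream))  # → Hello, world!
```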

Future Improvements

  • Reduced inference latency for faster response times.

  • Real-time streaming capabilities for more seamless interactions.

  • Optimized network routing for improved task distribution & miner selection.

Reference: Previous Development Preview

The following v0 development previews demonstrate the evolution of Cortensor’s inference request handling. These earlier versions serve as a blueprint for the current v1 development.

1️⃣ Single Miner Interacting with Router via Web2/Web3 SDK Client

  • A single miner interacting with the router to serve user requests.

2️⃣ Multiple Miners Serving User Inference Requests

  • Showcases miners, the router, and Web3/Web2 SDK clients in action.

As we migrate from v0 to v1, these foundational concepts are being expanded, refined, and optimized to support a scalable, decentralized AI inference network.

Watch here:

  • https://www.youtube.com/watch?v=QWaL6rc0cv0
  • https://www.youtube.com/watch?v=2l90bBe0lXA