Web3 SDK Client & Session/Session Queue Interaction

This development preview video (2:30 min) demonstrates the interaction between distributed miners and the Cortensor Web3 SDK client. It highlights how the Session & Session Queue modules handle user input and distribute AI inference tasks across the decentralized network.

Key Modules Demonstrated

  • Web3 SDK Client – The interface through which users interact with Cortensor’s AI inference system.

  • Session & Session Queue Modules – Queue incoming user input and assign each task to network miners.

  • Cognitive Module – Monitors network health, enforces SLAs, and classifies nodes.
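
To make the division of responsibilities concrete, here is a minimal TypeScript sketch of how these modules might be modeled. All interface, field, and method names are illustrative assumptions for this document, not the actual Cortensor SDK API.

```typescript
// Illustrative only: these interfaces are assumptions, not the real
// Cortensor SDK. They sketch the division of responsibilities above.

interface InferenceTask {
  prompt: string;     // user input to be processed
  sessionId: string;  // session the task belongs to
}

interface InferenceResult {
  taskId: string;
  output: string;        // miner-produced completion
  minerAddress: string;  // which miner served the task
}

// Web3 SDK Client: the user-facing entry point.
interface Web3SdkClient {
  submitTask(task: InferenceTask): Promise<string>; // returns a task ID
  getResult(taskId: string): Promise<InferenceResult | null>;
}

// Session & Session Queue: accepts tasks and assigns them to miners.
interface SessionQueue {
  enqueue(task: InferenceTask): void;
  assignNext(availableMiners: string[]): void;
}

// Cognitive Module: network health, SLA enforcement, node classification.
interface CognitiveModule {
  classifyNode(minerAddress: string): "cpu" | "gpu" | "unknown";
  meetsSla(minerAddress: string): boolean;
}
```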

Demonstration Flow

  1. The user submits inference tasks via the Web3 SDK Client.

  2. The Session & Session Queue Modules dynamically route tasks across the network.

  3. Miners process tasks based on availability and hardware capabilities.

  4. The system collects results from the miners and returns them to the user.
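
The same four steps can be expressed as a short client-side sketch, reusing the hypothetical `Web3SdkClient` interface from the previous section. Steps 2 and 3 run inside the network, so the client only submits a task and then polls for the collected result; the function, polling approach, and timeout values are assumptions, not the real SDK flow.

```typescript
// Hypothetical end-to-end flow matching the four steps above.
// `client` is assumed to implement the Web3SdkClient interface
// sketched earlier; none of these calls are the real Cortensor SDK.

async function runInference(
  client: Web3SdkClient,
  prompt: string
): Promise<string> {
  // Step 1: submit the task through the Web3 SDK Client.
  const taskId = await client.submitTask({
    prompt,
    sessionId: "demo-session",
  });

  // Steps 2-3 happen inside the network: the Session Queue routes the
  // task to an available miner, which runs inference on CPU or GPU.

  // Step 4: poll until the miner's result is collected and returned.
  for (let attempt = 0; attempt < 30; attempt++) {
    const result = await client.getResult(taskId);
    if (result !== null) return result.output;
    await new Promise((resolve) => setTimeout(resolve, 2000)); // wait 2s
  }
  throw new Error(`Task ${taskId} timed out`);
}
```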

What’s Shown in the Video?

The preview features three core components of the Cortensor network:

  • Contabo VPS2 US West – A miner running on an AMD CPU.

  • Vultr A16 Nvidia Instance – A miner performing GPU-accelerated inference.

  • Contabo VPS2 US Central – A client instance running the Web3 SDK, sending inference requests to the network.

Development Status

Early-Stage Implementation

  • The current implementation is still in early development, with latency optimizations underway.

  • Performance will improve as we transition to the v1 codebase.

Upcoming Integrations

  • Web2 real-time response via WebSocket & REST Stream API (see the sketch after this list).

  • Router Node & Node Pool support for improved scalability & efficiency.

  • Full v0 to v1 codebase migration to enhance performance & reliability.
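
As a rough illustration of what the planned WebSocket integration could look like from the client side, the sketch below subscribes to a task's output stream. The endpoint URL, message shape, and subscription protocol are placeholders, not a documented API.

```typescript
import WebSocket from "ws"; // npm package "ws"

// Hypothetical illustration of the planned WebSocket stream integration.
// The endpoint and message format below are assumptions for this sketch.

const STREAM_URL = "wss://example-router.cortensor.network/stream"; // placeholder

const ws = new WebSocket(STREAM_URL);

ws.on("open", () => {
  // Subscribe to streamed output for a previously submitted task.
  ws.send(JSON.stringify({ action: "subscribe", taskId: "demo-task-id" }));
});

ws.on("message", (data: WebSocket.RawData) => {
  // Each message is assumed to carry one incremental chunk of output.
  const chunk = JSON.parse(data.toString());
  process.stdout.write(chunk.token ?? "");
});

ws.on("close", () => console.log("\nstream closed"));
```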

Future Improvements

  • Reduced inference latency for faster response times.

  • Real-time streaming capabilities for more seamless interactions.

  • Optimized network routing for improved task distribution & miner selection.

Reference: Previous Development Preview

The following v0 development previews demonstrate the evolution of Cortensor’s inference request handling. These earlier versions serve as a blueprint for the current v1 development.

1️⃣ Single Miner Interacting with Router via Web2/Web3 SDK Client

2️⃣ Multiple Miners Serving User Inference Requests

As we migrate from v0 to v1, these foundational concepts are being expanded, refined, and optimized to support a scalable, decentralized AI inference network.
