Web3 SDK Client & Session/Session Queue Interaction
This development preview video (2:30 min) demonstrates the interaction between distributed miners and the Cortensor Web3 SDK client. It highlights how the Session & Session Queue modules handle user input and distribute AI inference tasks across the decentralized network.
- **Web3 SDK Client** – The interface through which users interact with Cortensor’s AI inference system.
- **Session & Session Queue Modules** – Responsible for handling and assigning user input to network miners.
- **Cognitive Module** – Ensures network health, SLA enforcement, and node classification.
1. The user submits inference tasks via the Web3 SDK Client.
2. The Session & Session Queue Modules dynamically route tasks across the network.
3. Miners process tasks based on availability and hardware capabilities.
4. The system collects the results from the miners and returns them to the user.
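The routing step above can be sketched with a toy in-memory queue and miner model. Everything here is illustrative — the class names, the GPU-first selection rule, and the miner names are assumptions for demonstration, not the Cortensor SDK's actual API:

```python
from dataclasses import dataclass

@dataclass
class Miner:
    name: str
    hardware: str        # e.g. "cpu" or "gpu"
    available: bool = True

    def infer(self, prompt: str) -> str:
        # Stand-in for real model inference on the miner's hardware.
        return f"[{self.name}/{self.hardware}] result for: {prompt}"

class SessionQueue:
    """Toy session queue: assigns each incoming task to an available miner."""

    def __init__(self, miners):
        self.miners = miners

    def route(self, prompt: str) -> str:
        candidates = [m for m in self.miners if m.available]
        if not candidates:
            raise RuntimeError("no available miners")
        # Capability-aware selection: prefer GPU miners when one is free.
        candidates.sort(key=lambda m: m.hardware != "gpu")
        return candidates[0].infer(prompt)

# Mirrors the demo setup: one CPU miner, one GPU miner.
miners = [Miner("contabo-us-west", "cpu"), Miner("vultr-a16", "gpu")]
queue = SessionQueue(miners)
print(queue.route("Summarize the latest block"))
# → [vultr-a16/gpu] result for: Summarize the latest block
```

In the real network, miner selection is handled by the Session & Session Queue Modules with input from the Cognitive Module's node classification; this sketch only shows the shape of that task-to-miner assignment.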
The preview runs on three instances in the Cortensor network:
- **Contabo VPS2 US West** – A miner running on an AMD CPU.
- **Vultr A16 Nvidia Instance** – A miner utilizing GPU inference.
- **Contabo VPS2 US Central** – A client instance running the Web3 SDK, sending inference requests to the network.
The current implementation is still in early development, with latency optimizations underway. Speed will improve as we transition to the v1 codebase.
Planned next steps include:

- Web2 real-time response via WebSocket & REST Stream API.
- Router Node & Node Pool support for improved scalability & efficiency.
- Full v0 to v1 codebase migration to enhance performance & reliability.

These changes will deliver:

- Reduced inference latency for faster response times.
- Real-time streaming capabilities for more seamless interactions.
- Optimized network routing for improved task distribution & miner selection.
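The real-time streaming item above can be illustrated with a minimal async sketch. The `stream_inference` generator here is a hypothetical stand-in for the planned WebSocket/REST Stream API — its name, signature, and token sequence are assumptions, not the actual interface:

```python
import asyncio

async def stream_inference(prompt: str):
    """Hypothetical stand-in for a streaming endpoint: instead of one
    final response, tokens are yielded incrementally as they are produced."""
    for token in ("Cortensor", "streams", "tokens", "incrementally"):
        await asyncio.sleep(0)  # stand-in for network/inference latency
        yield token

async def main() -> str:
    chunks = []
    async for token in stream_inference("demo prompt"):
        chunks.append(token)    # a real client would render each token live
    return " ".join(chunks)

print(asyncio.run(main()))
# → Cortensor streams tokens incrementally
```

The design point is the interaction pattern: a Web2 client consumes partial results as they arrive rather than blocking until the full inference completes.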
The following v0 development previews demonstrate the evolution of Cortensor’s inference request handling. These earlier versions serve as a blueprint for the current v1 development.
1️⃣ Single Miner Interacting with Router via Web2/Web3 SDK Client
A single miner interacting with the router to serve user requests.
Watch here: https://www.youtube.com/watch?v=QWaL6rc0cv0
2️⃣ Multiple Miners Serving User Inference Requests
Showcases miners, the router, and Web3/Web2 SDK clients in action.
Watch here: https://www.youtube.com/watch?v=2l90bBe0lXA
As we migrate from v0 to v1, these foundational concepts are being expanded, refined, and optimized to support a scalable, decentralized AI inference network.