Router Node Setup
The Router Node acts as a Web2-compatible RESTful API endpoint, enabling seamless integration of existing Web2 applications into the Cortensor network. It provides OpenAI-compatible APIs, allowing developers to integrate AI inference functionality as easily as a hot-swap, without modifying their core infrastructure.
While this Router Node is privately hosted, it mirrors the behavior of a public gateway by bridging external requests with Cortensor’s internal session flow.
For Web3 applications and smart contracts, direct interaction with the Session and Session Queue modules is supported, bypassing the Router Node to operate in a fully decentralized and trustless manner.
This setup empowers developers to serve both traditional and decentralized clients while participating in Cortensor’s distributed AI inference network.
Note: The Router Node setup follows the same process as a standard Cortensor node with additional configuration for API access.
Before starting, ensure the following:
You’ve followed the node setup documentation to install cortensord and IPFS.
Your environment is properly configured with required dependencies and keys.
You are running a compatible system (Linux, macOS, or Windows).
Specifically, the node setup documentation covers installing:
cortensord (Cortensor daemon)
IPFS (InterPlanetary File System)
Use the key generation process described in the node setup documentation to generate necessary identity and signing keys.
Environment Variables (.env)
Ensure the following variables are present and configured in your .env file:
You can generate a secure API key using the following command:
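For example, assuming OpenSSL is available (any cryptographically secure random generator works):

```
# Generates 32 random bytes as a hex string suitable for use as an API key.
openssl rand -hex 32
```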
Copy and paste the generated key into your .env file under API_KEY.
Use the following command to start your router node:
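The exact invocation is defined in the node setup documentation; a sketch, assuming default install paths and a router mode argument (the routerv1 argument shown here is an assumption, not a confirmed flag):

```
# Hypothetical invocation — verify the exact mode argument in the node setup docs.
/usr/local/bin/cortensord ~/.cortensor/.env routerv1
```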
Upon startup, your router node will:
Register itself to handle session routing
Open API access on the configured port (default: 5010)
Begin communication with miners over WebSocket
Accept and relay inference tasks from clients
Once the router node is running, you can:
Use the Web2 REST API to create sessions and submit inference tasks (see the example after this list)
Monitor inference data as it streams between your node and miners
Integrate Cortensor AI functionality into applications via SDKs or custom integrations
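For instance, since the router exposes OpenAI-compatible APIs, a completion request might look like the sketch below; the /v1/completions path, payload fields, and model placeholder follow the OpenAI convention and are assumptions rather than confirmed Cortensor endpoints:

```
# Hypothetical request — path and fields follow the OpenAI convention.
curl http://localhost:5010/v1/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-name>", "prompt": "Hello, Cortensor!", "max_tokens": 64}'
```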
The router node does not perform inference—it coordinates task flow between users and miners.
Your node must remain online and responsive to maintain API availability.
In future releases, additional features such as task prioritization, caching, and rate limits may be configurable.
By hosting your own router node, you gain private access to Cortensor’s decentralized AI inference capabilities, with full control over task submission, request routing, and session monitoring.
For more details on API endpoints and usage, see the API reference documentation.