
Web2 API Reference

Web2 RESTful API Reference

Status: WIP — Early Draft

This document provides a comprehensive reference for the RESTful API endpoints exposed by a Cortensor Router Node. These endpoints allow developers to interact with sessions, tasks, miners, and completions, enabling integration with AI inference workloads in Web2 applications.


Authentication

All endpoints require a Bearer token passed via the Authorization header:

Authorization: Bearer <api_key>

Default Dev Token: default-dev-token


Base URL

All requests are relative to the Router's base URL:

http://<router_host>:5010
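
A minimal Python sketch, assuming the requests library, that combines the base URL and the bearer token from the Authentication section. The host is a placeholder, and the default dev token is shown only for local testing.

import requests

BASE_URL = "http://<router_host>:5010"   # replace with your Router Node address
API_KEY = "default-dev-token"            # replace with your API key in production

HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Fetch basic metadata about the Router Node
resp = requests.get(f"{BASE_URL}/api/v1/info", headers=HEADERS)
resp.raise_for_status()
print(resp.json())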

For production, it is recommended to place the Router Node behind a reverse proxy (e.g. Nginx); see Reverse Proxy (Production Setup) below for the recommended configuration and its benefits.


Reverse Proxy (Production Setup)

For production environments, it is strongly recommended to run the Router Node behind a reverse proxy like Nginx, Caddy, or HAProxy. This enhances security, scalability, and performance. The reverse proxy should forward requests to the Router's internal address (e.g., 127.0.0.1:5010).

Benefits:

  • TLS/SSL termination

  • Request rate limiting

  • Load balancing and retries

  • Fine-grained access control

  • IP whitelisting or firewall rules

  • Prevents exposing internal ports to the public
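
As an illustration only, here is a minimal Nginx server block for this setup. The domain name and certificate paths are placeholders, and rate limiting, IP whitelisting, and load balancing are omitted for brevity.

server {
    listen 443 ssl;
    server_name router.example.com;                  # placeholder domain

    ssl_certificate     /etc/ssl/certs/router.crt;   # placeholder certificate
    ssl_certificate_key /etc/ssl/private/router.key; # placeholder private key

    location / {
        proxy_pass http://127.0.0.1:5010;            # Router Node's internal address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Keep the connection open and unbuffered for streaming (SSE) responses
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}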


Endpoints

GET /api/v1/info

Returns basic metadata about the Router Node.

curl -X GET http://<router_host>:5010/api/v1/info \
-H "Authorization: Bearer <api_key>"

GET /api/v1/status

Returns current router status and health.

curl -X GET http://<router_host>:5010/api/v1/status \
-H "Authorization: Bearer <api_key>"

GET /api/v1/miners

Lists all currently connected miner nodes.

curl -X GET http://<router_host>:5010/api/v1/miners \
-H "Authorization: Bearer <api_key>"

GET /api/v1/sessions

Lists all active sessions.

curl -X GET http://<router_host>:5010/api/v1/sessions \
-H "Authorization: Bearer <api_key>"

GET /api/v1/sessions/{sessionId}

Returns metadata about a specific session.

Path Param: sessionId — numeric session ID

curl -X GET http://<router_host>:5010/api/v1/sessions/0 \
-H "Authorization: Bearer <api_key>"

GET /api/v1/tasks/{sessionId}

Lists all tasks under a given session.

curl -X GET http://<router_host>:5010/api/v1/tasks/0 \
-H "Authorization: Bearer <api_key>"

GET /api/v1/tasks/{sessionId}/{taskId}

Returns a specific task from a session.

curl -X GET http://<router_host>:5010/api/v1/tasks/0/0 \
-H "Authorization: Bearer <api_key>"

POST /api/v1/completions/{sessionId}

Submits an AI inference prompt; the session ID is passed in the URL path.

Request Body:

{
  "prompt": "Hello, how are you?",
  "stream": false,
  "timeout": 60
}
curl -X POST http://<router_host>:5010/api/v1/completions/0 \
-H "Authorization: Bearer <api_key>" \
-H "Content-Type: application/json" \
-d '{"prompt": "Hello, how are you?", "stream": false, "timeout": 60}'

POST /api/v1/completions

Submits an AI inference prompt with the session ID included in the request body.

Request Body:

{
  "session_id": 0,
  "prompt": "Hello, how are you?",
  "stream": false,
  "timeout": 60
}
curl -X POST http://<router_host>:5010/api/v1/completions \
-H "Authorization: Bearer <api_key>" \
-H "Content-Type: application/json" \
-d '{"session_id": 0, "prompt": "Hello, how are you?", "stream": false, "timeout": 60}'

Streaming Completions

When stream is set to true in the request body, the response is delivered incrementally as Server-Sent Events (SSE) instead of a single JSON payload.
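
A rough Python sketch of consuming the stream is shown below. The exact SSE event format is not documented in this draft, so the example simply prints each non-empty line as it arrives.

import requests

BASE_URL = "http://<router_host>:5010"
HEADERS = {"Authorization": "Bearer <api_key>"}

payload = {"session_id": 0, "prompt": "Hello, how are you?", "stream": True, "timeout": 60}

# stream=True keeps the HTTP connection open so SSE lines can be read incrementally
with requests.post(f"{BASE_URL}/api/v1/completions", headers=HEADERS, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:  # SSE separates events with blank lines
            print(line)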


Error Codes

Code    Meaning
200     Success
400     Bad Request
401     Unauthorized
404     Not Found
500     Internal Server Error
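
A brief client-side handling sketch in Python; the error response body format is not specified in this draft, so only the status code is inspected.

import requests

resp = requests.get("http://<router_host>:5010/api/v1/sessions/0",
                    headers={"Authorization": "Bearer <api_key>"})

if resp.status_code == 401:
    print("Unauthorized: check the Authorization header and API key")
elif resp.status_code == 404:
    print("Not Found: unknown session or task ID")
elif resp.status_code >= 500:
    print("Internal Server Error: retry later or check the Router logs")
else:
    resp.raise_for_status()   # raises for any remaining 4xx codes
    print(resp.json())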


Usage Notes

  • Compatible with OpenAI-style interfaces

  • Designed for Web2 integrations via REST

  • Supports private/local inference routing
