
Reverse Proxy Setup

To prepare your Cortensor Router Node for production use and support future scaling, it is recommended to set up an Nginx reverse proxy in front of your API service. This setup enables secure HTTPS access, controlled endpoint exposure, and load distribution across multiple router nodes.

Why Use a Reverse Proxy?

  • Production-Ready Architecture: Adds SSL, security headers, CORS, and configurable access control.

  • Scalability: Makes it easier to load balance requests across multiple router nodes.

  • Security: Hides internal ports and enables HTTPS via Let's Encrypt.

  • Routing Control: Restricts access to specific endpoints and protects internal services.


Setup Guide

1. Prerequisites

  • A running Cortensor Router Node (cortensord ~/.cortensor/.env routerv1)

  • A public domain name pointing to your VPS (e.g., router.example.com)

  • Root access (sudo) on your node server

  • Ports 80 and 443 open on your firewall (see the example below)
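
If your server uses ufw, the required ports can be opened as follows (a minimal example; adjust for your firewall of choice):

# Allow HTTP and HTTPS through the firewall
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Confirm the rules are active
sudo ufw status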


2. Installation Script

Run the official Cortensor Nginx installer on your Router Node host:

sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/cortensor/installer/main/install-nginx-linux.sh)"

This will:

  • Install Nginx and Certbot

  • Prompt you to enter your domain

  • Generate and apply a complete reverse proxy config for your Router Node

  • Optionally install HTTPS using Let’s Encrypt
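
After the installer completes, you can sanity-check the result (assuming the installer writes the router-node.conf file referenced later on this page, and that dig/dnsutils is available):

# Confirm the generated Nginx site configuration exists and is enabled
ls -l /etc/nginx/sites-available/router-node.conf /etc/nginx/sites-enabled/

# Confirm the domain resolves to this server's public IP
dig +short router.example.com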


3. API Server Configuration

Ensure your Router Node API is running on port 5010 (default):

cortensord ~/.cortensor/.env routerv1

Update your .env:

API_ENABLE=1
API_KEY=<your_generated_key>
API_PORT=5010
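
To confirm the API is reachable before putting Nginx in front of it, you can call the ping endpoint on the loopback interface (depending on your configuration, the endpoint may also require your API key):

# Should respond from the Router Node API on the default port
curl -i http://127.0.0.1:5010/api/v1/ping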

4. SSL Setup (Optional but Recommended)

If DNS is properly configured, the installer will prompt you to secure your router domain via Certbot. You can always run this manually later:

sudo certbot --nginx -d router.example.com
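
Certbot typically installs a renewal timer or cron job automatically; you can confirm that renewal will succeed with a dry run:

# Simulate certificate renewal without making changes
sudo certbot renew --dry-run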

5. Sample Architecture Diagram

Client (Web2/Web3) 
    ↓
Cloudflare or DNS/CDN
    ↓
Nginx Reverse Proxy (SSL + Routing)
    ↓
Router Node (REST API)
    ↓
Session Queue ↔ Miners

Available Endpoints (by default)

The reverse proxy allows access only to these:

  • /api/v1/info

  • /api/v1/status

  • /api/v1/miners

  • /api/v1/sessions

  • /api/v1/completions

  • /api/v1/tasks

  • /api/v1/ping

All other routes return 404.
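
A quick way to confirm the routing rules from outside is to call one whitelisted endpoint and one arbitrary route through the proxy (the exact responses will vary with your router configuration and API key settings):

# Whitelisted endpoint: proxied through to the Router Node
curl -i https://router.example.com/api/v1/info

# Any non-whitelisted route should return 404 from Nginx
curl -i https://router.example.com/not-allowed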


CORS Configuration

The preset template (router-node.nginx.template) includes both modes:

  • Restricted CORS: allows only specific, trusted origins

  • Permissive CORS (commented out by default): allows all origins

Update CORS settings in:

/etc/nginx/sites-available/router-node.conf
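
As a rough illustration (the exact directives in your generated file may differ), the two modes typically look like the following, where https://app.example.com is a placeholder for your own trusted origin:

# Restricted CORS: allow only specific trusted origins
add_header 'Access-Control-Allow-Origin' 'https://app.example.com' always;

# Permissive CORS (commented out by default): allow all origins
# add_header 'Access-Control-Allow-Origin' '*' always;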

Maintenance Commands

# Restart Nginx after config changes
sudo systemctl restart nginx

# Check status
sudo systemctl status nginx
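
It is also good practice to validate the configuration before reloading or restarting:

# Validate the Nginx configuration
sudo nginx -t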

Notes

  • This setup is designed for private router nodes. Load balancing and auto-scaling support will be introduced in future versions.

  • Make sure the API port (5010) is not exposed publicly when the reverse proxy is enabled (see the check below).
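
To check that the API port is only reachable locally, inspect which address it is bound to and, if needed, block external access at the firewall (example with ufw; adapt to your setup):

# 127.0.0.1:5010 means the API is only listening on the loopback interface
sudo ss -tlnp | grep 5010

# Optionally block external access to the API port
sudo ufw deny 5010/tcp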


What’s Next?

  • Support for multi-router failover

  • Integration with Cortensor Dashboard & Metrics

  • Dynamic scaling across multiple router nodes

