Type of Services

Cortensor offers a variety of AI services tailored to meet diverse application needs, evolving from basic inference tasks to more specialized capabilities. Not all services require real-time responses, allowing for flexible and efficient use of network resources.

Overview

Cortensor’s services cater to different intelligence needs, ensuring that applications can leverage AI capabilities effectively, whether they require immediate responses or can operate with delayed processing.

Key Services

Inference Services:

  • Provides AI inference capabilities for applications to submit prompts and receive completions.

  • Supports both real-time and non-real-time processing, enabling flexible integration.

  • Utilizes Llama 3 models, both quantized and full-precision, to accommodate various hardware capabilities.

  • Includes tasks such as classification, prediction, and content generation.
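How an application submits a prompt depends on the chosen integration path (REST API, WebSocket, or the SDKs covered in the Integration Guide). As a rough illustration only, a non-real-time request might look like the sketch below; the endpoint, field names, model identifier, and response shape are assumptions for illustration, not the published Cortensor API.

```typescript
// Minimal sketch of a non-real-time inference request. Endpoint path,
// request fields, and response shape are illustrative assumptions, not
// the published Cortensor API (see the Integration Guide for the real surface).
interface InferenceRequest {
  model: string;    // e.g. a quantized or full-precision Llama 3 variant
  prompt: string;
  stream: boolean;  // false = non-real-time; true = streamed, real-time use
}

interface InferenceResponse {
  sessionId: string;
  completion: string;
}

const API_URL = "https://router.example/api/v1/completions"; // placeholder
const API_KEY = "<your-api-key>";                            // placeholder

async function submitPrompt(prompt: string): Promise<InferenceResponse> {
  const body: InferenceRequest = { model: "llama-3-8b-q4", prompt, stream: false };
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Inference request failed: HTTP ${res.status}`);
  return (await res.json()) as InferenceResponse;
}
```

A streamed, real-time variant would typically go through the WebSocket or SDK paths rather than a single request/response call.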

Prediction Services:

  • Analyzes historical data to predict future trends and outcomes.

  • Essential for applications in finance, marketing, and supply chain management.

  • Utilizes advanced machine learning algorithms for accurate forecasts.

Generation Services:

  • Synthetic Data Generation: Creates artificial data for training AI models, enhancing performance and addressing data scarcity.

  • Content Generation: Produces text, images, and other content types based on user prompts.

  • Data Augmentation: Expands existing datasets with transformed or model-generated variants to improve the robustness and performance of AI models.
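As an illustrative sketch, synthetic data generation can be driven through an ordinary inference call that asks the model to emit structured records. The prompt format, output schema, and the `complete` helper below are assumptions, not a prescribed interface; `complete` stands in for whatever client call returns a completion for a prompt (for example, the request sketch above).

```typescript
// Hypothetical sketch: generating synthetic training data via an inference
// session. The prompt wording and output schema are illustrative assumptions.
interface SyntheticReview {
  text: string;
  label: "positive" | "negative";
}

async function generateSyntheticReviews(
  complete: (prompt: string) => Promise<string>, // any completion client
  count: number,
): Promise<SyntheticReview[]> {
  const prompt =
    `Generate ${count} short, varied product reviews as a JSON array of ` +
    `objects with the fields "text" and "label" ("positive" or "negative"). ` +
    `Return only the JSON array.`;
  const output = await complete(prompt);
  // Real pipelines would validate, filter, and deduplicate records before
  // using them to augment a training set.
  return JSON.parse(output) as SyntheticReview[];
}
```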

Classification Services:

  • Categorizes data into predefined classes for applications like spam detection, image classification, and document categorization.

  • Identifies and classifies entities within text (named entity recognition), useful for NLP applications.

  • Classifies text based on sentiment, aiding in customer feedback analysis and social media monitoring.
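As a minimal sketch, a classification task can be framed as a constrained inference call whose output is mapped onto a fixed label set. The label set, prompt wording, and `complete` helper below are illustrative assumptions rather than a defined Cortensor interface.

```typescript
// Minimal sketch: sentiment classification as a constrained inference call.
// `complete` stands in for any client call that returns a completion.
type Sentiment = "positive" | "neutral" | "negative";

async function classifySentiment(
  complete: (prompt: string) => Promise<string>,
  text: string,
): Promise<Sentiment> {
  const prompt =
    `Classify the sentiment of the following text as exactly one of: ` +
    `positive, neutral, negative.\n\nText: ${text}\n\nLabel:`;
  const label = (await complete(prompt)).trim().toLowerCase();
  if (label === "positive" || label === "neutral" || label === "negative") {
    return label;
  }
  // Fall back to neutral when the model output does not match a known class.
  return "neutral";
}
```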

Oracle Services:

  • Offers decentralized oracle services that provide reliable data feeds to smart contracts.

  • Ensures applications receive accurate and verified external data, crucial for blockchain-based operations.
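To give a concrete sense of how an application might consume such a feed, the sketch below reads a value from a hypothetical on-chain feed contract using ethers.js. The contract address, ABI fragment, and method name are illustrative assumptions, not a published Cortensor interface.

```typescript
// Hypothetical sketch: a client reading a value published by a decentralized
// oracle feed contract. ABI, method name, and addresses are assumptions.
import { ethers } from "ethers";

const FEED_ABI = [
  "function latestValue() view returns (int256 value, uint256 updatedAt)",
];

async function readFeed(rpcUrl: string, feedAddress: string) {
  const provider = new ethers.JsonRpcProvider(rpcUrl); // ethers v6
  const feed = new ethers.Contract(feedAddress, FEED_ABI, provider);
  const [value, updatedAt] = await feed.latestValue();
  return { value: value as bigint, updatedAt: updatedAt as bigint };
}

// Example usage (placeholder values):
// const { value } = await readFeed("https://your-rpc-endpoint", "0xFeedContract...");
```

On-chain consumers (smart contracts) would read the same feed directly through a contract call rather than via a JSON-RPC provider.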

AI Marketplace:

  • Facilitates a marketplace where developers can share, sell, and purchase AI models and services.

  • Supports the sharing of fine-tuned models with watermarks to incentivize contributions.

  • Allows for the creation and distribution of prompts to be used with inference sessions.

  • Future plans include integrating AI agents that can interact with the network and be hosted on-chain, enhancing the functionality and reach of AI services.
