Open Source Models

Cortensor leverages open-source models to provide robust and flexible AI inference capabilities. By utilizing these models, Cortensor ensures that the network remains accessible, transparent, and adaptable to various use cases.

Supported Models

Llama 3:

  • Available in both quantized and full-precision versions.

  • Supports a wide range of hardware, from low-end devices to high-end GPUs.

  • Enables broad participation in AI inference tasks by accommodating diverse computational resources.

  • Quantization allows lower-end devices to perform inference tasks, promoting inclusivity and scalability (see the sketch after this list).
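
As a rough illustration of how a quantized build widens hardware support, the sketch below loads a 4-bit GGUF variant of Llama 3 through the open-source llama-cpp-python bindings. The model path and runtime parameters are hypothetical placeholders, and this is not Cortensor's node software, only a minimal example of quantized open-source inference.

```python
# Minimal sketch: running a quantized (4-bit GGUF) Llama 3 build with
# llama-cpp-python. The path and parameters are illustrative, not
# Cortensor-specific; adjust them to the hardware actually available.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,      # context window; smaller values reduce memory use
    n_gpu_layers=0,  # 0 = CPU-only, workable on low-end devices;
                     # raise it to offload layers to a GPU if one is present
)

output = llm(
    "Summarize what model quantization does in one sentence.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```

The same quantized file runs unchanged on CPU-only machines and on GPU hosts (by raising n_gpu_layers), which is the property that lets a heterogeneous pool of nodes serve the same model.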

Future Plans

  • Expansion of Llama 3 Models: Cortensor plans to add more Llama 3-based model variants to enhance the network’s capabilities and provide greater flexibility across tasks.

  • Integration of Additional Open Source Models: Beyond Llama 3, Cortensor is committed to integrating other open-source AI models. This will further diversify the network’s capabilities and ensure it remains at the forefront of AI technology.

Benefits of Open Source Models

  • Transparency: Open-source models allow for greater transparency and trust within the network, as their development and updates are publicly available.

  • Community-Driven Innovation: Leveraging open-source models encourages community contributions and collaboration, driving continuous improvement and innovation.

  • Cost-Effectiveness: Open-source models reduce the cost barriers for implementing advanced AI capabilities, making AI inference more accessible to a broader audience.

  • Flexibility: The use of open-source models ensures that Cortensor can adapt to new advancements and integrate various AI technologies as they evolve.

  • Quantization: Model quantization enables lower-end devices to participate in AI inference, enhancing the network’s inclusivity and resource utilization (see the rough sizing sketch below).
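
To make the quantization point concrete, the back-of-envelope sketch below estimates the weight memory of an 8-billion-parameter model (the Llama 3 8B size) at a few common precisions. These are approximations of weight storage only; real model files add overhead for quantization scales and metadata, and inference also needs memory for activations and the KV cache.

```python
# Back-of-envelope weight-memory estimate for an 8B-parameter model
# at several precisions. Actual files (e.g. GGUF) are somewhat larger.
PARAMS = 8e9  # parameter count, e.g. Llama 3 8B

for name, bits_per_weight in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    gib = PARAMS * bits_per_weight / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB of weights")

# Output: FP16: ~14.9 GiB, Q8: ~7.5 GiB, Q4: ~3.7 GiB
```

Dropping from 16-bit to 4-bit weights cuts the footprint by roughly 4x, which is what moves an 8B model from server-class GPUs into the range of consumer laptops and modest GPUs.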
