# Session, Session Queue, Router, and Miner in Cortensor

Cortensor’s modular architecture enables efficient AI inference through a **microservices-inspired structure**, in which each module acts as a **mediator between two or more parties**. This design ensures **scalability, reliability, and modular communication** across the decentralized AI network.

Each module:

* Stores only **necessary datasets**, reducing data redundancy.
* **Relies on other modules** to fetch relevant data through function calls.
* Facilitates **seamless communication** between different network components.

#### **Examples of Module Interactions**

* **Cognitive Module** – Manages interactions between oracle nodes and miners to maintain network health.
* **Node Stats & Reputation Module** – Tracks performance and reliability of miners by directly gathering real-time data from miners, while oracle nodes assist in coordination and timing measurement.
* **Node Pool Module** – Maintains the state of available miner nodes, updating data through interactions with the node reputation module and node pool agents.

Now, let’s dive into the **Session and Session Queue modules**, which are primarily responsible for **serving user requests efficiently**.

***

### **Session & Session Queue: Serving User Requests**

The **Session Module and Session Queue** are the central components in Cortensor that **handle AI inference requests from users**. They ensure efficient **task assignment and execution** while preserving **data integrity**.

#### **Module Interactions**

* **Router Node ↔ Client** → Handles user interactions, relays requests, and returns responses.
* **Session Module ↔ Router Node** → Manages user session creation and request handling.
* **Session Module ↔ Session Queue Module** → Facilitates task queuing and miner assignment.
* **Session Queue Module ↔ Miners** → Allocates jobs to miners, manages task execution, and ensures balanced workload distribution.
* **Session Queue Module ↔ Router Node** → Forwards inference results received from miners back to the Router Node for client delivery.
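The pairings above can be sketched as plain objects delegating to one another. The following is a minimal illustrative sketch only; every class and method name is hypothetical and not part of Cortensor's actual interfaces.

```python
class SessionQueue:
    """Toy stand-in: queues tasks per session for later dispatch to miners."""
    def __init__(self):
        self.tasks = {}

    def enqueue(self, session_id, task):
        self.tasks.setdefault(session_id, []).append(task)

class SessionModule:
    """Toy stand-in: tracks sessions and forwards requests to the queue."""
    def __init__(self, queue):
        self.queue = queue
        self.sessions = set()

    def create_session(self, session_id):
        self.sessions.add(session_id)

    def submit(self, session_id, request):
        if session_id not in self.sessions:
            raise KeyError(f"unknown session {session_id}")
        self.queue.enqueue(session_id, request)

class RouterNode:
    """Toy stand-in: the client-facing entry point that relays requests."""
    def __init__(self, session_module):
        self.session_module = session_module

    def handle(self, session_id, request):
        self.session_module.submit(session_id, request)
```

A client request then traverses exactly the chain listed above: Router Node → Session Module → Session Queue.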

#### **User Request Flow in Cortensor**

The **Session Module** plays a vital role in creating and managing AI inference sessions. Below is a **high-level workflow** of how Cortensor handles user requests:

1. **Session Creation**
   * A **user initiates a session** through a **router node**.
   * The **router node forwards the session request** to the **Session Module**.
2. **Node Selection & Reservation**
   * The **Session Module queries the Node Pool** to find **available ephemeral nodes**.
   * **Selected nodes are reserved** in the **Session Queue** for the session.
3. **User Request Processing Begins**
   * Once a session is **fully assigned**, it is ready to process **inference requests**.
4. **User Submits Inference Request**
   * The user sends a **request via the Router Node** (either through **REST API** or directly via **smart contract**).
   * The **Router Node relays the request** to the **Session Module**.
5. **Task Assignment & Dispatch**
   * The **Session Module forwards the request** to the **Session Queue**.
   * The **Session Queue dispatches tasks** to the **miners** assigned to that session.
6. **Event Emission for Miners**
   * The **Session Module emits task events**, notifying miners in the session.
   * **Miners listen** to these events and begin working on the **inference request**.
7. **Miners Process & Relay Inference Data**
   * Miners execute inference and **stream results per token processed**.
   * **WebSockets relay the inference data** to the **Router Node**.
   * The **Router Node forwards the processed response** to the **Client SDK** through **REST Streaming or WebSockets**.
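Steps 1–7 can be condensed into a short runnable sketch. Everything here is illustrative: the pool contents, the reservation count, and the token-streaming format are assumptions for demonstration, not Cortensor's actual implementation.

```python
node_pool = ["miner-a", "miner-b", "miner-c"]  # ephemeral nodes (step 2)
reservations = {}                              # session_id -> reserved miners
task_queue = []                                # pending (session_id, prompt) tasks

def create_session(session_id, n_nodes=2):
    """Steps 1-3: reserve ephemeral nodes from the pool for the session."""
    reservations[session_id] = [node_pool.pop() for _ in range(n_nodes)]

def submit_request(session_id, prompt):
    """Steps 4-6: the router relays the request and a task event is queued."""
    task_queue.append((session_id, prompt))

def stream_tokens(prompt):
    """Step 7: a miner yields results one token at a time, mimicking
    per-token streaming relayed over WebSockets."""
    for token in prompt.split():
        yield token

def process_tasks():
    """Each miner reserved for the session handles the task; the router
    would then forward the streamed tokens to the client."""
    results = {}
    while task_queue:
        session_id, prompt = task_queue.pop(0)
        for miner in reservations[session_id]:
            results.setdefault(session_id, {})[miner] = list(stream_tokens(prompt))
    return results
```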

#### **Data Integrity & Validation**

To ensure **reliable inference data**, miners **commit their results in two steps**, similar to the **Cognitive Module’s state machine validation process**.

8. **Precommit Stage**
   * Miners **generate a hash** of the inference data and submit it.
   * This ensures **commitment without exposing the actual inference result**.
9. **Commit Stage**
   * Miners **submit the actual inference data** to the **Session Queue**.
   * The **Session Queue validates** the inference data before marking it as **finalized**.
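The two-stage commit described above follows the general hash-based commit-reveal pattern. Below is a minimal sketch assuming SHA-256 and a per-task salt; the actual hash function and validation rules used by Cortensor may differ.

```python
import hashlib

def precommit(inference_data: bytes, salt: bytes) -> str:
    """Stage 1: publish only a hash, committing to the result without
    revealing the inference data itself."""
    return hashlib.sha256(salt + inference_data).hexdigest()

def validate_commit(inference_data: bytes, salt: bytes, committed_hash: str) -> bool:
    """Stage 2: recompute the hash over the revealed data and finalize
    only if it matches the earlier precommit."""
    return hashlib.sha256(salt + inference_data).hexdigest() == committed_hash
```

Because the precommit binds the miner to a specific result, any data altered between the two stages fails validation.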

***

### **Node Pool & Ephemeral Node State**

The **Node Pool** is responsible for managing **ephemeral state miners**—nodes that are available for AI inference tasks.

* **Ephemeral State**: A node **enters the ephemeral state** once it has passed **Cognitive Module validation** and is **ready to serve inference tasks**.
* **Session Assignment**: When a **session is created**, the **Session Module selects ephemeral nodes** from the **Node Pool**.
* **Node Marking & Release**:
  * **When a node is assigned** to a session, it is marked as **reserved**.
  * **Once the session is completed**, the node is **re-tested by the Cognitive Module** before being returned to the **ephemeral pool**.

This mechanism ensures **only high-quality nodes** serve user requests, maintaining **performance integrity** across the network.
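The lifecycle above amounts to a small state machine. The sketch below encodes it; the state and event names are illustrative, not Cortensor's actual terminology.

```python
# Allowed transitions in the node lifecycle described above.
TRANSITIONS = {
    ("validating", "passed"):   "ephemeral",   # passed Cognitive Module checks
    ("ephemeral",  "assigned"): "reserved",    # selected for a session
    ("reserved",   "released"): "validating",  # session done, re-tested before reuse
}

def step(state, event):
    """Advance a node through its lifecycle; invalid transitions raise."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {state!r} + {event!r}")
```

Note the loop: a released node does not return directly to the ephemeral pool but goes back through Cognitive Module validation first.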

***

### **Current Design & Future Considerations**

#### **Current State**

* **Session & Session Queue handle AI inference requests.**
* **Ephemeral Nodes dynamically serve user requests** with built-in integrity checks.
* **Router Nodes manage user interactions** and relay responses efficiently.

#### **Future Enhancements**

* **TBA**

***

### **Conclusion**

The **Session Module, Session Queue, Router, and Miner nodes** form the backbone of **Cortensor’s decentralized AI inference framework**. Through **modular interactions, real-time event processing, and a secure validation system**, Cortensor ensures **scalable, efficient, and reliable AI inference** while maintaining **network integrity**.
