# /delegate & /validate v4 Spec – Programmable Consensus & Receipts

> This builds on **v3** (explicit redundancy & consensus) and assumes `/delegate` and `/validate` v3 already return per-replica results plus a consensus block. v4 adds **programmable consensus** and **verifiable receipts** that external systems can depend on without re-running all work.

***

#### 4.0 Goals

v4 for `/delegate` and `/validate` aims to:

* Turn consensus from “a router-side detail” into a **programmable contract**.
* Produce **stable receipts** that:
  * Summarize the verdict/result.
  * Reference **all underlying computation evidence** (session + task IDs).
  * Can be checked later by external agents / protocols.
* Keep the **base v2/v3 contracts intact** so existing clients can ignore v4 features if they don’t need them.

***

#### 4.1 Programmable Consensus (v4 Request Shape)

On top of v3’s consensus block (replicas, session pool, strategy, disagreement policy), v4 lets callers **describe how consensus should be formed**, not just how many replicas to run.

Conceptually, the v4 payload adds a `consensus` section like:

* `replicas` – still 1 / 3 / 5 (same as v3).
* `session_pool` – candidate `session_id` list the router can choose from.
* `strategy` – now programmable:
  * `"majority"` (default)
  * `"weighted"`
  * `"median_of_means"`
  * `"ensemble_model"` (or similar)
* `weight_by` (optional) – hints for weighted strategies:
  * `"reputation"` (validator/miner performance)
  * `"stake"` (staked COR for this node/session)
  * `"latency"` (prefer fast replicas)
  * `"cost"` (prefer cheaper replicas)
* `disagreement_policy` – how to handle diverging results:
  * `"return_all"` – surface all replica outputs + let caller decide.
  * `"fail_hard"` – treat strong disagreement as an error.
  * `"best_effort"` – still produce a single consensus + attach disagreement metrics.
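Concretely, a v4 request carrying this `consensus` section might look like the following sketch (a Python dict standing in for the JSON body). Field names mirror the conceptual list above, but the surrounding payload shape — e.g. the `claim` field and session ID format — is an assumption for illustration:

```python
# Hypothetical v4 /validate request body with a programmable consensus block.
# Field names follow the conceptual spec; the wrapping payload is an assumption.
v4_validate_request = {
    "claim": "Model X scored above 0.9 on the benchmark.",
    "consensus": {
        "replicas": 5,                        # 1 / 3 / 5, as in v3
        "session_pool": [                     # candidate sessions the router may pick from
            "sess-101", "sess-102", "sess-103", "sess-104", "sess-105",
        ],
        "strategy": "weighted",               # majority | weighted | median_of_means | ...
        "weight_by": "reputation",            # reputation | stake | latency | cost
        "disagreement_policy": "best_effort", # return_all | fail_hard | best_effort
    },
}
```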

Key points:

* **v3**: you pick `replicas` and basic aggregation.
* **v4**: you specify **how** aggregation should work and which tradeoffs to prefer (cost vs reputation vs latency, etc.).

This applies symmetrically to:

* `/delegate` v4 – programmable consensus over multiple execution runs.
* `/validate` v4 – programmable consensus over multiple validation runs.
* `factcheck` v4 – programmable consensus over multiple fact-checking runs.

***

#### 4.2 Programmable Consensus – Router Behaviour

Given a v4 request with a programmable `consensus` block, the router:

1. **Selects replicas**
   * Chooses up to `replicas` sessions from `session_pool` that are healthy and correctly configured for the requested task.
   * May apply internal heuristics (e.g., node spec compatibility, reputation filters) as long as they are consistent with `strategy` and `weight_by`.
2. **Executes runs**
   * For each selected `session_id`, it runs the logical task:
     * `/delegate`: executes the workflow.
     * `/validate`: checks a claim.
     * `factcheck`: checks facts / external references.
3. **Collects raw evidence**
   * For each replica it records:
     * `session_id`, `task_id` (or equivalent execution handle).
     * Result / verdict.
     * Optional score, confidence, or metrics.
     * Timing, cost, model metadata.
4. **Applies strategy**
   * Uses the requested `strategy` and `weight_by` to compute:
     * A consolidated verdict / result.
     * Agreement rate and aggregate confidence.
   * Examples:
     * Majority: “2 of 3 VALID.”
     * Weighted by reputation: high-reputation replicas have more influence.
     * Median-of-means: mitigates outliers in numeric scores.
5. **Applies disagreement policy**
   * Depending on `disagreement_policy`, the router:
     * Returns all replica outputs if requested.
     * Throws an explicit “consensus failure” if disagreement is too high.
     * Still returns a single answer but surfaces disagreement metrics for the caller.

The core idea: **agents describe the consensus they want; the router implements it and returns both the final result and the per-replica evidence.**
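Steps 4–5 can be sketched as a small aggregation routine. This is a minimal illustration of majority/weighted voting plus a `fail_hard` check, not the router's actual implementation:

```python
def aggregate(replica_results, weights=None):
    """Combine per-replica verdicts into a single verdict plus an agreement rate.

    With no weights this is plain majority voting; passing weights (e.g. derived
    from reputation or stake) turns it into a weighted vote.
    """
    if weights is None:
        weights = [1.0] * len(replica_results)
    totals = {}
    for result, weight in zip(replica_results, weights):
        totals[result["verdict"]] = totals.get(result["verdict"], 0.0) + weight
    verdict = max(totals, key=totals.get)
    agreement = totals[verdict] / sum(totals.values())
    return {"verdict": verdict, "agreement": round(agreement, 2)}


def apply_disagreement_policy(consensus, policy, min_agreement=0.5):
    """fail_hard: raise on weak consensus; best_effort: return it with its metrics."""
    if policy == "fail_hard" and consensus["agreement"] < min_agreement:
        raise RuntimeError("consensus failure: agreement below threshold")
    return consensus


replicas = [
    {"session_id": "s1", "verdict": "VALID"},
    {"session_id": "s2", "verdict": "VALID"},
    {"session_id": "s3", "verdict": "INVALID"},
]
result = apply_disagreement_policy(aggregate(replicas), "best_effort")
# result carries both the verdict ("2 of 3 VALID") and the agreement rate
```

A weighted run would pass reputation-derived weights, e.g. `aggregate(replicas, weights=[0.2, 0.2, 0.9])`, letting a single high-reputation replica outvote two weaker ones.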

***

#### 4.3 v4 Response – Consensus Block (Extended)

Building on v3, each v4 response includes a richer `consensus` block.

Conceptual fields:

* `replicas` – number of runs actually executed.
* `agreement` – fractional agreement (e.g. `0.67` for 2 of 3).
* `strategy` – the strategy actually used (may echo the request or indicate a fallback).
* `weight_by` – which weighting dimension (if any) was applied.
* `confidence` – derived confidence score for the final verdict/result.
* `disagreement_policy` – how disagreements were handled.
* `replica_results` – array of per-replica summaries:
  * `session_id`
  * `task_id`
  * `verdict` or `status`
  * optional: numeric score, cost, latency, model info.

Agents no longer see “just an answer”; they see **how** that answer was formed.
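Put together, an extended consensus block might look like this (all values invented for illustration; field names follow the list above):

```python
# Illustrative v4 response consensus block; values are made up.
consensus_block = {
    "replicas": 3,
    "agreement": 0.67,                  # 2 of 3 replicas agreed
    "strategy": "weighted",
    "weight_by": "reputation",
    "confidence": 0.81,                 # derived confidence for the final verdict
    "disagreement_policy": "best_effort",
    "replica_results": [
        {"session_id": "sess-101", "task_id": "task-9001", "verdict": "VALID",   "score": 0.92},
        {"session_id": "sess-102", "task_id": "task-9002", "verdict": "VALID",   "score": 0.88},
        {"session_id": "sess-103", "task_id": "task-9003", "verdict": "INVALID", "score": 0.41},
    ],
}
```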

***

#### 4.4 Receipts – External, Verifiable Summary Artifacts

The main v4 addition is **receipts**: stable, addressable artifacts that summarize **what happened**, **what the verdict/result was**, and **where the evidence lives**.

You can think of a receipt as a “single-page summary” for external systems.

**4.4.1 Receipt Production Model**

* A v4 `/delegate` or `/validate` call **can request that a receipt be produced** (or follow a policy that always produces receipts for high-risk tasks).
* The router constructs a **receipt payload** that includes:
  * **High-level summary** (human / machine-readable).
  * **Final verdict/result** (for `/validate`) or output summary (`/delegate`).
  * **Consensus metadata** (the v4 `consensus` block).
  * **References to original computations**:
    * For each replica: `(session_id, task_id)` or equivalent handles.
    * Optional: key model metadata, timings, costs.
* The router stores the receipt in an offchain store (similar to offchain payload v2) and returns:
  * A stable `receipt_id` (URN-style) to the caller.
  * Optionally, an inline short summary plus the handle to fetch the full receipt later.

Receipts are **not** the same as the raw consensus responses; they are **summarized, normalized views** designed for external verification and cross-system reference.
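From the caller's perspective, the receipt-bearing part of a v4 response might look like this minimal excerpt (the URN format and field names are assumptions, not confirmed wire fields):

```python
# Hypothetical excerpt of a v4 response after receipt production.
# The receipt_id URN format and field names are assumptions.
response_excerpt = {
    "verdict": "VALID",
    "consensus": {"replicas": 3, "agreement": 0.67},
    "receipt_id": "urn:cortensor:receipt:7c1f0e2a",  # stable, addressable handle
    "receipt_summary": "VALID by weighted consensus, 2 of 3 replicas agreeing",
}
```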

**4.4.2 Receipt Session / Task**

Internally, generation of a receipt can:

* Reuse an internal **“receipt session”** (a dedicated session configured for summarization/structuring tasks), or
* Be a **router-native computation** (no extra session, just structured assembly).

Either way, the receipt must:

* Include **original request context**:
  * `original_session_id`
  * `original_task_id` (if applicable)
  * endpoint type (`delegate`, `validate`, `factcheck`, etc.)
* Include **all evidence links**:
  * The list of `(session_id, task_id)` for each replica that produced the underlying evidence.
* Optionally include:
  * Hashes of the raw outputs.
  * Policy tier / risk profile that was used.
  * Timestamps for the consensus run.

This makes a receipt a **navigable map** back to the full evidence inside Cortensor.
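As a sketch, receipt assembly from the original request context and per-replica evidence could look like the following. Field names follow the lists above; the `receipt_id` derivation and URN prefix are assumptions:

```python
import hashlib
import json
import time


def build_receipt(original, replica_results, raw_outputs):
    """Assemble a receipt-like dict: original request context, evidence links
    for every replica, and hashes of the raw outputs."""
    evidence = [(r["session_id"], r["task_id"]) for r in replica_results]
    digest = hashlib.sha256(json.dumps(evidence, sort_keys=True).encode()).hexdigest()
    return {
        "receipt_id": "urn:cortensor:receipt:" + digest[:16],  # derivation assumed
        "endpoint": original["endpoint"],                      # delegate | validate | factcheck
        "original_session_id": original["session_id"],
        "original_task_id": original.get("task_id"),
        "evidence": evidence,                                  # (session_id, task_id) per replica
        "output_hashes": [hashlib.sha256(o.encode()).hexdigest() for o in raw_outputs],
        "created_at": int(time.time()),
    }


receipt = build_receipt(
    {"endpoint": "validate", "session_id": "sess-001", "task_id": "task-42"},
    [{"session_id": "s1", "task_id": "t1"}, {"session_id": "s2", "task_id": "t2"}],
    ["VALID", "VALID"],
)
```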

***

#### 4.5 Receipt Retrieval & External Verification

Once a receipt is created, **external agents / protocols** can:

* Fetch the receipt via a dedicated read endpoint (e.g. `/api/v4/receipt/{receipt_id}` or similar).
* Inspect:
  * Final verdict/result (for `/validate`).
  * Consensus block (agreement, strategy, confidence).
  * Links to all original computations (session + task IDs).
* Optionally re-validate:
  * By re-running the referenced tasks (if policy allows).
  * Or by checking onchain / offchain anchors if receipts are bridged via agents like Corgent/Bardiel.

Important: v4 does **not** require external systems to speak Cortensor session/task IDs directly, but receipts make those IDs **visible and linkable**, so advanced integrations can follow them if they wish.
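An external verifier that has fetched the referenced replica outputs could re-check them against a receipt, assuming the receipt carries per-output hashes (an optional field noted in 4.4.2). A minimal sketch:

```python
import hashlib


def verify_receipt(receipt, fetched_outputs):
    """Check fetched replica outputs against the hashes recorded in a receipt.

    Assumes the receipt includes an "output_hashes" list (optional per the spec).
    """
    return len(fetched_outputs) == len(receipt["output_hashes"]) and all(
        hashlib.sha256(out.encode()).hexdigest() == expected
        for out, expected in zip(fetched_outputs, receipt["output_hashes"])
    )


receipt = {"output_hashes": [hashlib.sha256(b"VALID").hexdigest()]}
ok = verify_receipt(receipt, ["VALID"])        # matches the recorded hash
tampered = verify_receipt(receipt, ["INVALID"])  # hash mismatch
```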

***

#### 4.6 Relationship to Corgent & Bardiel

* **Corgent**
  * Uses `/delegate` and `/validate` v4 as **infrastructure primitives** for ERC-8004 and other agent ecosystems.
  * For high-stakes flows, Corgent can:
    * Request programmable consensus (e.g. 5 replicas, weighted by reputation).
    * Request receipts for each validation.
    * Expose `receipt_id` (or a wrapped version) into ERC-8004-compatible contracts.
  * External agents can then reference **Corgent-backed receipts** as the trust layer for onchain or cross-agent decisions.
* **Bardiel**
  * Uses the same v4 endpoints but:
    * Adds **Virtual-native UX and recipes** (GAME/ACP flows).
    * May generate **user-facing receipt views** (dashboard cards) backed by the same `receipt_id`.
  * For Virtual/ACP ecosystems, Bardiel can treat receipts as:
    * Checkpoints in complex agent workflows.
    * Shared evidence objects for dispute resolution or multi-agent coordination.

In both cases, receipts are **Cortensor-native artifacts** that Corgent and Bardiel can wrap, relay, or index without re-doing the heavy consensus logic themselves.

***

#### 4.7 What v4 Adds on Top of v3

Quick comparison:

* **v3**
  * Adds **explicit redundancy and consensus**:
    * `replicas` (1 / 3 / 5)
    * `session_pool`, `aggregation`, `disagreement_policy`
  * Returns **consensus metadata** and all per-replica evidence in the response.
* **v4**
  * Adds **programmable consensus strategies**:
    * `strategy`, `weight_by`, and more advanced aggregation options.
  * Introduces **receipts**:
    * Stable, addressable summaries referencing all underlying sessions/tasks.
    * Built to be fetched and inspected by external agents/protocols.
  * Connects consensus and receipts to **Cortensor’s broader trust fabric**:
    * Corgent and Bardiel can use them as primary hooks for ERC-8004, Virtual, and A2A ecosystems.

Tagline:

> **v3** makes consensus explicit and agent-visible.\
> **v4** makes consensus programmable and **portable** via receipts that carry both the verdict and its evidence graph.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cortensor.network/community-and-ecosystem/products-and-agents/corgent/delegate-and-validate-v4-spec-programmable-consensus-and-receipts.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
