
s3: RL Framework for Training Search Agents

This blog post summarizes a scientific paper that introduces s3, a novel framework for training search agents using reinforcement learning (RL) with minimal data. s3 focuses on optimizing retrieval-augmented generation (RAG) systems by decoupling the searcher from the generator and using a new reward function called Gain Beyond RAG (GBR).

Problem Definition

- Existing RAG systems have limitations:

- They rely on search-only metrics (e.g., NDCG) that don't account for downstream generation quality.

- They fine-tune the entire large language model (LLM), which entangles retrieval with generation.

- Classic RAG systems use static retrieval methods with fixed queries, which often underperform on complex reasoning tasks.

- Pre-RL-Zero methods let the LLM take part in retrieval at inference time (e.g., by issuing queries during generation), but they have no trainable components and no outcome-based optimization.

- End-to-end approaches train LLMs to retrieve and generate jointly, but they require full access to model weights, which rules out frozen or proprietary (black-box) generators.

Proposed Solution

- s3 is a lightweight, model-agnostic framework that addresses these issues:

- It decouples the searcher from the generator.

- It trains the searcher using the Gain Beyond RAG (GBR) reward.

- The GBR reward measures the improvement in generation accuracy over naive RAG.

- The framework employs a multi-turn search-select loop (a code sketch follows at the end of this section):

1. Query Generation: The searcher emits a query.

2. Search: Documents are retrieved using the generated query.

3. Select: The searcher keeps the subset of retrieved documents it judges useful.

4. Stop decision: The model decides whether to continue searching.

- The search policy is optimized via reinforcement learning using Proximal Policy Optimization (PPO).

- s3 is designed to optimize retrieval quality using a generation-aware reward, enabling lightweight and modular training compatible with black-box LLMs.

- A key component of s3 is the Gain Beyond RAG (GBR) reward, defined as:

- GBR(Q) = Acc(G(Q, D_s3), A) - Acc(G(Q, D_RAG), A)

- Where Acc(·) is a task-specific accuracy metric, G is the frozen generator LLM, A is the gold-standard answer, D_RAG is the top-k retrieval for the original question, and D_s3 is the set of documents selected by the searcher.
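
To make the reward concrete, here is a minimal sketch of how GBR could be computed for one training question. The helper names (generate, accuracy, naive_rag_docs) are illustrative assumptions, not the paper's actual API; only the structure follows the definition above.

```python
def gain_beyond_rag(question, gold_answer, selected_docs,
                    generate, accuracy, naive_rag_docs):
    """Compute the GBR reward for one training question.

    Hypothetical stand-ins (not the paper's API):
      generate(question, docs) -> str        # frozen black-box generator G
      accuracy(pred, gold) -> float          # task metric Acc, e.g. GenAcc
      naive_rag_docs(question) -> list[str]  # top-k retrieval for the raw question (D_RAG)
    selected_docs is the document set chosen by the s3 searcher (D_s3).
    """
    acc_s3 = accuracy(generate(question, selected_docs), gold_answer)
    acc_rag = accuracy(generate(question, naive_rag_docs(question)), gold_answer)
    # The reward is the gain over naive RAG; it is negative when the
    # searcher's selection hurts the generator relative to the baseline.
    return acc_s3 - acc_rag
```

Because the D_RAG term depends only on the question, it can be computed once and cached, so each policy rollout costs a single extra generator call.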
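
Below is a minimal sketch of the multi-turn search-select loop listed earlier, under the assumption that the searcher exposes separate query, selection, and stop decisions. The function and parameter names (propose_query, retrieve, select_useful, should_stop, max_turns) are illustrative, not taken from the s3 codebase.

```python
def search_select_loop(question, propose_query, retrieve, select_useful,
                       should_stop, max_turns=4):
    """One episode of the multi-turn search-select loop.

    Hypothetical stand-ins (not the s3 codebase):
      propose_query(question, history) -> str     # searcher emits the next query
      retrieve(query) -> list[str]                # top-k documents for the query
      select_useful(question, docs) -> list[str]  # searcher keeps useful docs
      should_stop(question, history) -> bool      # searcher's stop decision
    """
    selected, history = [], []
    for _ in range(max_turns):
        query = propose_query(question, history)   # 1. Query generation
        docs = retrieve(query)                     # 2. Search
        useful = select_useful(question, docs)     # 3. Select
        selected.extend(useful)
        history.append((query, useful))
        if should_stop(question, history):         # 4. Stop decision
            break
    # The selected documents (D_s3) go to the frozen generator; the
    # episode-level reward for PPO is the GBR score sketched above.
    return selected
```

In the paper, the query generation, selection, and stop decision are all produced by the same searcher LLM whose policy PPO optimizes, while the retriever and the generator stay frozen.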

Results

- s3 demonstrates strong performance across various benchmarks:

- It achieves state-of-the-art results on six general QA and five medical QA benchmarks.

- Data efficiency is a key advantage:

- s3 requires only 2.4k training samples to outperform baselines trained on over 170k samples.

- Ablation studies show the importance of key components:

- Removing the selection step degrades performance.

- Disabling "Begin with Search" (seeding the loop with an initial retrieval for the original question) consistently causes a significant drop in performance.

- The framework was evaluated using the Generation Accuracy (GenAcc) metric, which combines a fast span-matching test with an LLM-based correctness check.

- GenAcc = spancheck ∨ judgecheck
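
A rough sketch of how that disjunction could be composed is shown below; the span normalization and the judge_check callable are illustrative placeholders, not the paper's exact implementation.

```python
import re

def span_check(prediction: str, gold_answers: list[str]) -> bool:
    """Cheap test: does any normalized gold answer appear as a span of the prediction?"""
    def normalize(s: str) -> str:
        return re.sub(r"\s+", " ", s.lower()).strip()
    pred = normalize(prediction)
    return any(normalize(ans) in pred for ans in gold_answers)

def gen_acc(prediction: str, gold_answers: list[str], judge_check) -> bool:
    """GenAcc = span_check OR judge_check.

    judge_check(prediction, gold_answers) -> bool is a placeholder for the
    LLM-based correctness judge. With Python's short-circuiting `or`, the
    judge is only queried when the fast span test fails.
    """
    return span_check(prediction, gold_answers) or judge_check(prediction, gold_answers)
```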

Importance

- s3 offers a modular approach to RAG:

- It separates search and generation, allowing the retrieval component to be optimized for any black-box LLM.

- The framework achieves targeted search policy learning, yielding substantial gains in both efficiency and generalization.

- The results indicate that s3 reaches peak performance with relatively few turns, demonstrating rapid learning of focused queries.

- s3 can also benefit from larger-scale training, making it a flexible framework that performs well both in low-resource and high-resource settings.
