Anonymous Supplementary Materials

Image Materials

Supplementary Figures and Illustrations

R1.1 Incomplete Task Execution

[Image: Trash]

R1.2 Deadlock due to Orientation

[Image: upper_hand]

R1.3 Object Drop Failure

[Image: drop]

R4.1 Curvature Histogram

[Figure: Curvature Histogram]

R4.2 Trajectory Examples

[Figure: Trajectory Examples]

R4.3 Within-Observation Panel

[Figure R1: Within-Observation Panel]

R4.4 Tau by Success/Failure

[Figure: Tau by Success/Failure]

R4.5 Per-Stage Violin

[Figure: Per-Stage Violin]

Video Materials

Supplementary Video Results

R4.6.1 Cook Hotdogs - Success

When the robot moves toward the cabinet storing the hotdogs, the first potential peak appears around 25 seconds. During the attempt to open the door, the arm is oriented incorrectly, and the potential drops to a trough around 52 seconds. When the robot finds the correct way to open the door and reaches toward the hotdogs, a second potential peak appears around 74 seconds. Picking up the first hotdog produces another peak around 130 seconds, and picking up the second hotdog around 160 seconds produces a further peak. Afterward, the robot moves toward the microwave, with the potential remaining high. While placing the hotdogs, the potential dips briefly, reflecting intermediate adjustments and retries. Once the robot has placed both hotdogs into the microwave and initiates the door-closing action, another potential peak appears around 400 seconds. After further attempts, it turns on the switch around 417 seconds, reaching the final potential peak. The task is then completed, and the robot retracts its arms, causing the potential to decrease.

R4.6.2 Cook Hotdogs - Failure

When the robot moves toward the cabinet storing the hotdogs, the first potential peak appears around 10 seconds. During the attempt to open the door, the arm is oriented incorrectly, leading to a trough in potential around 32 seconds. When the robot finds the correct way to open the door and reaches toward the hotdog, a second potential peak appears around 73 seconds, and the potential remains high during the door-opening process. However, when the robot attempts to grasp the hotdog, the gripper is positioned incorrectly, causing the potential to drop sharply. It then fails to pick up the hotdog, and throughout the repeated unsuccessful attempts, the potential remains at a low level.

R4.6.3 Picking Up Trash - Success

In the initial stage, the robot scans its surroundings, and the potential remains low. After locating the trash bin, it approaches and picks it up, reaching the first potential peak around 29 seconds. After a brief adjustment, it stabilizes the grasp on the trash bin, leading to a second potential peak around 41 seconds. It then moves toward the soda cans, with the potential remaining high. During the three instances of picking up the soda cans, the potential reaches smaller peaks around 67 seconds, 86 seconds, and 106 seconds. After completing the task, the robot places the trash bin on the ground, and the potential begins to decrease.

R4.6.4 Picking Up Trash - Failure

In the initial stage, the robot scans its surroundings, and the potential remains low. After locating the trash bin, it moves toward it, leading to a peak in potential. Around 43 seconds, it picks up the trash bin, reaching another peak in potential. It then approaches the soda cans, with the potential remaining high. During the two instances of picking up soda cans, the potential reaches smaller peaks around 78 seconds and 100 seconds. However, the robot forgets the existence of the third can. Around 107 seconds, it places the trash bin on the ground, resulting in a significant drop in potential, which then remains low until the episode terminates due to the time limit.

R4.6.5 Spraying Fruit Trees - Success

In the initial stage, the robot scans its surroundings, and the potential remains low. When it discovers and picks up the spray bottle, the potential reaches a peak around 19 seconds. It then carries the spray bottle toward the first target tree, during which the potential remains high. Because the potential stays at a relatively low level during the spraying itself, this peak ends when the spray bottle first makes contact with the tree. After completing the first spraying, the robot turns off the spray bottle, leading to a potential peak around 105 seconds. When the robot begins moving toward the second tree, the potential rises again and remains high. Similarly, it reaches another potential peak around 187 seconds, at the moment it begins spraying the tree, and the task is completed. Afterward, it continues circling while spraying, and the potential begins to decrease.

R4.6.6 Spraying Fruit Trees - Failure

In the initial stage, the robot scans its surroundings, and the potential remains low. When it discovers and picks up the spray bottle, the potential reaches a peak around 27 seconds. However, the spray bottle slips from its grasp, causing the potential to drop sharply starting at 28 seconds. During the subsequent attempts to pick it up again, there is a brief rise in potential around 86 seconds, but the attempt fails and the potential decreases further. After that, the robot tries to spray the tree without carrying the spray bottle, so the potential remains at a low level throughout.

R4.6.7 Turning on Radio - Success

In the initial stage, the robot scans its surroundings, and the potential fluctuates at a low level. After locating the radio, it approaches it, leading to a peak in potential around 14 seconds. After several attempts, it picks up the radio around 27 seconds, and the potential rises rapidly. It then adjusts the position of the radio and begins trying to turn on the switch, with the potential remaining high. Around 69 seconds, it turns on the radio, reaching a peak in potential and completing the task. The subsequent process of putting the radio back is not part of the task, so the potential correspondingly decreases.

R4.6.8 Turning on Radio - Failure

In the initial stage, the robot scans its surroundings, and the potential fluctuates at a low level. After locating the radio, it moves toward it, leading to a peak in potential around 21 seconds. However, starting at around 48 seconds, the subsequent attempts begin to deviate from a reasonable execution strategy, and the potential drops sharply. The robot then makes multiple low-quality attempts, during which the potential remains consistently low. At 142 seconds, it knocks over the radio, reaching a trough in potential. After that, it continues making low-quality attempts, and the potential stays at a low level.

R4.6.9 Wash a Baseball Cap - Success

In the initial stage, the robot scans its surroundings to locate and approach the washing machine, while the potential remains at a relatively high level. Around 35 seconds, it opens the washing machine door, reaching a peak in potential. After adjusting its direction, it detects the baseball caps, leading to another potential peak around 52 seconds. After several attempts, it picks up the two baseball caps around 98 seconds and 132 seconds, producing two additional potential peaks. It then places the two baseball caps into the washing machine in succession, resulting in potential peaks around 142 seconds and 161 seconds. At 188 seconds, it closes the washing machine door, reaching another potential peak. At 207 seconds, it turns on the washing machine, reaching yet another potential peak. Afterward, it retracts its arms, and the potential correspondingly decreases.

R4.6.10 Wash a Baseball Cap - Failure

In the initial stage, the robot scans its surroundings to locate and approach the washing machine, while the potential remains at a relatively high level. Around 31 seconds, it opens the washing machine door, reaching a peak in potential. After several attempts, it picks up two baseball caps around 68 seconds and 100 seconds, producing two additional potential peaks. It then places the two baseball caps into the washing machine in succession, leading to potential peaks around 118 seconds and 132 seconds. At 226 seconds, it closes the washing machine door, reaching another potential peak. However, afterward the robot forgets to turn on the washing machine and instead continues searching for the baseball caps, causing the potential to drop rapidly and remain at a low level.

Planned Revisions for Camera-Ready Manuscript

Flow-Based Potential Fields (FPF)


We thank all reviewers for the constructive feedback across both rounds of discussion. Below we present the planned revisions for the camera-ready manuscript. Section 1 summarizes the most important changes addressing the key concerns raised by Reviewers 2 and 4. Section 2 provides the complete revision plan with exact original and replacement text for every modification.

Summary of Key Revisions

The revision involves 43 edits across Abstract, Introduction, Preliminaries, Method, Experiments, Conclusion, and Appendix. The changes fall into four categories.

Theoretical Framing Correction (Reviewer 4)

Concern. The paper cited OT-CFM (optimal-transport conditional flow matching) to justify the one-step estimator, but the implementation uses standard CFM with independent noise-data sampling, not OT coupling. Theorem A.1's original premise was therefore incorrect.

Revision.

  • All references to “optimal-transport conditional flow matching,” “OT probability path,” and “straight-path structure of OT-CFM” are removed (7 locations across Abstract, Introduction, Preliminaries, Method, and Appendix).
  • Theorem A.1 is restated under the correct premises: (i) independent sampling (\(\mathbf{x}_0 \perp \mathbf{x}_1 \mid \mathbf{c}\)), (ii) deterministic linear interpolation, and (iii) the population-level \(L^2\) minimizer. The proof steps are unchanged; only the premise label was incorrect.
  • A new Remark distinguishes the population-level optimum \(\mathbf{v}^*\) from the learned \(\mathbf{v}_\theta\) and notes that the main practical gap is finite network capacity, citing empirical validation (cross-state \(\rho = 0.997\) with NFE=20; within-observation Kendall \(\tau = 0.863\) against NFE=100).

Corrected Theorem A.1. Success Potential Recovery under Conditional FM

Consider conditional flow matching with independent noise-data sampling, where \(\mathbf{x}_0 \sim p_0\) and \(\mathbf{x}_1 \sim p_{\text{data}}(\cdot | \mathbf{c})\) are drawn independently for each context \(\mathbf{c}\), and deterministic linear interpolation \(\mathbf{x}_\sigma = (1-\sigma)\mathbf{x}_0 + \sigma \mathbf{x}_1\) (zero path variance). Define the population-level optimal velocity field as the conditional expectation:

$$ \mathbf{v}^*(\mathbf{x}, \sigma, \mathbf{c}) \triangleq \mathbb{E}[\mathbf{x}_1 - \mathbf{x}_0 \mid \mathbf{x}_\sigma = \mathbf{x}, \mathbf{c}], $$

which coincides with the \(L^2\) minimizer of \(\mathcal{L}_{\mathrm{CFM}}\) for almost every \((\mathbf{x}, \sigma)\) under the training distribution. Then at \(\sigma = 0\):

$$ \mathbf{x}_0 + \mathbf{v}^*(\mathbf{x}_0, 0, \mathbf{c}) = \mathbb{E}_{\mathbf{x}_1 \sim p_{\text{data}}(\cdot|\mathbf{c})}[\mathbf{x}_1] \quad \text{for } p_0\text{-a.e. } \mathbf{x}_0. $$

In particular, the progress component recovers the dataset-conditional success probability \(\mathbb{E}[y \mid \mathbf{c}] = P_{\mathcal{D}}(\mathrm{Success} \mid \mathbf{c})\).
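As a sanity check, the conditional-mean property can be verified numerically. The following is a toy Monte Carlo sketch (1-D data, illustrative names; not the paper's code): under independent sampling, the binned conditional mean of the regression target \(\mathbf{x}_1 - \mathbf{x}_0\), shifted back by \(\mathbf{x}_0\), recovers \(\mathbb{E}[\mathbf{x}_1 \mid \mathbf{c}]\) no matter where \(\mathbf{x}_0\) falls:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the theorem: x0 ~ p0 drawn independently of
# x1 ~ p_data(.|c) for a single fixed context c.
n = 200_000
x0 = rng.standard_normal(n)              # noise samples, p0 = N(0, 1)
x1 = 3.0 + 0.5 * rng.standard_normal(n)  # data samples, E[x1 | c] = 3.0

# Empirical conditional mean of the target (x1 - x0) within bins of x0,
# i.e. an estimate of v*(x0, 0, c) at sigma = 0:
bins = np.linspace(-2.0, 2.0, 21)
idx = np.digitize(x0, bins)
recovered = []
for b in (5, 10, 15):                    # three bins at different x0 values
    m = idx == b
    # E[x1 - x0 | x0 in bin] + E[x0 | x0 in bin] ~= E[x1 | c] = 3.0
    recovered.append((x1[m] - x0[m]).mean() + x0[m].mean())

print(recovered)  # each entry close to 3.0, independent of the x0 bin
```

The point of the binning is that the recovered value does not depend on the bin: \(\mathbf{x}_0 + \mathbf{v}^*(\mathbf{x}_0, 0, \mathbf{c})\) is constant in \(\mathbf{x}_0\), as the theorem states.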

Remark. Population-Level Nature and Practical Considerations

The theorem characterizes \(\mathbf{v}^*\) (the population conditional expectation), not the learned \(\mathbf{v}_\theta\). In practice, finite network capacity means \(\mathbf{v}_\theta\) only approximates \(\mathbf{v}^*\). We assess this gap empirically: cross-state correlation with NFE=20 yields \(\rho = 0.997\); within-observation ranking against NFE=100 yields Kendall \(\tau = 0.863\).
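The two agreement metrics quoted above are standard; the sketch below shows how they could be computed, using synthetic stand-in scores (`s_full` and `s_one` are hypothetical arrays, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

def kendall_tau(a, b):
    # Naive O(n^2) Kendall tau-a over all pairs (fine for small n;
    # continuous scores, so ties are not expected).
    n = len(a)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)

# Stand-ins: s_full plays the role of scores from a many-step integration
# (e.g. NFE=100), s_one the one-step proxy's scores on the same inputs.
s_full = rng.uniform(0.0, 1.0, 200)
s_one = s_full + 0.05 * rng.standard_normal(200)

rho = np.corrcoef(s_full, s_one)[0, 1]  # Pearson rho: linear agreement
tau = kendall_tau(s_full, s_one)        # Kendall tau: ranking agreement

print(f"rho={rho:.3f}, tau={tau:.3f}")  # both high by construction here
```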

Contribution Repositioning (Reviewer 4)

Concern. Decoupled AWR was presented as a key conceptual contribution, but decoupling policy and value losses is standard in separated actor-critic architectures. The one-step estimator's positioning as a “value estimator” was overclaimed.

Revision.

  • The paper's primary contribution is repositioned as the unified embedded-critic architecture that eliminates separate critic networks.
  • Decoupled AWR and the one-step formulation are repositioned as practical design choices within this architecture, not standalone contributions.
  • Terminology is systematically updated: “value estimator” → “baseline proxy”; “value regression” → “progress regression”; \(Q(\mathbf{c}, \mathbf{a}) \to S(\mathbf{c}, \mathbf{a})\) for FPF-specific scores (18 locations).
  • “Theoretically grounded” → “motivated at the population level” throughout.
  • In Contribution #2, “We introduce / We propose” → “We employ / and adopt” to avoid implying standalone novelty.

Accessibility Improvements (Reviewer 2)

Concern. The paper was hard to understand for readers outside the robotics subcommunity. The motivation for decoupled weighting was not intuitive.

Revision.

  • A new method overview paragraph is added at the start of Section 3, enumerating the three design choices (augmented architecture, decoupled AWR, one-step baseline) in plain language before the technical details.
  • A concrete numerical walkthrough is added to Section 3.2.3: “With \(A = -2.0\) and \(\tau = 0.5\), the AWR weight is \(w = \exp(-4) \approx 0.018\). Under coupled weighting, the progress-head gradient is scaled by only \(0.018\); our decoupled formulation keeps unit weight.”
  • Per-step vs. per-chunk design rationale is explicitly stated (Section 3.1): per-step is a design choice motivated by the backbone's step-wise features, not a uniquely justified option.
  • Notation inconsistencies are fixed (\(v_\theta \to \mathbf{v}_\theta\) at L215; \(\mathcal{u}_\sigma \to \mathbf{u}_\sigma\) at L219).
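The numerical walkthrough above can be checked mechanically. A minimal sketch of the two weighting schemes, assuming only the quoted numbers \(A = -2.0\), \(\tau = 0.5\) (variable names are illustrative, not the paper's implementation):

```python
import math

# AWR weight from the walkthrough: w = exp(A / tau) = exp(-4).
A, tau = -2.0, 0.5
w = math.exp(A / tau)           # ~= 0.018

policy_loss_scale = w           # both schemes downweight the policy term
coupled_progress_scale = w      # coupled: progress head scaled by ~0.018
decoupled_progress_scale = 1.0  # decoupled: progress head keeps unit weight

print(round(w, 3))  # 0.018
```

The contrast is the whole motivation: under coupled weighting a strongly negative advantage nearly silences the progress-head gradient, while decoupling leaves it at unit weight.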

Scope, Limitations, and Broader Impact (Reviewers 2 and 4)

Revision.

  • Empirical claims are narrowed to the tested setting (Pi0/Pi0.5) throughout Abstract, Introduction, and Conclusion.
  • The Limitations section is expanded to cover: (i) reliance on stage-level binary rewards; (ii) population-level motivation without finite-model guarantees; (iii) empirical scope limited to 5 simulation + 5 real-world tasks, with hyperparameter ranges explicitly listed (\(\tau \in \{0.3, 0.5, 0.7\}\), \(K \in \{1, 3, 5\}\), NFE \(\in \{1, 10\}\)).
  • A new Broader Impact paragraph discusses safety monitoring for autonomous systems and potential bias amplification from the ranking mechanism.
  • The flow time distribution in formal equations is generalized from \(\sigma \sim \mathcal{U}[0,1]\) to \(\sigma \sim p(\sigma)\) (3 locations), matching the notation already used in Algorithm 1.
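As a small illustration of the generalized notation, one CFM training tuple under \(\sigma \sim p(\sigma)\) can be sketched as follows (the Beta prior is an arbitrary stand-in for a non-uniform \(p(\sigma)\), not a choice made in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# One CFM training tuple with a general flow-time prior p(sigma).
x0 = rng.standard_normal(8)              # noise sample x0 ~ p0
x1 = rng.standard_normal(8) + 2.0        # data sample x1 (independent of x0)
sigma = rng.beta(1.5, 1.5)               # sigma ~ p(sigma), here Beta(1.5, 1.5)

x_sigma = (1 - sigma) * x0 + sigma * x1  # deterministic linear interpolation
target = x1 - x0                         # regression target for v_theta
```

Setting `sigma = rng.uniform()` recovers the special case \(\mathcal{U}[0,1]\); nothing else in the tuple changes, which is why the generalization is purely notational.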

Complete Revision Plan

Below we list every planned modification with its exact location, original text, and replacement text. Changes are grouped by paper section. Each modification is written as original text → replacement text.

Abstract (L145)

Original:

Furthermore, by exploiting the straight-path structure of optimal-transport conditional flow matching, we derive a single-step value estimator that computes advantages in a single forward pass, making RL fine-tuning computationally comparable to supervised learning. We prove theoretically the consistency of this estimator ...

Revised:

Furthermore, we propose a low-cost one-step baseline proxy that computes advantages in a single forward pass, motivated by the population-level conditional-mean property of the flow matching regression objective under independent noise-data sampling. This makes RL fine-tuning computationally comparable to supervised learning. ...

Introduction

L156. consistent quality estimate → corresponding quality estimate.

L158. unbiased calibration → calibrated progress estimation.

L167. we exploit a structural property of optimal-transport conditional flow matching: the optimal transport path is a straight line ... → we use a one-step proxy at the noise boundary as a low-cost baseline estimator, motivated by the population-level conditional-mean property of the flow matching regression objective under independent noise-data sampling.

Contribution #1 (L183). value estimation → progress estimation.

Contribution #2 title (L184). efficient value estimation → efficient baseline estimation.

Contribution #2 text (L185). We introduce ... we further derive a single-step value estimator → We employ a decoupled AWR objective ... and adopt a low-cost one-step baseline proxy whose use for test-time self-guidance is empirically validated.

Contribution #3 (L187). a effective approximation of state values → an effective approximation of progress scores.

Preliminaries

L215. \(v_\theta\) → \(\mathbf{v}_\theta\) (notation fix). L219. \(\mathcal{u}_\sigma\) → \(\mathbf{u}_\sigma\).

L215. optimal transport probability path → standard linear interpolation path.

L221, L321. \(\sigma \sim \mathcal{U}[0,1]\) → \(\sigma \sim p(\sigma)\), matching Algorithm 1 notation.

L225. single-step value estimation → baseline estimation.

Method

Section 3 opening (L248, new). Added method overview paragraph: “FPF is built around three design choices. First, ... Second, ... Third, ...”

L255. intrinsic estimator of \(P(\text{success} | \mathbf{c}, \mathbf{a})\) → learned scoring signal. At the population level, it recovers \(P_\mathcal{D}(\text{success} | \mathbf{c})\); in practice, it provides a chunk-dependent scoring signal for candidate ranking, validated empirically.

L258. \(Q(\mathbf{c}, \mathbf{a}) \to S(\mathbf{c}, \mathbf{a})\); “quality score” → “chunk-level score”.

L260 (new). Added per-step design rationale: “We note that per-step modeling is a design choice rather than a uniquely justified option ...”

L263. we introduce → we employ.

L279. Section title: Single-Step State-Value Estimation → Single-Step Baseline Proxy.

L282--291. Full rewrite: removed OT-CFM references; core argument changed to conditional independence under independent sampling.

L301. theoretically grounded ... recovers true expected value → motivated by the population-level conditional-mean property ... and empirically validated (\(\rho = 0.997\)).

Figure 2 caption (L307). Three changes: Single-step value estimation → Single-step baseline proxy; \(\mathcal{U}[0,1]\) → \(p(\sigma)\); unbiased calibration → calibrated progress estimation.

L324. Underbrace label: value regression → progress regression.

L329 (new text after). Added numerical walkthrough: “For example, with \(A = -2.0\) and \(\tau = 0.5\), the AWR weight is \(w = \exp(-4) \approx 0.018\) ...”

L329. unbiased value learning → calibrated progress estimation.

L330. value estimator → progress estimator.

L339. Added: “The effectiveness of this test-time ranking is validated empirically in Section 4.3.”

Experiments

L394. Approximation validity → Approximation quality; single-step estimator → single-step proxy.

L473. validity of the efficient single-step value estimator → practical fidelity of the efficient single-step baseline proxy.

Figure 6 (L523, L526). single-step state-value estimation / single-step approximation → single-step baseline proxy / practical fidelity of the single-step proxy.

L530--531. Efficiency of single-step state-value estimation / single-step estimator → Efficiency of single-step baseline estimation / single-step proxy.

Conclusion (L556)

Restructured to: (1) unified architecture as primary contribution; (2) decoupled supervision and one-step baseline as practical design choices; (3) expanded limitations covering stage-level binary rewards, population-level motivation without finite-model guarantees, and explicit scope (\(5 + 5\) tasks, Pi0/Pi0.5, \(\tau \in \{0.3, 0.5, 0.7\}\), \(K \in \{1, 3, 5\}\), NFE \(\in \{1, 10\}\)).

New Broader Impact paragraph discussing safety monitoring and potential bias amplification from the ranking mechanism.

Appendix A

L575. theoretically grounded → population-level properties.

L580. Section title: Consistency of Success Potential Learning → Population-Level Properties of Success Potential.

L583. proxy for the value function → proxy for the dataset-conditional success probability.

L585--612. Theorem A.1 fully restated (see Section above). Proof Step 3 now explicitly states \(\mathbf{x}_0 \perp \mathbf{x}_1 \mid \mathbf{c}\).

L609. strictly defined as the state value function → Under standard assumptions, this coincides with \(V^{\pi_\beta}(\mathbf{c})\).

L611. unbiased estimator of \(V^{\pi_\beta}\) → recovers \(P_\mathcal{D}(\text{Success} \mid \mathbf{c})\) at the population level.

L614--617. Remark A.2 fully rewritten (see Section above).

L656. A key contribution → A practical design choice.

Appendix B (Algorithms)

L691. unbiased critic → calibrated scoring proxy.

L744. performing a Monte Carlo approximation of \(\pi^*\) → performing approximate best-of-\(K\) selection, ranking by predicted progress score; scalar quality score \(Q^{(k)}\) → progress score \(S^{(k)}\).

L766--767. \(Q^{(k)} \to S^{(k)}\); \(\arg\max_k Q^{(k)} \to \arg\max_k S^{(k)}\).
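The selection rule being renamed here (score \(K\) sampled chunks, keep the argmax) is easy to sketch. The following is a toy illustration with placeholder names and a placeholder scoring function, not the paper's implementation of \(S\):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_of_k(score_fn, chunks):
    """Approximate best-of-K selection: score each candidate action chunk
    and keep the argmax. score_fn stands in for the progress score S(c, a)."""
    scores = np.array([score_fn(a) for a in chunks])
    return chunks[int(np.argmax(scores))], scores

# Toy usage: K = 5 random 2-D "chunks", scored by proximity to a target.
target = np.array([1.0, 0.0])
chunks = [rng.standard_normal(2) for _ in range(5)]
best, scores = best_of_k(lambda a: -float(np.linalg.norm(a - target)), chunks)
```

The renaming above changes only what the score is called (\(S^{(k)}\) rather than \(Q^{(k)}\)) and how the procedure is described; the argmax mechanics are unchanged.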