Supplementary Figures and Illustrations
Supplementary Video Results
When the robot moves toward the cabinet storing the hotdogs, the first potential peak appears around 25 seconds. During the attempt to open the door, the arm is oriented incorrectly, leading to the first drop in potential and a trough around 52 seconds. When the robot finds the correct way to open the door and reaches toward the hotdogs, a second potential peak appears around 74 seconds. It then picks up the first hotdog, resulting in a third potential peak around 130 seconds, and picks up the second hotdog around 160 seconds, producing a fourth peak. Afterward, it moves toward the microwave, with the potential remaining high. During the process of placing the hotdogs, there is a brief drop in potential, reflecting intermediate adjustments and attempts. When the robot places both hotdogs into the microwave and initiates the door-closing action, a fifth potential peak appears around 400 seconds. After further attempts, it turns on the switch around 417 seconds, reaching the final potential peak. The task is then completed, and the robot retracts its arms, causing the potential to decrease.
When the robot moves toward the cabinet storing the hotdogs, the first potential peak appears around 10 seconds. During the attempt to open the door, the arm is oriented incorrectly, leading to a trough in potential around 32 seconds. When the robot finds the correct way to open the door and reaches toward the hotdog, a second potential peak appears around 73 seconds, and the potential remains high during the door-opening process. However, when the robot attempts to grasp the hotdog, the gripper is positioned incorrectly, causing the potential to drop sharply. It then fails to pick up the hotdog, and throughout the repeated unsuccessful attempts, the potential remains at a low level.
In the initial stage, the robot scans its surroundings, and the potential remains low. After locating the trash bin, it approaches and picks it up, reaching the first potential peak around 29 seconds. After a brief adjustment, it stabilizes the grasp on the trash bin, leading to a second potential peak around 41 seconds. It then moves toward the soda cans, with the potential remaining high. During the three instances of picking up the soda cans, the potential reaches smaller peaks around 67 seconds, 86 seconds, and 106 seconds. After completing the task, the robot places the trash bin on the ground, and the potential begins to decrease.
In the initial stage, the robot scans its surroundings, and the potential remains low. After locating the trash bin, it moves toward it, leading to a peak in potential. Around 43 seconds, it picks up the trash bin, reaching another peak in potential. It then approaches the soda cans, with the potential remaining high. During the two instances of picking up soda cans, the potential reaches smaller peaks around 78 seconds and 100 seconds. However, the robot overlooks the third can. Around 107 seconds, it places the trash bin on the ground, resulting in a significant drop in potential, which then remains low until the episode terminates due to the time limit.
In the initial stage, the robot scans its surroundings, and the potential remains low. When it discovers and picks up the spray bottle, the potential reaches a peak around 19 seconds. It then carries the spray bottle toward the first target tree, during which the potential remains high. Because the potential stays at a relatively low level during the watering process itself, this peak ends when the spray bottle first makes contact with the tree. After completing the first watering task, the robot turns off the spray bottle, leading to a potential peak around 105 seconds. When the robot begins moving toward the second tree, the potential rises again and remains high. Similarly, around 187 seconds, it reaches another potential peak at the moment it begins watering the tree, and the task is completed. Afterward, it continues circling around while watering, and the potential begins to decrease.
In the initial stage, the robot scans its surroundings, and the potential remains low. When it discovers and picks up the spray bottle, the potential reaches a peak around 27 seconds. However, the spray bottle slips from its grasp, causing the potential to drop sharply starting at 28 seconds. In the subsequent attempts to pick it up again, there is a brief rise in potential around 86 seconds. But the attempt fails, and the potential decreases further. After that, the robot tries to water the tree without carrying the spray bottle, so the potential remains at a low level throughout.
In the initial stage, the robot scans its surroundings, and the potential fluctuates at a low level. After locating the radio, it approaches it, leading to a peak in potential around 14 seconds. After several attempts, it picks up the radio around 27 seconds, and the potential rises rapidly. It then adjusts the position of the radio and begins trying to turn on the switch, with the potential remaining high. Around 69 seconds, it turns on the radio, reaching a peak in potential and completing the task. The subsequent process of putting the radio back is not part of the task, so the potential correspondingly decreases.
In the initial stage, the robot scans its surroundings, and the potential fluctuates at a low level. After locating the radio, it moves toward it, leading to a peak in potential around 21 seconds. However, starting at around 48 seconds, the subsequent attempts begin to deviate from a reasonable execution strategy, and the potential drops sharply. The robot then makes multiple low-quality attempts, during which the potential remains consistently low. At 142 seconds, it knocks over the radio, reaching a trough in potential. After that, it continues making low-quality attempts, and the potential stays at a low level.
In the initial stage, the robot scans its surroundings to locate and approach the washing machine, while the potential remains at a relatively high level. Around 35 seconds, it opens the washing machine door, reaching a peak in potential. After adjusting its direction, it detects the baseball caps, leading to another potential peak around 52 seconds. After several attempts, it picks up the two baseball caps around 98 seconds and 132 seconds, producing two additional potential peaks. It then places the two baseball caps into the washing machine in succession, resulting in potential peaks around 142 seconds and 161 seconds. At 188 seconds, it closes the washing machine door, reaching another potential peak. At 207 seconds, it turns on the washing machine, reaching yet another potential peak. Afterward, it retracts its arms, and the potential correspondingly decreases.
In the initial stage, the robot scans its surroundings to locate and approach the washing machine, while the potential remains at a relatively high level. Around 31 seconds, it opens the washing machine door, reaching a peak in potential. After several attempts, it picks up two baseball caps around 68 seconds and 100 seconds, producing two additional potential peaks. It then places the two baseball caps into the washing machine in succession, leading to potential peaks around 118 seconds and 132 seconds. At 226 seconds, it closes the washing machine door, reaching another potential peak. However, afterward the robot forgets to turn on the washing machine and instead continues searching for the baseball caps, causing the potential to drop rapidly and remain at a low level.
Flow-Based Potential Fields (FPF)
We thank all reviewers for the constructive feedback across both rounds of discussion. Below we present the planned revisions for the camera-ready manuscript. Section 1 summarizes the most important changes addressing the key concerns raised by Reviewers 2 and 4. Section 2 provides the complete revision plan with exact original and replacement text for every modification.
The revision involves 43 edits across Abstract, Introduction, Preliminaries, Method, Experiments, Conclusion, and Appendix. The changes fall into four categories.
Concern. The paper cited OT-CFM (optimal-transport conditional flow matching) to justify the one-step estimator, but the implementation uses standard CFM with independent noise-data sampling, not OT coupling. Theorem A.1's original premise was therefore incorrect.
Revision.
Consider conditional flow matching with independent noise-data sampling, where \(\mathbf{x}_0 \sim p_0\) and \(\mathbf{x}_1 \sim p_{\text{data}}(\cdot | \mathbf{c})\) are drawn independently for each context \(\mathbf{c}\), and deterministic linear interpolation \(\mathbf{x}_\sigma = (1-\sigma)\mathbf{x}_0 + \sigma \mathbf{x}_1\) (zero path variance). Define the population-level optimal velocity field as the conditional expectation:
$$ \mathbf{v}^*(\mathbf{x}, \sigma, \mathbf{c}) \triangleq \mathbb{E}[\mathbf{x}_1 - \mathbf{x}_0 \mid \mathbf{x}_\sigma = \mathbf{x}, \mathbf{c}], $$
which coincides with the \(L^2\) minimiser of \(\mathcal{L}_{\mathrm{CFM}}\) almost everywhere under the training distribution. Then at \(\sigma = 0\):
$$ \mathbf{x}_0 + \mathbf{v}^*(\mathbf{x}_0, 0, \mathbf{c}) = \mathbb{E}_{\mathbf{x}_1 \sim p_{\text{data}}(\cdot|\mathbf{c})}[\mathbf{x}_1] \quad \text{for } p_0\text{-a.e. } \mathbf{x}_0. $$
In particular, the progress component recovers the dataset-conditional success probability \(\mathbb{E}[y \mid \mathbf{c}] = P_{\mathcal{D}}(\mathrm{Success} \mid \mathbf{c})\).
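The boundary identity can be checked numerically. The sketch below is illustrative only (the scalar setting, the data mean `mu = 1.7`, and the toy data distribution are our assumptions, not the paper's setup): it regresses the CFM target \(\mathbf{x}_1 - \mathbf{x}_0\) on \(\mathbf{x}_0\). Because \(\mathbf{x}_0 \perp \mathbf{x}_1\) under independent sampling, the \(L^2\)-optimal fit at \(\sigma = 0\) has slope \(-1\) and intercept \(\mathbb{E}[\mathbf{x}_1]\), so the one-step output \(\mathbf{x}_0 + \mathbf{v}^*(\mathbf{x}_0, 0)\) collapses to the data mean for every noise sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu = 1.7                                  # illustrative data mean E[x1]
x0 = rng.standard_normal(n)               # noise samples, x0 ~ p0 = N(0, 1)
x1 = mu + 0.5 * rng.standard_normal(n)    # "data" samples, drawn independently of x0

# L2-optimal velocity at sigma = 0: regress the CFM target (x1 - x0) on x0.
# Since x0 is independent of x1, E[x1 - x0 | x0] = mu - x0: slope -1, intercept mu.
A = np.stack([x0, np.ones(n)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, x1 - x0, rcond=None)

one_step = x0 + (slope * x0 + intercept)  # x0 + v*(x0, 0) for every noise sample
print(slope, intercept)                   # close to -1.0 and 1.7
print(one_step.std())                     # close to 0: the one-step output is constant E[x1]
```

The near-zero spread of `one_step` is the point of the theorem: the one-step prediction is (population-level) independent of the particular noise draw.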
The theorem characterizes \(\mathbf{v}^*\) (the population conditional expectation), not the learned \(\mathbf{v}_\theta\). In practice, finite network capacity means \(\mathbf{v}_\theta\) only approximates \(\mathbf{v}^*\). We assess this gap empirically: cross-state correlation with NFE=20 yields \(\rho = 0.997\); within-observation ranking against NFE=100 yields Kendall \(\tau = 0.863\).
Concern. Decoupled AWR was presented as a key conceptual contribution, but decoupling policy and value losses is standard in separated actor-critic architectures. The one-step estimator's positioning as a “value estimator” was overclaimed.
Revision.
Concern. The paper was hard to understand for readers outside the robotics subcommunity. The motivation for decoupled weighting was not intuitive.
Revision.
Revision.
Below we list every planned modification with exact location, original text, and replacement text. Changes are grouped by paper section. Red strikethrough marks removed text; green marks new text.
Original:
Furthermore, by exploiting the straight-path structure of optimal-transport conditional flow matching, we derive a single-step value estimator that computes advantages in a single forward pass, making RL fine-tuning computationally comparable to supervised learning. We prove theoretically the consistency of this estimator ...
Revised:
Furthermore, we propose a low-cost one-step baseline proxy that computes advantages in a single forward pass, motivated by the population-level conditional-mean property of the flow matching regression objective under independent noise-data sampling. This makes RL fine-tuning computationally comparable to supervised learning. ...
L156. consistent → corresponding quality estimate.
L158. unbiased calibration → calibrated progress estimation.
L167. we exploit a structural property of optimal-transport conditional flow matching: the optimal transport path is a straight line ... → we use a one-step proxy at the noise boundary as a low-cost baseline estimator, motivated by the population-level conditional-mean property of the flow matching regression objective under independent noise-data sampling.
Contribution #1 (L183). value estimation → progress estimation.
Contribution #2 title (L184). efficient value estimation → efficient baseline estimation.
Contribution #2 text (L185). We introduce ... we further derive a single-step value estimator → We employ a decoupled AWR objective ... and adopt a low-cost one-step baseline proxy whose use for test-time self-guidance is empirically validated.
Contribution #3 (L187). a effective approximation of state values → an effective approximation of progress scores.
L215. \(v_\theta\) → \(\mathbf{v}_\theta\) (notation fix). L219: \(\mathcal{u}_\sigma\) → \(\mathbf{u}_\sigma\).
L215. optimal transport probability path → standard linear interpolation path.
L221, L321. \(\sigma \sim \mathcal{U}[0,1]\) → \(p(\sigma)\), matching Algorithm 1 notation.
L225. single-step value estimation → baseline estimation.
Section 3 opening (L248, new). Added method overview paragraph: “FPF is built around three design choices. First, ... Second, ... Third, ...”
L255. intrinsic estimator of \(P(\text{success} | \mathbf{c}, \mathbf{a})\) → learned scoring signal. At the population level, it recovers \(P_\mathcal{D}(\text{success} | \mathbf{c})\); in practice, it provides a chunk-dependent scoring signal for candidate ranking, validated empirically.
L258. \(Q(\mathbf{c}, \mathbf{a}) \to S(\mathbf{c}, \mathbf{a})\); “quality score” → “chunk-level score”.
L260 (new). Added per-step design rationale: “We note that per-step modeling is a design choice rather than a uniquely justified option ...”
L263. we introduce → we employ.
L279. Section title: Single-Step State-Value Estimation → Single-Step Baseline Proxy.
L282--291. Full rewrite: removed OT-CFM references; core argument changed to conditional independence under independent sampling.
L301. theoretically grounded ... recovers true expected value → motivated by the population-level conditional-mean property ... and empirically validated (\(\rho = 0.997\)).
Figure 2 caption (L307). Three changes: Single-step value estimation → Single-step baseline proxy; \(\mathcal{U}[0,1]\) → \(p(\sigma)\); unbiased calibration → calibrated progress estimation.
L324. Underbrace label: value regression → progress regression.
L329 (new text after). Added numerical walkthrough: “For example, with \(A = -2.0\) and \(\tau = 0.5\), the AWR weight is \(w = \exp(-4) \approx 0.018\) ...”
L329. unbiased value learning → calibrated progress estimation.
L330. value estimator → progress estimator.
L339. Added: “The effectiveness of this test-time ranking is validated empirically in Section 4.3.”
L394. Approximation validity → Approximation quality; single-step estimator → single-step proxy.
L473. validity of the efficient single-step value estimator → practical fidelity of the efficient single-step baseline proxy.
Figure 6 (L523, L526). single-step state-value estimation / single-step approximation → single-step baseline proxy / practical fidelity of the single-step proxy.
L530--531. Efficiency of single-step state-value estimation / single-step estimator → Efficiency of single-step baseline estimation / single-step proxy.
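The numerical walkthrough planned for L329 can be reproduced in a few lines. This is a sketch of exponential advantage weighting only; the clip `w_max` is a common stabilisation heuristic and an assumption here, not something the revision text specifies.

```python
import math

def awr_weight(advantage: float, tau: float, w_max: float = 20.0) -> float:
    """Exponential advantage weighting w = exp(A / tau), clipped at w_max.

    The clip is an assumed stabilisation heuristic. The exponential form
    matches the L329 walkthrough: A = -2.0 with tau = 0.5 gives
    exp(-4) ~ 0.018, so low-advantage chunks are strongly down-weighted.
    """
    return min(math.exp(advantage / tau), w_max)

print(round(awr_weight(-2.0, 0.5), 3))  # 0.018
```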
Restructured to: (1) unified architecture as primary contribution; (2) decoupled supervision and one-step baseline as practical design choices; (3) expanded limitations covering stage-level binary rewards, population-level motivation without finite-model guarantees, and explicit scope (\(5 + 5\) tasks, Pi0/Pi0.5, \(\tau \in \{0.3, 0.5, 0.7\}\), \(K \in \{1, 3, 5\}\), NFE \(\in \{1, 10\}\)).
New Broader Impact paragraph discussing safety monitoring and potential bias amplification from the ranking mechanism.
L575. theoretically grounded → population-level properties.
L580. Section title: Consistency of Success Potential Learning → Population-Level Properties of Success Potential.
L583. proxy for the value function → proxy for the dataset-conditional success probability.
L585--612. Theorem A.1 fully restated (see Section above). Proof Step 3 now explicitly states \(\mathbf{x}_0 \perp \mathbf{x}_1 \mid \mathbf{c}\).
L609. strictly defined as the state value function → Under standard assumptions, this coincides with \(V^{\pi_\beta}(\mathbf{c})\).
L611. unbiased estimator of \(V^{\pi_\beta}\) → recovers \(P_\mathcal{D}(\text{Success} \mid \mathbf{c})\) at the population level.
L614--617. Remark A.2 fully rewritten (see Section above).
L656. A key contribution → A practical design choice.
L691. unbiased critic → calibrated scoring proxy.
L744. performing a Monte Carlo approximation of \(\pi^*\) → performing approximate best-of-\(K\) selection, ranking by predicted progress score; scalar quality score \(Q^{(k)}\) → progress score \(S^{(k)}\).
L766--767. \(Q^{(k)} \to S^{(k)}\); \(\arg\max_k Q^{(k)} \to \arg\max_k S^{(k)}\).
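The approximate best-of-\(K\) selection described at L744 and L766 amounts to sampling \(K\) candidate chunks and keeping the one with the highest predicted progress score. A minimal sketch, with a hypothetical stand-in `score` in place of the paper's one-step proxy \(S(\mathbf{c}, \mathbf{a})\):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(chunk: np.ndarray) -> float:
    # Hypothetical stand-in for the one-step progress score S(c, a);
    # any scalar scorer of a candidate action chunk fits this slot.
    return float(-np.linalg.norm(chunk - 1.0))

K = 5
candidates = [rng.standard_normal(4) for _ in range(K)]  # K chunks sampled from the policy
scores = np.array([score(a) for a in candidates])        # S^(k) for each candidate
best_k = int(np.argmax(scores))                          # approximate best-of-K selection
best_chunk = candidates[best_k]
```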