Commit 308d5e4

Address Max's comment
Co-Authored-By: Max Balandat <[email protected]>
1 parent 5623d6b commit 308d5e4

File tree

1 file changed (+4, -12 lines changed)

docs/optimization.md

Lines changed: 4 additions & 12 deletions
@@ -69,18 +69,10 @@ The wrapper function
 uses
 [`get_best_candidates()`](https://botorch.readthedocs.io/en/latest/generation.html#botorch.generation.gen.get_best_candidates)
 to process the output of `gen_candidates_scipy()` and return the best point
-found over the random restarts. For reasonable values of $b$ and $q$, jointly
-optimizing over random restarts can significantly reduce wall time by exploiting
-parallelism, while maintaining high quality solutions. Note, however, that
-convergence slows down at each random restart when sharing `scipy` optimizer
-states, as claimed by [^Irie2026], so the optimizer states are decoupled across
-random restarts.
-
-[^Irie2026]:
-    I. Couckuyt, D. Deschrijver and T. Dhaene. Towards Efficient Multiobjective
-    Optimization: Multiobjective statistical criterions. Workshop on AI to
-    Accelerate Science and Engineering at AAAI, Singapore, 2026.
-    [paper](https://arxiv.org/abs/2511.13625)
+found over the random restarts.
+
+By default, BoTorch will exploit hardware parallelism by batching the acquisition
+function evaluation across multiple random restarts.
 
 
 ### Joint vs. Sequential Candidate Generation for Batch Acquisition Functions
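
For context, a minimal sketch of the random-restart workflow the changed passage refers to: generate batched initial conditions, optimize all restarts in a single batched `gen_candidates_scipy()` call, and keep the winner via `get_best_candidates()`. The toy `SingleTaskGP` model, the `qLogExpectedImprovement` acquisition function, and all hyperparameter values below are illustrative assumptions, not part of this commit.

```python
import torch
from gpytorch.mlls import ExactMarginalLogLikelihood

from botorch.acquisition import qLogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.generation.gen import gen_candidates_scipy, get_best_candidates
from botorch.models import SingleTaskGP
from botorch.optim.initializers import gen_batch_initial_conditions

# Toy training data on the unit square (assumption: any fitted BoTorch model would do).
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = (train_X**2).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qLogExpectedImprovement(model=model, best_f=train_Y.max())
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)

# Heuristically chosen starting points, shape num_restarts x q x d.
Xinit = gen_batch_initial_conditions(
    acq_function=acqf, bounds=bounds, q=3, num_restarts=20, raw_samples=256
)

# Optimize all restarts with scipy; acquisition function evaluations are batched
# across the restart dimension. Then keep the best candidate set found.
batch_candidates, batch_acq_values = gen_candidates_scipy(
    initial_conditions=Xinit,
    acquisition_function=acqf,
    lower_bounds=bounds[0],
    upper_bounds=bounds[1],
)
best_candidates = get_best_candidates(batch_candidates, batch_acq_values)  # q x d
```

In typical usage these steps are handled by the higher-level `optimize_acqf()` wrapper that the surrounding documentation describes.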
