Problem

rustup run falls back to PATH when a binary is not present in the requested toolchain, while rustup which does not. This seems ... undesirable. In particular, it means that rustup which and rustup run differ in behavior; beyond that, as far as I can tell, the only difference is that rustup run sets RUSTUP_TOOLCHAIN and DYLD_FALLBACK_LIBRARY_PATH.

My original use case, although it's pretty complicated:
I am writing a new rustc_driver tool that goes in the sysroot. I want to be able to run it as cargo +nightly foo instead of PATH=$PATH:$(rustc +nightly --print sysroot)/bin cargo +nightly foo. To that end, I've put a shell script in ~/.cargo/bin/cargo-foo that emulates a rustup proxy:
#!/bin/sh
# Forward to the binary of the same name inside the active toolchain's sysroot.
me=$(basename "$0")
exec rustup run "$RUSTUP_TOOLCHAIN" "$me" "$@"
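Assuming the script above is saved as ~/.cargo/bin/cargo-foo and marked executable (the setup below is illustrative of how I install it, not a verbatim session), cargo's normal external-subcommand lookup picks it up:

chmod +x ~/.cargo/bin/cargo-foo
# cargo sees the unknown subcommand `foo`, finds cargo-foo on PATH, and runs it;
# the script above then re-dispatches through rustup into the nightly sysroot.
cargo +nightly foo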
This works fine if the tool is actually present in the sysroot. However, if it's not present, rustup falls back to PATH and re-executes this same script, eventually failing with an error that the invocation is nested too deeply. I would like a hard error instead. Today, I have to work around rustup's behavior with extra calls:
# Make sure this is actually installed for the given toolchain. `rustup run` falls back to PATH,
# which would recursively invoke this script; that's not what we want.
if ! rustup which "$me" --toolchain "$RUSTUP_TOOLCHAIN" >/dev/null 2>&1; then
    printf "\033[31;1merror:\033[0m '%s' is not installed for the toolchain '%s'\n" "$me" "$RUSTUP_TOOLCHAIN" >&2
    exit 1
fi
It would be nice to be able to avoid that.
Steps
rustup run 1.69 whoami (or any installed toolchain)
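To make the repro concrete (the toolchain name is just one from my setup, and the outputs aren't shown verbatim):

# whoami is not shipped by any toolchain, yet this succeeds today because
# rustup run falls back to PATH and runs the system whoami.
rustup run 1.69 whoami
# By contrast, rustup which reports the binary as missing from the toolchain,
# which is what the workaround script above relies on.
rustup which whoami --toolchain 1.69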
Possible Solution(s)
Break hard; only look in toolchains/1.69/bin (a rough sketch of what I mean follows this list)
Add a feature flag to remove the PATH lookup
Document the difference somewhere
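A rough sketch of the "break hard" option, written as shell for illustration only (this is not rustup's implementation; RUSTUP_HOME and the toolchain directory name are assumptions based on my setup):

# Hypothetical: resolve the requested command only inside the toolchain's bin
# directory, and fail instead of consulting PATH.
cmd=whoami
bin="$RUSTUP_HOME/toolchains/1.69-aarch64-apple-darwin/bin/$cmd"
if [ -x "$bin" ]; then
    exec "$bin" "$@"
else
    echo "error: '$cmd' is not installed for the toolchain '1.69'" >&2
    exit 1
fi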
Notes
No response
Rustup version
rustup 1.26.0 (5af9b9484 2023-04-05)
Installed toolchains
Default host: aarch64-apple-darwin
rustup home:  /Users/jyn/.local/lib/rustup

installed toolchains
--------------------
nightly-2022-12-07-aarch64-apple-darwin
nightly-2023-03-14-aarch64-apple-darwin
nightly-2023-04-12-aarch64-apple-darwin
nightly-aarch64-apple-darwin (default)
1.60-aarch64-apple-darwin
1.64-aarch64-apple-darwin
1.65-aarch64-apple-darwin
1.68-aarch64-apple-darwin
1.69-aarch64-apple-darwin
stage1
stage2
1.60.0-aarch64-apple-darwin

installed targets for active toolchain
--------------------------------------
aarch64-apple-darwin
aarch64-unknown-linux-gnu

active toolchain
----------------
1.69-aarch64-apple-darwin (overridden by '/Users/jyn/src/redacted/rust-toolchain.toml')
rustc 1.69.0 (84c898d65 2023-04-16)
Offhand I think there is another subtlety: rustup which doesn't do cargo fallbacks, but rustup run does. It's not clear whether which should know about that fallback logic or not.
I would expect which to know about the fallback; ideally which would tell which 🥁 process rustup run would execute so I don't have to paper over cracks between the two.
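To illustrate the kind of crack I mean, assuming a linked toolchain like my stage1 build that doesn't ship its own cargo (these are the commands I'd compare; output not verified here):

# rustup which only looks inside the stage1 toolchain, so it reports cargo as missing...
rustup which cargo --toolchain stage1
# ...while rustup run falls back to another toolchain's cargo and succeeds.
rustup run stage1 cargo --version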