Replies: 2 comments 1 reply
-
The simplest approach would be to use two different "try" blocks here, one for each call:

```python
async with asyncssh.connect(host1, ...) as conn1, \
           asyncssh.connect(host2, ...) as conn2:
    try:
        proc = await conn1.create_process('tar -cf - directory', encoding=None)
    except (OSError, asyncssh.Error) as e:
        print(f'Create process failed: {e}')
    else:
        try:
            result = await conn2.run('tar -xv -C /tmp', stdin=proc.stdout, encoding=None)
        except (OSError, asyncssh.Error) as e:
            print(f'Run failed: {e}')
```

You could also do this with a "return" in the first "except" block (instead of the "else") if you want to keep the indentation the same between the two.
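For instance, a minimal sketch of that variant, wrapping the two calls in a hypothetical coroutine so that "return" is valid:

```python
async def pipe_tar(conn1, conn2):
    # Hypothetical helper; conn1 and conn2 are already-open SSHClientConnection objects.
    try:
        proc = await conn1.create_process('tar -cf - directory', encoding=None)
    except (OSError, asyncssh.Error) as e:
        # conn1 (the producer side) failed to start the process
        print(f'Create process failed: {e}')
        return

    try:
        result = await conn2.run('tar -xv -C /tmp', stdin=proc.stdout, encoding=None)
    except (OSError, asyncssh.Error) as e:
        # conn2 (the consumer side) failed while running
        print(f'Run failed: {e}')
```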
-
The create_process() call will only report a failure if the attempt to create a new session fails, which should pretty much only happen if conn1 was already dead before even requesting the new session (assuming conn1 is in a pool as you said). The run() call not only starts a new session but waits for it to finish and exit. As such, it's much more likely that it would report the failure. By the time it does so (whether it succeeds or fails), conn1 may already be dead, but you won't detect that until the next time you try to use it.

If you want to know immediately when a connection is closed (at least in the case of a "clean" close), you could pass your own subclass of SSHClient in as a client_factory and implement the connection_lost() callback on that. That may not immediately catch certain kinds of network failures (where the server becomes completely unreachable at the TCP level), but traditional connection close events will trigger the callback and allow you to remove dead connections from the pool.

You could still have your try..except blocks for cases where a connection is in the middle of closing when you try to reuse it, but in most cases the callback will clean up the pool sooner than that and tell you whether you need to open a new connection or not.
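A minimal sketch of that callback approach (the PoolClient and open_pooled_connection names are placeholders, and the actual pool bookkeeping is left as a comment):

```python
import asyncssh

class PoolClient(asyncssh.SSHClient):
    # Hypothetical subclass: one instance is created per connection via client_factory.

    def connection_made(self, conn):
        # Remember which connection this client instance is watching.
        self._conn = conn

    def connection_lost(self, exc):
        # Called when the connection closes; exc is None on a clean close,
        # otherwise it's the exception that caused the close.
        # A real pool would drop self._conn from its reusable-connection list here.
        print(f'Connection lost: {exc or "clean close"}')

async def open_pooled_connection(host):
    # client_factory tells asyncssh which SSHClient subclass to instantiate
    # for this connection.
    return await asyncssh.connect(host, client_factory=PoolClient)
```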
-
Hi, I'm using asyncssh to run two remote processes on separate SSH connections, where one process generates output, and the other reads from it, something like this:
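A minimal sketch of that producer/consumer pattern (hosts and commands are placeholders):

```python
async with asyncssh.connect(host1) as conn1, \
           asyncssh.connect(host2) as conn2:
    # conn1 produces a tar stream, conn2 consumes it via stdin
    proc = await conn1.create_process('tar -cf - directory', encoding=None)
    result = await conn2.run('tar -xv -C /tmp', stdin=proc.stdout, encoding=None)
```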
If an exception occurs, how can I determine whether it was caused by conn1 (the process producing data) or conn2 (the process consuming data)?
In my actual code, I store conn1 and conn2 in a connection pool for reuse, so I need to know which connection is broken in order to reestablish it.