Description
When a request is made to `api/kernels/<kernel_id>`, the `MultiKernelManager` looks in its in-memory cache of known sessions. If it doesn't know about that session, it errors. I am wondering if, before erroring, it could check the on-disk cache of running kernels (e.g., in `~/Library/Jupyter/runtime`). If it finds the kernel_id in question, it would start a new session connected to that existing kernel. To put it another way, the cache of kernels known to the manager includes only those initiated by that particular server process, not kernels started by other processes. I'd like the manager to check for kernels started by another process when it doesn't find one of its own matching the requested ID.
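For concreteness, here is a minimal sketch of the fallback I have in mind, built from `jupyter_client`'s public pieces rather than the actual `MultiKernelManager` internals. The function name is made up, and it assumes the usual convention that the server writes connection files as `kernel-<kernel_id>.json` in the runtime directory; treat it as an illustration, not a proposed implementation.

```python
# Sketch only: look up a connection file written by another server
# process and, if found, connect a client to that existing kernel.
import os

from jupyter_client import BlockingKernelClient
from jupyter_core.paths import jupyter_runtime_dir


def connect_to_existing_kernel(kernel_id):
    """Return a client connected to a kernel started by another process.

    Assumes the connection file follows the kernel-<kernel_id>.json
    naming convention in the Jupyter runtime directory.
    """
    connection_file = os.path.join(
        jupyter_runtime_dir(), f"kernel-{kernel_id}.json"
    )
    if not os.path.exists(connection_file):
        # This is where the manager currently errors; the proposal is to
        # only error after this on-disk lookup also fails.
        raise KeyError(f"No running kernel found for id {kernel_id}")

    client = BlockingKernelClient(connection_file=connection_file)
    client.load_connection_file()
    client.start_channels()
    return client
```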
Would that be a crazy thing to do? The goal is to enable something like `ipython console --existing <kernel_id>` in the notebook server.
For example, suppose Alice and Bob are running separate notebook servers on the same machine. Alice wants to allow Bob to observe the inputs and outputs in a console or notebook that she is working on. She sends Bob the connection file info (perhaps via a hub extension). Bob opens a console and creates a new session connecting to that running kernel. Now Alice and Bob each have a session connected to the same kernel. Since the JupyterLab console now mirrors iopub messages from any session, not just the session that initiated execution, Bob can see Alice's inputs and outputs. He can also execute code. This is one simple mode of real-time collaboration.
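To make Bob's side concrete, this is roughly the request his client would issue against his own server once the fallback exists: create a session bound to the kernel id Alice shared. The URL, token, and kernel id below are placeholders, and today the server would reject a kernel id it did not start itself, which is exactly what this issue asks to relax.

```python
# Hypothetical client-side view of Bob joining Alice's kernel.
import requests

BOB_SERVER = "http://localhost:8889"          # Bob's notebook server (placeholder)
TOKEN = "bobs-token"                           # placeholder auth token
ALICE_KERNEL_ID = "<kernel_id shared by Alice>"

resp = requests.post(
    f"{BOB_SERVER}/api/sessions",
    headers={"Authorization": f"token {TOKEN}"},
    json={
        "name": "shared-console",
        "path": "shared-console",
        "type": "console",
        # Request an explicit kernel id instead of asking for a new kernel.
        "kernel": {"id": ALICE_KERNEL_ID},
    },
)
resp.raise_for_status()
print(resp.json())
```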