Commit 8618cf9

committed
Get to a working state
1 parent 8e7625f commit 8618cf9

File tree

16 files changed: +531 −455 lines changed

CONTRIBUTING.md

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
# Contributing

## Troubleshoot

If you are seeing the frontend extension, but it is not working, check
that the server extension is enabled:

```bash
jupyter server extension list
```

If the server extension is installed and enabled, but you are not seeing
the frontend extension, check that the frontend extension is installed:

```bash
jupyter labextension list
```

## Contributing

### Development install

Note: You will need NodeJS to build the extension package.

The `jlpm` command is JupyterLab's pinned version of
[yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use
`yarn` or `npm` in lieu of `jlpm` below.

```bash
# Clone the repo to your local environment
# Change directory to the jupyterlab_magic_wand directory
# Install package in development mode
pip install -e "."
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
# Server extension must be manually installed in develop mode
jupyter server extension enable jupyterlab_magic_wand
# Rebuild extension Typescript source after making changes
jlpm build
```

You can watch the source directory and run JupyterLab in separate terminals to pick up changes in the extension's source and automatically rebuild the extension.

```bash
# Watch the source directory in one terminal, automatically rebuilding when needed
jlpm watch
# Run JupyterLab in another terminal
jupyter lab
```

With the watch command running, every saved change is immediately built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).

By default, the `jlpm build` command generates source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, run:

```bash
jupyter lab build --minimize=False
```

### Development uninstall

```bash
# Server extension must be manually disabled in develop mode
jupyter server extension disable jupyterlab_magic_wand
pip uninstall jupyterlab_magic_wand
```

In development mode, you will also need to remove the symlink created by the `jupyter labextension develop`
command. To find its location, run `jupyter labextension list` to see where the `labextensions`
folder is located, then remove the symlink named `jupyterlab_magic_wand` within that folder.

### Packaging the extension

See [RELEASE](RELEASE.md)

README.md

Lines changed: 1 addition & 2 deletions
@@ -5,8 +5,7 @@
 An in-cell AI assistant for JupyterLab notebooks

-![alt text](docs/README.png "Title")
-
+![alt text](docs/README.png 'Title')

 ## Requirements
jupyterlab_magic_wand/agents/lab_commands/__init__.py

Lines changed: 1 addition & 3 deletions

@@ -1,11 +1,9 @@
 from .insert_cell_below import insert_cell_below
 from .show_diff import show_diff
 from .update_cell_source import update_cell_source
-from .request_feedback import request_feedback

 __all__ = [
     insert_cell_below,
     show_diff,
-    update_cell_source,
-    request_feedback
+    update_cell_source
 ]

jupyterlab_magic_wand/agents/lab_commands/request_feedback.py

Lines changed: 0 additions & 16 deletions
This file was deleted.
Lines changed: 273 additions & 0 deletions
@@ -0,0 +1,273 @@

"""
An (ugly) demo/example of a LangGraph workflow that gets
fired when the magic wand is clicked in a Jupyter(Lab) Notebook.

This is compatible with Jupyter AI.
"""
import json
import uuid
from typing import Sequence
from langgraph.graph import StateGraph
from langgraph.graph import END, START
from langchain_core.runnables import RunnableConfig
from langchain_core.messages import HumanMessage
from jupyterlab_magic_wand.state import AIWorkflowState, ConfigSchema
from jupyterlab_magic_wand.agents.lab_commands import (
    update_cell_source,
    show_diff,
    insert_cell_below
)
from jupyterlab_magic_wand.agents.base import Agent

graph = StateGraph(AIWorkflowState, config_schema=ConfigSchema)
def get_jupyter_ai_model(jupyter_ai_config):
    """Instantiate the language model configured in Jupyter AI."""
    lm_provider = jupyter_ai_config.lm_provider
    return lm_provider(**jupyter_ai_config.lm_provider_params)


def get_cell(cell_id: str, state: AIWorkflowState) -> dict:
    """Return the notebook cell with the given id."""
    content = state["context"]["content"]
    for cell in content["cells"]:
        if cell["id"] == cell_id:
            return cell
    raise KeyError(f"No cell with id {cell_id!r} found in the notebook")


def get_exception(cell: dict):
    """Return a code cell's error output, if its last output is an error."""
    if cell.get("cell_type") == "code":
        outputs = cell.get("outputs")
        if outputs and len(outputs) > 0:
            last_output = outputs[-1]
            if last_output["output_type"] == "error":
                return last_output


def sanitize_code(code: str) -> str:
    """Strip a surrounding Markdown code fence (with optional language tag)
    from an LLM response. Prefixes are removed explicitly because
    str.lstrip would treat its argument as a character set, not a prefix."""
    code = code.strip()
    for prefix in ("```markdown", "```python", "```scala", "```"):
        if code.startswith(prefix):
            code = code[len(prefix):]
            break
    if code.endswith("```"):
        code = code[:-3]
    return code.strip()
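When stripping Markdown fences from LLM output, note that `str.lstrip` interprets its argument as a character set rather than a literal prefix, which can silently eat real leading characters. A quick standalone check of the difference (`str.removeprefix` requires Python 3.9+):

```python
# str.lstrip treats its argument as a character SET, so it can consume
# legitimate leading characters that happen to be in the set:
mangled = "python_version = 3".lstrip("```python")
print(mangled)  # _version = 3  -- the leading "python" was eaten

# str.removeprefix (Python 3.9+) removes only the literal prefix:
safe = "python_version = 3".removeprefix("```python")
print(safe)  # python_version = 3  -- unchanged, no such prefix
```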
async def router(state: AIWorkflowState) -> Sequence[str]:
    """Pick the branch to run based on the clicked cell's type and outputs."""
    cell_id = state["context"]["cell_id"]
    current = get_cell(cell_id, state)
    if current.get("cell_type") == "markdown":
        return ["route_markdown"]
    if current.get("cell_type") == "code":
        outputs = current.get("outputs")
        if outputs and len(outputs) > 0:
            last_output = outputs[-1]
            if last_output["output_type"] == "error":
                return ["route_exception"]
    return ["route_code"]
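The routing decision can be checked without LangGraph or a live notebook. A minimal synchronous mirror of the branch selection, using hand-built cell dicts (a sketch of the logic, not the wired-up graph node):

```python
def pick_route(cell: dict) -> str:
    """Mirror the router's branch choice for a single notebook cell."""
    if cell.get("cell_type") == "markdown":
        return "route_markdown"
    if cell.get("cell_type") == "code":
        outputs = cell.get("outputs") or []
        if outputs and outputs[-1].get("output_type") == "error":
            return "route_exception"
    return "route_code"

print(pick_route({"cell_type": "markdown", "source": "# notes"}))
# route_markdown
print(pick_route({"cell_type": "code", "outputs": [{"output_type": "error"}]}))
# route_exception
print(pick_route({"cell_type": "code", "outputs": [{"output_type": "stream"}]}))
# route_code
```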
SPELLCHECK_MARKDOWN = """
The following input is markdown. Update the input to correct any grammar or spelling mistakes. Be succinct and brief.

Input:
{input}
"""

SUMMARIZE_CELL = """
The following input comes from a code cell. Using only markdown (do not include ```), summarize what's happening in the code cell.

Input:
{input}
"""


async def route_markdown(state: AIWorkflowState, config: RunnableConfig) -> dict:
    llm = get_jupyter_ai_model(config["configurable"]["jupyter_ai_config"])
    cell_id = state["context"]["cell_id"]
    current = get_cell(cell_id, state)
    if current["source"].strip() != "":
        # Decide whether the markdown is a prompt for code or prose to edit.
        response = (await llm.ainvoke(input=f"Does the following input look like a prompt to write code (answer 'code' only) or content to be edited (answer 'content' only)?\n Input: {current['source']}")).content
        if "code" in response.lower():
            response = (await llm.ainvoke(input=f"Write code based on the prompt. Then, update the code to make it more efficient, add code comments, and respond with only the code and comments.\n Input: {current['source']}")).content
            response = sanitize_code(response)
            messages = state.get("messages", []) or []
            messages.append(response)
            commands = state["commands"]
            new_cell_id = str(uuid.uuid4())
            commands.extend([
                insert_cell_below(cell_id, source=response, type="code", new_cell_id=new_cell_id),
            ])
            return {"commands": commands, "messages": messages}
        # Otherwise, spell-check the markdown in place.
        prompt = SPELLCHECK_MARKDOWN.format(input=current["source"])
        response = (await llm.ainvoke(input=prompt)).content
        messages = state.get("messages", []) or []
        messages.append(response)
        commands = state["commands"]
        commands.extend([
            update_cell_source(cell_id, source=response),
            {
                "name": "notebook:run-cell",
                "args": {}
            }
        ])
        return {"commands": commands, "messages": messages}

    # The markdown cell is empty: summarize the following code cell, if any.
    content = state["context"]["content"]
    cells = content["cells"]

    for i, cell in enumerate(cells):
        if cell["id"] == cell_id:
            break

    if i + 1 < len(cells):
        next_cell = cells[i + 1]
        if next_cell["cell_type"] == "code":
            prompt = SUMMARIZE_CELL.format(input=next_cell["source"])
            response = (await llm.ainvoke(input=prompt)).content
            messages = state.get("messages", []) or []
            messages.append(response)
            commands = state["commands"]
            commands.extend([
                update_cell_source(cell_id, source=response),
                {
                    "name": "notebook:run-cell",
                    "args": {}
                }
            ])
            return {"commands": commands, "messages": messages}

    # Nothing to do; leave state unchanged.
    return {"commands": state["commands"], "messages": state.get("messages", []) or []}
exception_prompt = """
The code below came from a code cell in Jupyter. It raised the exception below. Update the code to fix the exception and add code comments explaining what you fixed. Respond with code only. Be succinct.

Code:
{code}

Exception Name:
{exception_name}

Exception Value:
{exception_value}
"""


async def route_exception(state: AIWorkflowState, config: RunnableConfig) -> dict:
    llm = get_jupyter_ai_model(config["configurable"]["jupyter_ai_config"])
    cell_id = state["context"]["cell_id"]
    current = get_cell(cell_id, state)
    exception = get_exception(current)
    prompt = exception_prompt.format(
        code=current["source"],
        exception_name=exception["ename"],
        exception_value=exception["evalue"]
    )
    response = (await llm.ainvoke(input=prompt)).content
    response = sanitize_code(response)
    messages = state.get("messages", []) or []
    messages.append(response)
    commands = state["commands"]
    commands.extend([
        update_cell_source(cell_id, source=response),
        show_diff(cell_id, current["source"], response),
        {
            "name": "notebook:run-cell",
            "args": {}
        },
    ])
    return {"commands": commands, "messages": messages}
IMPROVE_PROMPT = """
The input below came from a code cell in Jupyter. If the input does not look like code, but instead a prompt, write code based on the prompt. Then, update the code to make it more efficient, add code comments, and respond with only the code and comments.

The code:
{code}
"""

USE_CONTEXT_TO_WRITE_CELL = """
You are working in a Jupyter Notebook. Use the previous ordered cells as context and write some code to add to a new cell. Look for opportunities to make a plot if data is involved. Respond with code only.
"""


def prompt_new_cell_using_context(cell_id, state):
    """Build a prompt from up to three cells preceding the given cell."""
    content = state["context"]["content"]
    cells = content["cells"]

    for i, cell in enumerate(cells):
        if cell["id"] == cell_id:
            break

    previous_cells = []
    for j in range(1, 4):
        # Guard against negative indices, which would wrap to the notebook's end.
        if i - j >= 0:
            previous_cells.append(cells[i - j])

    prompt = USE_CONTEXT_TO_WRITE_CELL
    for k, cell in enumerate(previous_cells):
        prompt += f"\nCell {k} was a {cell['cell_type']} cell with source:\n{cell['source']}\n"

    return prompt


async def route_code(state: AIWorkflowState, config: RunnableConfig):
    llm = get_jupyter_ai_model(config["configurable"]["jupyter_ai_config"])

    cell_id = state["context"]["cell_id"]
    current = get_cell(cell_id, state)
    source = current["source"].strip()
    if source:
        # Improve the existing code and show a diff against the original.
        prompt = IMPROVE_PROMPT.format(code=source)
        response = (await llm.ainvoke(prompt)).content
        response = sanitize_code(response)
        messages = state.get("messages", []) or []
        messages.append(response)
        commands = state["commands"]
        commands.extend([
            update_cell_source(cell_id, source=response),
            show_diff(cell_id, current["source"], response),
        ])
        return {"commands": commands, "messages": messages}

    # The cell is empty: write new code using surrounding cells as context.
    prompt = prompt_new_cell_using_context(cell_id, state)
    response = (await llm.ainvoke(input=prompt)).content
    response = sanitize_code(response)
    messages = state.get("messages", []) or []
    messages.append(response)
    commands = state["commands"]
    commands.extend([
        update_cell_source(cell_id, source=response),
    ])
    return {"commands": commands, "messages": messages}
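The context-gathering step in `prompt_new_cell_using_context` can likewise be exercised standalone. A dependency-free sketch with fake cells (the header string and cell sources here are hypothetical stand-ins):

```python
HEADER = "Use the previous ordered cells as context and write code for a new cell.\n"

def build_context_prompt(cells: list, i: int) -> str:
    """Collect up to three cells preceding index i, nearest first."""
    previous = [cells[i - j] for j in range(1, 4) if i - j >= 0]
    prompt = HEADER
    for k, cell in enumerate(previous):
        prompt += f"\nCell {k} was a {cell['cell_type']} cell with source:\n{cell['source']}\n"
    return prompt

cells = [
    {"cell_type": "code", "source": "import pandas as pd"},
    {"cell_type": "markdown", "source": "# Load the data"},
    {"cell_type": "code", "source": ""},  # the empty cell being filled
]
print(build_context_prompt(cells, 2))  # includes cells 1 and 0, nearest first
```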
graph.add_node('route_code', route_code)
graph.add_node('route_markdown', route_markdown)
graph.add_node('route_exception', route_exception)

graph.add_conditional_edges(
    START,
    router,
    ['route_code', 'route_markdown', 'route_exception']
)

# Each branch terminates the workflow.
graph.add_edge('route_code', END)
graph.add_edge('route_markdown', END)
graph.add_edge('route_exception', END)

workflow = graph.compile()

agent = Agent(
    name="Magic Button Agent",
    description="Magic Button Agent",
    workflow=workflow,
    version="0.0.1"
)

0 commit comments