tests/checklist.md
-[ ] Change the `App Data` to some other folder
-[ ] Create a Custom Provider
-[ ] Disable some model providers
-[ ] Change the llama.cpp settings of 2 models

#### Validate that the update does not corrupt existing user data or settings (before and after update show the same information):
-[ ] Threads
- [ ] Previously used models and assistants are shown correctly
- [ ] Ensure that when this value is changed, there is no broken UI caused by it
- [ ] Code Block
- [ ] Show Line Numbers
-[ ][0.7.0] Compact Token Counter shows the token counter inside the chat input when toggled on; otherwise a small token counter is shown below the chat input
-[ ][ENG] Ensure that clicking `Reset` in the `Appearance` section resets back to the default values
-[ ][ENG] Ensure that clicking `Reset` in the `Code Block` section resets back to the default values

#### In `Model Providers`:

In `Llama.cpp`:
-[ ] After downloading a model from the hub, the model is listed with the correct name under `Models`
-[ ] Can import a `gguf` model with no error
-[ ][0.7.0] While importing, an import indication should appear under `Models`
-[ ] An imported model will be listed with the correct name under `Models`
-[ ][0.6.9] Take a `gguf` file and delete the `.gguf` extension from the file name, import it into Jan, and verify that it works.
-[ ][0.6.10] Can import vlm models and chat with images
-[ ][0.6.10] Importing a file that is not `mmproj` in the `mmproj` field should show a validation error
-[ ][0.6.10] Importing an `mmproj` from a different model should error
-[ ][0.7.0] Users can customize model display names according to their own preferences.
-[ ] Check that when clicking `delete` the model is removed from the list
-[ ] A deleted model doesn't appear in the selectable models section in chat input (even in old threads that previously used the model)
-[ ] Ensure that the user can re-import deleted imported models
-[ ][0.6.8] Ensure that there is a recommended `llama.cpp` for each system and that it works out of the box for users.
-[ ][0.6.10] Change to an older version of the llama.cpp backend. Clicking `Check for Llamacpp Updates` should alert that there is a new version.
-[ ][0.7.0] Users can cancel a backend download while it is in progress.
-[ ][0.6.10] Try `Install backend from file` for a backend; it should then show as an option for the backend
-[ ][0.7.0] User can install a backend from file in both `.tar.gz` and `.zip` formats, and the backend appears in the backend selection menu
-[ ][0.7.0] A manually installed backend is automatically selected after import, and the backend menu updates to show it as the latest imported backend.
-[ ] Enable `Auto-Unload Old Models`, and ensure that only one model can run / start at a time. If two models are running at the time of enabling, both of them will be stopped.
-[ ] Disable `Auto-Unload Old Models`, and ensure that multiple models can run at the same time.
-[ ] Enable `Context Shift` and ensure that context can run for long without encountering a memory error. Use the `banana test`: turn on the fetch MCP => ask a local model to fetch and summarize the history of the banana (the banana has a very long history on the wiki, it turns out). It should run out of context memory sufficiently fast if `Context Shift` is not enabled.

In `Model Settings`:
-[ ][0.6.8] Ensure that the user can change the Jinja chat template of an individual model and that it doesn't affect the template of other models
-[ ][0.6.8] Ensure we can override Tensor Buffer Type in the model settings to offload layers between GPU and CPU => Download any MoE model (i.e., gpt-oss-20b) => Set the tensor buffer type as `blk\\.([0-30]*[02468])\\.ffn_.*_exps\\.=CPU` => check that those tensors are on CPU and run inference (you can check whether app.log contains `--override-tensor", "blk\\\\.([0-30]*[02468])\\\\.ffn_.*_exps\\\\.=CPU`)

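The override value in the Tensor Buffer Type check above is a regular expression over llama.cpp tensor names (everything before `=CPU`). As a minimal sketch of what it selects — the tensor names below are hypothetical examples in llama.cpp's `blk.<layer>.` naming scheme — note that `[0-30]` is a character class matching the digits 0–3, not the numbers 0 through 30, so the pattern effectively picks expert FFN tensors whose layer index ends in an even digit:

```python
import re

# Single-backslash form of the pattern from the checklist item above.
pattern = re.compile(r"blk\.([0-30]*[02468])\.ffn_.*_exps\.")

# Hypothetical MoE tensor names (illustrative only).
tensors = [
    "blk.0.ffn_down_exps.weight",
    "blk.1.ffn_down_exps.weight",   # odd layer index: not matched
    "blk.12.ffn_up_exps.weight",
    "blk.13.ffn_gate_exps.weight",  # odd layer index: not matched
    "blk.0.attn_q.weight",          # not an expert FFN tensor: not matched
]

kept_on_cpu = [name for name in tensors if pattern.match(name)]
print(kept_on_cpu)  # → ['blk.0.ffn_down_exps.weight', 'blk.12.ffn_up_exps.weight']
```

If the override is applied correctly, these are the kinds of tensor names you would expect the log to report as placed on CPU.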
In Remote Model Providers:
-[ ] Check that the following providers are present:
- [ ] OpenAI
- [ ] Anthropic
- [ ] [0.7.0] Azure
- [ ] Cohere
- [ ] OpenRouter
- [ ] Mistral
-[ ] Delete a model and ensure that it doesn't show up in the `Models` list view or in the selectable dropdown in chat input.
-[ ] Ensure that a deleted model is also not selectable and does not appear in old threads that used it.
-[ ] Adding a new model manually works and the user can chat with the newly added model without error (you can add back the model you just deleted for testing)
-[ ][0.7.0] Vision capabilities are now automatically detected for vision models
-[ ][0.7.0] New default models are available for adding to remote providers through a drop-down (OpenAI, Mistral, Groq)

In Custom Providers:
-[ ] Ensure that the user can create a new custom provider with the right baseURL and API key.
-[ ] Clicking `Refresh` should retrieve a list of available models from the Custom Provider.
-[ ] User can chat with the custom provider
-[ ] Ensure that Custom Providers can be deleted and won't reappear in a new session
-[ ][0.6.9] Make sure that Ollama set up as a custom provider works with Jan

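For the Ollama check above, here is a sketch of the Custom Provider fields involved. The base URL is Ollama's OpenAI-compatible endpoint on its default port; the API key is a placeholder, since a local Ollama does not require one. The field names and values are assumptions about a default local install, not Jan's exact settings schema:

```python
# Hypothetical Custom Provider settings for a default local Ollama install.
custom_provider = {
    "name": "Ollama",
    "baseURL": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "apiKey": "ollama",                      # dummy value; Ollama ignores it by default
}

# Clicking `Refresh` should list models from the provider's /models route.
models_endpoint = custom_provider["baseURL"] + "/models"
print(models_endpoint)  # → http://localhost:11434/v1/models
```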
In general:
-[ ] A disabled Model Provider should not show up as selectable in the chat input of new and old threads alike (old threads' chat input should show `Select Model` instead of the disabled model)
-[ ] When the user clicks `Always Allow` on the pop-up, the tool will retain permission and won't ask for confirmation again. (this applies at an individual tool level, not at the MCP server level)
-[ ] If `Allow All MCP Tool Permissions` is enabled, in every new thread there should not be any confirmation dialog popping up when a tool is called.
-[ ] When the pop-up appears, make sure that the `Tool Parameters` are also shown in detail in the pop-up
-[ ][0.6.9] Go to Enter JSON configuration when creating a new MCP => paste the JSON config inside => click `Save` => the server works
-[ ][0.6.9] If an individual JSON config's format is invalid, the MCP server should not be activated
-[ ][0.6.9] Make sure that an MCP server can be used with the streamable-http transport => connect to Smithery and test the MCP server
-[ ][0.7.0] When deleting an MCP Server, a toast notification is shown

#### In `Local API Server`:
-[ ] User can `Start Server` and chat with the default endpoint
-[ ][0.6.9] With the startup configuration, the last used model is also automatically started (users do not have to manually start a model before starting the server)
-[ ][0.6.9] Make sure that you can send an image to the Local API Server and it also works (you can set up the Local API Server as a Custom Provider in Jan to test)
-[ ][0.6.10] Make sure you are still able to see the API key while the local server status is running
-[ ][0.7.0] Users can see the Jan API Server Swagger UI by opening the following path in their browser: `http://<ip>:<port>`
-[ ][0.7.0] Users can set the trusted host to `*` in the server configuration to accept requests from all hosts or requests without a host header
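For the Local API Server checks above, a sketch of the OpenAI-compatible request a client would POST to the server. The host, port, and model id are assumptions that depend on your server configuration and which model you started; nothing is sent here, the snippet only builds the request:

```python
import json

base_url = "http://127.0.0.1:1337"  # assumed host:port; use the values shown in Jan's server settings
endpoint = base_url + "/v1/chat/completions"

payload = {
    "model": "qwen3-4b",  # hypothetical model id; substitute a model you have running
    "messages": [{"role": "user", "content": "Hello from the Local API Server check"}],
    "stream": False,
}

body = json.dumps(payload)
print(endpoint)
```

POSTing `body` to `endpoint` (e.g. with curl and a `Content-Type: application/json` header) should return a chat completion while the server is running.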

#### In `HTTPS Proxy`:
-[ ] Model download request goes through the proxy endpoint

-[ ] Clicking download works inside the Model card HTML
-[ ][0.6.9] Check that the model recommendation based on user hardware works as expected in the Model Hub
-[ ][0.6.10] Check that a model with the same name but a different author can be found in the Hub catalog (test with [https://huggingface.co/unsloth/Qwen3-4B-Thinking-2507-GGUF](https://huggingface.co/unsloth/Qwen3-4B-Thinking-2507-GGUF))
-[ ][0.7.0] Support downloading models with the same name from different authors; models not listed on the hub will be prefixed with the author name

## D. Threads

-[ ] User can send messages with different types of text content (e.g. text, emoji, ...)
-[ ] When requesting the model to generate a markdown table, the table is correctly formatted as returned from the model.
-[ ] When the model generates code, ensure that the code snippets are properly formatted according to the `Appearance -> Code Block` setting.
-[ ][0.7.0] LaTeX formulas now render correctly in chat. Both inline \(...\) and block \[...\] formats are supported. Code blocks and HTML tags are not affected
-[ ] Users can edit their old messages and regenerate the answer based on the new message
-[ ] User can click `Copy` to copy the model response
-[ ][0.6.10] When clicking copy on a code block from the model generation, only one code block is copied at a time instead of multiple code blocks at once
-[ ] User can click `Delete` to delete either the user message or the model response.
-[ ] The token speed appears while a response from the model is being generated, and the final value is shown under the response.
-[ ] Make sure that when users typing Chinese or Japanese characters with an IME keyboard press `Enter`, the `Send` button doesn't trigger automatically after each word.
-[ ][0.6.9] Attach an image to the chat input and see if you can chat with it using a Remote model & Local model
-[ ][0.6.9] Check that you can paste an image into the text box from your system clipboard (Copy - Paste)
-[ ][0.6.10] User can Paste (e.g. Ctrl + V) text into the chat input when it is a vision model
-[ ][0.6.9] Make sure that the user can favourite a model in the Model list and see the favourite model selection in chat input
-[ ][0.6.10] User can click the model's settings in chat, enable Auto-Optimize Settings, and continue chatting with the model without interruption.
-[ ] Verify this works with at least two models of different sizes (e.g., 1B and 7B).
-[ ][0.7.0] When chatting with a model, the UI displays a token usage counter showing the percentage of context consumed.
-[ ][0.7.0] When chatting with a model, the scroll no longer follows the model's streaming response; it only auto-scrolls when the user sends a new message

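A sample snippet for the LaTeX rendering check above — have the model produce (or paste) both delimiter styles and verify they render; the formulas themselves are arbitrary:

```latex
Inline math such as \(E = mc^2\) should render within the sentence, while

\[
\int_0^1 x^2 \, dx = \frac{1}{3}
\]

should render as a standalone block.
```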
#### In Project

-[ ][0.7.0] User can create a new project
-[ ][0.7.0] User can add existing threads to a project
-[ ][0.7.0] When the user attempts to delete a project, a confirmation dialog must appear warning that this action will permanently delete the project and all its associated threads.
-[ ][0.7.0] The user can successfully delete a project, and all threads contained within that project are also permanently deleted.
-[ ][0.7.0] A thread that already belongs to a project cannot be re-added to the same project.
-[ ][0.7.0] Favorited threads retain their "favorite" status even after being added to a project

## E. Assistants
-[ ] There is always at least one default Assistant, which is Jan
-[ ] The default Jan assistant has `stream = True` by default

In `Settings -> General`:
-[ ] Change the location of the `App Data` to some other path that is not the default path
-[ ][0.7.0] Users cannot set the data location to root directories (e.g., C:\, D:\ on Windows), but can select subfolders within those drives (e.g., C:\data, D:\data)
-[ ] Click on the `Reset` button in `Other` to factory reset the app:
- [ ] All threads deleted
- [ ] All Assistants deleted except for the default Jan Assistant