Commit f537429

Merge pull request #6706 from menloresearch/qa/v0.7.0
feat: update checklist for 0.7.0

2 parents 87db633 + f6f9813

File tree

1 file changed: +45 −18 lines

tests/checklist.md

Lines changed: 45 additions & 18 deletions
```diff
@@ -16,7 +16,7 @@ Before testing, set-up the following in the old version to make sure that we can
 - [ ] Change the `App Data` to some other folder
 - [ ] Create a Custom Provider
 - [ ] Disable some model providers
-- [NEW] Change llama.cpp setting of 2 models
+- [ ] Change the llama.cpp settings of 2 models
 #### Validate that the update does not corrupt existing user data or settings (before and after update show the same information):
 - [ ] Threads
 - [ ] Previously used model and assistants are shown correctly
```
```diff
@@ -73,35 +73,44 @@ Before testing, set-up the following in the old version to make sure that we can
 - [ ] Ensure that when this value is changed, there is no broken UI caused by it
 - [ ] Code Block
 - [ ] Show Line Numbers
-- [ENG] Ensure that when click on `Reset` in the `Appearance` section, it reset back to the default values
-- [ENG] Ensure that when click on `Reset` in the `Code Block` section, it reset back to the default values
+- [ ] [0.7.0] When `Compact Token Counter` is toggled on, the token counter is shown inside the chat input; otherwise a small token counter is shown below the chat input
+- [ ] [ENG] Ensure that clicking `Reset` in the `Appearance` section resets it back to the default values
+- [ ] [ENG] Ensure that clicking `Reset` in the `Code Block` section resets it back to the default values

```
```diff
 #### In `Model Providers`:

 In `Llama.cpp`:
 - [ ] After downloading a model from the hub, the model is listed with the correct name under `Models`
 - [ ] Can import a `gguf` model with no error
+- [ ] [0.7.0] While importing, an import indicator should appear under `Models`
 - [ ] Imported model will be listed with the correct name under `Models`
+- [ ] [0.6.9] Take a `gguf` file and delete the `.gguf` extension from the file name, import it into Jan, and verify that it works.
+- [ ] [0.6.10] Can import VLM models and chat with images
+- [ ] [0.6.10] Importing a file that is not an `mmproj` in the `mmproj` field should show a validation error
+- [ ] [0.6.10] Importing an `mmproj` from a different model should error
+- [ ] [0.7.0] Users can customize model display names according to their own preferences.
 - [ ] Check that clicking `delete` removes the model from the list
 - [ ] Deleted model doesn't appear in the selectable models section in chat input (even in old threads that used the model previously)
 - [ ] Ensure that user can re-import deleted imported models
+- [ ] [0.6.8] Ensure that there is a recommended `llama.cpp` backend for each system and that it works out of the box for users.
+- [ ] [0.6.10] Change to an older version of the llama.cpp backend. Clicking `Check for Llamacpp Updates` should alert that there is a new version.
+- [ ] [0.7.0] Users can cancel a backend download while it is in progress.
+- [ ] [0.6.10] Try `Install backend from file` for a backend and it should show as an option for backends
+- [ ] [0.7.0] User can install a backend from file in both `.tar.gz` and `.zip` formats, and the backend appears in the backend selection menu
+- [ ] [0.7.0] A manually installed backend is automatically selected after import, and the backend menu updates to show it as the latest imported backend.
 - [ ] Enable `Auto-Unload Old Models`, and ensure that only one model can run / start at a time. If there are two models running at the time of enabling, both of them will be stopped.
 - [ ] Disable `Auto-Unload Old Models`, and ensure that multiple models can run at the same time.
 - [ ] Enable `Context Shift` and ensure that context can run for long without encountering a memory error. Use the `banana test`: turn on the fetch MCP => ask a local model to fetch and summarize the history of banana (banana has a very long history on wiki, it turns out). It should run out of context memory sufficiently fast if `Context Shift` is not enabled.
+
```
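The `gguf` import items above include importing a file whose `.gguf` extension was removed from the name. One content-based way to recognize such a file is the format's four-byte magic at the start of every GGUF file; a minimal sketch (the `looks_like_gguf` helper is hypothetical, not Jan's actual importer):

```python
import os
import struct
import tempfile

GGUF_MAGIC = b"GGUF"  # every GGUF file begins with these four magic bytes

def looks_like_gguf(path: str) -> bool:
    # Detect a GGUF model by content rather than by file extension,
    # which is how an importer can accept a file whose ".gguf"
    # suffix has been stripped.
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Simulate a model file whose ".gguf" extension was removed:
fd, path = tempfile.mkstemp()  # temp file with no extension at all
with os.fdopen(fd, "wb") as f:
    f.write(GGUF_MAGIC + struct.pack("<I", 3))  # magic + a version field
assert looks_like_gguf(path)
os.remove(path)
```

Checking magic bytes instead of the file name is why the renamed-file test case is expected to pass.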
```diff
+In `Model Settings`:
 - [ ] [0.6.8] Ensure that user can change the Jinja chat template of an individual model and it doesn't affect the templates of other models
-- [ ] [0.6.8] Ensure that there is a recommended `llama.cpp` for each system and that it works out of the box for users.
 - [ ] [0.6.8] Ensure we can override Tensor Buffer Type in the model settings to offload layers between GPU and CPU => Download any MoE model (e.g., gpt-oss-20b) => Set tensor buffer type as `blk\\.([0-30]*[02468])\\.ffn_.*_exps\\.=CPU` => check that those tensors are on CPU and run inference (you can check whether app.log contains `--override-tensor", "blk\\\\.([0-30]*[02468])\\\\.ffn_.*_exps\\\\.=CPU`)
-- [ ] [0.6.9] Take a `gguf` file and delete the `.gguf` extensions from the file name, import it into Jan and verify that it works.
-- [ ] [0.6.10] Can import vlm models and chat with images
-- [ ] [0.6.10] Import model on mmproj field should show validation error
-- [ ] [0.6.10] Import mmproj from different models should not be able to chat with the models
-- [ ] [0.6.10] Change to an older version of llama.cpp backend. Click on `Check for Llamacpp Updates` it should alert that there is a new version.
-- [ ] [0.6.10] Try `Install backend from file` for a backend and it should show as an option for backend
```
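The tensor-override check above can be sanity-checked offline. This is a rough sketch of which tensor names the checklist's pattern would pin to CPU, assuming the regex is applied as a search over tensor names (the `on_cpu` helper is illustrative only, not llama.cpp's implementation):

```python
import re

# The pattern from the checklist, as passed to llama.cpp's
# --override-tensor "regex=buffer" option. It targets expert FFN
# tensors in even-numbered blocks.
pattern = re.compile(r"blk\.([0-30]*[02468])\.ffn_.*_exps\.")

def on_cpu(tensor_name: str) -> bool:
    # Illustrative: a name matching the pattern would be kept on CPU.
    return pattern.search(tensor_name) is not None

assert on_cpu("blk.0.ffn_gate_exps.weight")       # even block, expert FFN
assert on_cpu("blk.12.ffn_up_exps.weight")
assert not on_cpu("blk.13.ffn_down_exps.weight")  # odd block is not offloaded
assert not on_cpu("blk.12.attn_q.weight")         # non-FFN tensor unaffected
```

Note that `[0-30]` is a character class (digits 0 through 3, plus 0 again), not a numeric range, so the pattern matches even-numbered blocks whose leading digits are 0-3.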
```diff

 In Remote Model Providers:
 - [ ] Check that the following providers are present:
 - [ ] OpenAI
 - [ ] Anthropic
+- [ ] [0.7.0] Azure
 - [ ] Cohere
 - [ ] OpenRouter
 - [ ] Mistral
```
```diff
@@ -113,12 +122,15 @@ In Remote Model Providers:
 - [ ] Delete a model and ensure that it doesn't show up in the `Models` list view or in the selectable dropdown in chat input.
 - [ ] Ensure that a deleted model is also not selectable and does not appear in old threads that used it.
 - [ ] Manually adding a new model works and the user can chat with the newly added model without error (you can add back the model you just deleted for testing)
-- [ ] [0.6.9] Make sure that Ollama set-up as a custom provider work with Jan
+- [ ] [0.7.0] Vision capabilities are now automatically detected for vision models
+- [ ] [0.7.0] New default models are available for adding to remote providers through a drop-down (OpenAI, Mistral, Groq)
+
 In Custom Providers:
 - [ ] Ensure that user can create a new custom provider with the right baseURL and API key.
 - [ ] Clicking `Refresh` should retrieve a list of available models from the Custom Provider.
 - [ ] User can chat with the custom provider
 - [ ] Ensure that Custom Providers can be deleted and won't reappear in a new session
+- [ ] [0.6.9] Make sure that Ollama set up as a custom provider works with Jan

 In general:
 - [ ] Disabled Model Provider should not show up as selectable in chat input of new thread and old thread alike (old threads' chat input should show `Select Model` instead of the disabled model)
```
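Custom providers are expected to speak an OpenAI-compatible chat API, which is why the Ollama test case above works. A hedged sketch of the request shape such a provider serves; the base URL is Ollama's real OpenAI-compatible endpoint, while the model id and API key are placeholders:

```python
import json

base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
api_key = "ollama"                      # placeholder; most providers need a real key

# Headers and body for POST {base_url}/chat/completions:
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
payload = {
    "model": "llama3.2",  # hypothetical model id from GET {base_url}/models
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
body = json.dumps(payload)
url = f"{base_url}/chat/completions"
assert json.loads(body)["messages"][0]["content"] == "Hello"
```

The `Refresh` step in the checklist corresponds to listing `GET {base_url}/models` with the same headers.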
```diff
@@ -162,9 +174,10 @@ Ensure that the following section information show up for hardware
 - [ ] When the user clicks `Always Allow` on the pop-up, the tool will retain permission and won't ask for confirmation again. (this applies at an individual tool level, not at the MCP server level)
 - [ ] If `Allow All MCP Tool Permissions` is enabled, in every new thread, there should not be any confirmation dialog pop-up when a tool is called.
 - [ ] When the pop-up appears, make sure that the `Tool Parameters` are also shown in detail in the pop-up
-- [ ] [0.6.9] Go to Enter JSON configuration when created a new MCp => paste the JSON config inside => click `Save` => server works
+- [ ] [0.6.9] Go to Enter JSON configuration when creating a new MCP => paste the JSON config inside => click `Save` => server works
 - [ ] [0.6.9] If an individual JSON config fails to parse, the MCP server should not be activated
 - [ ] [0.6.9] Make sure that the MCP server can be used with streamable-http transport => connect to Smithery and test the MCP server
+- [ ] [0.7.0] When deleting an MCP Server, a toast notification is shown
```
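For the "Enter JSON configuration" step above, a plausible config follows the common MCP `command` + `args` convention (the exact field names in Jan's dialog may differ), and the parse-failure rule can be sketched as:

```python
import json

# A hedged example of JSON pasted into the MCP configuration dialog.
# "mcp-server-fetch" is the reference fetch server launched via uvx.
config_text = """
{
  "command": "uvx",
  "args": ["mcp-server-fetch"],
  "env": {}
}
"""
config = json.loads(config_text)
assert config["command"] == "uvx"

# Per the checklist, a config that fails to parse must not activate
# the server:
try:
    json.loads("{ not valid json }")
    activated = True
except json.JSONDecodeError:
    activated = False
assert not activated
```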
```diff

 #### In `Local API Server`:
 - [ ] User can `Start Server` and chat with the default endpoint
```
```diff
@@ -175,7 +188,8 @@ Ensure that the following section information show up for hardware
 - [ ] [0.6.9] With the startup configuration, the last used model is also automatically started (users do not have to manually start a model before starting the server)
 - [ ] [0.6.9] Make sure that you can send an image to the Local API Server and it also works (you can set up the Local API Server as a Custom Provider in Jan to test)
 - [ ] [0.6.10] Make sure you are still able to see the API key while the local server status is running
-
+- [ ] [0.7.0] Users can see the Jan API Server Swagger UI by opening the following path in their browser: `http://<ip>:<port>`
+- [ ] [0.7.0] Users can set the trusted host to `*` in the server configuration to accept requests from all hosts or requests without a Host header
```
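The trusted-host rule above can be sketched as a simple comparison, assuming the server checks each request's `Host` header against the configured value (`host_allowed` is illustrative, not Jan's actual code):

```python
def host_allowed(request_host, trusted):
    # "*" accepts any host, including requests with no Host header;
    # otherwise the header must match the configured value exactly.
    if trusted == "*":
        return True
    return request_host == trusted

assert host_allowed("192.168.1.5:1337", "*")
assert host_allowed(None, "*")            # no Host header at all
assert not host_allowed("evil.example", "127.0.0.1:1337")
```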
```diff
 #### In `HTTPS Proxy`:
 - [ ] Model download request goes through proxy endpoint

```
```diff
@@ -188,6 +202,7 @@ Ensure that the following section information show up for hardware
 - [ ] Clicking download works inside the Model card HTML
 - [ ] [0.6.9] Check that the model recommendation based on user hardware works as expected in the Model Hub
 - [ ] [0.6.10] Check that a model of the same name but different author can be found in the Hub catalog (test with [https://huggingface.co/unsloth/Qwen3-4B-Thinking-2507-GGUF](https://huggingface.co/unsloth/Qwen3-4B-Thinking-2507-GGUF))
+- [ ] [0.7.0] Support downloading models with the same name from different authors; models not listed on the hub will be prefixed with the author name

 ## D. Threads

```
```diff
@@ -214,19 +229,30 @@ Ensure that the following section information show up for hardware
 - [ ] User can send messages with different types of text content (e.g. text, emoji, ...)
 - [ ] When the model is asked to generate a markdown table, the table is correctly formatted as returned from the model.
 - [ ] When the model generates code, ensure that the code snippets are properly formatted according to the `Appearance -> Code Block` setting.
+- [ ] [0.7.0] LaTeX formulas now render correctly in chat. Both inline \(...\) and block \[...\] formats are supported. Code blocks and HTML tags are not affected
 - [ ] Users can edit their old message and regenerate the answer based on the new message
 - [ ] User can click `Copy` to copy the model response
+- [ ] [0.6.10] When clicking copy on a code block from a model generation, only one code block is copied at a time instead of multiple code blocks at once
 - [ ] User can click `Delete` to delete either the user message or the model response.
 - [ ] The token speed appears while a response from the model is being generated, and the final value is shown under the response.
 - [ ] Make sure that when users type Chinese or Japanese characters with an IME keyboard and press `Enter`, the `Send` button doesn't trigger automatically after each word.
-- [ ] [0.6.9] Attach an image to the chat input and see if you can chat with it using a remote model
-- [ ] [0.6.9] Attach an image to the chat input and see if you can chat with it using a local model
+- [ ] [0.6.9] Attach an image to the chat input and see if you can chat with it using a Remote model & Local model
 - [ ] [0.6.9] Check that you can paste an image into the text box from your system clipboard (Copy - Paste)
-- [ ] [0.6.9] Make sure that user can favourite a model in the llama.cpp list and see the favourite model selection in chat input
+- [ ] [0.6.10] User can Paste (e.g. Ctrl + V) text into the chat input when it is a vision model
+- [ ] [0.6.9] Make sure that user can favourite a model in the Model list and see the favourite model selection in chat input
 - [ ] [0.6.10] User can click the model's settings in chat, enable Auto-Optimize Settings, and continue chatting with the model without interruption.
 - [ ] Verify this works with at least two models of different sizes (e.g., 1B and 7B).
-- [ ] [0.6.10] User can Paste (e.g Ctrl + v) text into chat input when it is a vision model
-- [ ] [0.6.10] When click on copy code block from model generation, it will only copy one code-block at a time instead of multiple code block at once
+- [ ] [0.7.0] When chatting with a model, the UI displays a token usage counter showing the percentage of context consumed.
+- [ ] [0.7.0] When chatting with a model, the scroll no longer follows the model's streaming response; it only auto-scrolls when the user sends a new message
```
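The LaTeX item above can be approximated with two delimiter patterns. This is a sketch of delimiter detection only, not Jan's actual renderer, which must additionally skip code blocks and HTML tags:

```python
import re

# Inline \( ... \) and block \[ ... \] LaTeX delimiters, matched lazily
# so each span ends at the nearest closing delimiter.
INLINE = re.compile(r"\\\((.+?)\\\)")
BLOCK = re.compile(r"\\\[(.+?)\\\]", re.S)  # re.S: block spans may cross lines

text = r"Euler: \(e^{i\pi}+1=0\) and block \[\int_0^1 x\,dx = \tfrac12\]"
assert INLINE.search(text).group(1) == r"e^{i\pi}+1=0"
assert BLOCK.search(text).group(1) == r"\int_0^1 x\,dx = \tfrac12"
```

A quick check with both forms in one message (as above) is a convenient manual test case for this checklist item.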
```diff
+#### In Project
+
+- [ ] [0.7.0] User can create a new project
+- [ ] [0.7.0] User can add existing threads to a project
+- [ ] [0.7.0] When the user attempts to delete a project, a confirmation dialog must appear warning that this action will permanently delete the project and all its associated threads.
+- [ ] [0.7.0] The user can successfully delete a project, and all threads contained within that project are also permanently deleted.
+- [ ] [0.7.0] A thread that already belongs to a project cannot be re-added to the same project.
+- [ ] [0.7.0] Favorited threads retain their "favorite" status even after being added to a project
+
```
```diff
 ## E. Assistants
 - [ ] There is always at least one default Assistant which is Jan
 - [ ] The default Jan assistant has `stream = True` by default
```
```diff
@@ -238,6 +264,7 @@ Ensure that the following section information show up for hardware

 In `Settings -> General`:
 - [ ] Change the location of the `App Data` to some other path that is not the default path
+- [ ] [0.7.0] Users cannot set the data location to root directories (e.g., C:\, D:\ on Windows), but can select subfolders within those drives (e.g., C:\data, D:\data)
 - [ ] Click on `Reset` button in `Other` to factory reset the app:
 - [ ] All threads deleted
 - [ ] All Assistants deleted except for the default Jan Assistant
```
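The data-location rule above amounts to rejecting bare drive roots while allowing subfolders. A minimal sketch using `pathlib` (`is_valid_data_location` is hypothetical, not Jan's code):

```python
from pathlib import PureWindowsPath

def is_valid_data_location(path: str) -> bool:
    # A filesystem root is its own parent, so rejecting paths that
    # equal their parent rejects bare drive roots like C:\ while
    # allowing subfolders like C:\data.
    p = PureWindowsPath(path)
    return p != p.parent

assert not is_valid_data_location("C:\\")
assert not is_valid_data_location("D:\\")
assert is_valid_data_location("C:\\data")
assert is_valid_data_location("D:\\data")
```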
