feat: Add support for data import in data browser #3244
mtrezza merged 21 commits into parse-community:alpha
Conversation
🚀 Thanks for opening this pull request!

✅ Snyk checks have passed. No issues have been found so far.
💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.

Important: Review skipped. This PR was authored by the user configured for CodeRabbit reviews. By default, CodeRabbit skips reviewing PRs authored by this user. It's recommended to use a dedicated user account to post CodeRabbit review feedback.
📝 Walkthrough

Adds a Data Import feature: new ImportDataDialog UI, parsing/import utilities, Data Browser import menu/handlers, FileInput buttonText behavior change, ParseApp maintenanceKey exposure, README updates, styles, and comprehensive import tests.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Dialog as ImportDataDialog
    participant Browser as Browser Component
    participant Utils as importData.js
    participant Server as Parse Server
    User->>Dialog: select file & options
    User->>Dialog: confirm import
    Dialog->>Utils: parseImportJSON/parseImportCSV(content)
    Utils-->>Dialog: parsed rows or parse errors
    Dialog->>Browser: onConfirm(content, fileType, options)
    Browser->>Utils: checkDuplicates(objectIds, className, options)
    Utils->>Server: GET /classes/<class>?where={"objectId":{"$in":[...]}}
    Server-->>Utils: existing IDs
    Utils-->>Browser: duplicate list
    Browser->>Utils: buildBatchRequests(rows, className, options)
    Utils-->>Browser: batch requests[]
    Browser->>Utils: sendBatchImport(requests, options, progressCb)
    Utils->>Server: POST /batch (chunked)
    Server-->>Utils: per-batch responses
    Utils-->>Browser: progress updates & final results
    Browser->>Dialog: update progress/results
    Dialog->>User: display summary (imported/skipped/failed)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 1759-1760: Update the "Preserve timestamps" row to explicitly
state where to configure the maintenance key by referencing the dashboard config
field apps[].maintenanceKey; e.g., change the note to: "Requires a
maintenanceKey (set in the dashboard config under apps[].maintenanceKey) to be
configured." Ensure the exact symbol apps[].maintenanceKey is mentioned so users
know where to apply it.
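To make the README note concrete, a minimal dashboard config sketch is shown below; the surrounding keys follow the standard parse-dashboard `apps` array, and the placeholder values are illustrative only:

```json
{
  "apps": [
    {
      "appName": "MyApp",
      "appId": "APPLICATION_ID",
      "masterKey": "MASTER_KEY",
      "maintenanceKey": "MAINTENANCE_KEY",
      "serverURL": "http://localhost:1337/parse"
    }
  ]
}
```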
In `@src/dashboard/Data/Browser/Browser.react.js`:
- Around line 2456-2559: The code captures dialogRef = this.importDialogRef once
and then uses it across awaits (e.g., before/after sendBatchImport and in
onProgress), which can become stale if the dialog is closed/reopened; change
uses of the stale local dialogRef to always read this.importDialogRef at the
moment of use and null-check it (e.g., replace dialogRef.setImporting(),
dialogRef.setProgress(...), dialogRef.setResults(...) with something like const
dlg = this.importDialogRef; if (dlg && typeof dlg.setProgress === 'function')
dlg.setProgress(...)), and do the same for setImporting and setResults as well
as the onProgress callback passed to sendBatchImport so you never call methods
on an out-of-date instance.
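The hazard above can be shown without React at all; this plain-JS sketch (the `Dialog` class and `app` holder are stand-ins, not the real components) contrasts a ref captured before an `await` with one re-read at the moment of use:

```javascript
// A dialog handle captured before an `await` can point at an instance
// that was closed and replaced during the await.
class Dialog {
  constructor(id) {
    this.id = id;
    this.progress = null;
  }
  setProgress(p) {
    this.progress = p;
  }
}

const app = { importDialogRef: new Dialog('first') };

// Stale pattern: capture once, use after the await.
async function importWithStaleRef() {
  const dialogRef = app.importDialogRef;
  await Promise.resolve(); // dialog gets swapped out here
  dialogRef.setProgress(100); // updates the old, detached instance
}

// Suggested pattern: re-read the ref at the moment of use and null-check it.
async function importWithFreshRef() {
  await Promise.resolve();
  const dlg = app.importDialogRef;
  if (dlg && typeof dlg.setProgress === 'function') {
    dlg.setProgress(100);
  }
}

async function demo() {
  const pending = importWithStaleRef();
  app.importDialogRef = new Dialog('second'); // simulate close/reopen
  await pending;
  // The stale path missed the live dialog entirely:
  const missed = app.importDialogRef.progress === null;
  await importWithFreshRef();
  return { missed, liveProgress: app.importDialogRef.progress };
}

demo().then(result => console.log(result)); // { missed: true, liveProgress: 100 }
```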
- Around line 2448-2453: The import button currently allows importing when
viewing a relation and confirmImport uses this.props.params.className (parent)
instead of the relation target; update the logic so import is either disabled
when a relation is active or confirmImport uses the relation target class: in
Browser.react.js adjust confirmImport to use
this.state.relation?.targetClassName (falling back to
this.props.params.className) and/or change the BrowserToolbar import button
render logic to gate the button with !!this.state.relation (matching delete/new
row checks) so users cannot import into the wrong class; ensure you update
references to confirmImport and the BrowserToolbar import button rendering
accordingly.
In `@src/dashboard/Data/Browser/BrowserToolbar.react.js`:
- Line 395: The MenuItem currently invokes onImport via onClick={() =>
onImport()}, which will throw if the onImport prop is undefined; change the
handler to pass the callback directly (onClick={onImport}) and add a disabled
prop when onImport is not provided (e.g., disabled={!onImport}) so clicks are
no-ops when the callback is absent; update the MenuItem usage in
BrowserToolbar.react.js accordingly.
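The suggested prop shape can be sketched as a plain object (JSX omitted so the snippet runs standalone; `importMenuItemProps` is a hypothetical helper, not the real component):

```javascript
// Pass the callback through directly and disable the item when no
// callback was provided, so a click can never invoke undefined.
function importMenuItemProps(onImport) {
  return {
    disabled: !onImport, // clicks are no-ops when the prop is absent
    onClick: onImport,   // instead of onClick={() => onImport()}
  };
}

const withHandler = importMenuItemProps(() => 'importing');
const withoutHandler = importMenuItemProps(undefined);
```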
In `@src/dashboard/Data/Browser/ImportDataDialog.react.js`:
- Around line 80-100: The confirm flow in handleConfirm should be locked to
prevent duplicate submits and should handle FileReader failures: before creating
the FileReader, check and set a guard state (e.g., this.state.isImporting) to
prevent re-entry; if already importing, return early. Attach a reader.onerror
handler (and reader.onabort/onloadend as needed) to clear the isImporting flag
and surface/log the read error (and call any failure callback or set an error
state). Ensure reader.onload only calls this.props.onConfirm once and then
clears isImporting (this.setState({ isImporting: false })), and disable the
confirm UI based on isImporting so the user cannot resend while reading. Use the
existing symbols handleConfirm, this.state.file, this.getFileExtension, and
this.props.onConfirm to locate where to add the checks, setState calls, and
reader.onerror handler.
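A minimal sketch of that re-entrancy guard, with `FileReader` mocked so the snippet runs outside the browser; the real dialog would key the same logic off `this.state.isImporting` and call `this.props.onConfirm` in `onload`:

```javascript
class MockFileReader {
  readAsText() {
    // Simulate an async read that succeeds on the next tick.
    setTimeout(() => this.onload({ target: { result: '[{"score":1}]' } }), 0);
  }
}

const dialog = {
  isImporting: false,
  confirmCalls: 0,
  handleConfirm() {
    if (this.isImporting) {
      return; // a read is already in flight; ignore the duplicate submit
    }
    this.isImporting = true;
    const reader = new MockFileReader();
    reader.onerror = () => {
      this.isImporting = false; // clear the flag on read failure too
    };
    reader.onload = () => {
      this.confirmCalls += 1; // stands in for this.props.onConfirm(...)
      this.isImporting = false;
    };
    reader.readAsText();
  },
};

dialog.handleConfirm();
dialog.handleConfirm(); // ignored: the guard holds until onload/onerror fires
```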
In `@src/lib/importData.js`:
- Line 167: Replace unused catch bindings and convert immutable let declarations
to const: update each catch (e) occurrences to either catch { ... } or catch
(_e) if you need a named parameter (references: the catch blocks in
importData.js currently written as catch (e)), and find any variables declared
with let that are never reassigned and change them to const (search for the
immutable let declarations in importData.js and update them accordingly); ensure
the three failing spots (the catch blocks and the immutable let) are fixed so
ESLint no-unused-vars and prefer-const rules pass.
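A before/after sketch of both lint fixes: the ES2019 optional catch binding removes the unused `e`, and `const` replaces a never-reassigned `let` (`safeParse` is an illustrative helper, not code from the PR):

```javascript
function safeParse(text) {
  try {
    return JSON.parse(text);
  } catch { // was: catch (e), flagged by no-unused-vars
    return null;
  }
}

const parsed = safeParse('{"a": 1}'); // was: let parsed, flagged by prefer-const
```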
- Around line 21-27: The current return path accepts any JSON array elements
(e.g., numbers or strings) which later become invalid rows; update the
validation after confirming Array.isArray(parsed) to ensure every element in
parsed is a plain object (check typeof item === 'object' && item !== null &&
!Array.isArray(item)); if any element fails, return { rows: null, error: 'All
items in the JSON array must be objects.' } otherwise return { rows: parsed,
error: null } so downstream code always receives object rows.
- Around line 65-66: The code currently pushes parsed CSV row objects
unconditionally (see rows.push(row)), which imports blank lines as {} — update
the loop around rows.push(row) to skip any row that is empty by checking the
parsed row object (e.g., if Object.keys(row).length === 0) or if all values are
empty/whitespace (e.g., Object.values(row).every(v => String(v).trim() === ''))
and only call rows.push(row) when the check fails; this prevents creating empty
objects from blank CSV rows.
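The blank-row check above can be sketched as a small predicate (`isBlankRow` is a hypothetical helper name): a parsed CSV row is skipped when it has no keys or when every value is empty or whitespace.

```javascript
function isBlankRow(row) {
  return (
    Object.keys(row).length === 0 ||
    Object.values(row).every(v => String(v).trim() === '')
  );
}

const parsedRows = [{ name: 'Ada' }, {}, { name: '   ', score: '' }];
const rows = parsedRows.filter(row => !isBlankRow(row));
```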
- Around line 55-64: The loop that converts CSV cells (for ... headers) can
produce NaN for numeric types which later becomes null in JSON; modify the
conversion path (either here in the loop or inside convertCSVValue) to detect
numeric types (use colSchema/type and symbols headers, values, colSchema, type,
convertCSVValue, row) and guard against NaN by: after converting a numeric
value, check isNaN(result) and handle it explicitly (e.g., skip assignment and
log/collect an error, keep the original raw string, or throw) instead of
assigning the NaN so it doesn't serialize to null; ensure the chosen behavior is
applied consistently for Number/Integer/Float types.
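One way to realize the NaN guard, choosing the "throw" option from the alternatives listed above; `convertCSVValue` here is a reduced stand-in for the real converter:

```javascript
function convertCSVValue(value, type) {
  if (type === 'Number') {
    const n = Number(value);
    if (Number.isNaN(n)) {
      // Fail loudly instead of letting NaN serialize to null later.
      throw new Error(`Invalid number: "${value}"`);
    }
    return n;
  }
  return value;
}
```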
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- README.md
- src/components/FileInput/FileInput.react.js
- src/dashboard/Data/Browser/Browser.react.js
- src/dashboard/Data/Browser/BrowserToolbar.react.js
- src/dashboard/Data/Browser/ImportDataDialog.react.js
- src/dashboard/Data/Browser/ImportDataDialog.scss
- src/lib/ParseApp.js
- src/lib/importData.js
- src/lib/tests/importData.test.js
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/dashboard/Data/Browser/Browser.react.js`:
- Around line 2464-2516: confirmImport() currently returns early on several
preflight failures (parseResult.error, unknown columns, duplicateHandling ===
'fail', and duplicateHandling === 'skip' when all rows exist) without clearing
the dialog's importing flag, which leaves the modal stuck; before every early
return in confirmImport() (specifically the branches that call
this.showNote(...) and return when parseResult.error, when unknownCols found,
when duplicate objectIds cause 'fail', and when all rows are filtered out for
'skip'), reset the dialog state that tracks importing (e.g. call this.setState({
importing: false }) or the component's equivalent) so the modal exits the
importing state before returning.
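A `try`/`finally` wrapper is one way to guarantee the flag clears on every early return; this is a sketch with a stand-in `browserDialog` object, not the component's actual state handling:

```javascript
const browserDialog = { importing: false };

async function confirmImport(preflight) {
  browserDialog.importing = true;
  try {
    const result = await preflight();
    if (result.error) {
      return 'aborted'; // early return no longer strands the modal
    }
    return 'imported';
  } finally {
    browserDialog.importing = false; // runs on every exit path, including throws
  }
}
```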
In `@src/lib/importData.js`:
- Around line 213-263: The map currently can produce POST requests with an empty
body after filtering/normalization; update the rows.map(...) block so that after
computing method/path/body (and after deleting objectId when applicable) you
check if Object.keys(body).length === 0 and method === 'POST' and, in that case,
skip emitting a request (e.g., return null/undefined or don't push); then ensure
you filter out falsy results (e.g., .filter(Boolean)) so no POST with empty
bodies are sent. Reference: the rows.map callback and the local variables
method, path, body, plus preserveObjectIds and duplicateHandling logic.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- README.md
- src/dashboard/Data/Browser/Browser.react.js
- src/dashboard/Data/Browser/BrowserToolbar.react.js
- src/dashboard/Data/Browser/ImportDataDialog.react.js
- src/lib/importData.js
🚧 Files skipped from review as they are similar to previous changes (2)
- src/dashboard/Data/Browser/BrowserToolbar.react.js
- README.md
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/lib/importData.js (1)
216-256: ⚠️ Potential issue | 🟠 Major: Prevent empty POST bodies after row filtering. After unknown-column filtering and timestamp/objectId normalization, some rows can become `{}` but are still emitted as POST requests. That can create unintended blank objects.

💡 Proposed fix

```diff
 export function buildBatchRequests(rows, className, options) {
@@
-  return rows.map(row => {
+  return rows
+    .map(row => {
@@
-    if (!preserveObjectIds) {
+      if (!preserveObjectIds) {
        delete body.objectId;
      }
-    return { method: 'POST', path: `/classes/${className}`, body };
-  });
+      if (Object.keys(body).length === 0) {
+        return null;
+      }
+      return { method: 'POST', path: `/classes/${className}`, body };
+    })
+    .filter(Boolean);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/importData.js` around lines 216 - 256, After filtering and normalizing each row (using allowed, ensureDateObject, preserveTimestamps, preserveObjectIds, duplicateHandling and className), guard against emitting empty POST bodies by checking Object.keys(body).length === 0 before returning the POST; if the body is empty (and you’re not taking the overwrite PUT branch for preserveObjectIds && duplicateHandling === 'overwrite'), skip this row (e.g., return null/undefined) so no POST with {} is emitted — ensure any caller of this map filters out falsy results.
🧹 Nitpick comments (2)
src/lib/tests/importData.test.js (1)
328-466: Add a regression test for empty-body request pruning. Given the request-builder edge case, add a test that verifies rows fully filtered by `unknownColumns: 'ignore'` do not produce `POST /classes/...` requests.

🧪 Suggested test

```diff
 describe('buildBatchRequests', () => {
@@
   it('returns empty array for empty rows', () => {
@@
   });
+
+  it('skips rows that become empty after unknown-column filtering', () => {
+    const rows = [{ unknownOnly: 'x' }];
+    const requests = buildBatchRequests(rows, 'GameScore', {
+      preserveObjectIds: false,
+      preserveTimestamps: false,
+      unknownColumns: 'ignore',
+      knownColumns: ['objectId', 'createdAt', 'updatedAt'],
+    });
+    expect(requests).toEqual([]);
+  });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/tests/importData.test.js` around lines 328 - 466, Add a regression test to buildBatchRequests that ensures rows which are fully stripped by unknownColumns: 'ignore' produce no POST requests: create a test in src/lib/tests/importData.test.js calling buildBatchRequests with a rows array containing an item that only has unknown fields (e.g., { unknownField: 'x' }), set unknownColumns: 'ignore' and provide knownColumns that do not include those fields, and assert that the returned requests array does not include any request for that row (expect length 0 or that no POST to '/classes/GameScore' was created); reference buildBatchRequests and the options preserveObjectIds/preserveTimestamps/unknownColumns/knownColumns when adding the test.

src/dashboard/Data/Browser/Browser.react.js (1)
2454-2481: Use `Set` for unknown-column preflight to avoid quadratic scans. The current `includes` checks are fine for small files, but large imports will spend unnecessary time in nested linear lookups.

♻️ Proposed refactor

```diff
   const knownColumns = classColumns.map(c => c.name);
+  const knownColumnsSet = new Set(knownColumns);
@@
   if (importOptions.unknownColumns === 'fail') {
-    const unknownCols = [];
+    const unknownCols = new Set();
     for (const row of parseResult.rows) {
       for (const key of Object.keys(row)) {
-        if (!knownColumns.includes(key) && !unknownCols.includes(key)) {
-          unknownCols.push(key);
+        if (!knownColumnsSet.has(key)) {
+          unknownCols.add(key);
         }
       }
     }
-    if (unknownCols.length > 0) {
-      this.showNote(`Unknown columns found: ${unknownCols.join(', ')}. Import aborted.`, true);
+    if (unknownCols.size > 0) {
+      this.showNote(`Unknown columns found: ${Array.from(unknownCols).join(', ')}. Import aborted.`, true);
       if (this.importDialogRef) {
         this.importDialogRef.resetForm();
       }
       return;
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/dashboard/Data/Browser/Browser.react.js` around lines 2454 - 2481, The unknown-column detection currently does nested .includes checks on knownColumns and unknownCols causing O(n*m) behavior; replace array lookups with Sets: create knownSet = new Set(classColumns.map(c => c.name)) and unknownSet = new Set(), then iterate parseResult.rows and for each key use knownSet.has(key) and unknownSet.has(key) to decide whether to unknownSet.add(key); after scanning, convert unknownSet to an array (e.g. Array.from(unknownSet)) if the rest of the code expects unknownCols as an array. Keep existing calls to this.showNote(...) and this.importDialogRef.resetForm() unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/importData.js`:
- Around line 382-392: The duplicate-checking loop that builds a huge GET query
(using CHUNK_SIZE, objectIds, and the where = JSON.stringify({ objectId: { $in:
chunk } }) + URLSearchParams) can exceed URL length limits; change the request
to send the query in the request body instead of the URL: keep the chunking
logic (optionally lower CHUNK_SIZE if desired), construct the same where
payload, and replace the GET + URLSearchParams call with a POST that sends JSON
(Content-Type: application/json) containing { where: JSON.stringify({ objectId:
{ $in: chunk } }), keys: 'objectId', limit: chunk.length } so the
Parse/duplicate-check endpoint receives the filter in the body and avoids
URL-length failures (adjust any fetch/axios call site accordingly).
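A sketch of the body-based duplicate-check request this comment asks for. The `_method: 'GET'` override is the Parse REST convention for sending a query as a POST; `buildDuplicateCheckRequest` is a hypothetical helper that only assembles the request spec, leaving the actual fetch to the caller:

```javascript
function buildDuplicateCheckRequest(serverURL, className, chunk) {
  return {
    // Trim trailing slashes so no double slash appears in the path.
    url: `${String(serverURL).replace(/\/+$/, '')}/classes/${className}`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      _method: 'GET', // tells Parse Server to treat this POST as a query
      where: { objectId: { $in: chunk } },
      keys: 'objectId',
      limit: chunk.length,
    }),
  };
}

const req = buildDuplicateCheckRequest(
  'http://localhost:1337/parse/',
  'GameScore',
  ['abc', 'def']
);
```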
---
Duplicate comments:
In `@src/lib/importData.js`:
- Around line 216-256: After filtering and normalizing each row (using allowed,
ensureDateObject, preserveTimestamps, preserveObjectIds, duplicateHandling and
className), guard against emitting empty POST bodies by checking
Object.keys(body).length === 0 before returning the POST; if the body is empty
(and you’re not taking the overwrite PUT branch for preserveObjectIds &&
duplicateHandling === 'overwrite'), skip this row (e.g., return null/undefined)
so no POST with {} is emitted — ensure any caller of this map filters out falsy
results.
---
Nitpick comments:
In `@src/dashboard/Data/Browser/Browser.react.js`:
- Around line 2454-2481: The unknown-column detection currently does nested
.includes checks on knownColumns and unknownCols causing O(n*m) behavior;
replace array lookups with Sets: create knownSet = new Set(classColumns.map(c =>
c.name)) and unknownSet = new Set(), then iterate parseResult.rows and for each
key use knownSet.has(key) and unknownSet.has(key) to decide whether to
unknownSet.add(key); after scanning, convert unknownSet to an array (e.g.
Array.from(unknownSet)) if the rest of the code expects unknownCols as an array.
Keep existing calls to this.showNote(...) and this.importDialogRef.resetForm()
unchanged.
In `@src/lib/tests/importData.test.js`:
- Around line 328-466: Add a regression test to buildBatchRequests that ensures
rows which are fully stripped by unknownColumns: 'ignore' produce no POST
requests: create a test in src/lib/tests/importData.test.js calling
buildBatchRequests with a rows array containing an item that only has unknown
fields (e.g., { unknownField: 'x' }), set unknownColumns: 'ignore' and provide
knownColumns that do not include those fields, and assert that the returned
requests array does not include any request for that row (expect length 0 or
that no POST to '/classes/GameScore' was created); reference buildBatchRequests
and the options preserveObjectIds/preserveTimestamps/unknownColumns/knownColumns
when adding the test.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/dashboard/Data/Browser/Browser.react.js
- src/dashboard/Data/Browser/ImportDataDialog.react.js
- src/lib/importData.js
- src/lib/tests/importData.test.js
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/importData.js`:
- Around line 305-313: The code is not normalizing serverURL which allows double
slashes in constructed endpoints; update the logic around serverPath and any
code that builds endpoint URLs (references: serverPath variable and places that
append '/batch', '/classes', etc.) to first normalize serverURL by trimming
trailing slashes (e.g., remove all trailing '/' from serverURL or ensure
serverPath always begins with a single '/') and then join paths using a single
slash (ensure you don't concatenate like serverURL + '/batch' when serverURL may
already end with '/'); apply the same normalization at the other endpoint
construction sites mentioned (the code that builds the batch and classes
endpoints).
- Line 45: In parseImportCSV, guard against a missing or undefined schema before
accessing schema[header]: check that the schema parameter is provided (or
default it to an empty object) and handle absent entries when iterating headers
so that accessing schema[header] cannot throw; update parseImportCSV to validate
or default the schema argument and treat unknown headers as having no schema
mapping (e.g., skip type-specific transforms or use a safe fallback) so the
function returns a parse result instead of crashing when schema is undefined.
- Around line 70-80: The CSV boolean handling currently coerces any non-"true"
string to false; update convertCSVValue to accept only explicit "true" or
"false" (case-insensitive, trimmed) for type === 'Boolean' and treat any other
literal as invalid (do not coerce). Implement this by returning a thrown Error
or a distinct sentinel from convertCSVValue for invalid booleans, and then
update the import loop (the code that calls convertCSVValue and uses variables
colSchema, type, converted, and header) to detect that invalid-boolean result
and return an error object like the number-check branch (e.g., {rows:null,
error: `Invalid boolean in row ${i}, column "${header}".`}) so bad boolean
literals fail the import instead of silently becoming false.
♻️ Duplicate comments (4)
src/lib/importData.js (4)
169-170: ⚠️ Potential issue | 🟠 Major: Reject invalid Boolean literals instead of coercing them to `false`. Any non-`"true"` value (e.g., `"yes"`, `"1"`, `"foo"`) silently becomes `false`, which can corrupt imported data without user awareness. Consider validating Boolean literals explicitly:

```diff
 case 'Boolean':
-  return value.toLowerCase() === 'true';
+  const normalized = value.trim().toLowerCase();
+  if (normalized === 'true') return true;
+  if (normalized === 'false') return false;
+  return { __invalid: true, type: 'Boolean', value };
```

Then detect `__invalid` in `parseImportCSV` and return an error similar to the Number check.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/importData.js` around lines 169 - 170, The Boolean branch currently coerces any non-"true" string to false; change the 'Boolean' case (in the value parsing switch in src/lib/importData.js) to validate explicit literals ("true" or "false") and return a sentinel (e.g., "__invalid") for invalid inputs instead of silently returning false; then update parseImportCSV to detect that "__invalid" result for Booleans and surface an import error the same way the Number validation does (reuse the same error-generation/path used for numeric validation) so invalid Boolean literals are rejected rather than coerced.
382-420: ⚠️ Potential issue | 🟡 Minor: Apply the same `serverURL` normalization here. Line 400 has the same trailing-slash issue as `sendBatchImport`. Apply consistent normalization:

```diff
 export async function checkDuplicates(objectIds, className, options) {
   // ...
   const { serverURL, applicationId, masterKey } = options || {};
+  const normalizedServerURL = String(serverURL || '').replace(/\/+$/, '');
   const CHUNK_SIZE = 100;
   // ...
-  const url = `${serverURL}/classes/${className}?${params.toString()}`;
+  const url = `${normalizedServerURL}/classes/${className}?${params.toString()}`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/importData.js` around lines 382 - 420, The checkDuplicates function currently builds the request URL using serverURL directly which can produce a double slash if serverURL ends with a trailing slash; update checkDuplicates to normalize serverURL (same strategy used in sendBatchImport) right after extracting { serverURL, applicationId, masterKey } from options — trim any trailing slash (e.g. serverURL = serverURL.replace(/\/$/, '') or equivalent) before constructing url = `${serverURL}/classes/${className}?${params.toString()}` so requests are consistent and avoid duplicate slashes.
45: ⚠️ Potential issue | 🟡 Minor: Guard against undefined `schema` in CSV parsing. If `schema` is `undefined`, accessing `schema[header]` at line 70 will throw a TypeError. Add a default value to prevent runtime errors.

```diff
-export function parseImportCSV(content, schema) {
+export function parseImportCSV(content, schema = {}) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/importData.js` at line 45, parseImportCSV currently assumes schema is defined and will throw when accessing schema[header]; guard against an undefined schema by providing a default (e.g., empty object) or checking before lookup. In function parseImportCSV, ensure schema is initialized (or replace direct accesses like schema[header] with (schema || {})[header] or a conditional check) so header lookups are safe when schema is undefined.
305-332: ⚠️ Potential issue | 🟡 Minor: Normalize `serverURL` before building endpoint URLs. If `serverURL` has a trailing slash (e.g., `http://localhost:1337/parse/`), line 332 creates `http://localhost:1337/parse//batch`, which may fail with stricter proxies or load balancers.

Suggested fix:

```diff
 export async function sendBatchImport(requests, options) {
   const {
     serverURL,
     applicationId,
     masterKey,
     maintenanceKey,
     continueOnError = true,
     onProgress,
   } = options || {};
+  const normalizedServerURL = String(serverURL || '').replace(/\/+$/, '');
+
   const BATCH_SIZE = 50;
   // ...
   let serverPath = '';
   try {
-    serverPath = new URL(serverURL).pathname.replace(/\/+$/, '');
+    serverPath = new URL(normalizedServerURL).pathname.replace(/\/+$/, '');
   } catch {
     // ...
   }
   // ...
-  const response = await fetch(`${serverURL}/batch`, {
+  const response = await fetch(`${normalizedServerURL}/batch`, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/importData.js` around lines 305 - 332, The code builds the batch endpoint using serverURL directly which can leave a trailing slash and produce double-slashes (e.g., `${serverURL}/batch`), so normalize serverURL before using it: compute a serverURLNormalized = serverURL.replace(/\/+$/, '') (or derive from new URL(serverURL).origin + serverPath) and use that when constructing the fetch URL and when prepending paths in the requests.map; update the mapping where path is set (inside the loop that uses BATCH_SIZE and requests.slice) and the fetch call that uses `${serverURL}/batch` to use `serverURLNormalized` (or the origin+serverPath combination) to ensure no duplicated slashes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/lib/importData.js`:
- Around line 169-170: The Boolean branch currently coerces any non-"true"
string to false; change the 'Boolean' case (in the value parsing switch in
src/lib/importData.js) to validate explicit literals ("true" or "false") and
return a sentinel (e.g., "__invalid") for invalid inputs instead of silently
returning false; then update parseImportCSV to detect that "__invalid" result
for Booleans and surface an import error the same way the Number validation does
(reuse the same error-generation/path used for numeric validation) so invalid
Boolean literals are rejected rather than coerced.
- Around line 382-420: The checkDuplicates function currently builds the request
URL using serverURL directly which can produce a double slash if serverURL ends
with a trailing slash; update checkDuplicates to normalize serverURL (same
strategy used in sendBatchImport) right after extracting { serverURL,
applicationId, masterKey } from options — trim any trailing slash (e.g.
serverURL = serverURL.replace(/\/$/, '') or equivalent) before constructing url
= `${serverURL}/classes/${className}?${params.toString()}` so requests are
consistent and avoid duplicate slashes.
- Line 45: parseImportCSV currently assumes schema is defined and will throw
when accessing schema[header]; guard against an undefined schema by providing a
default (e.g., empty object) or checking before lookup. In function
parseImportCSV, ensure schema is initialized (or replace direct accesses like
schema[header] with (schema || {})[header] or a conditional check) so header
lookups are safe when schema is undefined.
- Around line 305-332: The code builds the batch endpoint using serverURL
directly which can leave a trailing slash and produce double-slashes (e.g.,
`${serverURL}/batch`), so normalize serverURL before using it: compute a
serverURLNormalized = serverURL.replace(/\/+$/, '') (or derive from new
URL(serverURL).origin + serverPath) and use that when constructing the fetch URL
and when prepending paths in the requests.map; update the mapping where path is
set (inside the loop that uses BATCH_SIZE and requests.slice) and the fetch call
that uses `${serverURL}/batch` to use `serverURLNormalized` (or the
origin+serverPath combination) to ensure no duplicated slashes.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/lib/importData.js
- src/lib/tests/importData.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/tests/importData.test.js
# [9.1.0-alpha.4](9.1.0-alpha.3...9.1.0-alpha.4) (2026-03-02)

### Features

* Add support for data import in data browser ([#3244](#3244)) ([16f60f4](16f60f4))
🎉 This change has been released in version 9.1.0-alpha.4
Pull Request
Issue
Add support for data import in data browser. With support for:
- `.json` and `.csv`. Format derived from file extension.
- `allowCustomObjectId` must be enabled on the server.
- (`X-Parse-Maintenance-Key`)
- Overwrite existing: PUT to update existing objects
- Skip duplicates: Check existence first, skip if found
- Fail on duplicate: Check existence first, abort if found
- Auto-create columns: Automatically add new columns based on imported data types
- Ignore unknown columns: Skip fields that don't match existing schema
- Fail on unknown: Stop and show error if unknown columns found
- Continue on errors: Collect failures, continue with remaining rows, show summary
- Stop on first error: Halt on first failure, report what was imported before the stop

Closes #307
Closes #2360
Tasks
Summary by CodeRabbit

- New Features
- UI
- Documentation
- Style
- Tests