Enable displaying FalkorDB version to customers in sas builder #130

MuhammadQadora wants to merge 280 commits into master from
Conversation
…kordb-browser fix #16 open falkordb instance in falkordb browser
add fix for unknown timezone
…kordb-browser fix icon color
…eInstanceId].js Co-authored-by: Guy Korland <gkorland@gmail.com> Signed-off-by: Dudi <16744955+dudizimber@users.noreply.github.com>
…kordb-browser fix show button when instance is not running
Merge omnistrate changes into main
…rdb-browser send tls query param to browser
27 images not showing up in emails
…reating-instance fix cloud icon
rename open button to connect
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pages/api/action.js (2)
12-12: Empty default error message provides poor user experience.

The `defaultErrorMessage` is an empty string, which will result in unhelpful error responses like `{ message: "" }`. Other API routes in this codebase use meaningful messages such as `"Something went wrong. Please retry"`.

Proposed fix

```diff
-const defaultErrorMessage = "";
+const defaultErrorMessage = "Something went wrong. Please retry";
```
114-121: Fix error extraction in catch block: the structure doesn't match the exceptions actually thrown.

The catch block (lines 114-121) attempts to extract `error?.response?.status` and `error?.response?.data?.message`, but these properties don't exist on the exceptions this block actually catches (network errors, timeouts, JavaScript runtime errors). The openapi-fetch errors are already handled in lines 88-95. The extraction therefore always fails and falls back to `errorCode = 500` and `errorMessage = defaultErrorMessage`, which works thanks to the fallback values but is dead code.

For consistency with the try block's error handling (lines 90-91: `fetchResponse?.status`, `error.message`) and other API routes, simplify to:

```js
} catch (error) {
  console.error("Action Route error", error);
  return nextResponse.status(500).send({
    message: defaultErrorMessage,
  });
}
```

Or log the error detail for debugging if needed, without attempting to extract non-existent nested properties.
🤖 Fix all issues with AI agents
In `app/(dashboard)/components/Layout/Sidebar.tsx`:
- Around line 175-191: The memoized booleans showCloudProvidersPage and
showCustomNetworksPage can throw if subscriptions or serviceOfferings are
undefined; update both to safely handle missing globals by using nullish
coalescing and array-safe iteration (e.g., (subscriptions ?? []).some(...) and
searching within (serviceOfferings ?? []) with optional chaining) instead of
directly calling .find on possibly undefined arrays, and keep the same matching
logic that checks offering?.serviceModelType and
offering?.serviceModelFeatures?.find(...).
In `app/(dashboard)/instances/[serviceId]/[servicePlanId]/[resourceId]/[instanceId]/[subscriptionId]/page.tsx`:
- Around line 389-395: The iframe embedding the Grafana dashboard in page.tsx
lacks an accessible title; update the iframe element that uses
process.env.NEXT_PUBLIC_GRAFANA_URL, instanceId and subscription.id to include a
descriptive title attribute (e.g., "Grafana dashboard for instance {instanceId}"
or similar) so screen readers can identify the content; ensure the title is
uniquely descriptive and added to the iframe tag where src is constructed.
- Around line 208-210: The lookup of componentName assumes
resourceInstanceData.detailedNetworkTopology and the selected entry exist and
uses an unsafe any; change the logic in the componentName computation (and the
similar block around the 269-281 range) to first guard that
resourceInstanceData?.detailedNetworkTopology is defined and is an object/array,
then iterate safely using typed narrowing (replace any with a proper type or
unknown and narrow to the expected shape with a type predicate) and use optional
chaining to check (entry.value as { clusterEndpoint?: string; resourceName?:
string })?.resourceName before calling startsWith; finally handle the case where
no matching entry is found (return undefined/null or a safe fallback) so
clicking cannot crash.
In `app/(dashboard)/instances/components/InstanceForm.tsx`:
- Around line 311-318: Guard against selectedInstance?.result_params being
undefined before indexing: replace the direct use of oldResultParams in the
comparison inside the loop by either defaulting oldResultParams (e.g., const
oldResultParams = selectedInstance?.result_params || {}) or adding an explicit
check (e.g., if (!oldResultParams || oldResultParams[key] !== value) ...) so the
for-loop that builds requestParams (using data.requestParams and requestParams)
never attempts to read properties from undefined.
🧹 Nitpick comments (7)
app/(public)/(main-image)/signin/components/SignInForm.tsx (2)
63-63: Consider removing or documenting the commented-out code.

The commented `enableReinitialize: true` line should either be removed if no longer needed, or documented with a comment explaining why it's kept for future reference.
129-129: Avoid using the `any` type for error handling.

The `error: any` type annotation violates the coding guideline to not use `any` types. Consider using a more specific error type, or `unknown` with proper type narrowing.

♻️ Proposed fix

```diff
-onError: (error: any) => {
-  if (error.response.data && error.response.data.message) {
+onError: (error: unknown) => {
+  if (
+    error instanceof Error &&
+    "response" in error &&
+    typeof (error as any).response?.data?.message === "string"
+  ) {
+    const errorMessage = (error as { response: { data: { message: string } } }).response.data.message;
```

Alternatively, define a proper error type:

```ts
type ApiError = {
  response?: {
    data?: {
      message?: string;
    };
  };
};
```

app/(dashboard)/components/Layout/Sidebar.tsx (1)
175-191: Eliminate duplicate offering lookup and centralize feature/model type checks.

Both `showCloudProvidersPage` and `showCustomNetworksPage` repeat the same subscription→offering lookup. Additionally, feature and model type identifiers are hardcoded string literals. Create a memoized Map for fast lookups and extract model type checks into utility functions following the existing pattern (e.g., `isCustomNetworkEnabledOnServiceOffering`).

♻️ Possible refactor

```diff
+const offeringsByTierId = useMemo(
+  () => new Map((serviceOfferings ?? []).map((o) => [o.productTierID, o])),
+  [serviceOfferings]
+);
+
 const showCloudProvidersPage = useMemo(() => {
   const subs = subscriptions ?? [];
-  const offerings = serviceOfferings ?? [];
   return subs.some((s) => {
-    const offering = offerings.find((o) => s.productTierId === o.productTierID);
+    const offering = offeringsByTierId.get(s.productTierId);
     return (
       offering?.serviceModelType === "BYOA" ||
       offering?.serviceModelType === "ON_PREM_COPILOT"
     );
   });
-}, [serviceOfferings, subscriptions]);
+}, [offeringsByTierId, subscriptions]);

 const showCustomNetworksPage = useMemo(() => {
   const subs = subscriptions ?? [];
-  const offerings = serviceOfferings ?? [];
   return subs.some((s) => {
-    const offering = offerings.find((o) => s.productTierId === o.productTierID);
+    const offering = offeringsByTierId.get(s.productTierId);
     return offering?.serviceModelFeatures?.some(
       (el) => el.feature === "CUSTOM_NETWORKS"
     );
   });
-}, [serviceOfferings, subscriptions]);
+}, [offeringsByTierId, subscriptions]);
```

app/(dashboard)/instances/page.tsx (1)
135-160: Remove commented-out column blocks to reduce dead code.

🧹 Suggested cleanup

```diff
-// columnHelper.accessor(
-//   (row) => {
-//     const subscription = subscriptionsObj[row.subscriptionId as string];
-//     return subscription?.serviceName;
-//   },
-//   {
-//     id: "serviceName",
-//     header: "Service Name",
-//     cell: (data) => {
-//       const subscription =
-//         subscriptionsObj[data.row.original.subscriptionId as string];
-//       const serviceName = subscription?.serviceName;
-//       const serviceLogoURL = subscription?.serviceLogoURL;
-//
-//       return (
-//         <ServiceNameWithLogo
-//           serviceName={serviceName}
-//           serviceLogoURL={serviceLogoURL}
-//         />
-//       );
-//     },
-//     meta: {
-//       minWidth: 230,
-//     },
-//   }
-// ),
-// columnHelper.accessor(
-//   (row) => {
-//     const subscription = subscriptionsObj[row.subscriptionId as string];
-//     return subscription?.productTierName;
-//   },
-//   {
-//     id: "subscriptionPlan",
-//     header: "Subscription Plan",
-//   }
-// ),
```

Also applies to: 175-184
app/(dashboard)/instances/components/InstanceForm.tsx (1)
134-191: Avoid mutating Formik values and drop `any` in the payload/schema map.

Clone first and type the payload/map so the submit path stays type-safe.

♻️ Proposed fix

```diff
-// Clean up requestParams cloud_provider_native_network_id before submitting
-if (values.requestParams?.cloud_provider_native_network_id === "") {
-  delete values.requestParams?.cloud_provider_native_network_id;
-}
+// Clean up requestParams cloud_provider_native_network_id before submitting
+const data = cloneDeep(values) as InstanceFormValues;
+if (data.requestParams?.cloud_provider_native_network_id === "") {
+  delete data.requestParams.cloud_provider_native_network_id;
+}
 ...
-const data: any = {
-  ...cloneDeep(values),
-};
 ...
-const inputParametersObj = filterSchema.reduce((acc: any, param: any) => {
+const inputParametersObj = filterSchema.reduce<Record<string, typeof filterSchema[number]>>((acc, param) => {
   acc[param.key] = param;
   return acc;
 }, {});
```

Based on learnings: Form validation, submission handlers, and form data must be properly typed. As per coding guidelines: Do not use `any` types in TypeScript; use proper type definitions or `unknown` when appropriate.

app/(dashboard)/instances/[serviceId]/[servicePlanId]/[resourceId]/[instanceId]/[subscriptionId]/page.tsx (2)
269-273: Use the enum for network type checks.

Keeps comparisons consistent with the rest of the file.

♻️ Proposed fix

```diff
-{resourceInstanceData.networkType !== "INTERNAL" && <Button
+{resourceInstanceData.networkType !== NetworkType.INTERNAL && <Button
```

As per coding guidelines: Use enums instead of string literals where applicable in TypeScript code.
45-55: Update `CurrentTab` to include the new Import/Export tab.

Avoids unsafe casts and keeps the union in sync with `tabs.importExportRDB`.

♻️ Proposed fix

```diff
 export type CurrentTab =
   | "Instance Details"
   | "Connectivity"
   | "Nodes"
   | "Metrics"
   | "Logs"
   | "Audit Logs"
   | "Backups"
   | "Snapshots"
-  | "Custom DNS";
+  | "Custom DNS"
+  | "Import/Export RDB";
```

Also applies to: 460-460
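A way to keep a tab union like this from drifting out of sync is to derive it from the tabs map itself, so adding an entry updates the union automatically. A sketch under the assumption that a `tabs` object exists; the keys below other than `importExportRDB` are illustrative:

```typescript
// Hypothetical tabs map; only importExportRDB is named in the comment above,
// the other keys and labels are illustrative stand-ins.
const tabs = {
  instanceDetails: "Instance Details",
  connectivity: "Connectivity",
  customDNS: "Custom DNS",
  importExportRDB: "Import/Export RDB",
} as const;

// The union is derived from the map's values; no manual list to maintain.
type CurrentTab = (typeof tabs)[keyof typeof tabs];

// Type-checks without an unsafe cast.
const current: CurrentTab = tabs.importExportRDB;
```

With this shape, forgetting to extend a hand-written union is no longer possible, since there is no hand-written union.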
```diff
 const showCloudProvidersPage = useMemo(() => {
   return Boolean(
-    serviceOfferings.find(
-      (offering) => offering.serviceModelType === "BYOA" || offering.serviceModelType === "ON_PREM_COPILOT"
-    )
+    subscriptions.find(s => {
+      const offering = serviceOfferings.find(o => s.productTierId === o.productTierID);
+      return offering?.serviceModelType === "BYOA" || offering?.serviceModelType === "ON_PREM_COPILOT"
+    })
   );
-}, [serviceOfferings]);
+}, [serviceOfferings, subscriptions]);

 const showCustomNetworksPage = useMemo(() => {
   return Boolean(
-    serviceOfferings.some((offering) => offering.serviceModelFeatures?.find((el) => el.feature === "CUSTOM_NETWORKS"))
+    subscriptions.find(s => {
+      const offering = serviceOfferings.find(o => s.productTierId === o.productTierID)
+      return offering?.serviceModelFeatures?.find((el) => el.feature === "CUSTOM_NETWORKS");
+    })
   );
-}, [serviceOfferings]);
+}, [serviceOfferings, subscriptions]);
```
Guard against undefined global data before calling .find.
If useGlobalData() returns undefined arrays while loading, the .find calls will throw and crash the sidebar. Use nullish coalescing (and some) to keep this safe.
✅ Proposed fix
```diff
 const showCloudProvidersPage = useMemo(() => {
-  return Boolean(
-    subscriptions.find(s => {
-      const offering = serviceOfferings.find(o => s.productTierId === o.productTierID);
-      return offering?.serviceModelType === "BYOA" || offering?.serviceModelType === "ON_PREM_COPILOT"
-    })
-  );
+  const subs = subscriptions ?? [];
+  const offerings = serviceOfferings ?? [];
+  return subs.some((s) => {
+    const offering = offerings.find((o) => s.productTierId === o.productTierID);
+    return (
+      offering?.serviceModelType === "BYOA" ||
+      offering?.serviceModelType === "ON_PREM_COPILOT"
+    );
+  });
 }, [serviceOfferings, subscriptions]);

 const showCustomNetworksPage = useMemo(() => {
-  return Boolean(
-    subscriptions.find(s => {
-      const offering = serviceOfferings.find(o => s.productTierId === o.productTierID)
-      return offering?.serviceModelFeatures?.find((el) => el.feature === "CUSTOM_NETWORKS");
-    })
-  );
+  const subs = subscriptions ?? [];
+  const offerings = serviceOfferings ?? [];
+  return subs.some((s) => {
+    const offering = offerings.find((o) => s.productTierId === o.productTierID);
+    return offering?.serviceModelFeatures?.some(
+      (el) => el.feature === "CUSTOM_NETWORKS"
+    );
+  });
 }, [serviceOfferings, subscriptions]);
```
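The crash mode and the guard are easy to demonstrate in isolation. A self-contained sketch with stand-in types (not the project's real `Subscription`/`Offering` definitions):

```typescript
// Stand-in types for illustration; not the project's real definitions.
type Subscription = { productTierId: string };
type Offering = { productTierID: string; serviceModelType?: string };

// Guarded version: coalesce possibly-undefined globals before iterating.
function showCloudProvidersPage(
  subscriptions?: Subscription[],
  serviceOfferings?: Offering[]
): boolean {
  const subs = subscriptions ?? [];
  const offerings = serviceOfferings ?? [];
  return subs.some((s) => {
    const offering = offerings.find((o) => s.productTierId === o.productTierID);
    return (
      offering?.serviceModelType === "BYOA" ||
      offering?.serviceModelType === "ON_PREM_COPILOT"
    );
  });
}

// While global data is still loading both arguments may be undefined;
// the guard degrades to false instead of throwing on `.find`.
const whileLoading = showCloudProvidersPage(undefined, undefined);
const whenLoaded = showCloudProvidersPage(
  [{ productTierId: "pt-1" }],
  [{ productTierID: "pt-1", serviceModelType: "BYOA" }]
);
```

Without the `?? []` coalescing, the first call would throw `TypeError: Cannot read properties of undefined (reading 'some')` and take the sidebar down with it.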
```ts
const componentName = Object.entries(resourceInstanceData.detailedNetworkTopology).filter(([_, v]) => {
  return (v as any).clusterEndpoint && !(v as any).resourceName.startsWith("Omnistrate");
})?.[0]?.[0];
```
Guard the topology lookup before connect; avoid `any`.

`detailedNetworkTopology` or the selected component can be missing, which will crash on click.
🐛 Proposed fix

```diff
-const componentName = Object.entries(resourceInstanceData.detailedNetworkTopology).filter(([_, v]) => {
-  return (v as any).clusterEndpoint && !(v as any).resourceName.startsWith("Omnistrate");
-})?.[0]?.[0];
+type TopologyEntry = { clusterEndpoint?: string; clusterPorts?: number[]; resourceName?: string };
+const topology = (resourceInstanceData?.detailedNetworkTopology ?? {}) as Record<string, TopologyEntry>;
+const componentEntry = Object.entries(topology).find(
+  ([_, v]) => v?.clusterEndpoint && !v?.resourceName?.startsWith("Omnistrate")
+);
+const componentName = componentEntry?.[0];
+const component = componentEntry?.[1];
```

```diff
 {resourceInstanceData.networkType !== "INTERNAL" && <Button
   variant="contained"
   size="common"
-  disabled={resourceInstanceData.status !== "RUNNING"}
+  disabled={resourceInstanceData.status !== "RUNNING" || !component?.clusterEndpoint}
   disabledMessage="Instance must be running to connect"
-  onClick={() =>
-    connectToInstance({
-      host: (resourceInstanceData.detailedNetworkTopology[componentName] as any)?.clusterEndpoint,
-      port: (resourceInstanceData.detailedNetworkTopology[componentName] as any).clusterPorts?.[0],
-      username: (resourceInstanceData.resultParameters as any)?.falkordbUser,
-      region: resourceInstanceData.region,
-      tls: (resourceInstanceData.resultParameters as any)?.enableTLS,
-    })
-  }
+  onClick={() => {
+    if (!component?.clusterEndpoint) return;
+    connectToInstance({
+      host: component.clusterEndpoint,
+      port: component.clusterPorts?.[0],
+      username: resourceInstanceData.resultParameters?.falkordbUser,
+      region: resourceInstanceData.region,
+      tls: resourceInstanceData.resultParameters?.enableTLS,
+    });
+  }}
 >
```

As per coding guidelines: Do not use `any` types in TypeScript; use proper type definitions or `unknown` when appropriate.
Also applies to: 269-281
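A user-defined type predicate, as the prompt suggests, lets the same check be expressed without any `any` casts. A standalone sketch; `TopologyEntry` mirrors the field names from the diff above but is an assumed shape, not a verified API type:

```typescript
// Assumed shape of a topology entry, mirroring the fields used in the snippet above.
type TopologyEntry = {
  clusterEndpoint?: string;
  clusterPorts?: number[];
  resourceName?: string;
};

// Type predicate: narrows an unknown value to an entry that is safe to connect to.
function isConnectableEntry(
  v: unknown
): v is TopologyEntry & { clusterEndpoint: string } {
  if (typeof v !== "object" || v === null) return false;
  const entry = v as TopologyEntry;
  return (
    typeof entry.clusterEndpoint === "string" &&
    !(entry.resourceName ?? "").startsWith("Omnistrate")
  );
}

// Safe lookup: returns undefined instead of crashing when topology is missing.
function findConnectable(
  topology?: Record<string, unknown>
): [string, TopologyEntry] | undefined {
  return Object.entries(topology ?? {}).find(([, v]) => isConnectableEntry(v)) as
    | [string, TopologyEntry]
    | undefined;
}

const hit = findConnectable({
  proxy: { clusterEndpoint: "x", resourceName: "Omnistrate-proxy" },
  db: { clusterEndpoint: "db.example.com", clusterPorts: [6379], resourceName: "falkordb" },
});
const miss = findConnectable(undefined);
```

Inside `isConnectableEntry`, TypeScript itself enforces that `resourceName` is checked before `startsWith`, which is exactly the bug class the review flags.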
```jsx
<iframe
  width="100%"
  style={{
    minHeight: "700px",
  }}
  src={process.env.NEXT_PUBLIC_GRAFANA_URL + "/d/" + instanceId + "?orgId=" + subscription.id}
/>
```
Add an iframe title for accessibility.
✅ Proposed fix
```diff
 <iframe
+  title="Instance metrics"
   width="100%"
   style={{
     minHeight: "700px",
   }}
   src={process.env.NEXT_PUBLIC_GRAFANA_URL + "/d/" + instanceId + "?orgId=" + subscription.id}
 />
```
```js
const requestParams = {},
  oldResultParams = selectedInstance?.result_params;

for (const key in data.requestParams) {
  const value = data.requestParams[key];
  if (oldResultParams[key] !== value) {
    requestParams[key] = value;
  }
}
```
Guard oldResultParams before indexing.
If selectedInstance?.result_params is undefined, this will throw at runtime.
🐛 Proposed fix

```diff
-const requestParams = {},
-  oldResultParams = selectedInstance?.result_params;
+const requestParams = {};
+const oldResultParams = selectedInstance?.result_params ?? {};
```
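The guarded loop can be isolated into a small helper to see both behaviors side by side. A sketch; `diffRequestParams` is a hypothetical name mirroring the snippet, not code from the PR:

```typescript
// Build the minimal update payload: only keys whose value differs from the
// previous result params. Guarding with `?? {}` means an instance that has
// no result_params yet is treated as "everything changed" instead of throwing.
function diffRequestParams(
  newParams: Record<string, unknown>,
  oldResultParams?: Record<string, unknown>
): Record<string, unknown> {
  const old = oldResultParams ?? {}; // never index into undefined
  const requestParams: Record<string, unknown> = {};
  for (const key in newParams) {
    if (old[key] !== newParams[key]) {
      requestParams[key] = newParams[key];
    }
  }
  return requestParams;
}

// With result_params undefined, every key is included rather than crashing.
const d1 = diffRequestParams({ name: "db-1", memory: "2Gi" }, undefined);
// With a previous value, only the changed key survives.
const d2 = diffRequestParams(
  { name: "db-1", memory: "4Gi" },
  { name: "db-1", memory: "2Gi" }
);
```

Sending only the changed keys also keeps the update request minimal, which is presumably why the original loop compares against the old values at all.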
merge omnistrate master
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server.js (1)
23-28: Return after rendering the setup-error page to avoid double responses.

Without a `return`, `handle(...)` may attempt to write again after `response.render(...)`.

🔧 Suggested fix

```diff
 if (!areProviderCredentialsVerified && process.env.NODE_ENV === "development") {
-  response.render("pages/setup-error", {
+  return response.render("pages/setup-error", {
     envVariablesStatus,
   });
 }
 await handle(request, response);
```
🤖 Fix all issues with AI agents
In `pages/api/action.js`:
- Around line 36-41: Replace the incorrect Function.prototype.call usage when
reading headers: where you use nextRequest.getHeader?.call("x-forwarded-for")
and nextRequest.getHeader?.call("user-agent") (variables xForwardedForHeader and
originalUserAgent), switch to the standard header accessor
nextRequest.get?.("X-Forwarded-For") and nextRequest.get?.("user-agent")
(matching casing used elsewhere) so the header name is passed as an argument and
header retrieval works correctly.
In `server.js`:
- Around line 15-17: The console.log("Environment variables status",
envVariablesStatus) is unguarded and should not run in production; update the
call in the verifyEnvironmentVariables usage to either use the project's logger
with log levels (e.g., logger.debug or logger.info) or wrap the console output
in a development-only guard (check process.env.NODE_ENV !== 'production' or use
a feature flag) so that envVariablesStatus is only printed during development;
locate the invocation around verifyEnvironmentVariables(), and replace or gate
the console.log referencing envVariablesStatus and
areProviderCredentialsVerified accordingly.
- Around line 15-16: Wrap the await verifyEnvironmentVariables() call in a
try/catch inside server.js: call verifyEnvironmentVariables() inside the try,
assign its return to areProviderCredentialsVerified and envVariablesStatus when
successful, and in the catch log the error (include the caught error) and set a
safe deterministic state (e.g., areProviderCredentialsVerified = false and
envVariablesStatus = a user‑friendly failure object/string) so downstream
rendering/error page logic has predictable values; ensure the catch also records
the error for debugging and does not rethrow so startup can render the error
page deterministically.
```js
//check if all required environment variables are available
const { areProviderCredentialsVerified, envVariablesStatus } = await verifyEnvironmentVariables();
console.log("Environment variables status", envVariablesStatus);
```
Gate the environment-status log to avoid production console output.
console.log at Line 17 should be dev-only or moved to a proper logger with levels.
🔧 Suggested fix

```diff
-console.log("Environment variables status", envVariablesStatus);
+if (process.env.NODE_ENV !== "production") {
+  console.debug("Environment variables status", envVariablesStatus);
+}
```

As per coding guidelines: Flag any console.log statements or debugging code left in production code; only allow console statements for error logging, development-only debugging with environment checks, or intentional user-facing logs.
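The guard is straightforward to factor out and exercise. A sketch; `debugLog` is a hypothetical helper for illustration, not the project's logger:

```typescript
// Development-only logging guard in the spirit of the fix above.
// Returns whether the message was actually emitted, purely for demonstration.
function debugLog(message: string, detail?: unknown): boolean {
  if (process.env.NODE_ENV !== "production") {
    console.debug(message, detail);
    return true; // emitted
  }
  return false; // suppressed in production
}

const previous = process.env.NODE_ENV;
process.env.NODE_ENV = "production";
const suppressed = debugLog("Environment variables status", {});
process.env.NODE_ENV = "development";
const emitted = debugLog("Environment variables status", {});
process.env.NODE_ENV = previous ?? "test";
```

A project logger with levels (debug/info/error) achieves the same thing more cleanly, since the environment check lives in one place instead of at every call site.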
```js
//check if all required environment variables are available
const { areProviderCredentialsVerified, envVariablesStatus } = await verifyEnvironmentVariables();
```
Wrap environment verification in try/catch and set a safe error state.
await verifyEnvironmentVariables() at Line 16 can reject during startup and currently has no recovery path. Add try/catch, log the error, and populate a user‑friendly status so the error page renders deterministically.
🔧 Suggested fix

```diff
-const { areProviderCredentialsVerified, envVariablesStatus } = await verifyEnvironmentVariables();
+let areProviderCredentialsVerified = false;
+let envVariablesStatus = {};
+try {
+  ({ areProviderCredentialsVerified, envVariablesStatus } = await verifyEnvironmentVariables());
+} catch (error) {
+  console.error("Failed to verify environment variables", error);
+  envVariablesStatus = {
+    error: "Unable to verify environment configuration. Please check server logs.",
+  };
+}
```

As per coding guidelines: All async operations must use try-catch blocks, provide user-friendly error messages, manage error state, use error boundaries for React components, and log errors for debugging.
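The same pattern can be sketched as a wrapper so the fallback state is testable on its own. The names below are stand-ins, not the project's real `verifyEnvironmentVariables`:

```typescript
type EnvStatus = Record<string, unknown>;
type VerifyResult = {
  areProviderCredentialsVerified: boolean;
  envVariablesStatus: EnvStatus;
};

// Deterministic state used when verification rejects, so downstream
// rendering always receives predictable values.
function verificationFailureState(): VerifyResult {
  return {
    areProviderCredentialsVerified: false,
    envVariablesStatus: {
      error: "Unable to verify environment configuration. Please check server logs.",
    },
  };
}

// Wrap a possibly-rejecting verifier; log and fall back instead of rethrowing,
// so startup can still render the error page deterministically.
async function safeVerify(
  verify: () => Promise<VerifyResult>
): Promise<VerifyResult> {
  try {
    return await verify();
  } catch (error) {
    console.error("Failed to verify environment variables", error);
    return verificationFailureState();
  }
}

// The rejecting path resolves (never rejects) to the failure state.
void safeVerify(async () => {
  throw new Error("provider API unreachable");
});

const fallback = verificationFailureState();
```

Keeping the fallback in a separate function also means the error page's expected shape is defined exactly once.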
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
server.js (2)
23-28: Missing `return` after rendering the error page causes a double response.

After `response.render("pages/setup-error", ...)` completes, execution continues to `await handle(request, response)` on line 28. This attempts to send a second response on an already-finished response object, which will throw a "Cannot set headers after they are sent to the client" error or cause undefined behavior.

🐛 Proposed fix
```diff
 if (!areProviderCredentialsVerified && process.env.NODE_ENV === "development") {
-  response.render("pages/setup-error", {
+  return response.render("pages/setup-error", {
     envVariablesStatus,
   });
 }
```
29-31: Catch block swallows the error without logging.

The error is not captured or logged, making debugging difficult. Per coding guidelines, errors should be logged for debugging purposes.

🔧 Proposed fix
🔧 Proposed fix
- } catch { + } catch (error) { + console.error("Request handling failed:", error); response.render("pages/error"); }
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pages/api/signin.js (1)
76-86: 🛠️ Refactor suggestion | 🟠 Major

Duplicate error logging. The error is logged twice: once at line 76 (`"Error in sign in"`) and again at line 86 (`"Error in signin"`). Remove one to avoid log clutter. As per coding guidelines, only allow console statements for error logging, development-only debugging with environment checks, or intentional user-facing logs.

♻️ Proposed fix to remove duplicate
"Error in sign in") and again at line 86 ("Error in signin"). Remove one to avoid log clutter. As per coding guidelines, only allow console statements for error logging, development-only debugging with environment checks, or intentional user-facing logs.♻️ Proposed fix to remove duplicate
console.error("Error in sign in", error); const defaultErrorMessage = "Failed to sign in. Either the credentials are incorrect or the user does not exist"; //Wait for a random duration b/w 0ms and 150ms to mask the difference b/w response times of api when a user is present vs not present const delayInMilliseconds = _.random(0, 150); await new Promise((resolve) => { setTimeout(() => { resolve(); }, delayInMilliseconds); }); - console.error("Error in signin", error);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pages/api/signin.js` around lines 76 - 86, Duplicate error logging in the sign-in handler: remove the second console.error("Error in signin", error) so the error is only logged once (keep the initial console.error("Error in sign in", error)), or replace console.error with the approved logger if required by project conventions; locate this in the sign-in API handler in pages/api/signin.js around the delay block and delete the redundant console.error call.
🧹 Nitpick comments (4)
src/server/utils/rateLimiter.js (2)
1-6: In-memory rate limiting does not scale to multi-instance deployments.

The `attemptStore` Map is local to each process. In a clustered or load-balanced environment, attackers can bypass rate limits by hitting different instances. Consider using Redis or another shared store if horizontal scaling is planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/utils/rateLimiter.js` around lines 1 - 6, The current in-memory rate limiter uses the process-local Map attemptStore in rateLimiter.js which won't enforce limits across multiple server instances; replace or augment attemptStore with a shared store (e.g., Redis) so failed-login counters and TTLs are persisted centrally; update the functions that read/write attemptStore (the code that increments attempts, checks thresholds, and performs TTL cleanup) to call the Redis client (using atomic INCR/EXPIRE or Lua for race-safety) and fall back to the in-memory Map only when a shared store is unavailable; ensure initialization exposes the Redis client and that any TTL cleanup logic uses Redis expiry instead of local timers.
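For reference, a minimal sketch of the shared-store direction this prompt describes. The helper names (`recordFailedAttempt`, `makeMemoryClient`) and the window/threshold constants are illustrative, not part of this PR; the only assumption about the client is that it exposes Promise-returning `incr`/`expire`, the shape offered by node-redis v4 and ioredis:

```js
// Sketch: fixed-window failed-login counter backed by any client exposing
// Promise-returning incr(key) and expire(key, seconds). All names and
// constants here are illustrative.
const WINDOW_SECONDS = 15 * 60;
const MAX_ATTEMPTS = 5;

async function recordFailedAttempt(client, ip) {
  const key = `signin:attempts:${ip}`;
  const attempts = await client.incr(key);
  if (attempts === 1) {
    // First attempt in this window: set the TTL so the counter expires on
    // the store side instead of relying on process-local timers.
    await client.expire(key, WINDOW_SECONDS);
  }
  return { attempts, limited: attempts > MAX_ATTEMPTS };
}

// In-memory stub with the same surface, usable in tests or as a fallback
// when no shared store is configured.
function makeMemoryClient() {
  const store = new Map();
  return {
    async incr(key) {
      const next = (store.get(key) || 0) + 1;
      store.set(key, next);
      return next;
    },
    async expire(key, seconds) {
      setTimeout(() => store.delete(key), seconds * 1000).unref?.();
      return 1;
    },
  };
}
```

Because the store is injected, the same counting logic can be unit-tested with the stub and wired to a real Redis client in production.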
30-33: Missing `clearTimeout` when cleaning up expired entry.

When deleting an expired record in `isRateLimited`, the scheduled timeout from `recordAttempt` is not cleared. While the `delete` call in the timeout callback is harmless on a non-existent key, this leaves a dangling timer.

♻️ Proposed fix
```diff
 if (isExpired) {
   // Window expired, clean up
+  clearTimeout(record.timeout);
   attemptStore.delete(ip);
   return false;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/utils/rateLimiter.js` around lines 30 - 33, The expired-entry cleanup in isRateLimited removes the record from attemptStore but doesn’t clear the scheduled timer created in recordAttempt, leaving a dangling timeout; update the cleanup in isRateLimited to call clearTimeout on the stored timeout ID before calling attemptStore.delete(ip) (use the same record object stored in attemptStore that contains the timer ID), and ensure recordAttempt sets and stores that timeout ID on the record so clearTimeout can reference it.

pages/api/signin.js (2)
35-36: Typo in variable name: `requiresReCaptachValidation`.

Should be `requiresReCaptchaValidation` ("Captach" transposes the letters of "Captcha").

✏️ Proposed fix
```diff
-const requiresReCaptachValidation = checkReCaptchaSetup() && checkRequiresReCaptcha(nextRequest.get?.("X-Api-Key") || "");
-if (requiresReCaptachValidation) {
+const requiresReCaptchaValidation = checkReCaptchaSetup() && checkRequiresReCaptcha(nextRequest.get?.("X-Api-Key") || "");
+if (requiresReCaptchaValidation) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pages/api/signin.js` around lines 35 - 36, The variable name requiresReCaptachValidation is misspelled; rename it to requiresReCaptchaValidation and update all usages accordingly (the declaration and any references in the same scope) so the code using checkReCaptchaSetup(), checkRequiresReCaptcha(nextRequest.get?.("X-Api-Key") || "") and the subsequent if (requiresReCaptachValidation) check use the corrected identifier requiresReCaptchaValidation to avoid reference errors.
57-64: Consider extracting the random delay logic to a utility function.

The timing-attack mitigation pattern (`_.random(0, 150)` + `setTimeout` promise) is duplicated in both success and error paths. As per coding guidelines, flag duplicate logic that should use utility functions.

♻️ Proposed refactor
```js
// Utility function (could be in a separate file)
async function randomDelay(minMs = 0, maxMs = 150) {
  const delay = _.random(minMs, maxMs);
  await new Promise(resolve => setTimeout(resolve, delay));
}
```
Then replace both occurrences with:
```js
await randomDelay(0, 150);
```
Also applies to: 79-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pages/api/signin.js` around lines 57 - 64, Extract the duplicated timing-attack mitigation logic into a reusable async utility (e.g., async function randomDelay(minMs = 0, maxMs = 150)) that picks a random delay via _.random and awaits a Promise that resolves after setTimeout; then replace both inline blocks (the one around the existing anonymous Promise and the other at lines referenced) with await randomDelay(0, 150), ensuring callers import or define randomDelay where appropriate and keep the same default behavior and parameters.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pages/api/signin.js`:
- Around line 10-16: The checkRequiresReCaptcha function uses
String.prototype.includes causing substring matches against
SKIP_RECAPTCHA_API_KEYS; change it to parse SKIP_RECAPTCHA_API_KEYS (split by
comma, trim entries, ignore empties), build a Set (or array) of exact API keys,
and check membership with exact equality (e.g., Set.has or array.includes on
trimmed entries) for the apiKey parameter so only exact keys bypass reCAPTCHA;
ensure you handle undefined env by defaulting to empty string and preserve the
original return semantics.
- Around line 22-32: The current client IP extraction from
nextRequest.get("X-Forwarded-For") can yield an empty string so all headerless
requests share one rate-limit bucket; update the logic in the signin handler to
(1) treat an empty X-Forwarded-For as missing, (2) fall back to a socket-level
address (e.g., nextRequest.socket.remoteAddress or framework-equivalent) before
deciding the client IP used by isRateLimited, and (3) if no reliable IP can be
obtained, return a 400/422 response instead of using an empty string; refer to
the X-Forwarded-For extraction and the isRateLimited check to implement this
fallback and early error return.
- Around line 68-74: The catch block is currently calling
recordAttempt(clientIP) for all errors (including CAPTCHA failures); update the
flow so only real credential/authentication failures increment the rate limiter:
mark or throw a distinguishable CAPTCHA error (e.g., set isCaptchaFailure = true
or throw an error with name "CaptchaError" from the CAPTCHA verification code),
ensure clientIP is extracted before handlers, and in the catch around the signin
flow call recordAttempt(clientIP) only when the error is not a CAPTCHA error
(i.e., if (!isCaptchaFailure && error.name !== 'CaptchaError')
recordAttempt(clientIP)); keep existing logging and response behavior unchanged
for CAPTCHA failures.
In `@src/server/utils/rateLimiter.js`:
- Around line 59-62: The new record branch in the rate limiting logic returns
remainingTime in milliseconds (returns remainingTime: finalConfig.windowMs)
while the existing-record branch returns seconds (Math.ceil(remainingTime /
1000)); make both branches consistent by returning remainingTime in seconds (use
Math.ceil(finalConfig.windowMs / 1000) for the new-record return) so that the
callers of the rate limiter (e.g., the functions handling attempts and
messaging) always receive seconds for remainingTime.
---
Outside diff comments:
In `@pages/api/signin.js`:
- Around line 76-86: Duplicate error logging in the sign-in handler: remove the
second console.error("Error in signin", error) so the error is only logged once
(keep the initial console.error("Error in sign in", error)), or replace
console.error with the approved logger if required by project conventions;
locate this in the sign-in API handler in pages/api/signin.js around the delay
block and delete the redundant console.error call.
---
Nitpick comments:
In `@pages/api/signin.js`:
- Around line 35-36: The variable name requiresReCaptachValidation is
misspelled; rename it to requiresReCaptchaValidation and update all usages
accordingly (the declaration and any references in the same scope) so the code
using checkReCaptchaSetup(),
checkRequiresReCaptcha(nextRequest.get?.("X-Api-Key") || "") and the subsequent
if (requiresReCaptachValidation) check use the corrected identifier
requiresReCaptchaValidation to avoid reference errors.
- Around line 57-64: Extract the duplicated timing-attack mitigation logic into
a reusable async utility (e.g., async function randomDelay(minMs = 0, maxMs =
150)) that picks a random delay via _.random and awaits a Promise that resolves
after setTimeout; then replace both inline blocks (the one around the existing
anonymous Promise and the other at lines referenced) with await randomDelay(0,
150), ensuring callers import or define randomDelay where appropriate and keep
the same default behavior and parameters.
In `@src/server/utils/rateLimiter.js`:
- Around line 1-6: The current in-memory rate limiter uses the process-local Map
attemptStore in rateLimiter.js which won't enforce limits across multiple server
instances; replace or augment attemptStore with a shared store (e.g., Redis) so
failed-login counters and TTLs are persisted centrally; update the functions
that read/write attemptStore (the code that increments attempts, checks
thresholds, and performs TTL cleanup) to call the Redis client (using atomic
INCR/EXPIRE or Lua for race-safety) and fall back to the in-memory Map only when
a shared store is unavailable; ensure initialization exposes the Redis client
and that any TTL cleanup logic uses Redis expiry instead of local timers.
- Around line 30-33: The expired-entry cleanup in isRateLimited removes the
record from attemptStore but doesn’t clear the scheduled timer created in
recordAttempt, leaving a dangling timeout; update the cleanup in isRateLimited
to call clearTimeout on the stored timeout ID before calling
attemptStore.delete(ip) (use the same record object stored in attemptStore that
contains the timer ID), and ensure recordAttempt sets and stores that timeout ID
on the record so clearTimeout can reference it.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b8774d59-4ac1-432a-a687-5573f60eff1c
📒 Files selected for processing (3)
- pages/api/reset-password.js
- pages/api/signin.js
- src/server/utils/rateLimiter.js
🚧 Files skipped from review as they are similar to previous changes (1)
- pages/api/reset-password.js
```js
function checkRequiresReCaptcha(apiKey) {
  const skipRecaptchaApiKeys = process.env.SKIP_RECAPTCHA_API_KEYS ?? "";
  if (skipRecaptchaApiKeys && skipRecaptchaApiKeys.includes(apiKey)) return false;

  return true;
}
```
Security: includes() performs substring matching, not exact key matching.
If SKIP_RECAPTCHA_API_KEYS="longapikey123,otherapikey", an attacker could bypass reCAPTCHA by sending X-Api-Key: longapi (a substring). Use exact matching instead.
🔒 Proposed fix using split and exact match
```diff
 function checkRequiresReCaptcha(apiKey) {
-
   const skipRecaptchaApiKeys = process.env.SKIP_RECAPTCHA_API_KEYS ?? "";
-  if (skipRecaptchaApiKeys && skipRecaptchaApiKeys.includes(apiKey)) return false;
-
+  if (skipRecaptchaApiKeys && apiKey) {
+    const skipList = skipRecaptchaApiKeys.split(",").map(k => k.trim());
+    if (skipList.includes(apiKey)) return false;
+  }
   return true;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pages/api/signin.js` around lines 10 - 16, The checkRequiresReCaptcha
function uses String.prototype.includes causing substring matches against
SKIP_RECAPTCHA_API_KEYS; change it to parse SKIP_RECAPTCHA_API_KEYS (split by
comma, trim entries, ignore empties), build a Set (or array) of exact API keys,
and check membership with exact equality (e.g., Set.has or array.includes on
trimmed entries) for the apiKey parameter so only exact keys bypass reCAPTCHA;
ensure you handle undefined env by defaulting to empty string and preserve the
original return semantics.
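A standalone version of the exact-match check, for reference. The env value is taken as a parameter here purely for testability (the real handler reads `process.env.SKIP_RECAPTCHA_API_KEYS` directly); the parsing rules are the ones the prompt describes:

```js
// Exact-match allowlist check: a comma-separated list of API keys is parsed
// into a Set, so substrings of a real key can no longer bypass reCAPTCHA.
// The second parameter stands in for process.env.SKIP_RECAPTCHA_API_KEYS.
function checkRequiresReCaptcha(apiKey, skipRecaptchaApiKeys = "") {
  const skipList = new Set(
    skipRecaptchaApiKeys
      .split(",")
      .map((k) => k.trim())
      .filter(Boolean) // drop empty entries left by stray commas
  );
  return !(apiKey && skipList.has(apiKey)); // true = reCAPTCHA still required
}
```

`Set.has` gives exact equality, so `"longapi"` no longer matches `"longapikey123"`.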
```js
//xForwardedForHeader has multiple IPs in the format <client>, <proxy1>, <proxy2>
//get the first IP (client IP)
const xForwardedForHeader = nextRequest.get?.("X-Forwarded-For") || "";
const clientIP = xForwardedForHeader.split(",").shift().trim();

// Check rate limiting per IP
if (isRateLimited(clientIP)) {
  return nextResponse.status(429).json({
    message: "Too many login attempts. Please try again later.",
  });
}
```
Empty clientIP causes all headerless requests to share one rate-limit bucket.
When X-Forwarded-For is missing or empty, clientIP becomes an empty string. All such requests will be rate-limited together, which could:
- Allow attackers to trigger rate limits affecting legitimate users
- Be bypassed if attackers can omit the header upstream
Consider falling back to a socket-level IP or returning a 400 for missing client IP.
💡 Proposed enhancement
```diff
 //xForwardedForHeader has multiple IPs in the format <client>, <proxy1>, <proxy2>
 //get the first IP (client IP)
 const xForwardedForHeader = nextRequest.get?.("X-Forwarded-For") || "";
-const clientIP = xForwardedForHeader.split(",").shift().trim();
+const clientIP = xForwardedForHeader.split(",").shift()?.trim() || nextRequest.socket?.remoteAddress || "";
+
+if (!clientIP) {
+  return nextResponse.status(400).json({
+    message: "Unable to determine client IP",
+  });
+}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
//xForwardedForHeader has multiple IPs in the format <client>, <proxy1>, <proxy2>
//get the first IP (client IP)
const xForwardedForHeader = nextRequest.get?.("X-Forwarded-For") || "";
const clientIP = xForwardedForHeader.split(",").shift()?.trim() || nextRequest.socket?.remoteAddress || "";

if (!clientIP) {
  return nextResponse.status(400).json({
    message: "Unable to determine client IP",
  });
}

// Check rate limiting per IP
if (isRateLimited(clientIP)) {
  return nextResponse.status(429).json({
    message: "Too many login attempts. Please try again later.",
  });
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pages/api/signin.js` around lines 22 - 32, The current client IP extraction
from nextRequest.get("X-Forwarded-For") can yield an empty string so all
headerless requests share one rate-limit bucket; update the logic in the signin
handler to (1) treat an empty X-Forwarded-For as missing, (2) fall back to a
socket-level address (e.g., nextRequest.socket.remoteAddress or
framework-equivalent) before deciding the client IP used by isRateLimited, and
(3) if no reliable IP can be obtained, return a 400/422 response instead of
using an empty string; refer to the X-Forwarded-For extraction and the
isRateLimited check to implement this fallback and early error return.
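The fallback chain from this prompt can be isolated into a small pure helper. The name `resolveClientIp` is hypothetical; in the route it would be fed `nextRequest.get("X-Forwarded-For")` and the socket-level address:

```js
// Resolve a client IP: prefer the first X-Forwarded-For entry, fall back to
// the socket-level address, and return null when neither is usable so the
// caller can reject the request instead of rate-limiting the empty string.
function resolveClientIp(xForwardedFor, socketAddress) {
  const first = (xForwardedFor || "").split(",")[0].trim();
  if (first) return first;
  if (socketAddress) return socketAddress;
  return null;
}
```

Returning `null` rather than `""` forces the caller to handle the "no reliable IP" case explicitly.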
```js
} catch (error) {
  // Extract IP again for error handling (in case of early errors)
  const xForwardedForHeader = nextRequest.get?.("X-Forwarded-For") || "";
  const clientIP = xForwardedForHeader.split(",").shift().trim();

  // Record failed attempt for rate limiting
  recordAttempt(clientIP);
```
Rate limiting records attempts for all errors, including CAPTCHA failures.
recordAttempt is called unconditionally in the catch block. This means CAPTCHA verification failures (not actual login attempts) will count toward the rate limit. An attacker could lock out legitimate IPs by sending requests with invalid CAPTCHA tokens without ever attempting real credentials.
Consider only recording attempts for actual credential failures.
💡 Proposed fix
```diff
 } catch (error) {
   // Extract IP again for error handling (in case of early errors)
   const xForwardedForHeader = nextRequest.get?.("X-Forwarded-For") || "";
   const clientIP = xForwardedForHeader.split(",").shift().trim();
-  // Record failed attempt for rate limiting
-  recordAttempt(clientIP);
+  // Only record failed attempts for actual authentication failures, not CAPTCHA errors
+  if (error.name !== "CaptchaVerificationError") {
+    recordAttempt(clientIP);
+  }
   console.error("Error in sign in", error);
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
} catch (error) {
  // Extract IP again for error handling (in case of early errors)
  const xForwardedForHeader = nextRequest.get?.("X-Forwarded-For") || "";
  const clientIP = xForwardedForHeader.split(",").shift().trim();

  // Only record failed attempts for actual authentication failures, not CAPTCHA errors
  if (error.name !== "CaptchaVerificationError") {
    recordAttempt(clientIP);
  }

  console.error("Error in sign in", error);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pages/api/signin.js` around lines 68 - 74, The catch block is currently
calling recordAttempt(clientIP) for all errors (including CAPTCHA failures);
update the flow so only real credential/authentication failures increment the
rate limiter: mark or throw a distinguishable CAPTCHA error (e.g., set
isCaptchaFailure = true or throw an error with name "CaptchaError" from the
CAPTCHA verification code), ensure clientIP is extracted before handlers, and in
the catch around the signin flow call recordAttempt(clientIP) only when the
error is not a CAPTCHA error (i.e., if (!isCaptchaFailure && error.name !==
'CaptchaError') recordAttempt(clientIP)); keep existing logging and response
behavior unchanged for CAPTCHA failures.
```js
return {
  attempts: 1,
  remainingTime: finalConfig.windowMs,
};
```
Inconsistent remainingTime units: milliseconds vs seconds.
For new records (line 61), remainingTime is returned in milliseconds (finalConfig.windowMs), but for existing records (line 89), it's converted to seconds (Math.ceil(remainingTime / 1000)). This inconsistency could cause incorrect "try again in X" messages displayed to users.
🐛 Proposed fix to use seconds consistently
```diff
 if (!record) {
   attemptStore.set(ip, {
     attempts: 1,
     firstAttemptTime: now,
     timeout: setTimeout(() => {
       attemptStore.delete(ip);
     }, finalConfig.windowMs),
   });
   return {
     attempts: 1,
-    remainingTime: finalConfig.windowMs,
+    remainingTime: Math.ceil(finalConfig.windowMs / 1000), // Return in seconds
   };
 }
 // Check if window has expired
 if (now - record.firstAttemptTime > finalConfig.windowMs) {
   // Reset the window
   clearTimeout(record.timeout);
   attemptStore.set(ip, {
     attempts: 1,
     firstAttemptTime: now,
     timeout: setTimeout(() => {
       attemptStore.delete(ip);
     }, finalConfig.windowMs),
   });
   return {
     attempts: 1,
-    remainingTime: finalConfig.windowMs,
+    remainingTime: Math.ceil(finalConfig.windowMs / 1000), // Return in seconds
   };
 }
```
Also applies to: 87-90
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server/utils/rateLimiter.js` around lines 59 - 62, The new record branch
in the rate limiting logic returns remainingTime in milliseconds (returns
remainingTime: finalConfig.windowMs) while the existing-record branch returns
seconds (Math.ceil(remainingTime / 1000)); make both branches consistent by
returning remainingTime in seconds (use Math.ceil(finalConfig.windowMs / 1000)
for the new-record return) so that the callers of the rate limiter (e.g., the
functions handling attempts and messaging) always receive seconds for
remainingTime.
…t handling and CSP configuration
…ialization scripts
fix #125
Summary by CodeRabbit
New Features
Bug Fixes
Chores