I'm hoping to use Terraform to manage my worker together with the other resources it will be using. Here's some background on Cloudflare with Terraform:
This is the resource I would use to upload the worker script:
https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/worker_script
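For reference, a minimal sketch of what that would look like (names and variables here are placeholders, and the exact arguments depend on the provider version):

```hcl
# Upload the bundled worker script directly, without wrangler.
# "sveltekit-app" and var.cloudflare_account_id are hypothetical.
resource "cloudflare_worker_script" "app" {
  account_id = var.cloudflare_account_id
  name       = "sveltekit-app"
  content    = file("${path.module}/build/worker.js")
}
```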
As a test I've tried creating a vanilla SvelteKit site with the Cloudflare Workers adapter and uploading the worker script directly - i.e. the output of esbuild, without going through wrangler and webpack. The result was that the worker renders the page okay. The assets 404 since I haven't uploaded those to KV, of course, but I'll come back to that...
Another option would be to invoke wrangler from terraform (e.g. using https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource), but I'd like to avoid this if possible for a couple of reasons:
- It would mean introducing the wrangler binary into our build system, rather than using Terraform directly. We're using the official Terraform Docker images to run Terraform, so this would mean building and maintaining custom images.
- Our pipelines publish what should be an identical artefact to a pre-production environment for testing before it is published to production. If we run wrangler from Terraform in each environment, and wrangler in turn runs webpack, then there's the possibility of differences in the result. Even if the output of webpack is stable (no idea if it is), we'd need to ensure the versions of wrangler and webpack remain consistent through the pipeline. In my testing it isn't possible to pin the version of wrangler-js (the part of wrangler that invokes webpack) - see cloudflare/wrangler-legacy#1868 ("make it possible to preinstall wrangler-js"). The on-the-fly installation also adds a delay.
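For concreteness, the null_resource route I'd like to avoid would look roughly like this (a sketch only; the trigger, path, and `--env` usage are hypothetical):

```hcl
# Invoke wrangler from Terraform via a local-exec provisioner.
# This pulls the wrangler binary (and webpack) into the Terraform
# runtime - exactly the coupling described above.
resource "null_resource" "wrangler_publish" {
  triggers = {
    # Re-run publish whenever the built bundle changes.
    build_hash = filesha256("${path.module}/build/worker.js")
  }

  provisioner "local-exec" {
    command = "wrangler publish --env ${var.environment}"
  }
}
```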
What would it look like without wrangler (either in the existing provider plugin or as a new one)? It seems the worker script itself is quite simple, although I don't know what complexities come up when you start including dependencies (#1377 (comment)). It seems like not having webpack in the mix might make dealing with such issues simpler?
The other part of this is the upload and management of assets in Workers KV. There are a couple of issues here too:
- I think the behaviour of wrangler is that it assumes a dedicated namespace per worker (here is the code that removes stale assets: https://github.com/cloudflare/wrangler/blob/master/src/commands/publish.rs#L99-L122). There is a limit of 100 KV namespaces per account (https://developers.cloudflare.com/workers/platform/limits#kv), so this limits the number of workers per account.
- Workers KV is eventually consistent. https://developers.cloudflare.com/workers/learning/how-kv-works says: "Changes are immediately visible in the edge location at which they're made, but may take up to 60 seconds to propagate to all other edge locations." Since wrangler uploads the worker script and the assets at the same time, there's potentially a window during which the new worker can be invoked but its assets are not yet available at an edge location. There's also the possibility that a client loads the previous version of the app from the worker script just before the update, then requests the assets after the update, by which point they've been deleted - only a page refresh would fix this (far-future caching of assets mitigates it, but doesn't guarantee it won't happen).
To solve these issues I would like to upload the assets to Workers KV ahead of the update to the worker script. Our pipelines build the code once and then deploy into a number of environments, and the assets are immutable for a given build. So I'm thinking the assets can be uploaded as part of the build, into a KV namespace that's shared between environments (and potentially by multiple workers), before Terraform runs in each environment to update the worker script. By splitting this from the deployment of the worker, the lifecycle of these KV assets can be managed independently. There are then options for managing the lifecycle of these assets and ensuring they eventually get deleted, but I feel like this is long enough already so I won't go into that further now.
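As a sketch, the shared-namespace approach might look like this in Terraform (resource names and paths are placeholders; the asset upload would run from the build pipeline, not the per-environment deploy):

```hcl
# A single KV namespace shared across environments (and potentially
# multiple workers), avoiding the 100-namespaces-per-account limit.
resource "cloudflare_workers_kv_namespace" "assets" {
  account_id = var.cloudflare_account_id
  title      = "shared-static-assets"
}

# Upload each built asset keyed by its path. SvelteKit emits hashed
# filenames, so these keys are effectively immutable per build.
resource "cloudflare_workers_kv" "asset" {
  for_each = fileset("${path.module}/build/assets", "**")

  account_id   = var.cloudflare_account_id
  namespace_id = cloudflare_workers_kv_namespace.assets.id
  key          = each.value
  value        = file("${path.module}/build/assets/${each.value}")
}
```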
The main question is whether you think solving these issues would make sense in the official Cloudflare Workers adapter, or whether I should look at creating a new one?