S3/GCS realization uploading #165

@zombiezen

#43 covers using realizations from remote sources, but for a full user experience, we should have a built-in way for the store server to upload its build artifacts to a bucket. This feature...

  • should not block the build from proceeding
  • should not block the user from receiving build results
  • should block the store server from shutting down (i.e., in-flight uploads finish before exit)
  • must verify the integrity of the files it sends to the bucket. If the integrity check fails, then the object must be deleted. (See the sketch after this list.)
  • should be as atomic as possible. If an error occurs, then orphan objects should not be left behind.
  • must not re-upload the same objects (to save on bandwidth).
  • may support some basic path rewriting from discovery documents
  • must allow interactive invocation outside of the store server
    • copy store object (and dependencies) from local store to bucket
    • copy realization and signatures (also for referenced build inputs) to bucket
  • should allow configuring multiple destinations
  • may be resumable on server exit. Not sure what happens if we reconfigure between runs.
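
A minimal sketch of how a background upload queue could satisfy the integrity, dedup, and shutdown requirements above, assuming gocloud.dev/blob as the storage abstraction (it covers all three backends listed below). The `Uploader`/`uploadRequest` names and the channel-based queue are illustrative, not a committed design:

```go
package upload

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"sync"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob" // register gs:// URLs
	_ "gocloud.dev/blob/s3blob"  // register s3:// URLs
)

// uploadRequest asks for one local store file to be copied to the bucket.
type uploadRequest struct {
	localPath string
	key       string
	sha256Hex string // expected content digest, checked during upload
}

// Uploader drains a queue in the background so builds never wait on it.
type Uploader struct {
	bucket *blob.Bucket
	queue  chan uploadRequest
	wg     sync.WaitGroup
}

// NewUploader opens the destination bucket and starts the worker.
func NewUploader(ctx context.Context, bucketURL string) (*Uploader, error) {
	b, err := blob.OpenBucket(ctx, bucketURL)
	if err != nil {
		return nil, err
	}
	u := &Uploader{bucket: b, queue: make(chan uploadRequest, 64)}
	u.wg.Add(1)
	go u.run(ctx)
	return u, nil
}

// Close stops accepting work and waits for in-flight uploads, so the
// store server's shutdown path blocks until the queue is drained.
func (u *Uploader) Close() error {
	close(u.queue)
	u.wg.Wait()
	return u.bucket.Close()
}

func (u *Uploader) run(ctx context.Context) {
	defer u.wg.Done()
	for req := range u.queue {
		if err := u.uploadOne(ctx, req); err != nil {
			log.Printf("upload %s: %v", req.key, err)
		}
	}
}

func (u *Uploader) uploadOne(ctx context.Context, req uploadRequest) error {
	// Dedup: skip objects that are already in the bucket.
	if ok, err := u.bucket.Exists(ctx, req.key); err != nil || ok {
		return err
	}
	f, err := os.Open(req.localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	wctx, cancel := context.WithCancel(ctx)
	defer cancel()
	w, err := u.bucket.NewWriter(wctx, req.key, nil)
	if err != nil {
		return err
	}
	// Hash the bytes as they stream to the bucket.
	h := sha256.New()
	if _, err := io.Copy(w, io.TeeReader(f, h)); err != nil {
		cancel() // abort the write so no partial object is committed
		w.Close()
		return err
	}
	if err := w.Close(); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != req.sha256Hex {
		// Integrity check failed: delete the object we just wrote.
		u.bucket.Delete(ctx, req.key)
		return fmt.Errorf("digest mismatch: got %s, want %s", got, req.sha256Hex)
	}
	return nil
}
```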

Backends to support

  • Amazon (S3)
  • Google (GCS)
  • S3-compatible, custom endpoint (Minio, etc.)
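
All three could sit behind one abstraction. For example, with gocloud.dev/blob each destination is just a URL; the bucket names and MinIO endpoint below are placeholders, and the query parameters follow s3blob's documented conventions:

```go
package upload

import (
	"context"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

// openDestinations opens every configured upload destination.
func openDestinations(ctx context.Context) ([]*blob.Bucket, error) {
	urls := []string{
		// Amazon S3: credentials and region from the usual AWS environment.
		"s3://example-zb-cache?region=us-east-1",
		// Google Cloud Storage: uses Application Default Credentials.
		"gs://example-zb-cache",
		// S3-compatible server (e.g. MinIO) with a custom endpoint.
		"s3://zb-cache?endpoint=minio.example.com:9000&s3ForcePathStyle=true",
	}
	var buckets []*blob.Bucket
	for _, u := range urls {
		b, err := blob.OpenBucket(ctx, u)
		if err != nil {
			return nil, err
		}
		buckets = append(buckets, b)
	}
	return buckets, nil
}
```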

In the future, we could consider supporting HTTP/WebDAV servers with PUT.
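
If that happens, the transfer itself is trivial. A rough sketch, with a placeholder cache URL and no auth handling:

```go
package upload

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// putHTTP uploads one object with a plain HTTP PUT, as a WebDAV server
// (or an HTTP server configured to accept PUT) would expect.
func putHTTP(ctx context.Context, key string, body io.Reader, size int64) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodPut,
		"https://cache.example.com/zb/"+key, body)
	if err != nil {
		return err
	}
	req.ContentLength = size // object sizes are known up front
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("PUT %s: %s", key, resp.Status)
	}
	return nil
}
```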
