services/galexie: add integration tests for S3 storage. #5749
base: master
Conversation
Force-pushed from eb86f15 to 0bc05d1.
@overcat we've been a bit busy with the Protocol 23 release but we'll review this soon. Thanks!
It's okay, this PR is not urgent.
Great work integrating the S3 tests into the existing framework. The PR looks good.
.github/workflows/galexie.yml (outdated)

```yaml
- name: Pull LocalStack image (for S3)
  if: ${{ matrix.storage_type == 'S3' }}
  shell: bash
  run: docker pull localstack/localstack:latest
```
I think it would be better to pin a fixed version. The `latest` tag is mutable, so an image update could break our tests.
Good idea, I fixed the tag. e8aa374
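For context, a pinned version of the workflow step would look like the following sketch; `3.8.1` is a placeholder tag chosen for illustration, not necessarily the version used in e8aa374:

```yaml
- name: Pull LocalStack image (for S3)
  if: ${{ matrix.storage_type == 'S3' }}
  shell: bash
  # Pin to a specific tag so upstream image updates cannot silently break the tests.
  run: docker pull localstack/localstack:3.8.1
```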
@overcat this looks great! could you add one more test case to the integration tests? I don't think we have any coverage which exercises the code path where we receive a precondition failed error upon trying to insert a file that already exists. I think the following test case should exercise that path:
In step (3) galexie should try to insert ledger files 8, 9, and 10. But it should receive a precondition failed error from AWS and handle that error by skipping over to the next files.
Hi @tamirms, I would like the existing TestAppend test to include this scenario. Theoretically, it is best if we can ensure that the object has not changed, but the current datastore interface does not provide this capability. If necessary, we can add a function to the datastore to return the object's identifier.
sure, that sounds good!
I'm not sure what would be the equivalent of that for GCS. Alternatively, you could return the last modified timestamp of the object. I believe that is available for both GCS and S3.
By this I'm assuming you mean that the object in the datastore is the same as the one we're attempting to upload. We could definitely check for that, but I'm not sure what we'd do with that information beyond just logging it, unless you're suggesting we go ahead and overwrite the object (upload without preconditions) in such a case.
Hey @tamirms, I think 6804884 might be what you're looking for. Let me know if I'm on the right track so I can get those unit tests for
@overcat that looks good to me!
PR Checklist

PR Structure
- This PR avoids mixing refactoring changes with feature changes (split into two PRs otherwise).
- This PR's title starts with the name of the package that is most changed in the PR, e.g. services/friendbot, or all or doc if the changes are broad or impact many packages.

Thoroughness
- I've updated any docs (developer docs, .md files, etc.) affected by this change. Take a look in the docs folder for a given service, like this one.

Release planning
- I've updated the CHANGELOG.md within the component folder structure. For example, if I changed horizon, then I updated services/horizon/CHANGELOG.md, adding a new line item describing the change and referencing this PR. If I don't update a CHANGELOG, I acknowledge this PR's change may not be mentioned in future release notes.
- I've decided whether this PR requires a new major/minor version according to semver, or if it's mainly a patch change. The PR is targeted at the next release branch if it's not a patch change.
What
This is the follow-up PR for #5748. Let's merge #5748 first, and then we'll handle this PR.
Why
See #5748
Known limitations
N/A