benchmark ssdlite detection pipeline #3
Conversation
results/20230403073901.log
Summaries

|        | v2 / v1 |
|--------|---------|
| Tensor | 1.66    |
| PIL    | 1.54    |

|               | x / PIL, v1 |
|---------------|-------------|
| Tensor, v1    | 0.92        |
| Tensor, v2    | 1.53        |
| PIL, v1       | 1.00        |
| PIL, v2       | 1.54        |
| Datapoint, v2 | 1.52        |
highlighting
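For reference, the summaries above are plain quotients of the per-pipeline medians. A minimal sketch of how such a table could be derived; the millisecond values below are hypothetical, chosen only so that the printed ratios match the summaries:

```python
# Hypothetical median runtimes (ms / sample); the real values come from the
# benchmark log above.
medians_ms = {
    ("Tensor", "v1"): 9.2,
    ("Tensor", "v2"): 15.3,
    ("PIL", "v1"): 10.0,
    ("PIL", "v2"): 15.4,
    ("Datapoint", "v2"): 15.2,
}

# v2 / v1 per input type
for input_type in ["Tensor", "PIL"]:
    ratio = medians_ms[(input_type, "v2")] / medians_ms[(input_type, "v1")]
    print(f"{input_type}  {ratio:.2f}")

# everything relative to the PIL, v1 baseline
baseline = medians_ms[("PIL", "v1")]
for (input_type, version), median in medians_ms.items():
    print(f"{input_type}, {version}  {median / baseline:.2f}")
```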
transforms.py
```python
wrapper_factory = WRAPPER_FACTORIES[datasets.CocoDetection]
mock_dataset = SimpleNamespace(ids=list(range(num_samples)))
wrapper = wrapper_factory(mock_dataset)
self.wrapper = functools.partial(wrapper, num_samples // 2)
```
Why `num_samples // 2`, and why is it hardcoded to 117_266 above?
This whole thing mocks the dataset so we can re-use the dataset wrappers as a transform. `num_samples = 117_266` is the number of samples in the COCO dataset filtered so that each sample has an annotation. In the wrapper, we get passed the `idx` of the sample and use it to look up the corresponding entry in `ids`. Since we don't have access to the `idx` here, I chose a proxy: `ids` is a list, so I used half of the maximum index to get, on average, the same performance as doing the correct lookup.
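A minimal sketch of the idea; the wrapper body below is a heavily simplified stand-in (the real factory lives in `torchvision.datasets` and builds the full detection target), so only the indexing pattern should be taken literally:

```python
import functools
from types import SimpleNamespace

num_samples = 117_266  # size of the filtered COCO train set

# The wrapper factory only needs `dataset.ids`, so a namespace with that
# single attribute is enough to instantiate it.
mock_dataset = SimpleNamespace(ids=list(range(num_samples)))

def wrapper(idx, sample):
    # Simplified stand-in: the real wrapper receives the sample index and
    # looks up the corresponding id in the dataset's `ids` list.
    image_id = mock_dataset.ids[idx]
    image, target = sample
    return image, {**target, "image_id": image_id}

# Used as a transform we never see `idx`, so pin it to the middle of the
# list: a representative index that on average behaves like the real lookup.
wrapper_as_transform = functools.partial(wrapper, num_samples // 2)
```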
```diff
@@ -23,6 +23,10 @@ def write(self, message):
         self.stdout.write(message)
         self.file.write(message)
 
+    def flush(self):
```
where is this needed?
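One common case: anything that calls `flush()` on whatever is bound to `sys.stdout`, e.g. `print(..., flush=True)` or progress bars, raises `AttributeError` if the tee object does not implement it. A minimal sketch of such a stdout tee; the class and attribute names are assumptions based on the diff above:

```python
import sys

class Tee:
    """Duplicate everything written to stdout into a log file."""

    def __init__(self, path):
        self.stdout = sys.stdout
        self.file = open(path, "w")

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        # Needed so that print(..., flush=True) and similar callers keep
        # working after sys.stdout has been replaced with a Tee instance.
        self.stdout.flush()
        self.file.flush()

sys.stdout = Tee("benchmark.log")  # hypothetical log path
```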
transforms.py
```python
    return image, target


class DetectionReferenceRandomHorizontalFlipV1(transforms_v1.RandomHorizontalFlip):
```
In order to ease per-transform comparison visually, perhaps keep the shorter name `RandomHorizontalFlipV1` (same for the other transforms). Ideally we would even drop the V1/V2 suffix.
We identified that the slowdown from detection v2 stems from two things, tracked in pytorch/vision#7488 and pytorch/vision#7489. The results here were run on the torchvision nightly from April 4; once pytorch/vision#7488 is merged and pytorch/vision#7489 is addressed, we need to re-run.