
Feat/competition/eval #114


Merged: 12 commits merged into feat/competitions from feat/competition/eval on Jun 21, 2024

Conversation

kirahowe (Contributor)

@kirahowe commented Jun 21, 2024

Changelogs

Merging WIP evaluation updates into competitions feature branch to consolidate changes.


Checklist:

  • Was this PR discussed in an issue? It is recommended to first discuss a new feature in a GitHub issue before opening a PR.
  • Add tests to cover the fixed bug(s) or the newly introduced feature(s) (if appropriate).
  • Update the API documentation if a new function is added, or an existing one is deleted.
  • Write concise and explanatory changelogs above.
  • If possible, assign one of the following labels to the PR: feature, fix or test (or ask a maintainer to do it for you).


@kirahowe added the `fix` label (annotates any PR that fixes bugs) on Jun 21, 2024
@kirahowe requested a review from cwognum as a code owner on June 21, 2024 at 18:47
@kirahowe merged commit 44c5d7f into feat/competitions on Jun 21, 2024
@kirahowe deleted the feat/competition/eval branch on June 21, 2024 at 18:47
Andrewq11 added a commit that referenced this pull request Aug 19, 2024
* competition wip

* wip

* wip

* adding methods for interfacing w/ competitions

* Continuing to integrate polaris client with the Hub for comps

* comp wip

* updating date serializer

* Competition evaluation (#103)

* call hub evaluate endpoint from client evaluate_competitions method

* add super basic test for evaluating competitions

* be more specific in evaluate_benchmark signature

* Update polaris/hub/client.py

Co-authored-by: Andrew Quirke <[email protected]>

* start refactoring object dependencies out of evaluation logic

* refactor test subset object out of evaluation logic

* clean up as much as possible for now

* updating date serializer

* call hub evaluate endpoint from client evaluate_competitions method

* Update polaris/competition/_competition.py

Co-authored-by: Andrew Quirke <[email protected]>

* updating date serializer

* call hub evaluate endpoint from client evaluate_competitions method

* add super basic test for evaluating competitions

* comp wip

* updating date serializer

* call hub evaluate endpoint from client evaluate_competitions method

* fix bad merge resolution

* only send competition artifact ID to hub

---------

Co-authored-by: Andrew Quirke <[email protected]>
Co-authored-by: Andrew Quirke <[email protected]>

* Use evaluation logic directly in hub, no need for wrapper (#109)

* use evaluation logic directly in hub, no need for wrapper

* include evaluate_benchmark in package

* remove unnecessary imports

* read incoming scores sent as json

* light formatting updates

* updating fallback version for dev build

* integrating results for comps (#111)

* integrating results for comps

* Update polaris/hub/client.py

Co-authored-by: Cas Wognum <[email protected]>

* addressing comments & adding CompetitionResults class

* test competition evaluation works for multi-column dataframes

* add single column test to competition evaluation

* fix multitask-single-test-set cases

* fix bug with multi-test-set benchmarks

* adding functions to serialize & deserialize pred objs for external eval

* updating return for evaluate_competition method in client

* updating evaluate_competition method to pass additional result info to hub

---------

Co-authored-by: Cas Wognum <[email protected]>
Co-authored-by: Kira McLean <[email protected]>

* updates to enable fetching & interacting with comps

* updating requirement for eval name

* Feat/competition/eval (#114)

* integrating results for comps

* Update polaris/hub/client.py

Co-authored-by: Cas Wognum <[email protected]>

* addressing comments & adding CompetitionResults class

* test competition evaluation works for multi-column dataframes

* add single column test to competition evaluation

* fix multitask-single-test-set cases

* fix bug with multi-test-set benchmarks

* adding functions to serialize & deserialize pred objs for external eval

* updating return for evaluate_competition method in client

* updating evaluate_competition method to pass additional result info to hub

* refuse early to upload a competition with a zarr-based dataset

* removing merge conflicts

---------

Co-authored-by: Andrew Quirke <[email protected]>
Co-authored-by: Andrew Quirke <[email protected]>
Co-authored-by: Cas Wognum <[email protected]>

* test that all rows of a competition test set will have at least a value (#116)

* update competition evaluation to support y_prob

* run ruff on all files and fix issues

* fix wrong url printout after upload

* Clarifying typing for nested types

* removing if_exists arg from comps

* raising error for trying to make zarr comp

* updating name of ArtifactType to ArtifactSubtype

* updating comments & removing redundant class attributes

* moving split validator logic from comp spec to benchmark spec

* removing redundant checks from CompetitionDataset class

* creating pydantic model for comp predictions

* split validator logic, redundant pydantic checks, comp pred pydantic model

* changes for comps wrap up

* Adding CompetitionsPredictionsType

* adding conversion validator for comp prediction type

* setting predictions validator as class method

* Using self instead of cls for field validators

* removing model validation on fetch from hub

* Creating HubOwner object in comp result eval method

* Documentation & tutorials for competitions

* Removing create comp method, fixing failing tests, updating benchmark label struct

* Updating docs for create comp & benchmark pred structure

* tiny wording change in competition tutorial

* Addressing PR feedback

* fixing tests & removing dataset redefinition from CompetitionDataset class

* Commenting out line in tutorial to fix test

* fixing formatting

* small fixes & depending on tableContent for dataset storage info

---------

Co-authored-by: Andrew Quirke <[email protected]>
Co-authored-by: Andrew Quirke <[email protected]>
Co-authored-by: Cas Wognum <[email protected]>
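Several commits above describe the same client-side flow: predictions are serialized for external evaluation on the hub ("adding functions to serialize & deserialize pred objs for external eval"), only the competition artifact ID is sent rather than the full object ("only send competition artifact ID to hub"), and scores come back as JSON ("read incoming scores sent as json"). The following is a minimal, hypothetical sketch of that round trip; the function names and payload shape are illustrative assumptions, not the actual polaris API.

```python
import json

# Hypothetical helpers mirroring the flow described in the commit log.
# The real polaris client methods (e.g. evaluate_competitions) differ;
# this only illustrates the serialize -> send-artifact-ID -> parse-JSON idea.

def serialize_predictions(predictions: dict) -> str:
    """Serialize {test_set: {target_column: [values]}} predictions to JSON."""
    return json.dumps(predictions)

def deserialize_predictions(payload: str) -> dict:
    """Inverse of serialize_predictions, as the hub would do before evaluating."""
    return json.loads(payload)

def build_evaluate_request(artifact_id: str, predictions: dict) -> dict:
    # Per the commits, only the competition artifact ID is sent to the hub,
    # alongside the serialized predictions -- not the whole competition object.
    return {
        "artifactId": artifact_id,
        "predictions": serialize_predictions(predictions),
    }

# Illustrative usage with made-up competition and column names.
preds = {"test": {"LOG_SOLUBILITY": [0.1, 0.2, 0.3]}}
request = build_evaluate_request("my-org/my-competition", preds)
roundtrip = deserialize_predictions(request["predictions"])
```

The design choice reflected here, moving evaluation server-side and keeping the client payload minimal, is what lets the hub hold back competition labels while still returning scores to the client.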