Labels: Discussion topic (Topic discussed at the workshop), User's Perspective (Machine Learning Experiences on the Web: A User's Perspective)
Description
In her talk, Jutta highlighted the risks for minorities and groups whose data are underrepresented in model training sets, and approaches to reducing that bias (e.g. the "lawnmower" approach).
@JohnRochfordUMMS's talk highlighted that privacy concerns make this phenomenon even stronger for people with disabilities, and pointed to tools that can help identify bias in training data.
Are there well-known metrics or metadata that a model provider can (and ideally should) attach to their models to help developers assess how much, and what kind of, bias they might be importing when they use a given model? Are there natural fora where discussion of such metadata is expected to happen?
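For concreteness, here is a minimal sketch of one such metric: demographic parity difference, the gap in positive-prediction rate between the best- and worst-treated groups. It is only an illustration of the kind of number a provider could report alongside a model; the function name and sample data are hypothetical, not taken from any particular toolkit.

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups.

    0.0 means every group receives positive predictions at the same
    rate; larger values indicate a bigger disparity.
    """
    counts = {}  # group -> (positive predictions, total predictions)
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + (1 if pred == 1 else 0), total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative data: group "a" gets a positive outcome 3/4 of the
# time, group "b" only 1/4 of the time.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A single scalar like this is obviously incomplete (it says nothing about which groups are affected, or about error-rate disparities such as equalized odds), which is part of why the question above asks what a standard set of metrics and accompanying metadata should look like.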