Bias and model transparency #108

@dontcallmedom

Description

In her talk, Jutta highlighted the risks to minorities and other groups whose data are underrepresented in the datasets used to train models, and described approaches to reducing that bias (e.g. the "lawnmower" approach).

@JohnRochfordUMMS's talk showed that privacy concerns amplify this underrepresentation for people with disabilities, and pointed to tools that can help identify bias in training data.

Are there well-known metrics or metadata that a model provider can (and, ideally, should) attach to their models to help developers assess how much and what kind of bias they might be importing when they use a given model? Are there natural fora where discussions on such metadata are expected to happen?
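
For concreteness, here is a minimal Python sketch of the kind of metric and metadata I have in mind. All names and the metadata format are illustrative only, not an established schema:

```python
# Minimal sketch of a group-fairness metric a model provider could publish
# alongside a model. All names here are illustrative, not a standard schema.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    by_group = defaultdict(list)
    for pred, group in zip(predictions, groups):
        by_group[group].append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: binary predictions for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, per_group = demographic_parity_gap(preds, groups)

# The kind of metadata that could travel with the model (hypothetical format).
model_card = {
    "model": "example-classifier-v1",
    "bias_metrics": {
        "demographic_parity_gap": gap,          # 0.5 for the toy data above
        "positive_rate_per_group": per_group,   # {'a': 0.75, 'b': 0.25}
    },
}
print(model_card)
```

Part of the question is whether metrics like this, and the schema they would travel in, are standardized anywhere, and where that standardization discussion would naturally happen.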

Labels: Discussion topic, User's Perspective
