
Update token_classification notebook to use evaluate.load() instead of load_metric() from Datasets library (removed) #522

Open · wants to merge 1 commit into main
11 changes: 6 additions & 5 deletions examples/token_classification.ipynb
@@ -22,7 +22,7 @@
},
"outputs": [],
"source": [
-"#! pip install datasets transformers seqeval"
+"#! pip install datasets transformers seqeval evaluate"
]
},
{
@@ -171,7 +171,7 @@
"id": "W7QYTpxXIrIl"
},
"source": [
-"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. "
+"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data. This can be easily done with the `load_dataset` function. To get the metric we need to use for evaluation (to compare our model to the benchmark), we will use the [🤗 Evaluate](https://github.com/huggingface/evaluate) library."
]
},
{
@@ -182,7 +182,8 @@
},
"outputs": [],
"source": [
-"from datasets import load_dataset, load_metric"
+"from datasets import load_dataset\n",
+"import evaluate"
]
},
{
@@ -1096,7 +1097,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"The last thing to define for our `Trainer` is how to compute the metrics from the predictions. Here we will load the [`seqeval`](https://github.com/chakki-works/seqeval) metric (which is commonly used to evaluate results on the CONLL dataset) via the Datasets library."
+"The last thing to define for our `Trainer` is how to compute the metrics from the predictions. Here we will load the [`seqeval`](https://github.com/chakki-works/seqeval) metric (which is commonly used to evaluate results on the CONLL dataset) via the Evaluate library."
]
},
{
@@ -1105,7 +1106,7 @@
"metadata": {},
"outputs": [],
"source": [
-"metric = load_metric(\"seqeval\")"
+"metric = evaluate.load(\"seqeval\")"
]
},
{