
Add BOBSL, ISL-HS, Sign-BD datasets to the datasets table and references.bib #28


Merged: 6 commits merged on Mar 25, 2024
19 changes: 19 additions & 0 deletions src/datasets/BOBSL.json
@@ -0,0 +1,19 @@
{
"pub": {
"name": "BOBSL",
"year": 2022,
"publication": "dataset:momeniAutomaticDenseAnnotation2022",
"url": "https://www.robots.ox.ac.uk/~vgg/data/bobsl/"
},
"features": [
"video:RGB",
"text:English"
],
"language": "British Sign Language (BSL)",
Contributor:

Should be "British"

Contributor Author:

fixed

"#items": 2281,
"#samples": "1.2M Sentences",
"#signers": 37,
"license": "non-commercial authorized academics",
"licenseUrl": "https://www.bbc.co.uk/rd/projects/extol-dataset",
"contact": "Samuel Albanie albanie[AT]robots.ox.ac.uk"
Contributor:

I'd change the contact to a real email address

Contributor Author:

Ah yes, copy-pasted that from the project website, good suggestion

}
18 changes: 18 additions & 0 deletions src/datasets/ISL-HS.json
@@ -0,0 +1,18 @@
{
"pub": {
"name": "ISL-HS",
"year": 2017,
"publication": "dataset:oliveiraDatasetIrishSign2017",
"url": "https://github.com/marlondcu/ISL"
},
"features": [
"video:RGB",
"gloss:ISL-HandShapes"
],
"language": "Irish Sign Language (ISL)",
Contributor:

Should be "Irish"

Contributor Author:

fixed

"#items": 23,
"#samples": "468 videos available, 58,114 images extracted to show 23 handshapes",
Contributor:

While normally I'd say that more details are better, please consider how this is displayed: I think less information should be shown here, but if you still want to include the entire text, I'd narrow it to
"468 videos → 58,114 images → 23 handshapes"
[screenshot of how the cell is rendered in the datasets table]

Contributor Author:

Makes sense, needs to be concise for the table

"#signers": 6,
"license": null,
"licenseUrl": null
}
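
For context: the two entries above follow the same field layout as the existing files in src/datasets/. A minimal validation sketch in Python is given below; it is only an illustration, since this PR does not touch the site's loading code, and the field list is simply inferred from the BOBSL and ISL-HS entries themselves.

import json

# Fields expected in each src/datasets/*.json entry, inferred from the
# BOBSL and ISL-HS files added in this PR ("contact" appears to be optional).
REQUIRED_FIELDS = {"pub", "features", "language", "#items", "#samples",
                   "#signers", "license", "licenseUrl"}
REQUIRED_PUB_FIELDS = {"name", "year", "publication", "url"}


def validate_dataset(path):
    """Return a list of problems found in one dataset JSON file."""
    with open(path, encoding="utf-8") as f:
        entry = json.load(f)
    problems = [f"missing field: {name}" for name in sorted(REQUIRED_FIELDS - entry.keys())]
    problems += [f"missing pub field: {name}"
                 for name in sorted(REQUIRED_PUB_FIELDS - entry.get("pub", {}).keys())]
    return problems


if __name__ == "__main__":
    for path in ("src/datasets/BOBSL.json", "src/datasets/ISL-HS.json"):
        issues = validate_dataset(path)
        print(path, "OK" if not issues else "; ".join(issues))

Run from the repository root, this would print "OK" for both files if all of the fields above are present.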
33 changes: 32 additions & 1 deletion src/references.bib
@@ -2076,4 +2076,35 @@ @inproceedings{muller-etal-2023-considerations
url = "https://aclanthology.org/2023.acl-short.60",
doi = "10.18653/v1/2023.acl-short.60",
pages = "682--693",
}
}

@inproceedings{dataset:momeniAutomaticDenseAnnotation2022,
title = {Automatic {{Dense Annotation}} of~{{Large-Vocabulary Sign Language Videos}}},
booktitle = {Computer {{Vision}} -- {{ECCV}} 2022},
author = {Momeni, Liliane and Bull, Hannah and Prajwal, K. R. and Albanie, Samuel and Varol, G{\"u}l and Zisserman, Andrew},
editor = {Avidan, Shai and Brostow, Gabriel and Ciss{\'e}, Moustapha and Farinella, Giovanni Maria and Hassner, Tal},
year = {2022},
pages = {671--690},
publisher = {Springer Nature Switzerland},
address = {Cham},
doi = {10.1007/978-3-031-19833-5_39},
abstract = {Recently, sign language researchers have turned to sign language interpreted TV broadcasts, comprising (i) a video of continuous signing and (ii) subtitles corresponding to the audio content, as a readily available and large-scale source of training data. One key challenge in the usability of such data is the lack of sign annotations. Previous work exploiting such weakly-aligned data only found sparse correspondences between keywords in the subtitle and individual signs. In this work, we propose a simple, scalable framework to vastly increase the density of automatic annotations. Our contributions are the following: (1)~we significantly improve previous annotation methods by making use of synonyms and subtitle-signing alignment; (2)~we show the value of pseudo-labelling from a sign recognition model as a way of sign spotting; (3)~we propose a novel approach for increasing our annotations of known and unknown classes based on in-domain exemplars; (4)~on the BOBSL BSL sign language corpus, we increase the number of confident automatic annotations from 670K to 5M. We make these annotations publicly available to support the sign language research community.},
Contributor:

I think, to keep this file from getting huge, we decided not to include abstracts

Contributor Author:

Makes sense. Next time I can add that to the excluded fields in my BetterBibTeX plugin in Zotero

isbn = {978-3-031-19833-5},
langid = {english},
keywords = {Automatic dataset construction,Novel class discovery,Sign language recognition}
}

@inproceedings{dataset:samsSignBDWordVideoBasedBangla2023,
title = {{{SignBD-Word}}: {{Video-Based Bangla Word-Level Sign Language}} and {{Pose Translation}}},
shorttitle = {{{SignBD-Word}}},
booktitle = {2023 14th {{International Conference}} on {{Computing Communication}} and {{Networking Technologies}} ({{ICCCNT}})},
author = {Sams, Ataher and Akash, Ahsan Habib and Rahman, S. M. Mahbubur},
year = {2023},
month = jul,
pages = {1--7},
issn = {2473-7674},
doi = {10.1109/ICCCNT56998.2023.10306914},
urldate = {2024-03-13},
abstract = {Bangla sign language (BdSL) is a complete and independent natural sign language with its own linguistic characteristics. While there exists video datasets for well-known sign languages, there is currently no available dataset for word-level BdSL. In this study, we present a video-based word-level dataset for Bangla sign language, called SignBD-Word, consisting of 6000 sign videos representing 200 unique words. The dataset includes full and upper-body views of the signers, along with 2D body pose information. To evaluate the dataset, a number of deep learning-based algorithms for video classification and pose translation are used. The experimental results reveal that the recently developed SlowFast and I3D methods achieve higher accuracy for RGB and body pose data respectively, compared to the traditional attention-based approach. Furthermore, the experiments indicate that pix2pixHD outperforms CycleGAN and SPADE for human- to-human sign pose translation from bodypose data.},
keywords = {Assistive technologies,BdSL,Deep learning,Face recognition,Generative adversarial networks,Generative Adversarial Networks,Gesture recognition,Linguistics,Pose Translation,Production,Sign Language,Vocabulary}
}
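
Following up on the review note about keeping references.bib small, abstracts could also be stripped in bulk with a short script along these lines. This is a sketch only, assuming the third-party bibtexparser (v1) package; the BetterBibTeX export setting mentioned above is another way to achieve the same thing.

# Sketch: drop "abstract" fields from references.bib so entries stay compact.
import bibtexparser

with open("src/references.bib", encoding="utf-8") as f:
    database = bibtexparser.load(f)

for entry in database.entries:
    entry.pop("abstract", None)  # remove the abstract if present

with open("src/references.bib", "w", encoding="utf-8") as f:
    bibtexparser.dump(database, f)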