
Add LSA-T: "The first continuous Argentinian Sign Language dataset for Sign Language Translation" #32


Closed · cleong110 opened this issue Apr 16, 2024 · 0 comments · Fixed by #88


Sign language translation (SLT) is an active field of study that encompasses human-computer interaction, computer vision, natural language processing and machine learning. Progress in this field could lead to higher levels of integration for deaf people. This paper presents, to the best of our knowledge, the first continuous Argentinian Sign Language (LSA) dataset. It contains 14,880 sentence-level videos of LSA extracted from the CN Sordos YouTube channel, with labels and keypoint annotations for each signer. We also present a method for inferring the active signer, a detailed analysis of the characteristics of the dataset, a visualization tool to explore the dataset, and a neural SLT model to serve as a baseline for future experiments.

Info:

They also made their own PyTorch dataloader for the dataset (see the sketch below).
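
As a rough illustration only, here is what a minimal PyTorch dataset for sentence-level keypoint clips could look like. This is not the authors' actual loader (their real code is at https://github.com/midusi/keypoint-models); the file layout and field names below are assumptions.

```python
# Hypothetical sketch, not the authors' loader: assumes one JSON file per
# sentence-level clip with "keypoints" (frames x joints x 3) and "label".
import json
from pathlib import Path

import torch
from torch.utils.data import Dataset


class LSATKeypointDataset(Dataset):
    """Loads (keypoint sequence, label) pairs from per-clip JSON files."""

    def __init__(self, root: str):
        self.files = sorted(Path(root).glob("*.json"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        with self.files[idx].open() as f:
            clip = json.load(f)
        # (num_frames, num_joints, 3) tensor: x, y, confidence per joint
        keypoints = torch.tensor(clip["keypoints"], dtype=torch.float32)
        return keypoints, clip["label"]
```

Since sentence clips vary in length, a `DataLoader` over such a dataset would also need a custom `collate_fn` (e.g., padding each batch to its longest clip).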

Poses were calculated with AlphaPose (https://github.com/MVIG-SJTU/AlphaPose), using the Halpe full-body keypoint format (https://github.com/Fang-Haoshu/Halpe-FullBody) introduced alongside AlphaPose. Helpfully, they also release their models and code at https://github.com/midusi/keypoint-models.
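
For orientation, here is a sketch of reading AlphaPose-style JSON output in the Halpe full-body format (136 keypoints per person). The flat `[x, y, score, ...]` layout matches AlphaPose's usual results file, but the exact field names should be treated as assumptions and checked against real output.

```python
# Sketch under assumptions: AlphaPose results as a JSON list of detections,
# each with "image_id" and a flat "keypoints" list of [x, y, score] triples.
import json

import numpy as np

HALPE_NUM_KEYPOINTS = 136  # Halpe full-body: body + face + hands + feet


def load_poses(path: str) -> dict[str, list[np.ndarray]]:
    """Group per-person keypoint arrays (136 x 3) by frame/image id."""
    with open(path) as f:
        detections = json.load(f)

    poses: dict[str, list[np.ndarray]] = {}
    for det in detections:
        kpts = np.asarray(det["keypoints"], dtype=np.float32)
        kpts = kpts.reshape(HALPE_NUM_KEYPOINTS, 3)  # columns: x, y, confidence
        poses.setdefault(det["image_id"], []).append(kpts)
    return poses
```

Since a frame can contain several detected people, the per-frame lists above are exactly where the paper's active-signer inference would have to choose among candidates.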
