Sign language translation (SLT) is an active field of study that encompasses human-computer interaction, computer vision, natural language processing and machine learning. Progress in this field could lead to higher levels of integration for deaf people. This paper presents, to the best of our knowledge, the first continuous Argentinian Sign Language (LSA) dataset. It contains 14,880 sentence-level videos of LSA extracted from the CN Sordos YouTube channel, with labels and keypoint annotations for each signer. We also present a method for inferring the active signer, a detailed analysis of the characteristics of the dataset, a visualization tool to explore the dataset, and a neural SLT model to serve as a baseline for future experiments.
Info:
They also provide their own PyTorch dataloader.
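For reference, here is a minimal sketch of what such a dataloader might look like. This is not the authors' implementation (that lives in their repo); the file layout is assumed — one `<clip>.json` of per-frame keypoints plus a matching `<clip>.txt` transcript per video clip:

```python
import json
from pathlib import Path

import torch
from torch.utils.data import Dataset, DataLoader


class LSATKeypointDataset(Dataset):
    """Pairs each clip's keypoint sequence with its sentence label."""

    def __init__(self, root: str):
        # Assumed layout: <clip>.json (keypoints) and <clip>.txt (label)
        # side by side under `root`.
        self.samples = sorted(Path(root).glob("*.json"))

    def __len__(self) -> int:
        return len(self.samples)

    def __getitem__(self, idx):
        kp_path = self.samples[idx]
        with open(kp_path) as f:
            frames = json.load(f)  # assumed: nested list, (T, K, 3)
        keypoints = torch.tensor(frames, dtype=torch.float32)
        label = kp_path.with_suffix(".txt").read_text().strip()
        return keypoints, label


def collate(batch):
    # Pad variable-length clips to the longest sequence in the batch.
    seqs, labels = zip(*batch)
    lengths = torch.tensor([s.shape[0] for s in seqs])
    padded = torch.nn.utils.rnn.pad_sequence(list(seqs), batch_first=True)
    return padded, lengths, list(labels)


loader = DataLoader(LSATKeypointDataset("lsa-t/train"), batch_size=8,
                    shuffle=True, collate_fn=collate)
```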
Poses were calculated with AlphaPose (https://github.com/MVIG-SJTU/AlphaPose) using the Halpe full-body keypoint format (https://github.com/Fang-Haoshu/Halpe-FullBody), which was introduced together with AlphaPose. Helpfully, they also release their models and code at https://github.com/midusi/keypoint-models
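A quick sketch of reading those keypoints, assuming AlphaPose's standard COCO-style results JSON (`image_id`, `score`, and `keypoints` as a flat list of x, y, score triples) and Halpe's 136-point split (26 body, 68 face, 2 × 21 hand points). Note that keeping the highest-scoring person per frame is a naive stand-in for the paper's actual active-signer inference:

```python
import json

import numpy as np

# Halpe full-body layout: 136 keypoints per person,
# split into 26 body, 68 face, and 2 x 21 hand points.
BODY, FACE, HANDS = slice(0, 26), slice(26, 94), slice(94, 136)


def best_person_per_frame(results_json: str) -> dict:
    """Reduce AlphaPose output to one (136, 3) array per image.

    Keeps the highest-scoring detection per frame -- a naive
    stand-in for the paper's active-signer inference method.
    """
    with open(results_json) as f:
        detections = json.load(f)

    best = {}  # image_id -> (score, keypoints)
    for det in detections:
        # keypoints arrive as a flat [x1, y1, s1, x2, y2, s2, ...] list
        kps = np.asarray(det["keypoints"], dtype=np.float32).reshape(-1, 3)
        prev = best.get(det["image_id"])
        if prev is None or det["score"] > prev[0]:
            best[det["image_id"]] = (det["score"], kps)
    return {img: kps for img, (_, kps) in best.items()}


frames = best_person_per_frame("alphapose-results.json")
some_frame = next(iter(frames.values()))
hand_points = some_frame[HANDS]  # (42, 3): x, y, confidence
```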