Sign Language Detection using ACTION RECOGNITION with Python | LSTM Deep Learning Model
Want to take your sign language model a little further?
In this video, you’ll learn how to leverage action detection to do so!
You’ll use a keypoint detection model to build sequences of keypoints, which are then passed to an action detection model to decode sign language! As part of the model building process, you’ll use TensorFlow and Keras to build a deep neural network with LSTM layers to handle those keypoint sequences.
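As a rough idea of what that network could look like, here is a minimal Keras sketch of an LSTM classifier over keypoint sequences. The 30-frame sequence length, the 1662-value keypoint vector, and the three example sign labels are illustrative assumptions, not values confirmed by this description.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumed shapes: 30 frames per sequence, 1662 keypoint values per frame
# (pose + face + both hands flattened into one vector).
SEQUENCE_LENGTH = 30
NUM_KEYPOINTS = 1662
actions = np.array(['hello', 'thanks', 'iloveyou'])  # hypothetical sign labels

model = Sequential([
    # Stacked LSTMs read the keypoint sequence frame by frame
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    # Dense head maps the sequence summary to a probability per sign
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(actions.shape[0], activation='softmax'),
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# Training would look roughly like:
# model.fit(X_train, y_train, epochs=200)
# where X_train has shape (num_sequences, 30, 1662) and y_train is one-hot.
```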
In this video you’ll learn how to:
1. Extract MediaPipe Holistic keypoints (see the sketch after this list)
2. Build a sign language model using an action detection approach powered by LSTM layers
3. Predict sign language in real time using video sequences
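For step 1, a minimal sketch of keypoint extraction might look like the following: capture webcam frames, run MediaPipe Holistic on each one, and flatten the detected pose, face, and hand landmarks into a single vector per frame. The 30-frame sequence length and webcam index 0 are assumptions for illustration.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # Flatten pose (33 x 4), face (468 x 3), and each hand (21 x 3) landmarks,
    # falling back to zeros when a body part is not detected in the frame.
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

# Collect one sequence of keypoint vectors from the webcam (assumed index 0).
cap = cv2.VideoCapture(0)
sequence = []
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened() and len(sequence) < 30:
        ret, frame = cap.read()
        if not ret:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        sequence.append(extract_keypoints(results))
cap.release()

# `sequence` can now be stacked into an array and fed to the LSTM model above.
```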
Get the code:
Chapters
0:00 - Start
0:38 - Gameplan
1:38 - How it Works
2:13 - Tutorial Start
3:53 - 1. Install and Import Dependencies
8:17 - 2. Detect Face, Hand and Pose Landmarks
40:29 - 3. Extract Keypoints
57:35 - 4. Setup Folders for Data Collection
1:06:00 -