README.md: 2 additions & 2 deletions
@@ -27,7 +27,7 @@ All project settings are managed through `conf.py`, offering a single configurat
 -`FRAME_SKIP`: Controls frame sampling rate for efficient processing
 -`MAX_WORKERS`: Manages parallel processing to optimize performance

--`POSE_IDX`, `FACE_IDX`, `HAND_IDX`: Selected landmark indices for extracting relevant points for sign language analysis
+-`POSE_IDX`, `FACE_IDX`, `HAND_IDX`: Selected landmark indices for extracting relevant points for sign language analysis. Default values are the indices defined in the YouTube-ASL dataset's research paper.

 ## How to Use

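For orientation, the settings described above might look like the following in `conf.py`. This is a minimal sketch: the values and the landmark indices shown are placeholders, not the repository's actual defaults or the YouTube-ASL paper's selection.

```python
# conf.py -- hypothetical sketch; every value here is a placeholder,
# not the repository's actual default.
FRAME_SKIP = 2    # process every 2nd frame
MAX_WORKERS = 4   # number of parallel workers

# Illustrative landmark indices only; the real defaults follow the
# index selection from the YouTube-ASL dataset's research paper.
POSE_IDX = [0, 11, 12, 13, 14, 15, 16]  # nose, shoulders, elbows, wrists
FACE_IDX = [0, 13, 14, 33, 263]         # a few face landmarks
HAND_IDX = list(range(21))              # all 21 hand landmarks
```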
@@ -47,7 +47,7 @@ All project settings are managed through `conf.py`, offering a single configurat
 - The script processes each video segment according to its timestamp, extracting only the most relevant body keypoints for sign language analysis. It uses parallel processing to handle multiple videos efficiently. Results are saved as NumPy arrays.

 ### How2Sign
-1. Download **Green Screen RGB videos** and **English Translation (manually re-aligned)** from the How2Sign website.
+1. Download **Green Screen RGB videos** and **English Translation (manually re-aligned)** from the [How2Sign Website](https://how2sign.github.io/).
 2. Place the directory and .csv file in the correct path, or amend the path in `conf.py`.
 3. Run **Step 3: Feature Extraction** (`s3_mediapipe_labelling.py`) only.
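The feature-extraction step described above (timestamped segments in, selected keypoints out, fanned across parallel workers, saved as NumPy arrays) could be sketched roughly as follows. All names here are illustrative, and `extract_keypoints` is a stub standing in for the MediaPipe call; this is not the repository's actual API.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
from pathlib import Path

import numpy as np

MAX_WORKERS = 4  # would come from conf.py in the real pipeline


def extract_keypoints(video_path: str) -> np.ndarray:
    """Stub for the MediaPipe step: returns (n_frames, n_landmarks, 3)."""
    # The real script would decode frames (honouring FRAME_SKIP), run
    # MediaPipe on each retained frame, and keep only the points listed
    # in POSE_IDX / FACE_IDX / HAND_IDX.
    return np.zeros((10, 85, 3), dtype=np.float32)


def process_video(video_path: str, out_dir: str) -> Path:
    """Extract keypoints for one video and save them as a .npy file."""
    keypoints = extract_keypoints(video_path)
    out_file = Path(out_dir) / (Path(video_path).stem + ".npy")
    np.save(out_file, keypoints)  # results saved as NumPy arrays
    return out_file


def process_all(video_paths, out_dir):
    """Fan videos out across a pool of worker processes."""
    worker = partial(process_video, out_dir=out_dir)
    with ProcessPoolExecutor(max_workers=MAX_WORKERS) as pool:
        return list(pool.map(worker, video_paths))
```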
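Concretely, the How2Sign steps might end up looking like the layout below before running the script. The directory and file names are guesses for illustration, not the repository's actual paths; amend `conf.py` to match whatever layout you use.

```shell
# Hypothetical layout (names are assumptions, not the repo's defaults):
#   data/how2sign/green_screen_rgb/        Green Screen RGB videos
#   data/how2sign/how2sign_realigned.csv   re-aligned English translations
# With those in place, run only the feature-extraction step:
python s3_mediapipe_labelling.py
```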