Initial commit: handshapes multiclass project
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
first_attempt_landmark_hands/what_to_do.txt (new file, 24 lines)
@@ -0,0 +1,24 @@
# 1) Create dirs
# ./make_seq_dirs.sh A B J Z

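make_seq_dirs.sh itself isn't shown in this commit; a minimal Python equivalent, assuming the layout the later steps imply (sequences/&lt;split&gt;/&lt;label&gt;/), would be:

```python
import os

def make_seq_dirs(labels, root="sequences", splits=("train", "val")):
    """Create sequences/<split>/<label>/ for every label (assumed layout)."""
    for split in splits:
        for label in labels:
            os.makedirs(os.path.join(root, split, label), exist_ok=True)

make_seq_dirs(["A", "B", "J", "Z"])
```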
# 2) Capture clips (0.8s each by default)
python capture_sequence.py --label A --split train
python capture_sequence.py --label A --split val
python capture_sequence.py --label B --split train
python capture_sequence.py --label B --split val
python capture_sequence.py --label J --split train
python capture_sequence.py --label J --split val
python capture_sequence.py --label Z --split train
python capture_sequence.py --label Z --split val

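The eight capture commands above follow one label-by-split pattern; a short generator (purely illustrative) reproduces the list without typos:

```python
from itertools import product

labels = ["A", "B", "J", "Z"]
splits = ["train", "val"]

# One capture run per (label, split) pair, matching the commands above.
commands = [
    f"python capture_sequence.py --label {label} --split {split}"
    for label, split in product(labels, splits)
]

for cmd in commands:
    print(cmd)
```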
# 3) Preprocess to 32 frames (auto-picks classes from sequences/train/*)
python prep_sequence_resampled.py --in sequences --out landmarks_seq32 --frames 32

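prep_sequence_resampled.py's internals aren't shown here, but resampling a variable-length clip to a fixed 32 frames is typically linear interpolation along the time axis. A sketch, assuming each frame is a flat list of landmark coordinates (e.g., 21 hand landmarks x 3 = 63 floats):

```python
def resample_sequence(frames, target_len=32):
    """Linearly resample a list of landmark frames to target_len frames."""
    n = len(frames)
    if n == 0:
        raise ValueError("empty sequence")
    if n == 1:
        return [list(frames[0]) for _ in range(target_len)]
    out = []
    for i in range(target_len):
        # Map output index i onto the input timeline [0, n-1].
        t = i * (n - 1) / (target_len - 1)
        lo = int(t)
        hi = min(lo + 1, n - 1)
        a = t - lo
        # Blend the two nearest input frames coordinate-by-coordinate.
        out.append([(1 - a) * x + a * y for x, y in zip(frames[lo], frames[hi])])
    return out
```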
# 4) Train GRU (multiclass on A/B/J/Z)
python train_seq.py --landmarks landmarks_seq32 --epochs 40 --batch 64 --lr 1e-3 --out asl_seq32_gru_ABJZ.pt

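train_seq.py presumably uses a framework GRU (the .pt output suggests PyTorch). The recurrence such a GRU implements, shown as a dependency-free scalar sketch with toy parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step (scalar, for illustration): the update gate z decides how
    much old state h to keep versus the new candidate n."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])        # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])        # reset gate
    n = math.tanh(p["wn"] * x + r * p["un"] * h + p["bn"])  # candidate state
    return (1.0 - z) * n + z * h

# Toy weights; in training these are learned.
p = {k: 0.1 for k in ("wz", "uz", "bz", "wr", "ur", "br", "wn", "un", "bn")}

h = 0.0
for x in [0.5] * 32:  # one 32-frame sequence, as produced by step 3
    h = gru_step(x, h, p)
# The final hidden state would then feed a linear layer producing the
# 4 class logits (A/B/J/Z).
```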
# 5) Live inference
python infer_seq_webcam.py --model asl_seq32_gru_ABJZ.pt --threshold 0.6 --smooth 0.2

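The --threshold and --smooth flags aren't documented in this file; a plausible reading (an assumption, not confirmed by the source) is per-class probability smoothing via an exponential moving average, plus a confidence gate before a label is emitted:

```python
def smooth_probs(prev, cur, alpha=0.2):
    """EMA over per-class probabilities; alpha assumed to map to --smooth
    (higher alpha = reacts faster to the newest prediction)."""
    return [alpha * c + (1.0 - alpha) * p for p, c in zip(prev, cur)]

def decide(probs, labels, threshold=0.6):
    """Emit a label only when the top smoothed probability clears --threshold."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else None

labels = ["A", "B", "J", "Z"]
# One confident raw frame is damped by the EMA, so no label fires yet.
probs = smooth_probs([0.25, 0.25, 0.25, 0.25], [1.0, 0.0, 0.0, 0.0], alpha=0.2)
```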
# If you later add more letters (e.g., C, D), just create those folders,
# record clips, re-run the prep step, then retrain; the pipeline will include
# whatever letters exist under sequences/train/.
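Step 3's "auto-picks classes" note suggests the class list is simply the set of subdirectories under sequences/train/; a sketch of that discovery rule (assumed, since the prep script isn't shown):

```python
import os

def discover_classes(train_dir="sequences/train"):
    """Class list = sorted subdirectory names of the train split (assumption)."""
    return sorted(
        d for d in os.listdir(train_dir)
        if os.path.isdir(os.path.join(train_dir, d))
    )
```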