Reference Links

Full walkthrough of setting up SlowFast on Windows 10 and using SlowFast for video action recognition and detection
[SlowFast reproduction and training] Training process, building an AVA dataset, and reproducing SlowFast Networks for Video Recognition
Downloading the AVA spatio-temporal detection datasets: AVA_Actions & AVA_Kinetics

Software Versions

  • conda Python 3.8
  • VS Code 1.66.2

Dataset Download

Download-AVA_Kinetics-and-AVA_Actions

Hardware Environment

  • RTX 3070

Detailed Steps

Python Environment

conda create -n slowfast python=3.8
conda activate slowfast

PyTorch

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
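
Before continuing, it is worth confirming that this PyTorch build actually sees the GPU. A minimal check, run inside the slowfast environment:

# Quick sanity check: PyTorch and CUDA 11.3 are usable in this environment.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA runtime:", torch.version.cuda)        # should report 11.3
    print("GPU:", torch.cuda.get_device_name(0))      # expect the RTX 3070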

Base Dependencies

pip install "git+https://github.com/facebookresearch/fvcore"
pip install "git+https://github.com/facebookresearch/fairscale"
pip install simplejson
pip install -U iopath
pip install psutil tensorboard opencv-python moviepy pytorchvideo
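
Once these are installed, a quick import sweep can catch anything that failed silently. The module names below are assumed from the pip package names above (e.g. opencv-python imports as cv2):

# Import sweep over the base dependencies installed above.
import importlib

modules = ["fvcore", "fairscale", "simplejson", "iopath",
           "psutil", "tensorboard", "cv2", "moviepy", "pytorchvideo"]
for name in modules:
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as exc:
        print(f"{name}: MISSING ({exc})")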

Detectron2

Place ninja.exe into the C:\Windows\System32 folder.

git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2
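
If the editable install worked, Detectron2 should import and be able to resolve the same Faster R-CNN config the demo uses later (DETECTRON2_CFG). A small check, assuming a standard Detectron2 install:

# Verify the editable Detectron2 install and that the Model Zoo config resolves.
import detectron2
from detectron2 import model_zoo

print("detectron2:", detectron2.__version__)
# Same config name the SlowFast demo references via DETECTRON2_CFG.
cfg_path = model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
print("Faster R-CNN config found at:", cfg_path)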

SlowFast

In setup.py, change PIL to Pillow.

python setup.py build develop
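
A minimal import check, assuming the editable build exposes the package as slowfast and keeps its default config factory under slowfast/config/defaults.py:

# Check that the SlowFast build is importable from this environment.
import slowfast
from slowfast.config.defaults import get_cfg  # assumed location of the default config factory

cfg = get_cfg()
print("SlowFast imported from:", slowfast.__file__)
print("Default NUM_GPUS:", cfg.NUM_GPUS)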

Running the Demo

conda install pywin32

Testing

Create a file named ava.json under demo/AVA with the following content (a quick validation sketch follows the JSON):

{
    "bend/bow (at the waist)": 0,
    "crawl": 1,
    "crouch/kneel": 2,
    "dance": 3,
    "fall down": 4,
    "get up": 5,
    "jump/leap": 6,
    "lie/sleep": 7,
    "martial art": 8,
    "run/jog": 9,
    "sit": 10,
    "stand": 11,
    "swim": 12,
    "walk": 13,
    "answer phone": 14,
    "brush teeth": 15,
    "carry/hold (an object)": 16,
    "catch (an object)": 17,
    "chop": 18,
    "climb (e.g., a mountain)": 19,
    "clink glass": 20,
    "close (e.g., a door, a box)": 21,
    "cook": 22,
    "cut": 23,
    "dig": 24,
    "dress/put on clothing": 25,
    "drink": 26,
    "drive (e.g., a car, a truck)": 27,
    "eat": 28,
    "enter": 29,
    "exit": 30,
    "extract": 31,
    "fishing": 32,
    "hit (an object)": 33,
    "kick (an object)": 34,
    "lift/pick up": 35,
    "listen (e.g., to music)": 36,
    "open (e.g., a window, a car door)": 37,
    "paint": 38,
    "play board game": 39,
    "play musical instrument": 40,
    "play with pets": 41,
    "point to (an object)": 42,
    "press": 43,
    "pull (an object)": 44,
    "push (an object)": 45,
    "put down": 46,
    "read": 47,
    "ride (e.g., a bike, a car, a horse)": 48,
    "row boat": 49,
    "sail boat": 50,
    "shoot": 51,
    "shovel": 52,
    "smoke": 53,
    "stir": 54,
    "take a photo": 55,
    "text on/look at a cellphone": 56,
    "throw": 57,
    "touch (an object)": 58,
    "turn (e.g., a screwdriver)": 59,
    "watch (e.g., TV)": 60,
    "work on a computer": 61,
    "write": 62,
    "fight/hit (a person)": 63,
    "give/serve (an object) to (a person)": 64,
    "grab (a person)": 65,
    "hand clap": 66,
    "hand shake": 67,
    "hand wave": 68,
    "hug (a person)": 69,
    "kick (a person)": 70,
    "kiss (a person)": 71,
    "lift (a person)": 72,
    "listen to (a person)": 73,
    "play with kids": 74,
    "push (another person)": 75,
    "sing to (e.g., self, a person, a group)": 76,
    "take (an object) from (a person)": 77,
    "talk to (e.g., self, a person, a group)": 78,
    "watch (a person)": 79
}
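
Before pointing the config at this file, it is worth checking that the label map matches what MODEL.NUM_CLASSES: 80 expects: exactly 80 entries with contiguous ids 0-79. A small validation sketch, assuming the file was saved as demo/AVA/ava.json and the script is run from the SlowFast root:

# Validate demo/AVA/ava.json: 80 labels with contiguous ids 0..79,
# matching MODEL.NUM_CLASSES in the demo config.
import json

with open("demo/AVA/ava.json", encoding="utf-8") as f:
    labels = json.load(f)

ids = sorted(labels.values())
assert len(labels) == 80, f"expected 80 labels, got {len(labels)}"
assert ids == list(range(80)), "ids must be contiguous 0..79"
print("ava.json OK:", len(labels), "labels")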

Download the model weight file from the official SlowFast Model Zoo.

Edit demo/AVA/SLOWFAST_32x2_R101_50_50.yaml: set CHECKPOINT_FILE_PATH and LABEL_FILE_PATH to the paths of the two files above (absolute paths are safest to avoid errors), add the input and output video paths INPUT_VIDEO and OUTPUT_FILE, and comment out the lines shown below. A small path-check sketch follows the final config.

# TENSORBOARD:
#   MODEL_VIS:
#     TOPK: 2
# WEBCAM: 0

The final file looks like this:

TRAIN:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 16
  EVAL_PERIOD: 1
  CHECKPOINT_PERIOD: 1
  AUTO_RESUME: True
  CHECKPOINT_FILE_PATH: "D:/slowfast/demo/models/SLOWFAST_32x2_R101_50_50.pkl" #path to pretrain model
  CHECKPOINT_TYPE: pytorch
DATA:
  NUM_FRAMES: 32
  SAMPLING_RATE: 2
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 256
  INPUT_CHANNEL_NUM: [3, 3]
DETECTION:
  ENABLE: True
  ALIGNED: False
AVA:
  BGR: False
  DETECTION_SCORE_THRESH: 0.8
  TEST_PREDICT_BOX_LISTS: ["person_box_67091280_iou90/ava_detection_val_boxes_and_labels.csv"]
SLOWFAST:
  ALPHA: 4
  BETA_INV: 8
  FUSION_CONV_CHANNEL_RATIO: 2
  FUSION_KERNEL_SZ: 5
RESNET:
  ZERO_INIT_FINAL_BN: True
  WIDTH_PER_GROUP: 64
  NUM_GROUPS: 1
  DEPTH: 101
  TRANS_FUNC: bottleneck_transform
  STRIDE_1X1: False
  NUM_BLOCK_TEMP_KERNEL: [[3, 3], [4, 4], [6, 6], [3, 3]]
  SPATIAL_DILATIONS: [[1, 1], [1, 1], [1, 1], [2, 2]]
  SPATIAL_STRIDES: [[1, 1], [2, 2], [2, 2], [1, 1]]
NONLOCAL:
  LOCATION: [[[], []], [[], []], [[6, 13, 20], []], [[], []]]
  GROUP: [[1, 1], [1, 1], [1, 1], [1, 1]]
  INSTANTIATION: dot_product
  POOL: [[[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]]]
BN:
  USE_PRECISE_STATS: False
  NUM_BATCHES_PRECISE: 200
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 1e-7
  OPTIMIZING_METHOD: sgd
MODEL:
  NUM_CLASSES: 80
  ARCH: slowfast
  MODEL_NAME: SlowFast
  LOSS_FUNC: bce
  DROPOUT_RATE: 0.5
  HEAD_ACT: sigmoid
TEST:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 8
DATA_LOADER:
  NUM_WORKERS: 2
  PIN_MEMORY: True

NUM_GPUS: 1
NUM_SHARDS: 1
RNG_SEED: 0
OUTPUT_DIR: .
# TENSORBOARD:
#   MODEL_VIS:
#     TOPK: 2
DEMO:
  ENABLE: True
  LABEL_FILE_PATH: "D:/slowfast/demo/AVA/ava.json"  # Add local label file path here.
  INPUT_VIDEO: "D:/slowfast/demo/AVA/1.mp4"
  OUTPUT_FILE: "D:/slowfast/demo/AVA/1_output.mp4"
  # WEBCAM: 0
  DETECTRON2_CFG: "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
  DETECTRON2_WEIGHTS: detectron2://COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/model_final_280758.pkl
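
Most demo failures on Windows come down to wrong paths, so a quick existence check over the input paths set above helps rule that out. The D:/slowfast/... values below simply mirror the example config; substitute your own:

# Check that the input files referenced in SLOWFAST_32x2_R101_50_50.yaml exist.
# Example paths mirror the config above; replace them with your own.
import os

paths = {
    "CHECKPOINT_FILE_PATH": "D:/slowfast/demo/models/SLOWFAST_32x2_R101_50_50.pkl",
    "LABEL_FILE_PATH": "D:/slowfast/demo/AVA/ava.json",
    "INPUT_VIDEO": "D:/slowfast/demo/AVA/1.mp4",
}
for key, path in paths.items():
    status = "OK" if os.path.isfile(path) else "MISSING"
    print(f"{key}: {path} -> {status}")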

Rename any video from the dataset to 1.mp4, then run the following command from the SlowFast root directory:

python .\tools\run_net.py --cfg .\demo\AVA\SLOWFAST_32x2_R101_50_50.yaml  