This project implements person fall detection based on PaddleDetection. A dedicated inference command first predicts the keypoints of a video, producing a json result file and a visualization video; the code in source.py then judges whether a fall has occurred, prints the times of the fall frames, marks the result at the top-left corner of the detection box in the video, and saves the annotated video as output.mp4. In the example, 78 fall frames are detected.

Fall Detection Based on PaddleDetection
This project is a fall detection example based on PaddleDetection person keypoint prediction.
For details on how to use PaddleDetection, please refer to the GitHub repository: PaddleDetection
1 Obtain the video keypoint prediction results
Run the following command under the PaddleDetection directory to predict the video results (the related models can be obtained from the model_zoo on GitHub):
python deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/ppyolo_r50vd_dcn_2x_coco/ --keypoint_model_dir=output_inference/hrnet_w32_256x192/ --video_file=./pose_demo/test.mp4 --device=gpu --save_res=True
After prediction finishes, the result json file det_keypoint_unite_video_results.json is generated in the current directory; this is the kpts_results.json file used here.
The visualized prediction video test.mp4 is generated in the output directory; this is the video.mp4 file used here.
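Before moving on to the judgment step, it can help to take a quick look at the structure of the result file. The snippet below is only a minimal sketch: it assumes nothing beyond the file being valid JSON with one entry per processed frame, since the exact schema written by det_keypoint_unite_infer.py can differ between PaddleDetection versions.

import json

# Load the keypoint prediction results saved with --save_res=True.
with open("kpts_results.json", "r") as rf:
    kpts_data = json.load(rf)

# One entry per processed video frame (468 in this example).
print("number of frames:", len(kpts_data))

# Print a truncated view of the first entry to see the schema your
# PaddleDetection version produces (keypoints, scores, detection boxes, ...).
print(str(kpts_data[0])[:300])

If the entries turn out to be dicts keyed by frame id rather than a plain list, adjust the indexing accordingly.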
2 Judge whether a fall occurred based on the keypoint results
Run the commands below to apply the fall-down judgment logic to the keypoint prediction results in kpts_results.json. The times of the detected fall frames are printed; at the same time, a fall_down label is drawn at the top-left corner of the detection box at the corresponding times in the video, and the result video is saved as output.mp4.
The judgment logic is implemented in source.py.
In [1]
import os
import sys
import cv2
import numpy as np
import json
import collections
from source import check_fall_down, videovis
In [3]
# 1) The first argument of the script is the keypoint prediction result json file
jsonf = "kpts_results.json"
with open(jsonf, "r") as rf:
    kpts_data = json.load(rf)
print("all data length: {}".format(len(kpts_data)))
# 2) To print the fall-down text on the video, place the keypoint visualization video in the same directory
videof = "video.mp4"
# 3) Load the keypoint results and feed them into the judgment function
fallframes = check_fall_down(kpts_data)
# 4) Display the detected fall frames on the video
videovis(videof, kpts_data, fallframes)
all data length: 468
fall_down frames: 78
time: 5.5s, fall down detected
time: 6.0s, fall down detected
time: 6.5s, fall down detected
time: 7.0s, fall down detected
time: 7.5s, fall down detected
time: 8.0s, fall down detected
time: 8.5s, fall down detected
time: 9.0s, fall down detected
time: 9.5s, fall down detected
time: 10.0s, fall down detected
time: 10.5s, fall down detected
time: 11.0s, fall down detected
time: 11.5s, fall down detected
time: 12.0s, fall down detected
time: 12.5s, fall down detected
time: 18.0s, fall down detected
time: 18.5s, fall down detected
time: 19.5s, fall down detected
time: 20.5s, fall down detected
time: 21.0s, fall down detected
time: 21.5s, fall down detected
time: 23.5s, fall down detected
time: 25.5s, fall down detected
time: 26.0s, fall down detected
time: 29.5s, fall down detected
time: 30.0s, fall down detected
time: 32.0s, fall down detected
time: 32.5s, fall down detected
time: 33.0s, fall down detected
time: 35.0s, fall down detected
time: 35.5s, fall down detected
time: 36.0s, fall down detected
time: 38.0s, fall down detected
time: 38.5s, fall down detected
print fall down result in video: video.mp4
fps: 12, frame_count: 468
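For reference, the sketch below illustrates one common way such a judgment and visualization could be written; it is not the actual check_fall_down / videovis code from source.py. It assumes each frame provides a person detection box as (x1, y1, x2, y2) and uses a simple box aspect-ratio test (a lying person's box is wider than it is tall), with cv2.putText drawing the fall_down label at the top-left corner of the box.

import cv2

def is_fall_down(box, ratio_thresh=1.0):
    # Assumed heuristic: the box of a fallen person is wider than tall.
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return h > 0 and (w / h) > ratio_thresh

def annotate_video(video_path, boxes_per_frame, out_path="output.mp4"):
    # boxes_per_frame: assumed dict mapping frame index -> (x1, y1, x2, y2).
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        box = boxes_per_frame.get(frame_idx)
        if box is not None and is_fall_down(box):
            x1, y1 = int(box[0]), int(box[1])
            # Draw the result at the top-left corner of the detection box
            # and report the corresponding time, as the project does.
            cv2.putText(frame, "fall_down", (x1, max(y1 - 5, 15)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
            print("time: {:.1f}s, fall down detected".format(frame_idx / fps))
        writer.write(frame)
        frame_idx += 1
    cap.release()
    writer.release()

A keypoint-based rule (for example, checking whether the head and hip keypoints end up at a similar height) could replace the aspect-ratio test; the real logic in source.py may combine several such cues.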