Purpose
A memo from running pose detection with OpenPose on USB camera images using Python code.
Preparation
Ubuntu 16.04 PC
Web camera (USB)
Following the article on the OpenPose Python API, proceed with the setup until the test code (1_extract_pose.py) runs.
The CUDA/cuDNN versions used are:
CUDA 8.0
cuDNN 6.0
I also referred to the article "OpenPoseを動かしてみた。".
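Before running the full script, it can help to confirm that OpenCV and the OpenPose Python bindings are importable from the build tree. The sketch below (the file name check_setup.py is just an example) assumes OpenPose was built with BUILD_PYTHON enabled and that it is run from openpose/build/examples/tutorial_python, the same layout test.py below relies on.
check_setup.py
# Minimal import check (a sketch; the '../../python' path assumes this
# script is run from openpose/build/examples/tutorial_python)
import sys

sys.path.append('../../python')

try:
    import cv2
    print("OpenCV:", cv2.__version__)
except ImportError:
    print("OpenCV for Python is not installed")

try:
    from openpose import OpenPose  # legacy Python API wrapper used by test.py
    print("OpenPose Python bindings found")
except ImportError:
    print("openpose module not found -- was BUILD_PYTHON enabled in CMake?")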
Code
test.py
# From Python
# It requires OpenCV installed for Python
import sys
import os
from sys import platform
import cv2

# Remember to add your installation path here
# Option a
dir_path = os.path.dirname(os.path.realpath(__file__))
if platform == "win32":
    sys.path.append(dir_path + '/../../python/openpose/')
else:
    sys.path.append('../../python')
# Option b
# If you run `make install` (default path is `/usr/local/python` for Ubuntu), you can also access the OpenPose/python module from there. This will install OpenPose and the python library at your desired installation path. Ensure that this is in your python path in order to use it.
# sys.path.append('/usr/local/python')

# Parameters for OpenPose. Take a look at the C++ OpenPose example for the meaning of each component. Ensure all below are filled.
try:
    from openpose import *
except ImportError:
    raise Exception('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')

params = dict()
params["logging_level"] = 3
params["output_resolution"] = "-1x-1"
params["net_resolution"] = "-1x368"
params["model_pose"] = "BODY_25"
params["alpha_pose"] = 0.6
params["scale_gap"] = 0.3
params["scale_number"] = 1
params["render_threshold"] = 0.05
# If GPU version is built, and multiple GPUs are available, set the ID here
params["num_gpu_start"] = 0
params["disable_blending"] = False
# Ensure you point to the correct path where models are located
# params["default_model_folder"] = dir_path + "/../../../models/"
params["default_model_folder"] = "../../../models/"

# Constructing the OpenPose object allocates GPU memory
openpose = OpenPose(params)

# Open the webcam (device 0)
cap = cv2.VideoCapture(0)
while True:
    # Read a new frame from the camera
    ret, img = cap.read()
    if not ret:
        break
    # img = cv2.imread("../../../examples/media/COCO_val2014_000000000192.jpg")
    # Output keypoints and the image with the human skeleton blended on it
    keypoints, output_image = openpose.forward(img, True)
    # Print the human pose keypoints, i.e., a [#people x #keypoints x 3]-dimensional numpy object with the keypoints of all the people on that image
    print(keypoints)
    # Display the image
    cv2.imshow("output", output_image)
    # Press "q" to quit
    if cv2.waitKey(15) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
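The keypoints value returned by openpose.forward(img, True) is a [#people x #keypoints x 3] numpy array holding (x, y, confidence) per keypoint, so individual joints can be read by indexing it. The snippet below is a sketch of reading the nose of the first detected person; it assumes the BODY_25 ordering (index 0 = nose) and uses a stand-in array in place of a real detection so it can run on its own.
import numpy as np

# Stand-in for the array returned by openpose.forward(img, True):
# shape [#people x 25 x 3] for BODY_25, each entry is (x, y, confidence).
keypoints = np.array([[[320.0, 180.0, 0.92]] + [[0.0, 0.0, 0.0]] * 24])

if keypoints.shape[0] > 0:            # at least one person detected
    x, y, conf = keypoints[0][0]      # person 0, keypoint 0 (nose in BODY_25)
    if conf > 0.05:                   # same threshold as render_threshold above
        print("Nose of person 0: ({:.1f}, {:.1f}), conf {:.2f}".format(x, y, conf))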
Test
Run the following; if pose detection results from OpenPose on the webcam images are displayed, it is working.
/openpose/build/examples/tutorial_python$ python test.py
CodingError workarounds
"Prototxt file not found: /usr/local/python/openpose/../../../models" error
$ python 1_extract_pose.py
Error:
Prototxt file not found: /usr/local/python/openpose/../../../models/pose/body_25/pose_deploy.prototxt.
Possible causes:
1. Not downloading the OpenPose trained models.
2. Not running OpenPose from the same directory where the `model` folder is located.
3. Using paths with spaces.
This happens because the models folder cannot be resolved from the directory where the script is run.
Fix 1_extract_pose.py so that openpose/models can be found:
#params["default_model_folder"] = dir_path + "/../../../models/"
params["default_model_folder"] = "../../../models/"