Writing OpenPose FaceEstimation Without the Wrapper

# Overview

OpenPose's FaceEstimation is implemented with the wrapper in examples/openpose/openpose.cpp, but in that form it is hard to call from external code and awkward to work with. I searched for a simpler way to invoke it but could not find one, so I modified examples/tutorial_pose/1_extract_from_image.cpp and implemented it myself.

# Code

1_extract_from_image.cpp
DEFINE_string(face_net_resolution,           "368x368", "face net resolution");

~~

    // Step 3 - Initialize all required classes
    op::ScaleAndSizeExtractor scaleAndSizeExtractor(netInputSize, outputSize, FLAGS_scale_number, FLAGS_scale_gap);
    op::CvMatToOpInput cvMatToOpInput{poseModel};
    op::CvMatToOpOutput cvMatToOpOutput;
    op::PoseExtractorCaffe poseExtractorCaffe{poseModel, FLAGS_model_folder, FLAGS_num_gpu_start};
    // Face network input/output resolution
    const auto faceNetInputSize = op::flagsToPoint(FLAGS_face_net_resolution, "368x368 (multiples of 16)");
    const auto faceNetOutputSize = faceNetInputSize;
    op::FaceExtractorCaffe faceExtractorCaffe{faceNetInputSize, faceNetOutputSize, FLAGS_model_folder, FLAGS_num_gpu_start};
    op::PoseCpuRenderer poseRenderer{poseModel, (float)FLAGS_render_threshold, !FLAGS_disable_blending, (float)FLAGS_alpha_pose};

    op::FaceCpuRenderer faceRenderer{(float)FLAGS_render_threshold};
    op::OpOutputToCvMat opOutputToCvMat;
    op::FrameDisplayer frameDisplayer{"OpenPose Tutorial - Example 1", outputSize};
    // Step 4 - Initialize resources on desired thread (in this case single thread, i.e. we init resources here)
    poseExtractorCaffe.initializationOnThread();
    poseRenderer.initializationOnThread();
    faceExtractorCaffe.initializationOnThread();
    faceRenderer.initializationOnThread();

    // ------------------------- POSE ESTIMATION AND RENDERING -------------------------
    // Step 1 - Read and load image, error if empty (possibly wrong path)
    // Alternative: cv::imread(FLAGS_image_path, CV_LOAD_IMAGE_COLOR);
    cv::Mat inputImage = op::loadImage(FLAGS_image_path, CV_LOAD_IMAGE_COLOR);
    if(inputImage.empty())
        op::error("Could not open or find the image: " + FLAGS_image_path, __LINE__, __FUNCTION__, __FILE__);
    const op::Point<int> imageSize{inputImage.cols, inputImage.rows};
    // Step 2 - Get desired scale sizes
    std::vector<double> scaleInputToNetInputs;
    std::vector<op::Point<int>> netInputSizes;
    double scaleInputToOutput;
    op::Point<int> outputResolution;
    std::tie(scaleInputToNetInputs, netInputSizes, scaleInputToOutput, outputResolution)
        = scaleAndSizeExtractor.extract(imageSize);
    // Step 3 - Format input image to OpenPose input and output formats
    const auto netInputArray = cvMatToOpInput.createArray(inputImage, scaleInputToNetInputs, netInputSizes);
    auto outputArray = cvMatToOpOutput.createArray(inputImage, scaleInputToOutput, outputResolution);
    // Step 4 - Estimate poseKeypoints
    poseExtractorCaffe.forwardPass(netInputArray, imageSize, scaleInputToNetInputs);
    const auto poseKeypoints = poseExtractorCaffe.getPoseKeypoints();
    
    //printf("aaa %d %d %d %d\n", inputImage.cols, inputImage.rows, inputImage.type(), CV_8UC3);
    cv::Mat grayImg;
    cvtColor( inputImage, grayImg, CV_BGR2GRAY );
    cv::CascadeClassifier cascade;
    std::string cascadeName = "haarcascade_frontalface_alt.xml";
    cascade.load( cascadeName );
    std::vector<cv::Rect> faces;
    cascade.detectMultiScale( grayImg, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(60, 60));
    // (Optional) visualize the Haar detections:
    //for (const auto& r : faces)
    //    cv::rectangle(inputImage, r, CV_RGB(255,0,0), 2, 8, 0);
    //cv::imshow("image", inputImage);

    //cv::waitKey(30);
    
    std::vector<op::Rectangle<float>> faceRectangles;
    if (faces.empty())
        op::error("No face detected by the Haar cascade.", __LINE__, __FUNCTION__, __FILE__);
    // Convert the first OpenCV detection into an OpenPose rectangle
    op::Rectangle<float> fr;
    fr.x = (float)faces[0].x;
    fr.y = (float)faces[0].y;
    fr.width = (float)faces[0].width;
    fr.height = (float)faces[0].height;
    faceRectangles.push_back(fr);
    faceExtractorCaffe.forwardPass(faceRectangles, inputImage);
    const auto face_poseKeypoints = faceExtractorCaffe.getFaceKeypoints();
    
    // Step 5 - Render poseKeypoints
    poseRenderer.renderPose(outputArray, poseKeypoints, scaleInputToOutput);
    faceRenderer.renderFace(outputArray, face_poseKeypoints, scaleInputToOutput);

I added the definitions and initialization of FaceExtractorCaffe and FaceCpuRenderer, detect the face position with an OpenCV Haar cascade, run faceExtractorCaffe.forwardPass with the detection rectangle and the image as input, and render the keypoints with faceRenderer.

# Results
![result.png](https://qiita-image-store.s3.amazonaws.com/0/257741/7dda41fc-9a21-4562-085a-0fda3497bce3.png)

The facial keypoints were detected successfully!
