菜鸟教程小白 posted on 2022-12-12 10:01:13

iphone - How to place images on the eyes and mouth using CIFeatures


I found this link: http://www.icapps.be/face-detection-with-core-image-on-live-video/ which places an image on facial feature points. In the same way, I need to detect the eyes and place an image there.

To keep it simple, I just need to place an image over a person's eyes. How can I do that? Any hints would be greatly appreciated!
Best Answer
<pre><code>for ( CIFaceFeature *ff in features ) {
    // find the correct position for the square layer within the previewLayer
    // the feature box originates in the bottom left of the video frame.
    // (Bottom right if mirroring is turned on)
    CGRect faceRect = [ff bounds];

    // flip preview width and height
    CGFloat temp = faceRect.size.width;
    faceRect.size.width = faceRect.size.height;
    faceRect.size.height = temp;
    temp = faceRect.origin.x;
    faceRect.origin.x = faceRect.origin.y;
    faceRect.origin.y = temp;

    // scale coordinates so they fit in the preview box, which may be scaled
    CGFloat widthScaleBy = previewBox.size.width / clap.size.height;
    CGFloat heightScaleBy = previewBox.size.height / clap.size.width;
    faceRect.size.width *= widthScaleBy;
    faceRect.size.height *= heightScaleBy;
    faceRect.origin.x *= widthScaleBy;
    faceRect.origin.y *= heightScaleBy;

    if ( isMirrored )
        faceRect = CGRectOffset(faceRect, previewBox.origin.x + previewBox.size.width - faceRect.size.width - (faceRect.origin.x * 2), previewBox.origin.y);
    else
        faceRect = CGRectOffset(faceRect, previewBox.origin.x, previewBox.origin.y);
}
</code></pre>
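The snippet above assumes that previewBox, clap, and isMirrored already exist. A minimal sketch of where those values might come from, assuming you run this in the video data output delegate and hold a reference to the AVCaptureVideoPreviewLayer (the variable names sampleBuffer and previewLayer are placeholders, not from the original answer):

<pre><code>// Sketch only: obtain the clean aperture of the current frame and the box the
// preview layer draws into.
CMFormatDescriptionRef fdesc = CMSampleBufferGetFormatDescription(sampleBuffer);
CGRect clap = CMVideoFormatDescriptionGetCleanAperture(fdesc, false /* originIsAtTopLeft */);
CGRect previewBox = previewLayer.bounds;   // assumes the video fills the layer exactly
BOOL isMirrored = previewLayer.connection.isVideoMirrored;
</code></pre>

If the preview layer uses an aspect-fit or aspect-fill video gravity, the preview box also has to account for letterboxing or cropping (Apple's SquareCam sample computes it with a helper); using the layer bounds here is only a simplification.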

This gets you the correct face rectangle, but you still need to refine it down to the eye positions in the image.

The following will help you get each position:

<pre><code>-(void)markFaces:(CIImage *)image
{
    // draw a CI image with the previously loaded face detection picture
    @autoreleasepool {
        CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                   context:nil
                                                   options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

        // create an array containing all the detected faces from the detector
        NSArray *features = [detector featuresInImage:image];

        NSLog(@"The Address Of CIImage In: %p %s", image, __FUNCTION__);
        NSLog(@"Array Count %lu", (unsigned long)[features count]);

        NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults]; // not used further in this snippet

        if ([features count] == 0)
        {
            // No face is present
        }
        else
        {
            for (CIFaceFeature *faceFeature in features)
            {
                if (faceFeature.hasMouthPosition)
                {
                    // Your code based on the mouth position
                }

                if (faceFeature.hasLeftEyePosition) {
                    // Write your code. Note: the points are mirrored, so you need to take care of that
                }

                if (faceFeature.hasRightEyePosition) {
                    // Write your code. Note: the points are mirrored, so you need to take care of that
                }
            }
        }
    }
}
</code></pre>
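Inside the hasLeftEyePosition / hasRightEyePosition branches you can then drop an image view onto each eye. A minimal sketch, not the answerer's exact code: it assumes the CIImage and the view displaying it share the same size and scale, and uses a hypothetical asset named "eye_overlay". Remember that Core Image uses a bottom-left origin, so the y value has to be flipped for UIKit.

<pre><code>// Sketch only: overlay a hypothetical "eye_overlay" image on the left eye.
// Assumes the displayed view and the CIImage share the same coordinate scale.
CGFloat imageHeight = image.extent.size.height;

if (faceFeature.hasLeftEyePosition) {
    CGPoint eye = faceFeature.leftEyePosition;
    // Core Image uses a bottom-left origin; UIKit uses a top-left origin.
    CGPoint eyeInView = CGPointMake(eye.x, imageHeight - eye.y);

    UIImageView *eyeView =
        [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"eye_overlay"]];
    eyeView.center = eyeInView;
    [self.view addSubview:eyeView];
}
</code></pre>

The mouth can be handled the same way with faceFeature.mouthPosition; if the view is scaled relative to the image, multiply the point by width/height scale factors first, as in the first snippet.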
                                   
Regarding "iphone - How to place images on the eyes and mouth using CIFeatures", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/15757372/
                                       