菜鸟教程小白 posted on 2022-12-11 18:41:26

ios - How do I apply a filter to every frame of a video in an AVCaptureSession?


I'm writing an app that needs to apply a filter to video captured with an AVCaptureSession and write the filtered output to a file. I'm currently filtering each video frame using CIFilter and CIImage. The code is as follows:

<pre><code>func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    ...
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    // (the options dictionary was lost when this post was scraped; an empty one stands in)
    let options: [String: Any] = [:]
    let cameraImage = CIImage(cvImageBuffer: pixelBuffer, options: options)
    // Build the blur filter for this frame
    let filter = CIFilter(name: "CIGaussianBlur")!
    filter.setValue(70.0, forKey: kCIInputRadiusKey)
    filter.setValue(cameraImage, forKey: kCIInputImageKey)
    let result = filter.outputImage!
    // Create a destination pixel buffer with the same geometry and format as the source
    var pixBuffer: CVPixelBuffer? = nil
    let fmt = CVPixelBufferGetPixelFormatType(pixelBuffer)
    CVPixelBufferCreate(kCFAllocatorSystemDefault,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer),
                        fmt,
                        CVBufferGetAttachments(pixelBuffer, .shouldPropagate),
                        &pixBuffer)

    CVBufferPropagateAttachments(pixelBuffer, pixBuffer!)
    let eaglContext = EAGLContext(api: EAGLRenderingAPI.openGLES3)!
    eaglContext.isMultiThreaded = true
    // (the context options dictionary was also lost in scraping)
    let contextOptions: [String: Any] = [:]
    let context = CIContext(eaglContext: eaglContext, options: contextOptions)
    // Render the filtered image into the new buffer
    CVPixelBufferLockBaseAddress(pixBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    context.render(result, to: pixBuffer!)
    CVPixelBufferUnlockBaseAddress(pixBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    // Re-wrap the rendered buffer in a CMSampleBuffer, reusing the source timing
    var timeInfo = CMSampleTimingInfo(duration: sampleBuffer.duration,
                                      presentationTimeStamp: sampleBuffer.presentationTimeStamp,
                                      decodeTimeStamp: sampleBuffer.decodeTimeStamp)
    var sampleBuf: CMSampleBuffer? = nil
    CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault,
                                             pixBuffer!,
                                             sampleBuffer.formatDescription!,
                                             &timeInfo,
                                             &sampleBuf)

    // write to video file
    let ret = assetWriterInput.append(sampleBuf!)
    ...
}
</code></pre>

The <code>ret</code> returned by <code>AVAssetWriterInput.append</code> is always false. What am I doing wrong here? Also, the approach I'm using is inefficient: several temporary copies get created along the way. Can it be done in place?
<hr><h1>Best Answer</h1>
I ran into the same problem using almost identical code. I found that something was wrong with the pixel buffer created for rendering: <code>append(sampleBuffer:)</code> always returned false, and <code>assetWriter.error</code> was

<blockquote>
<p>Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could
not be completed" UserInfo={NSUnderlyingError=0x17024ba30 {Error
Domain=NSOSStatusErrorDomain Code=-12780 "(null)"},
NSLocalizedFailureReason=An unknown error occurred (-12780),
NSLocalizedDescription=The operation could not be completed}</p>
</blockquote>
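
As an aside, the false return value alone tells you nothing; the writer's status and error are where the real diagnosis lives. A minimal check along these lines (a sketch, assuming an <code>assetWriter</code>/<code>assetWriterInput</code> pair like the one in the question's setup) makes the failure visible:

<pre><code>// Sketch: surface the writer's error instead of silently ignoring a false return.
// Assumes the `assetWriter` / `assetWriterInput` pair from the question's setup.
if assetWriterInput.isReadyForMoreMediaData {
    if !assetWriterInput.append(sampleBuf!) {
        // Once an append fails, the writer moves to .failed and rejects further input
        print("append failed, status: \(assetWriter.status.rawValue), error: \(String(describing: assetWriter.error))")
    }
}
</code></pre>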

They say it's a bug (as described <a href="https://stackoverflow.com/a/46350466/5272316" rel="noreferrer noopener nofollow">here</a>), filed at <a href="https://bugreport.apple.com/web/?problemID=34574848" rel="noreferrer noopener nofollow">https://bugreport.apple.com/web/?problemID=34574848</a>.

But unexpectedly, I found that the problem goes away when rendering into the original pixel buffer. See the following code:

<pre><code>let sourcePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let sourceImage = CIImage(cvImageBuffer: sourcePixelBuffer)
// (the parameter dictionary was lost in scraping; passing the source image is the evident intent)
let filter = CIFilter(name: "CIGaussianBlur", withInputParameters: [kCIInputImageKey: sourceImage])!
let filteredImage = filter.outputImage!

// Create a fresh pixel buffer matching the source's geometry, format, and attachments
var pixelBuffer: CVPixelBuffer? = nil
let width = CVPixelBufferGetWidth(sourcePixelBuffer)
let height = CVPixelBufferGetHeight(sourcePixelBuffer)
let pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer)
let attributes = CVBufferGetAttachments(sourcePixelBuffer, .shouldPropagate)!
CVPixelBufferCreate(nil, width, height, pixelFormat, attributes, &pixelBuffer)
CVBufferPropagateAttachments(sourcePixelBuffer, pixelBuffer!)

var filteredPixelBuffer = pixelBuffer!    // this never works
filteredPixelBuffer = sourcePixelBuffer   // 0_0

let context = CIContext(options: nil)
context.render(filteredImage, to: filteredPixelBuffer) // modifying original image buffer here!

let presentationTimestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
var timing = CMSampleTimingInfo(duration: kCMTimeInvalid, presentationTimeStamp: presentationTimestamp, decodeTimeStamp: kCMTimeInvalid)

var processedSampleBuffer: CMSampleBuffer? = nil
var formatDescription: CMFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(nil, filteredPixelBuffer, &formatDescription)
CMSampleBufferCreateReadyWithImageBuffer(nil, filteredPixelBuffer, formatDescription!, &timing, &processedSampleBuffer)

print(assetInput!.append(processedSampleBuffer!))
</code></pre>

Of course, we all know you are not allowed to modify a sample buffer, but somehow this approach yields normally processed video. The trick is dirty, and I can't say whether it will hold up when you have a preview layer or some concurrent processing routines.
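
For completeness, a less fragile route (not from the original answer) is to let AVFoundation allocate the destination buffers itself: an <code>AVAssetWriterInputPixelBufferAdaptor</code> vends a <code>CVPixelBufferPool</code> whose buffers are guaranteed compatible with the writer, and a single long-lived <code>CIContext</code> can render each filtered frame straight into a pooled buffer. This also addresses the efficiency concern in the question, since no per-frame <code>CIContext</code> or hand-built <code>CMSampleBuffer</code> is needed. A minimal sketch, assuming the same capture/writer setup as the question (names such as <code>adaptor</code> and <code>ciContext</code> are illustrative):

<pre><code>import AVFoundation
import CoreImage

// One-time setup, alongside the AVAssetWriter configuration.
// BGRA is a common choice for CIContext rendering; adjust to your pipeline.
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: assetWriterInput,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
let ciContext = CIContext() // create once, not per frame

// Per frame, called from captureOutput(_:didOutputSampleBuffer:from:):
func writeFiltered(_ sampleBuffer: CMSampleBuffer) {
    guard let source = CMSampleBufferGetImageBuffer(sampleBuffer),
          let pool = adaptor.pixelBufferPool, // nil until the writer session has started
          assetWriterInput.isReadyForMoreMediaData else { return }

    let filter = CIFilter(name: "CIGaussianBlur")!
    filter.setValue(CIImage(cvImageBuffer: source), forKey: kCIInputImageKey)
    filter.setValue(70.0, forKey: kCIInputRadiusKey)
    guard let filtered = filter.outputImage else { return }

    // Rent a writer-compatible buffer from the pool and render into it
    var rented: CVPixelBuffer? = nil
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &rented)
    guard let outputBuffer = rented else { return }
    ciContext.render(filtered, to: outputBuffer)

    // The adaptor builds the sample buffer internally; only a timestamp is needed
    if !adaptor.append(outputBuffer, withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) {
        print("adaptor append failed")
    }
}
</code></pre>

Because the pool's buffers already match what the writer expects, this sidesteps the format-description mismatch that the -12780 error apparently stems from, without touching the source buffer.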
                                   
Regarding "ios - How do I apply a filter to every frame of a video in an AVCaptureSession?", we found a similar question on Stack Overflow:
<a href="https://stackoverflow.com/questions/46216751/" rel="noreferrer noopener nofollow">https://stackoverflow.com/questions/46216751/</a>
                                       