Posted by 菜鸟教程小白 on 2022-12-12 11:40:45

ios - CVPixelBufferGetBaseAddress returns NULL in the VTDecompressionSessionDecodeFrame callback


<p><p>I am using a VTDecompressionSession to decode an H.264 stream received over the network. I need to copy the YUV data out of the image buffer it delivers. I have verified that the typeID of the given imageBuffer equals <code>CVPixelBufferGetTypeID()</code>.</p>

<p>But whenever I try to retrieve the base address of the buffer, or of any of its planes, I always get NULL back. The OSStatus iOS passes in is 0, so my assumption is that nothing is wrong there. Maybe I just don't know how to extract the data. Can anyone help?</p>

<pre><code>void decompressionCallback(void * CM_NULLABLE decompressionOutputRefCon,
                           void * CM_NULLABLE sourceFrameRefCon,
                           OSStatus status,
                           VTDecodeInfoFlags infoFlags,
                           CM_NULLABLE CVImageBufferRef imageBuffer,
                           CMTime presentationTimeStamp,
                           CMTime presentationDuration )
{
    CFShow(imageBuffer);
    size_t dataSize = CVPixelBufferGetDataSize(imageBuffer);
    void * decodedBuffer = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(pYUVBuffer, decodedBuffer, dataSize);
}
</code></pre>

<p>Edit: here is also a dump of the CVImageBufferRef object. One thing that looks suspicious is that I would expect three planes (Y, U, and V), but there are only two. My expectation was to use <code>CVPixelBufferGetBaseAddressOfPlane</code> to extract each plane of data. I am implementing this to remove a dependency on a separate software codec, so I need to extract each plane this way because the rest of my rendering pipeline requires it.</p>

<blockquote><pre><code>{type = immutable dict, count = 5, entries =&gt;
  0 : {contents = "PixelFormatDescription"} = {type = immutable dict, count = 10, entries =&gt;
    0 : {contents = "Planes"} = {type = mutable-small, count = 2, values = (
      0 : {type = mutable dict, count = 3, entries =&gt;
        0 : {contents = "FillExtendedPixelsCallback"} = {length = 24, capacity = 24, bytes = 0x000000000000000030139783010000000000000000000000}
        1 : {contents = "BitsPerBlock"} = {value = +8, type = kCFNumberSInt32Type}
        2 : {contents = "BlackBlock"} = {length = 1, capacity = 1, bytes = 0x10} }
      1 : {type = mutable dict, count = 5, entries =&gt;
        2 : {contents = "HorizontalSubsampling"} = {value = +2, type = kCFNumberSInt32Type}
        3 : {contents = "BlackBlock"} = {length = 2, capacity = 2, bytes = 0x8080}
        4 : {contents = "BitsPerBlock"} = {value = +16, type = kCFNumberSInt32Type}
        5 : {contents = "VerticalSubsampling"} = {value = +2, type = kCFNumberSInt32Type}
        6 : {contents = "FillExtendedPixelsCallback"} = {length = 24, capacity = 24, bytes = 0x0000000000000000ac119783010000000000000000000000} } )}
    2 : {contents = "IOSurfaceOpenGLESFBOCompatibility"} = {value = true}
    3 : {contents = "ContainsYCbCr"} = {value = true}
    4 : {contents = "IOSurfaceOpenGLESTextureCompatibility"} = {value = true}
    5 : {contents = "ComponentRange"} = {contents = "VideoRange"}
    6 : {contents = "PixelFormat"} = {value = +875704438, type = kCFNumberSInt32Type}
    7 : {contents = "IOSurfaceCoreAnimationCompatibility"} = {value = true}
    9 : {contents = "ContainsAlpha"} = {value = false}
    10 : {contents = "ContainsRGB"} = {value = false}
    11 : {contents = "OpenGLESCompatibility"} = {value = true} }
  2 : {contents = "ExtendedPixelsRight"} = {value = +8, type = kCFNumberSInt32Type}
  3 : {contents = "ExtendedPixelsTop"} = {value = +0, type = kCFNumberSInt32Type}
  4 : {contents = "ExtendedPixelsLeft"} = {value = +0, type = kCFNumberSInt32Type}
  5 : {contents = "ExtendedPixelsBottom"} = {value = +0, type = kCFNumberSInt32Type} }
propagatedAttachments={type = mutable dict, count = 7, entries =&gt;
  0 : {contents = "CVImageBufferChromaLocationTopField"} = Left
  1 : {contents = "CVImageBufferYCbCrMatrix"} = {contents = "ITU_R_601_4"}
  2 : {contents = "ColorInfoGuessedBy"} = {contents = "VideoToolbox"}
  5 : {contents = "CVImageBufferColorPrimaries"} = SMPTE_C
  8 : {contents = "CVImageBufferTransferFunction"} = {contents = "ITU_R_709_2"}
  10 : {contents = "CVImageBufferChromaLocationBottomField"} = Left
  12 : {contents = "CVFieldCount"} = {value = +1, type = kCFNumberSInt32Type} }
nonPropagatedAttachments={type = mutable dict, count = 0, entries =&gt; }
</code></pre></blockquote></p>
<br><hr><h1><strong>Best Answer (Recommended)</strong></h1><br>
<p><p>So your format is <code>kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v'</code>, and two planes make sense for 4:2:0 YUV data: the first plane is a full-size, single-channel Y bitmap, and the second is a half-width, half-height, two-channel interleaved CbCr bitmap.</p>

<p>You're right that for planar data you should call <code>CVPixelBufferGetBaseAddressOfPlane</code>, although you should also be able to use <code>CVPixelBufferGetBaseAddress</code> and interpret its result as a <code>CVPlanarPixelBufferInfo_YCbCrBiPlanar</code>. So the problem is probably that you are not calling <code>CVPixelBufferLockBaseAddress</code> before <code>CVPixelBufferGetBaseAddress*</code> and <code>CVPixelBufferUnlockBaseAddress</code> afterwards.</p>
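<p>A minimal sketch of a corrected callback along these lines, assuming (as in the question) that <code>pYUVBuffer</code> is a destination buffer allocated elsewhere and large enough to hold both planes tightly packed. Note the row-by-row copy: the bytes-per-row of each plane may include padding beyond the visible width, so a single <code>memcpy</code> of the whole plane can copy garbage or overrun the destination.</p>

```c
#include <CoreVideo/CoreVideo.h>
#include <VideoToolbox/VideoToolbox.h>
#include <string.h>

/* Destination buffer from the question, allocated elsewhere; assumed large
   enough for a tightly packed Y plane followed by the CbCr plane. */
extern uint8_t *pYUVBuffer;

void decompressionCallback(void * CM_NULLABLE decompressionOutputRefCon,
                           void * CM_NULLABLE sourceFrameRefCon,
                           OSStatus status,
                           VTDecodeInfoFlags infoFlags,
                           CM_NULLABLE CVImageBufferRef imageBuffer,
                           CMTime presentationTimeStamp,
                           CMTime presentationDuration)
{
    if (status != noErr || imageBuffer == NULL)
        return;

    /* The base addresses are only valid (non-NULL) between lock and unlock. */
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t *dst = pYUVBuffer;
    size_t planeCount = CVPixelBufferGetPlaneCount(imageBuffer); /* 2 for '420v' */
    for (size_t i = 0; i < planeCount; i++) {
        uint8_t *src     = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, i);
        size_t   stride  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, i);
        size_t   width   = CVPixelBufferGetWidthOfPlane(imageBuffer, i); /* pixels */
        size_t   height  = CVPixelBufferGetHeightOfPlane(imageBuffer, i);
        size_t   rowBytes = (i == 0) ? width : width * 2; /* plane 1: interleaved CbCr */

        /* Copy only the visible bytes of each row, skipping stride padding. */
        for (size_t row = 0; row < height; row++) {
            memcpy(dst, src + row * stride, rowBytes);
            dst += rowBytes;
        }
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
}
```

<p>Passing <code>kCVPixelBufferLock_ReadOnly</code> is appropriate here since the buffer is only read; it lets CoreVideo skip invalidating any caches on unlock.</p>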

<p>With some fun YUV->RGB shader code, you can display the two YUV planes efficiently using Metal or OpenGL.</p></p>
                                   
<p style="font-size: 20px;">Regarding "ios - CVPixelBufferGetBaseAddress returns NULL in the VTDecompressionSessionDecodeFrame callback", we found a similar question on Stack Overflow:
                                                        <a href="https://stackoverflow.com/questions/37887639/" rel="noreferrer noopener nofollow" style="color: red;">
                                                                https://stackoverflow.com/questions/37887639/
                                                        </a>
                                                </p>
                                       