
What is the distinct difference between an ImageAnalyzer and VisionProcessor in Android MLKit, if any?

I'm new to MLKit.

One of the first things I've noticed from looking at the docs, as well as the sample ML Kit apps, is that there seem to be multiple ways to attach/use image processors/analyzers.

In some cases the docs demonstrate using the ImageAnalysis.Analyzer API: https://developers.google.com/ml-kit/vision/image-labeling/custom-models/android

import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage

@ExperimentalGetImage // imageProxy.image is gated behind this opt-in on recent CameraX versions
private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ... then call imageProxy.close() once the detection Task completes
        }
    }
}

It seems like analyzers can be bound to a lifecycle through the CameraProvider:

cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture, imageAnalyzer)
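
For context, the surrounding wiring in the CameraX docs looks roughly like the following (a sketch; cameraExecutor, preview, and imageCapture are illustrative names assumed to be set up elsewhere):

import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import java.util.concurrent.Executors

// Executor that the analyzer callbacks run on (illustrative).
val cameraExecutor = Executors.newSingleThreadExecutor()

// Build the ImageAnalysis use case and attach the analyzer from above.
val imageAnalyzer = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST) // drop stale frames
    .build()
    .also { it.setAnalyzer(cameraExecutor, YourImageAnalyzer()) }

val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
cameraProvider.unbindAll() // detach any previously bound use cases
cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture, imageAnalyzer)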

In other cases, shown in the ML Kit showcase apps, the CameraSource has a frame processor that can be set:

cameraSource?.setFrameProcessor(
        if (PreferenceUtils.isMultipleObjectsMode(this)) {
            MultiObjectProcessor(graphicOverlay!!, workflowModel!!)
        } else {
            ProminentObjectProcessor(graphicOverlay!!, workflowModel!!)
        }
    )
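
From poking around the showcase source, those processors seem to implement an app-level contract shaped roughly like this (my simplified reconstruction for illustration, not the actual showcase class, which has extra plumbing for throttling and overlay drawing):

import com.google.android.gms.tasks.Task
import com.google.mlkit.vision.common.InputImage

// Hypothetical, simplified shape of a frame processor in the showcase app.
interface FrameProcessor<T> {
    fun detectInImage(image: InputImage): Task<T> // run the ML Kit detector
    fun onSuccess(results: T)                     // render results, e.g. on an overlay
    fun onFailure(e: Exception)                   // surface errors
}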

So are these simply two different approaches to doing the same thing? Can they be mixed and matched? Are there performance benefits to choosing one over the other?

As a concrete example: if I wanted to use the ML Kit ImageLabeler, should I wrap it in a processor and set it as the frame processor for CameraSource, or use it in the ImageAnalysis callback and bind that to the CameraProvider, something like the sketch below?
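
That is, something like this inside analyze() (my own sketch of what I mean; image and imageProxy refer to the earlier analyzer snippet):

import android.util.Log
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Labeler created once, e.g. as a field on the analyzer.
val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

labeler.process(image)
    .addOnSuccessListener { labels ->
        for (label in labels) Log.d("Labels", "${label.text}: ${label.confidence}")
    }
    .addOnCompleteListener { imageProxy.close() } // release the frame so the next one is delivered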

Lastly, in the examples where CameraSource is used (the ML Kit Material showcase app), there is no use of CameraProvider... is this simply because CameraSource makes it irrelevant and unneeded? In that case, is binding an ImageAnalyzer to a CameraProvider not even an option? Would one simply set different frame processors on the CameraSource on demand when running through different scenarios such as image labeling, object detection, text recognition, etc.?

Question from: https://stackoverflow.com/questions/66053016/what-is-the-distinct-difference-between-an-imageanalyzer-and-visionprocessor-in


1 Reply


The difference comes down to the underlying camera implementation. The Analyzer interface is part of CameraX, while the frame processor is app-level code the developer has to write in order to feed camera1 output to ML Kit.

If you want to use android.hardware.Camera (camera1), you need to follow the showcase example: create a processor and feed the camera output to ML Kit yourself. If you want to use CameraX, you can follow the example in the vision sample app and look at CameraXLivePreviewActivity.
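
For example, with camera1 you hand the NV21 preview bytes to ML Kit yourself. A minimal sketch (rotation is hard-coded, error handling is omitted, and the labeler is just one possible detector):

import android.hardware.Camera
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

// camera1 delivers preview frames as NV21 byte arrays by default.
val previewCallback = Camera.PreviewCallback { data, camera ->
    val size = camera.parameters.previewSize
    val image = InputImage.fromByteArray(
        data, size.width, size.height,
        /* rotationDegrees = */ 0, // compute from display/sensor orientation in real code
        InputImage.IMAGE_FORMAT_NV21
    )
    labeler.process(image)
        .addOnSuccessListener { labels -> /* draw or log the labels */ }
        .addOnFailureListener { e -> /* handle the error */ }
}
// Attach it to an open camera1 instance with camera.setPreviewCallback(previewCallback).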

