This article collects typical usage examples of the Java class org.tensorflow.demo.env.ImageUtils. If you have been wondering what ImageUtils is for, how to use it, or where to find working examples, the curated snippets below should help.
The ImageUtils class belongs to the org.tensorflow.demo.env package. Twelve code examples of the class are shown below, ordered by popularity.
Example 1: onPreviewSizeChosen
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
public void onPreviewSizeChosen(final Size size, final int rotation) {
  previewWidth = size.getWidth();
  previewHeight = size.getHeight();

  final Display display = getWindowManager().getDefaultDisplay();
  final int screenOrientation = display.getRotation();
  LOGGER.i("Sensor orientation: %d, Screen orientation: %d", rotation, screenOrientation);
  sensorOrientation = rotation + screenOrientation;

  LOGGER.i("Initializing at size %dx%d", previewWidth, previewHeight);
  rgbBytes = new int[previewWidth * previewHeight];
  rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Bitmap.Config.ARGB_8888);
  croppedBitmap = Bitmap.createBitmap(INPUT_SIZE, INPUT_SIZE, Bitmap.Config.ARGB_8888);

  frameToCropTransform =
      ImageUtils.getTransformationMatrix(previewWidth, previewHeight, INPUT_SIZE, INPUT_SIZE,
          sensorOrientation, MAINTAIN_ASPECT);
  Matrix cropToFrameTransform = new Matrix();
  frameToCropTransform.invert(cropToFrameTransform);

  yuvBytes = new byte[3][];
}
Author: flipper83 | Project: SortingHatAndroid | Source: CameraActivity.java
Example 2: draw
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
public synchronized void draw(final Canvas canvas) {
  // TODO(andrewharp): This may not work for non-90 deg rotations.
  final float multiplier =
      Math.min(canvas.getWidth() / (float) frameHeight, canvas.getHeight() / (float) frameWidth);
  frameToCanvasMatrix =
      ImageUtils.getTransformationMatrix(
          frameWidth,
          frameHeight,
          (int) (multiplier * frameHeight),
          (int) (multiplier * frameWidth),
          sensorOrientation,
          false);

  for (final TrackedRecognition recognition : trackedObjects) {
    final RectF trackedPos =
        (objectTracker != null)
            ? recognition.trackedObject.getTrackedPositionInPreviewFrame()
            : new RectF(recognition.location);

    getFrameToCanvasMatrix().mapRect(trackedPos);
    boxPaint.setColor(recognition.color);

    final float cornerSize = Math.min(trackedPos.width(), trackedPos.height()) / 8.0f;
    canvas.drawRoundRect(trackedPos, cornerSize, cornerSize, boxPaint);

    final String labelString =
        !TextUtils.isEmpty(recognition.title)
            ? String.format("%s %.2f", recognition.title, recognition.detectionConfidence)
            : String.format("%.2f", recognition.detectionConfidence);
    borderedText.drawText(canvas, trackedPos.left + cornerSize, trackedPos.bottom, labelString);
  }
}
Author: Jamjomjara | Project: snu-artoon | Source: MultiBoxTracker.java
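The `multiplier` above is an aspect-fit scale factor for a preview frame that is rotated 90 degrees relative to the canvas, which is why the canvas width is divided by frameHeight and vice versa. A minimal, Android-free sketch of the same computation (the class and method names here are hypothetical, not part of the demo):

```java
// Aspect-fit scale factor for a frame rotated 90 degrees relative to the
// canvas, mirroring the computation inside draw() above.
public class AspectFit {
    // Hypothetical helper: the largest uniform scale at which the rotated
    // frame (frameHeight wide, frameWidth tall on screen) still fits.
    public static float multiplier(int canvasWidth, int canvasHeight,
                                   int frameWidth, int frameHeight) {
        return Math.min(canvasWidth / (float) frameHeight,
                        canvasHeight / (float) frameWidth);
    }

    public static void main(String[] args) {
        // A 640x480 frame drawn on a 1080x1920 portrait canvas:
        // rotated, it occupies 480 of width and 640 of height.
        float m = multiplier(1080, 1920, 640, 480);
        System.out.println(m); // min(1080/480, 1920/640) = 2.25
    }
}
```

Taking the minimum of the two ratios guarantees the scaled frame never overflows the canvas in either dimension.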
Example 3: onClick
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
public void onClick(final View v) {
  if (textureCopyBitmap != null) {
    // TODO(andrewharp): Save as jpeg with guaranteed unique filename.
    ImageUtils.saveBitmap(textureCopyBitmap, "stylized" + frameNum + ".png");
    Toast.makeText(
            StylizeActivity.this,
            "Saved image to: /sdcard/tensorflow/" + "stylized" + frameNum + ".png",
            Toast.LENGTH_LONG)
        .show();
  }
}
Author: apacha | Project: TensorflowAndroidDemo | Source: StylizeActivity.java
Example 4: draw
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
public synchronized void draw(final Canvas canvas) {
  if (objectTracker == null) {
    return;
  }

  // TODO(andrewharp): This may not work for non-90 deg rotations.
  final float multiplier =
      Math.min(canvas.getWidth() / (float) frameHeight, canvas.getHeight() / (float) frameWidth);
  frameToCanvasMatrix =
      ImageUtils.getTransformationMatrix(
          frameWidth,
          frameHeight,
          (int) (multiplier * frameHeight),
          (int) (multiplier * frameWidth),
          sensorOrientation,
          false);

  for (final TrackedRecognition recognition : trackedObjects) {
    final ObjectTracker.TrackedObject trackedObject = recognition.trackedObject;

    final RectF trackedPos = trackedObject.getTrackedPositionInPreviewFrame();
    getFrameToCanvasMatrix().mapRect(trackedPos);
    boxPaint.setColor(recognition.color);

    final float cornerSize = Math.min(trackedPos.width(), trackedPos.height()) / 8.0f;
    canvas.drawRoundRect(trackedPos, cornerSize, cornerSize, boxPaint);

    final String labelString =
        !TextUtils.isEmpty(recognition.title)
            ? String.format("%s %.2f", recognition.title, recognition.detectionConfidence)
            : String.format("%.2f", recognition.detectionConfidence);
    borderedText.drawText(canvas, trackedPos.left + cornerSize, trackedPos.bottom, labelString);
  }
}
Author: apacha | Project: TensorflowAndroidDemo | Source: MultiBoxTracker.java
Example 5: processImage
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
protected void processImage() {
  rgbFrameBitmap.setPixels(getRgbBytes(), 0, previewWidth, 0, 0, previewWidth, previewHeight);
  final Canvas canvas = new Canvas(croppedBitmap);
  canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);

  // For examining the actual TF input.
  if (SAVE_PREVIEW_BITMAP) {
    ImageUtils.saveBitmap(croppedBitmap);
  }

  runInBackground(
      new Runnable() {
        @Override
        public void run() {
          final long startTime = SystemClock.uptimeMillis();
          final List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);
          lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
          LOGGER.i("Detect: %s", results);
          cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
          if (resultsView == null) {
            resultsView = (ResultsView) findViewById(R.id.results);
          }
          resultsView.setResults(results);
          requestRender();
          readyForNextImage();
        }
      });
}
Author: Nilhcem | Project: tensorflow-classifier-android | Source: ClassifierActivity.java
Example 6: onPreviewSizeChosen
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
public void onPreviewSizeChosen(final Size size, final int rotation) {
  final float textSizePx =
      TypedValue.applyDimension(
          TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
  borderedText = new BorderedText(textSizePx);
  borderedText.setTypeface(Typeface.MONOSPACE);

  if (TensorFlowYoloDetector.selectedModel == 0) {
    classifier = TensorFlowYoloDetector.create(
        getAssets(),
        YOLO_MODEL_FILE_FACE,
        YOLO_INPUT_SIZE,
        YOLO_INPUT_NAME,
        YOLO_OUTPUT_NAMES,
        YOLO_BLOCK_SIZE);
  } else {
    classifier = TensorFlowYoloDetector.create(
        getAssets(),
        YOLO_MODEL_FILE_HAND,
        YOLO_INPUT_SIZE,
        YOLO_INPUT_NAME,
        YOLO_OUTPUT_NAMES,
        YOLO_BLOCK_SIZE);
  }

  previewWidth = size.getWidth();
  previewHeight = size.getHeight();

  final Display display = getWindowManager().getDefaultDisplay();
  final int screenOrientation = display.getRotation();
  LOGGER.i("Sensor orientation: %d, Screen orientation: %d", rotation, screenOrientation);
  sensorOrientation = rotation + screenOrientation;

  LOGGER.i("Initializing at size %dx%d", previewWidth, previewHeight);
  rgbBytes = new int[previewWidth * previewHeight];
  rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
  croppedBitmap = Bitmap.createBitmap(YOLO_INPUT_SIZE, YOLO_INPUT_SIZE, Config.ARGB_8888);

  frameToCropTransform =
      ImageUtils.getTransformationMatrix(
          previewWidth, previewHeight,
          YOLO_INPUT_SIZE, YOLO_INPUT_SIZE,
          sensorOrientation, MAINTAIN_ASPECT);
  cropToFrameTransform = new Matrix();
  frameToCropTransform.invert(cropToFrameTransform);

  yuvBytes = new byte[3][];
}
Author: Jamjomjara | Project: snu-artoon | Source: ARToonActivity.java
Example 7: onPreviewSizeChosen
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
public void onPreviewSizeChosen(final Size size, final int rotation) {
  final float textSizePx =
      TypedValue.applyDimension(
          TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
  borderedText = new BorderedText(textSizePx);
  borderedText.setTypeface(Typeface.MONOSPACE);

  classifier =
      TensorFlowImageClassifier.create(
          getAssets(),
          MODEL_FILE,
          LABEL_FILE,
          INPUT_SIZE,
          IMAGE_MEAN,
          IMAGE_STD,
          INPUT_NAME,
          OUTPUT_NAME);

  resultsView = (ResultsView) findViewById(R.id.results);
  previewWidth = size.getWidth();
  previewHeight = size.getHeight();

  final Display display = getWindowManager().getDefaultDisplay();
  final int screenOrientation = display.getRotation();
  LOGGER.i("Sensor orientation: %d, Screen orientation: %d", rotation, screenOrientation);
  sensorOrientation = rotation + screenOrientation;

  LOGGER.i("Initializing at size %dx%d", previewWidth, previewHeight);
  rgbBytes = new int[previewWidth * previewHeight];
  rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
  croppedBitmap = Bitmap.createBitmap(INPUT_SIZE, INPUT_SIZE, Config.ARGB_8888);

  frameToCropTransform =
      ImageUtils.getTransformationMatrix(
          previewWidth, previewHeight,
          INPUT_SIZE, INPUT_SIZE,
          sensorOrientation, MAINTAIN_ASPECT);
  cropToFrameTransform = new Matrix();
  frameToCropTransform.invert(cropToFrameTransform);

  yuvBytes = new byte[3][];

  addCallback(
      new DrawCallback() {
        @Override
        public void drawCallback(final Canvas canvas) {
          renderDebug(canvas);
        }
      });
}
Author: apacha | Project: TensorflowAndroidDemo | Source: ClassifierActivity.java
Example 8: onImageAvailable
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
public void onImageAvailable(final ImageReader reader) {
  Image image = null;
  try {
    image = reader.acquireLatestImage();
    if (image == null) {
      return;
    }
    if (computing) {
      image.close();
      return;
    }
    computing = true;

    Trace.beginSection("imageAvailable");

    final Plane[] planes = image.getPlanes();
    fillBytes(planes, yuvBytes);

    final int yRowStride = planes[0].getRowStride();
    final int uvRowStride = planes[1].getRowStride();
    final int uvPixelStride = planes[1].getPixelStride();
    ImageUtils.convertYUV420ToARGB8888(
        yuvBytes[0],
        yuvBytes[1],
        yuvBytes[2],
        rgbBytes,
        previewWidth,
        previewHeight,
        yRowStride,
        uvRowStride,
        uvPixelStride,
        false);

    image.close();
  } catch (final Exception e) {
    if (image != null) {
      image.close();
    }
    LOGGER.e(e, "Exception!");
    Trace.endSection();
    return;
  }

  rgbFrameBitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight);
  final Canvas canvas = new Canvas(croppedBitmap);
  canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);

  // For examining the actual TF input.
  if (SAVE_PREVIEW_BITMAP) {
    ImageUtils.saveBitmap(croppedBitmap);
  }

  runInBackground(
      new Runnable() {
        @Override
        public void run() {
          final long startTime = SystemClock.uptimeMillis();
          final List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);
          lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
          cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
          resultsView.setResults(results);
          requestRender();
          computing = false;
        }
      });

  Trace.endSection();
}
Author: apacha | Project: TensorflowAndroidDemo | Source: ClassifierActivity.java
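The `computing` flag above is how the demo drops frames that arrive while inference is still running: a new frame is simply closed and ignored until the background task clears the flag. On a single callback thread a plain boolean works, but if the acquire and release can happen on different threads, an AtomicBoolean expresses the same gate safely. A small sketch under that assumption (the class name is hypothetical, not part of the demo):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical frame gate: at most one frame is processed at a time;
// frames arriving while busy are dropped, as in onImageAvailable above.
public class FrameGate {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true iff the caller won the right to process this frame.
    public boolean tryAcquire() {
        return busy.compareAndSet(false, true);
    }

    // Called from the background task once inference has finished.
    public void release() {
        busy.set(false);
    }
}
```

Using compareAndSet makes the check-then-set atomic, so two camera callbacks racing on the same gate can never both start processing.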
Example 9: onPreviewFrame
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
/**
 * Callback for android.hardware.Camera API
 */
@Override
public void onPreviewFrame(final byte[] bytes, final Camera camera) {
  if (isProcessingFrame) {
    LOGGER.w("Dropping frame!");
    return;
  }

  try {
    // Initialize the storage bitmaps once when the resolution is known.
    if (rgbBytes == null) {
      Camera.Size previewSize = camera.getParameters().getPreviewSize();
      previewHeight = previewSize.height;
      previewWidth = previewSize.width;
      rgbBytes = new int[previewWidth * previewHeight];
      onPreviewSizeChosen(new Size(previewSize.width, previewSize.height), 90);
    }
  } catch (final Exception e) {
    LOGGER.e(e, "Exception!");
    return;
  }

  isProcessingFrame = true;
  lastPreviewFrame = bytes;
  yuvBytes[0] = bytes;
  yRowStride = previewWidth;

  imageConverter =
      new Runnable() {
        @Override
        public void run() {
          ImageUtils.convertYUV420SPToARGB8888(bytes, previewWidth, previewHeight, rgbBytes);
        }
      };

  postInferenceCallback =
      new Runnable() {
        @Override
        public void run() {
          camera.addCallbackBuffer(bytes);
          isProcessingFrame = false;
        }
      };
  processImage();
}
Author: Nilhcem | Project: tensorflow-classifier-android | Source: CameraActivity.java
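convertYUV420SPToARGB8888 is implemented natively in the demo, but the per-pixel math behind it is a standard fixed-point YUV-to-RGB conversion. The sketch below shows one common integer variant of that formula as an illustration; it is not the demo's exact implementation, and the coefficients can differ slightly between implementations:

```java
// One common fixed-point YUV -> ARGB8888 conversion, applied per pixel.
// Illustrative only: the demo's native code may use different coefficients.
public class YuvMath {
    public static int yuvToArgb(int y, int u, int v) {
        int yc = Math.max(y - 16, 0);
        int uc = u - 128;
        int vc = v - 128;

        // 18-bit fixed-point intermediate values.
        int y1192 = 1192 * yc;
        int r = y1192 + 1634 * vc;
        int g = y1192 - 833 * vc - 400 * uc;
        int b = y1192 + 2066 * uc;

        // Clamp to the 18-bit range before packing down to 8 bits each.
        r = Math.min(Math.max(r, 0), 262143);
        g = Math.min(Math.max(g, 0), 262143);
        b = Math.min(Math.max(b, 0), 262143);

        return 0xff000000
            | ((r << 6) & 0xff0000)
            | ((g >> 2) & 0xff00)
            | ((b >> 10) & 0xff);
    }

    public static void main(String[] args) {
        // A neutral pixel (U = V = 128) must come out gray: R == G == B.
        int p = yuvToArgb(128, 128, 128);
        System.out.printf("%08x%n", p);
    }
}
```

The full-frame routine additionally walks the Y plane row by row and samples the interleaved U/V plane at half resolution, which is what the rowStride/pixelStride arguments in the Camera2 variant describe.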
Example 10: onImageAvailable
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
/**
 * Callback for Camera2 API
 */
@Override
public void onImageAvailable(final ImageReader reader) {
  // We need to wait until we have some size from onPreviewSizeChosen.
  if (previewWidth == 0 || previewHeight == 0) {
    return;
  }
  if (rgbBytes == null) {
    rgbBytes = new int[previewWidth * previewHeight];
  }

  try {
    final Image image = reader.acquireLatestImage();
    if (image == null) {
      return;
    }
    if (isProcessingFrame) {
      image.close();
      return;
    }
    isProcessingFrame = true;

    Trace.beginSection("imageAvailable");

    final Plane[] planes = image.getPlanes();
    fillBytes(planes, yuvBytes);
    yRowStride = planes[0].getRowStride();
    final int uvRowStride = planes[1].getRowStride();
    final int uvPixelStride = planes[1].getPixelStride();

    imageConverter =
        new Runnable() {
          @Override
          public void run() {
            ImageUtils.convertYUV420ToARGB8888(
                yuvBytes[0],
                yuvBytes[1],
                yuvBytes[2],
                previewWidth,
                previewHeight,
                yRowStride,
                uvRowStride,
                uvPixelStride,
                rgbBytes);
          }
        };

    postInferenceCallback =
        new Runnable() {
          @Override
          public void run() {
            image.close();
            isProcessingFrame = false;
          }
        };

    processImage();
  } catch (final Exception e) {
    LOGGER.e(e, "Exception!");
    Trace.endSection();
    return;
  }
  Trace.endSection();
}
Author: Nilhcem | Project: tensorflow-classifier-android | Source: CameraActivity.java
Example 11: onPreviewSizeChosen
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
public void onPreviewSizeChosen(final Size size, final int rotation) {
  final float textSizePx = TypedValue.applyDimension(
      TypedValue.COMPLEX_UNIT_DIP, TEXT_SIZE_DIP, getResources().getDisplayMetrics());
  borderedText = new BorderedText(textSizePx);
  borderedText.setTypeface(Typeface.MONOSPACE);

  classifier =
      TensorFlowImageClassifier.create(
          getAssets(),
          MODEL_FILE,
          LABEL_FILE,
          INPUT_SIZE,
          IMAGE_MEAN,
          IMAGE_STD,
          INPUT_NAME,
          OUTPUT_NAME);

  previewWidth = size.getWidth();
  previewHeight = size.getHeight();

  final Display display = getWindowManager().getDefaultDisplay();
  final int screenOrientation = display.getRotation();
  LOGGER.i("Sensor orientation: %d, Screen orientation: %d", rotation, screenOrientation);
  sensorOrientation = rotation + screenOrientation;

  LOGGER.i("Initializing at size %dx%d", previewWidth, previewHeight);
  rgbFrameBitmap = Bitmap.createBitmap(previewWidth, previewHeight, Config.ARGB_8888);
  croppedBitmap = Bitmap.createBitmap(INPUT_SIZE, INPUT_SIZE, Config.ARGB_8888);

  frameToCropTransform = ImageUtils.getTransformationMatrix(
      previewWidth, previewHeight,
      INPUT_SIZE, INPUT_SIZE,
      sensorOrientation, MAINTAIN_ASPECT);
  cropToFrameTransform = new Matrix();
  frameToCropTransform.invert(cropToFrameTransform);

  addCallback(
      new DrawCallback() {
        @Override
        public void drawCallback(final Canvas canvas) {
          renderDebug(canvas);
        }
      });
}
Author: Nilhcem | Project: tensorflow-classifier-android | Source: ClassifierActivity.java
Example 12: onImageAvailable
import org.tensorflow.demo.env.ImageUtils; // import the required package/class
@Override
public void onImageAvailable(final ImageReader reader) {
  imageReader = reader;
  Image image = null;
  try {
    image = reader.acquireLatestImage();
    if (image == null) {
      return;
    }
    if (savingImage || computing) {
      image.close();
      return;
    }
    savingImage = true;

    Trace.beginSection("imageAvailable");

    final Plane[] planes = image.getPlanes();
    fillBytes(planes, yuvBytes);

    final int yRowStride = planes[0].getRowStride();
    final int uvRowStride = planes[1].getRowStride();
    final int uvPixelStride = planes[1].getPixelStride();
    ImageUtils.convertYUV420ToARGB8888(yuvBytes[0], yuvBytes[1], yuvBytes[2], rgbBytes,
        previewWidth, previewHeight, yRowStride, uvRowStride, uvPixelStride, false);

    image.close();
  } catch (final Exception e) {
    if (image != null) {
      image.close();
    }
    LOGGER.e(e, "Exception!");
    Trace.endSection();
    return;
  }

  rgbFrameBitmap.setPixels(rgbBytes, 0, previewWidth, 0, 0, previewWidth, previewHeight);
  final Canvas canvas = new Canvas(croppedBitmap);
  canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);

  // For examining the actual TF input.
  if (SAVE_PREVIEW_BITMAP) {
    ImageUtils.saveBitmap(croppedBitmap);
  }

  savingImage = false;
  Trace.endSection();
}
Author: flipper83 | Project: SortingHatAndroid | Source: CameraActivity.java
Note: the org.tensorflow.demo.env.ImageUtils examples in this article were collected from open-source projects hosted on GitHub and similar platforms. Copyright in each snippet remains with its original author; please consult the corresponding project's license before reusing or redistributing the code.