Hi Peter,
With a fresh mind, I’ve been able to make it work without the corrupted images.
Some refactoring of how I streamed the frames and adding a reentrant lock did the trick.
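For completeness, the locking boils down to something like this (uses java.util.concurrent.locks.ReentrantLock; the field names here are only illustrative, not the actual capturer code):

    private final ReentrantLock frameLock = new ReentrantLock();

    // writer side: only fill imageBuffer while holding the lock
    frameLock.lock();
    try {
        imageBuffer.clear();
        imageBuffer.put(pixels, 0, size);
    } finally {
        frameLock.unlock();
    }

    // reader side: take the same lock before handing the buffer on as a frame
    frameLock.lock();
    try {
        frame = new Frame(imageBuffer, width, height);
        newFrameAvailable = true;
    } finally {
        frameLock.unlock();
    }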
The image size doesn’t change, so how would I cache the dataset?
Can I push in a new byte[]? I tried Dataset#set(java.lang.Object), but that didn't work.
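To make the question concrete, this is roughly what I was hoping for (just a sketch; it assumes getData() hands back the live backing array so it can be overwritten in place, and cachedDataset / cachedSlice are only illustrative names):

    // one-time setup, since width, height and the strides never change
    Dataset cachedDataset = DatasetFactory.createFromObject(pixelStride, CompoundByteDataset.class, pixels, height, fullWidth);
    SliceND cachedSlice = new SliceND(cachedDataset.getShapeRef(), (Slice) null, new Slice(width));

    // per frame: overwrite the dataset's backing array with the fresh bytes and re-slice
    System.arraycopy(pixels, 0, ((CompoundByteDataset) cachedDataset).getData(), 0, pixels.length);
    Dataset slicedDataset = cachedDataset.getSlice(cachedSlice);
    imageBuffer.put(((CompoundByteDataset) slicedDataset).getData());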
Kind regards,
Yannick
From: january-dev-bounces@xxxxxxxxxxx <january-dev-bounces@xxxxxxxxxxx>
On Behalf Of Peter.Chang@xxxxxxxxxxxxx
Sent: Wednesday, 26 June 2019 10:55
To: january developer discussions <january-dev@xxxxxxxxxxx>
Subject: Re: [january-dev] Reshaping and slicing RGBA image byte buffer for android screen capture
Hi Yannick,
Good to see it sort of works. Does the size of the image change?
If it’s a synchronization problem and you are latency sensitive then you can reduce some overhead by caching the dataset and the sliceND object.
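Roughly like this (untested sketch; it assumes the shape is identical every frame):

    // construct once and keep as a field, instead of per frame
    SliceND slice = new SliceND(dataset.getShapeRef(), (Slice) null, new Slice(width));

    // per frame, only the slicing remains
    Dataset slicedDataset = dataset.getSlice(slice);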
Regards,
Peter
Hi Peter,
Thanks! Your interpretation was correct.
It kind of works now with the following code:
private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {

    private ByteBuffer imageBuffer;

    @Override
    public void onImageAvailable(ImageReader reader) {
        try (Image latestImage = imageReader.acquireLatestImage()) {
            if (latestImage == null) {
                return;
            }
            final Image.Plane plane = latestImage.getPlanes()[0];
            final ByteBuffer buffer = plane.getBuffer();
            if (imageBuffer == null) {
                // prepare a buffer for ABGR to ARGB conversion
                imageBuffer = ByteBuffer.allocateDirect(buffer.capacity());
            }
            // clear the buffer and reset to position 0
            imageBuffer.clear();
            imageBuffer.position(0);
            int width = latestImage.getWidth();
            int height = latestImage.getHeight();
            int pixelStride = plane.getPixelStride();
            int rowStride = plane.getRowStride();
            int rowPadding = rowStride - (pixelStride * width);
            // convert from ABGR to ARGB to get the correct color space for OpenTOK
            RoboticChameleon.fromABGR().toARGB(buffer, rowStride, imageBuffer, rowStride, width, height);
            int paddedWidth = width + (rowPadding / pixelStride);
            if (paddedWidth > width) {
                // we need to remove padding on the right side of this image (some devices show this behaviour in portrait mode)
                imageBuffer.position(0);
                int size = rowStride * height;
                byte[] pixels = new byte[size];
                imageBuffer.get(pixels, 0, size);
                // create a compound dataset with itemsize = pixel stride = 4 (RGBA pixel) and reshape it
                int fullWidth = rowStride / pixelStride;
                Dataset dataset = DatasetFactory.createFromObject(pixelStride, CompoundByteDataset.class, pixels, height, fullWidth);
                // slice to width and height
                Dataset slicedDataset = dataset.getSlice(new SliceND(dataset.getShapeRef(), (Slice) null, new Slice(width)));
                imageBuffer.clear();
                imageBuffer.position(0);
                // put the sliced byte array in the image buffer
                imageBuffer.put(((CompoundByteDataset) slicedDataset).getData());
            }
            frame = new Frame(imageBuffer, width, height);
            newFrameAvailable = true;
        } catch (Exception e) {
            Log.e(LOG_TAG, "Error processing screen capture frame");
            throw new RuntimeException("Error processing screen capture frame", e);
        }
    }
}
As long as the image doesn't change too much, it's fine.
There is still some distortion, but that's OK.
When something is happening/moving on screen, though, the frames come out wrong.
After a few frames it stabilizes again.
I think I need to build in some sort of synchronized locking on the image buffer to avoid those incorrect frames.
Thanks for the help!
Kind regards,
Yannick
Hi Yannick,
Can you confirm that my interpretation of your definitions is correct? I am assuming row-major pixel storage.
Pixel stride = position difference (in byte buffer) between pixels (4, in this case)
Row stride = position difference between one row and the next (at same column)
Width = width of image
Height = height of image
Full width = (row stride / pixel stride)
Size of buffer in bytes = pixel stride * full width * height
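(As a made-up numeric check: with width = 1080, height = 1920, pixel stride = 4 and row stride = 4352, full width = 4352 / 4 = 1088, i.e. 8 padding pixels per row, and the buffer holds 4 * 1088 * 1920 = 8,355,840 bytes.)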
Copying all of the imageBuffer to a byte array bytes, then using data = DatasetFactory.createFromObject(pixelStride, CompoundByteDataset.class, bytes, height, fullWidth) gives a compound dataset with shape (height, full width).
To grab the left portion, take a slice with
slicedData = data.getSlice(new SliceND(data.getShapeRef(), null, new Slice(width)))
So the flattened bytes are in
slicedDataBytes = slicedData.getData()
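Putting it together, something like this (untested sketch; the variable names are only illustrative):

    // bytes holds the pixelStride * fullWidth * height bytes copied out of imageBuffer
    int fullWidth = rowStride / pixelStride;
    Dataset data = DatasetFactory.createFromObject(pixelStride, CompoundByteDataset.class, bytes, height, fullWidth);

    // keep every row, keep only the first 'width' columns (the padding columns on the right are dropped)
    Dataset slicedData = data.getSlice(new SliceND(data.getShapeRef(), (Slice) null, new Slice(width)));

    // flattened bytes of the unpadded width x height image
    byte[] slicedDataBytes = ((CompoundByteDataset) slicedData).getData();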
Regards,
Peter
Hi,
Erwin contacted you earlier today about slicing an image represented in a ByteBuffer (on Android).
I’m trying to accomplish this with a CompoundByteDataset as suggested by Peter, but I’m having some problems.
Below is my current code. I've added some additional comments, marked with '->' in the snippet.
Could someone point me in the right direction here?
private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {

    private ByteBuffer imageBuffer;

    @Override
    public void onImageAvailable(ImageReader reader) {
        try (Image latestImage = imageReader.acquireLatestImage()) {
            if (latestImage == null) {
                return;
            }
            final Image.Plane plane = latestImage.getPlanes()[0];
            final ByteBuffer buffer = plane.getBuffer();
            if (imageBuffer == null) {
                // prepare a buffer for ABGR to ARGB conversion
                imageBuffer = ByteBuffer.allocateDirect(buffer.capacity());
            }
            // clear the buffer and reset to position 0
            imageBuffer.clear();
            imageBuffer.position(0);
            int width = latestImage.getWidth();
            int height = latestImage.getHeight();
            int pixelStride = plane.getPixelStride();
            int rowStride = plane.getRowStride();
            int rowPadding = rowStride - (pixelStride * width);
            // convert from ABGR to ARGB to get the correct color space for OpenTOK
            RoboticChameleon.fromABGR().toARGB(buffer, rowStride, imageBuffer, rowStride, width, height);
            int paddedWidth = width + (rowPadding / pixelStride);
            if (paddedWidth > width) {
                // we need to remove padding on the right side of this image (some devices show this behaviour in portrait mode)
                imageBuffer.position(0);
                int size = rowStride * height; // -> had to define a new byte array, since the byte array backed by the buffer had 7 extra bytes (which caused the reshaping to fail)
                byte[] pixels = new byte[size];
                imageBuffer.get(pixels, 0, size);
                // create a compound dataset with itemsize 4 (RGBA pixel) and reshape it
                Dataset dataset = DatasetFactory.createFromObject(4, CompoundByteDataset.class, pixels, rowStride / pixelStride, height);
                // slice to width and height
                Dataset slicedDataset = dataset.getSlice(new Slice(0, width), new Slice(0, height)); // -> this takes a very long time
                imageBuffer.clear();
                imageBuffer.position(0);
                // convert the sliced dataset back to a byte array
                imageBuffer.put(((CompoundDataset) slicedDataset.flatten()).getByteArray()); // -> not sure about this
            }
            // frame = new Frame(imageBuffer, paddedWidth, height);
            frame = new Frame(imageBuffer, width, height);
            newFrameAvailable = true;
        } catch (Exception e) {
            Log.e(LOG_TAG, "Error processing screen capture frame");
        }
    }
}
This is an example image, from which I want to remove the black bar on the right.
Kind regards,
Yannick