Version: 14 Oct 2024

Media

Namespace: NDK

Classes

Type     Name
struct   AImage
struct   AImageReader
class    MediaFormatExtensions

Enums

Name
enum MediaFormat {
    Rgba8888 = 0x1, Rgbx8888 = 0x2, Rgb888 = 0x3, Rgb565 = 0x4, Rgba_Fp16 = 0x16,
    Yuv_420_888 = 0x23, Jpeg = 0x100, Raw16 = 0x20, RawPrivate = 0x24, Raw10 = 0x25,
    Raw12 = 0x26, Depth16 = 0x44363159, DepthPointCloud = 0x101, Private = 0x22,
    Y8 = 0x20203859, Heic = 0x48454946, DepthJpeg = 0x69656963
}
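As a hedged sketch of how these values correspond to the NDK's AIMAGE_FORMAT_* constants (IsYuv420 is an illustrative helper, not part of this reference), an image's format can be queried with AImage_getFormat and compared directly:

    #include <media/NdkImage.h>

    // Hypothetical sketch: query an AImage's format and compare it against one of
    // the values above (Yuv_420_888 = 0x23 matches AIMAGE_FORMAT_YUV_420_888).
    static int IsYuv420(const AImage* image) {
        int32_t format;
        AImage_getFormat(image, &format);
        return format == AIMAGE_FORMAT_YUV_420_888;  // 0x23
    }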

Enums Documentation

MediaFormat

Each enumerator is listed below with its value and description.

Rgba8888 (0x1)

32 bits RGBA format, 8 bits for each of the four channels. Corresponding formats:
  • AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM

  • Vulkan: VK_FORMAT_R8G8B8A8_UNORM

  • OpenGL ES: GL_RGBA8

Rgbx8888 (0x2)

32 bits RGBX format, 8 bits for each of the four channels. The values of the alpha channel bits are ignored (the image is assumed to be opaque). Corresponding formats:

  • AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R8G8B8X8_UNORM

  • Vulkan: VK_FORMAT_R8G8B8A8_UNORM

  • OpenGL ES: GL_RGB8

Rgb888 (0x3)

24 bits RGB format, 8 bits for each of the three channels. Corresponding formats:

  • AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R8G8B8_UNORM

  • Vulkan: VK_FORMAT_R8G8B8_UNORM

  • OpenGL ES: GL_RGB8

Rgb565 (0x4)

16 bits RGB format: 5 bits for the red channel, 6 bits for the green channel, and 5 bits for the blue channel. Corresponding formats:

  • AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R5G6B5_UNORM

  • Vulkan: VK_FORMAT_R5G6B5_UNORM_PACK16

  • OpenGL ES: GL_RGB565

Rgba_Fp16 (0x16)

64 bits RGBA format, 16 bits for each of the four channels. Corresponding formats:

  • AHardwareBuffer: AHARDWAREBUFFER_FORMAT_R16G16B16A16_FLOAT

  • Vulkan: VK_FORMAT_R16G16B16A16_SFLOAT

  • OpenGL ES: GL_RGBA16F

Yuv_420_888 (0x23)

Multi-plane Android YUV 420 format. This format is a generic YCbCr format, capable of describing any 4:2:0 chroma-subsampled planar or semiplanar buffer (but not fully interleaved), with 8 bits per color sample.

Images in this format are always represented by three separate buffers of data, one for each color plane. Additional information always accompanies the buffers, describing the row stride and the pixel stride for each plane.

The order of planes is guaranteed such that plane #0 is always Y, plane #1 is always U (Cb), and plane #2 is always V (Cr).

The Y-plane is guaranteed not to be interleaved with the U/V planes (in particular, pixel stride is always 1 in [AImage_getPlanePixelStride]).

The U/V planes are guaranteed to have the same row stride and pixel stride; that is, the return values of [AImage_getPlaneRowStride] for the U and V planes are guaranteed to be the same, and the return values of [AImage_getPlanePixelStride] for the U and V planes are also guaranteed to be the same.

For example, the AImage object can provide data in this format from an [ACameraDevice] through an AImageReader object.

This format is always supported as an output format for the Android Camera2 NDK API.
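The guarantees above can be applied directly when reading samples. As a minimal sketch (ReadYuvSample is a hypothetical helper, not an NDK or binding API), the following C code reads the Y, U, and V samples at pixel (x, y) while honoring each plane's pixel and row strides:

    #include <media/NdkImage.h>

    // Hypothetical helper: read the Y, U and V samples at pixel (x, y) of a
    // Yuv_420_888 AImage, honoring each plane's pixel stride and row stride.
    static void ReadYuvSample(const AImage* image, int32_t x, int32_t y,
                              uint8_t* outY, uint8_t* outU, uint8_t* outV) {
        uint8_t* data[3];
        int dataLength[3];
        int32_t pixelStride[3];
        int32_t rowStride[3];
        for (int plane = 0; plane < 3; plane++) {
            AImage_getPlaneData(image, plane, &data[plane], &dataLength[plane]);
            AImage_getPlanePixelStride(image, plane, &pixelStride[plane]);  // always 1 for plane 0
            AImage_getPlaneRowStride(image, plane, &rowStride[plane]);
        }
        // Plane 0 is Y at full resolution; planes 1 and 2 are U (Cb) and V (Cr),
        // subsampled by 2 in both directions.
        *outY = data[0][y * rowStride[0] + x * pixelStride[0]];
        *outU = data[1][(y / 2) * rowStride[1] + (x / 2) * pixelStride[1]];
        *outV = data[2][(y / 2) * rowStride[2] + (x / 2) * pixelStride[2]];
    }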

Jpeg (0x100)

Compressed JPEG format. This format is always supported as an output format for the Android Camera2 NDK API.

Raw16 (0x20)

16 bits per pixel raw camera sensor image format, usually representing a single-channel Bayer-mosaic image. The layout of the color mosaic, the maximum and minimum encoding values of the raw pixel data, the color space of the image, and all other information needed to interpret a raw sensor image must be queried from the [ACameraDevice] which produced the image.

RawPrivate (0x24)

Private raw camera sensor image format, a single-channel image with an implementation-dependent pixel layout. AIMAGE_FORMAT_RAW_PRIVATE is a format for unprocessed raw image buffers coming from an image sensor. The actual structure of buffers of this format is implementation-dependent.

Raw10 (0x25)

Android 10-bit raw format. This is a single-plane, 10-bit per pixel, densely packed (in each row), unprocessed format, usually representing raw Bayer-pattern images coming from an image sensor.

In an image buffer with this format, starting from the first pixel of each row, each 4 consecutive pixels are packed into 5 bytes (40 bits). Each of the first 4 bytes contains the top 8 bits of one pixel, and the fifth byte contains the 2 least significant bits of all 4 pixels. The exact layout of each group of 4 consecutive pixels is illustrated below (Pi[j] stands for the jth bit of the ith pixel):

          bit 7   bit 6   bit 5   bit 4   bit 3   bit 2   bit 1   bit 0
Byte 0:   P0[9]   P0[8]   P0[7]   P0[6]   P0[5]   P0[4]   P0[3]   P0[2]
Byte 1:   P1[9]   P1[8]   P1[7]   P1[6]   P1[5]   P1[4]   P1[3]   P1[2]
Byte 2:   P2[9]   P2[8]   P2[7]   P2[6]   P2[5]   P2[4]   P2[3]   P2[2]
Byte 3:   P3[9]   P3[8]   P3[7]   P3[6]   P3[5]   P3[4]   P3[3]   P3[2]
Byte 4:   P3[1]   P3[0]   P2[1]   P2[0]   P1[1]   P1[0]   P0[1]   P0[0]

This format assumes

  • a width multiple of 4 pixels
  • an even height

size = row stride * height, where the row stride is in bytes, not pixels.

Since this is a densely packed format, the pixel stride is always 0. The application must use the pixel data layout defined in the table above to access each row's data. When the row stride is equal to (width * (10 / 8)), there will be no padding bytes at the end of each row and the entire image data is densely packed. When the stride is larger than (width * (10 / 8)), padding bytes will be present at the end of each row.

For example, the AImage object can provide data in this format from an [ACameraDevice] (if supported) through an AImageReader object. The number of planes returned by [AImage_getNumberOfPlanes] will always be 1. The pixel stride is undefined ([AImage_getPlanePixelStride] will return [AMEDIA_ERROR_UNSUPPORTED]), and [AImage_getPlaneRowStride] describes the vertical neighboring pixel distance (in bytes) between adjacent rows.
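As a sketch of the packing rule above (UnpackRaw10Group is a hypothetical helper, not an NDK or binding API), the following C code recovers one group of 4 pixels from 5 packed bytes:

    #include <stdint.h>

    // Hypothetical helper: unpack 4 consecutive RAW10 pixels from 5 packed bytes,
    // following the bit layout in the table above.
    static void UnpackRaw10Group(const uint8_t in[5], uint16_t out[4]) {
        for (int i = 0; i < 4; i++) {
            uint16_t high = in[i];                     // top 8 bits of pixel i
            uint16_t low = (in[4] >> (2 * i)) & 0x3;   // 2 least significant bits of pixel i
            out[i] = (uint16_t)((high << 2) | low);    // 10-bit sample, 0..1023
        }
    }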

Raw12 (0x26)

Android 12-bit raw format. This is a single-plane, 12-bit per pixel, densely packed (in each row), unprocessed format, usually representing raw Bayer-pattern images coming from an image sensor.

In an image buffer with this format, starting from the first pixel of each row, each two consecutive pixels are packed into 3 bytes (24 bits). The first and second bytes contain the top 8 bits of the first and second pixels, respectively, and the third byte contains the 4 least significant bits of the two pixels. The exact layout of each pair of consecutive pixels is illustrated below (Pi[j] stands for the jth bit of the ith pixel):

          bit 7    bit 6    bit 5    bit 4    bit 3    bit 2    bit 1    bit 0
Byte 0:   P0[11]   P0[10]   P0[9]    P0[8]    P0[7]    P0[6]    P0[5]    P0[4]
Byte 1:   P1[11]   P1[10]   P1[9]    P1[8]    P1[7]    P1[6]    P1[5]    P1[4]
Byte 2:   P1[3]    P1[2]    P1[1]    P1[0]    P0[3]    P0[2]    P0[1]    P0[0]

This format assumes

  • a width multiple of 4 pixels
  • an even height

size = row stride * height, where the row stride is in bytes, not pixels.

Since this is a densely packed format, the pixel stride is always 0. The application must use the pixel data layout defined in the table above to access each row's data. When the row stride is equal to (width * (12 / 8)), there will be no padding bytes at the end of each row and the entire image data is densely packed. When the stride is larger than (width * (12 / 8)), padding bytes will be present at the end of each row.

For example, the AImage object can provide data in this format from an [ACameraDevice] (if supported) through an AImageReader object. The number of planes returned by [AImage_getNumberOfPlanes] will always be 1. The pixel stride is undefined ([AImage_getPlanePixelStride] will return [AMEDIA_ERROR_UNSUPPORTED]), and [AImage_getPlaneRowStride] describes the vertical neighboring pixel distance (in bytes) between adjacent rows.
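As a sketch of the packing rule above (UnpackRaw12Pair is a hypothetical helper, not an NDK or binding API), the following C code recovers one pair of pixels from 3 packed bytes:

    #include <stdint.h>

    // Hypothetical helper: unpack 2 consecutive RAW12 pixels from 3 packed bytes,
    // following the bit layout in the table above.
    static void UnpackRaw12Pair(const uint8_t in[3], uint16_t out[2]) {
        out[0] = (uint16_t)((in[0] << 4) | (in[2] & 0x0F));         // pixel 0, 12-bit sample
        out[1] = (uint16_t)((in[1] << 4) | ((in[2] >> 4) & 0x0F));  // pixel 1, 12-bit sample
    }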

Depth16 (0x44363159)

Android dense depth image format. Each pixel is 16 bits, representing a depth ranging measurement from a depth camera or similar sensor. The 16-bit sample consists of a confidence value and the actual ranging measurement.

The confidence value is an estimate of correctness for this sample. It is encoded in the 3 most significant bits of the sample, with a value of 0 representing 100% confidence, a value of 1 representing 0% confidence, a value of 2 representing 1/7, a value of 3 representing 2/7, and so on.

As an example, the following sample extracts the range and confidence from the first pixel of a DEPTH16-format AImage, and converts the confidence to a floating-point value between 0 and 1.f inclusive, with 1.f representing maximum confidence:

    uint16_t* data;
    int dataLength;
    AImage_getPlaneData(image, 0, (uint8_t**)&data, &dataLength);
    uint16_t depthSample = data[0];
    // Low 13 bits hold the range measurement; the top 3 bits hold the confidence code.
    uint16_t depthRange = (depthSample & 0x1FFF);
    uint16_t depthConfidence = ((depthSample >> 13) & 0x7);
    float depthPercentage = depthConfidence == 0 ? 1.f : (depthConfidence - 1) / 7.f;

This format assumes

  • an even width
  • an even height
  • a horizontal stride multiple of 16 pixels

y_size = stride * height

When produced by a camera, the units for the range are millimeters.

DepthPointCloud (0x101)

Android sparse depth point cloud format. A variable-length list of 3D points plus a confidence value, with each point represented by four floats: first the X, Y, and Z position coordinates, and then the confidence value.

The number of points is ((size of the buffer in bytes) / 16).

The coordinate system and units of the position values depend on the source of the point cloud data. The confidence value is between 0.f and 1.f, inclusive, with 0 representing 0% confidence and 1.f representing 100% confidence in the measured position values.

As an example, the following code extracts the first depth point in a DEPTH_POINT_CLOUD format AImage:

    float* data;
    int dataLength;
    AImage_getPlaneData(image, 0, (uint8_t**)&data, &dataLength);
    float x = data[0];
    float y = data[1];
    float z = data[2];
    float confidence = data[3];
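Building on the snippet above, a hedged sketch of visiting every point (the loop and the VisitPointCloud name are assumptions for illustration, not part of this reference) uses the point count of (size of the buffer in bytes) / 16:

    #include <media/NdkImage.h>

    // Hypothetical sketch: visit every point of a DEPTH_POINT_CLOUD AImage.
    // Each point is four floats (x, y, z, confidence), i.e. 16 bytes.
    static void VisitPointCloud(const AImage* image) {
        float* data;
        int dataLength;
        AImage_getPlaneData(image, 0, (uint8_t**)&data, &dataLength);
        int numPoints = dataLength / 16;
        for (int i = 0; i < numPoints; i++) {
            float x = data[4 * i + 0];
            float y = data[4 * i + 1];
            float z = data[4 * i + 2];
            float confidence = data[4 * i + 3];
            (void)x; (void)y; (void)z; (void)confidence;  // process the point here
        }
    }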

Private (0x22)

Android private opaque image format. The choices of the actual format and pixel data layout are entirely up to the device-specific and framework internal implementations, and may vary depending on use cases even for the same device. Also note that the contents of these buffers are not directly accessible to the application.

When an AImage of this format is obtained from an AImageReader, the [AImage_getNumberOfPlanes()] method will return zero.

Y8 (0x20203859)

Android Y8 format. Y8 is a planar format comprised of a WxH Y plane only, with each pixel being represented by 8 bits.

This format assumes

  • an even width
  • an even height
  • a horizontal stride multiple of 16 pixels

size = stride * height

For example, the AImage object can provide data in this format from an [ACameraDevice] (if supported) through an AImageReader object. The number of planes returned by [AImage_getNumberOfPlanes] will always be 1. The pixel stride returned by [AImage_getPlanePixelStride] will always be 1, and [AImage_getPlaneRowStride] describes the vertical neighboring pixel distance (in bytes) between adjacent rows.
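As a minimal sketch (ReadY8Pixel is a hypothetical helper, not an NDK or binding API), reading a single luma sample uses the row stride exactly as described above:

    #include <media/NdkImage.h>

    // Hypothetical helper: read the 8-bit luma sample at (x, y) of a Y8 AImage.
    static uint8_t ReadY8Pixel(const AImage* image, int32_t x, int32_t y) {
        uint8_t* data;
        int dataLength;
        int32_t rowStride;
        AImage_getPlaneData(image, 0, &data, &dataLength);
        AImage_getPlaneRowStride(image, 0, &rowStride);  // distance in bytes between adjacent rows
        return data[y * rowStride + x];                  // pixel stride is always 1 for Y8
    }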

Heic (0x48454946)

Compressed HEIC format. This format defines the HEIC brand of High Efficiency Image File Format as described in ISO/IEC 23008-12.

DepthJpeg (0x69656963)

Depth augmented compressed JPEG format. JPEG compressed main image along with XMP embedded depth metadata following ISO 16684-1:2011(E).