Closed
Ideally, decoded content would be in the same colorspace as the encoded content, and colorspace negotiation would be "just" a metadata-management problem. Android's MediaCodec works differently:
- MediaCodec requires HDR metadata to be supplied up front and hands decoded output back as textures. To support a decoder like this, we would need to specify colorspace information before decoding begins, which means it has to be a per-codec-profile description.
- Thus colorspace is a property of a codec as much as it is a property of a frame.
- A decoded frame could be in a different colorspace from the encoded media.
- There is no obvious way to handle transcoding generically, since the decoder's output colorspace may not match what the encoder expects.
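For reference, the up-front requirement looks roughly like this on Android. This is a hedged sketch against the public `MediaFormat`/`MediaCodec` API (the color keys exist from API level 24); it is Android-only, not runnable outside a device or emulator, and the codec MIME type, dimensions, and `outputSurface` are illustrative:

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

// Sketch only: colorspace/HDR metadata is attached to the format *before*
// any frames are decoded, so it is effectively a property of the codec
// configuration, not of individual frames.
MediaFormat format = MediaFormat.createVideoFormat("video/hevc", 3840, 2160);
format.setInteger(MediaFormat.KEY_COLOR_STANDARD, MediaFormat.COLOR_STANDARD_BT2020);
format.setInteger(MediaFormat.KEY_COLOR_TRANSFER, MediaFormat.COLOR_TRANSFER_ST2084);
format.setInteger(MediaFormat.KEY_COLOR_RANGE, MediaFormat.COLOR_RANGE_LIMITED);
// Static HDR metadata (mastering display, content light level) would also go
// in up front, as a ByteBuffer under MediaFormat.KEY_HDR_STATIC_INFO.

MediaCodec decoder = MediaCodec.createDecoderByType("video/hevc");
// outputSurface: a Surface backed by e.g. a SurfaceTexture (illustrative).
decoder.configure(format, outputSurface, /* crypto */ null, /* flags */ 0);
```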
Open questions:
- Is there a workaround for MediaCodec? If so, can we assume that all future decoders would also allow unprocessed access to the decoded frames?
- Should we implement colorspace conversions for encoding, or is this always something the app should do? (Related: should encoders handle scaling content, or is this also an app concern?)
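On the conversion question: when the source and target primaries are both known, a gamut conversion is a small, well-defined operation, which cuts both ways on who should own it. A minimal sketch using the linear-light BT.709 → BT.2020 primary-conversion matrix from ITU-R BT.2087 (the class and method names are illustrative, and this deliberately ignores the harder parts: transfer functions and tone mapping):

```java
public class GamutConvert {
    // Linear-light RGB primary conversion, BT.709 -> BT.2020 (ITU-R BT.2087).
    static final double[][] BT709_TO_BT2020 = {
        {0.6274, 0.3293, 0.0433},
        {0.0691, 0.9195, 0.0114},
        {0.0164, 0.0880, 0.8956},
    };

    // Multiply one linear RGB triple by the conversion matrix.
    static double[] convert(double[] rgb) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                out[i] += BT709_TO_BT2020[i][j] * rgb[j];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Each matrix row sums to 1, so reference white maps to itself.
        double[] white = convert(new double[] {1.0, 1.0, 1.0});
        System.out.printf("white -> %.4f %.4f %.4f%n",
                white[0], white[1], white[2]);
    }
}
```

The per-pixel math is trivial; the real cost is knowing the correct source and target descriptions, which circles back to the metadata problem above.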