Fix for #33
This PR fixes an issue where the rendered result is off-center when using the Metal rendering pipeline.
The root problem is that the frame we receive from `AVCaptureVideoDataOutput` has a different resolution from our output frame. The frame from `AVCaptureVideoDataOutput` is 1080 x 1920, while the output frame is 887 x 1920 on an iPhone 11 Pro Max, which is bound to the device's screen size. (We calculate the output size so it has the same height as the input.) We create textures for both input and output, then send both textures to the shaders, which apply filters to the input texture and write the result to the output texture.
What we essentially want to do is crop the input texture and then apply filters. In the iPhone 11 Pro Max's case, we want to crop 96.5 pixels from each side.
The first approach I looked at was changing the input resolution. If we could receive a frame with the same dimensions as the output, we wouldn't have to deal with this problem at all. But it looks like AVFoundation doesn't allow this. (We can choose resolutions from predefined presets, but we cannot set an exact size.)
The second approach I looked at was finding a Metal shader function that applies an offset to a texture. With OpenGL, we can configure the input texture to be centered, but with Metal's kernel shaders I was not able to find an easy way to do the same.
So we need to crop the input frame somewhere in the pipeline, and it turns out it's not easy (in terms of processing power) to crop a `CVPixelBuffer`. Essentially we would need to create a new `CVPixelBuffer` instance with the output size and then re-render the input image onto it. (Let me know if I missed something.) And I didn't want to perform that process on the CPU, as it runs for every frame.
Long story short, I decided to implement pseudo cropping logic (`getAdjustedPosition`) in the shaders. Yes, it looks messy, as every shader function needs to use it. But at least it is very lightweight, and it seems to be working well. I still feel there should be a better way, but this is the best I can do for now.
How to test
Use filters with the Metal feature flags on and make sure everything performs as expected.