Docs: How is RGB-D data packed in result? #11
Hey @Kjos -- great question! We support several binary output formats from the pipeline (see source/mesh_stream/ConvertToBinary.cpp for a full list). The workflow we ended up using most often (documented here: https://facebook.github.io/facebook360_dep/docs/workflow) is producing .vtx, .idx, and .bc7 binaries, which we then stripe into aggregated binary files for consumption by the viewers. If you were looking to use this typical workflow, we recommend running the UI, since it is structured with that workflow in mind.
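To make the "stripe into aggregated binary files" idea concrete, here is a minimal sketch of aggregating per-frame blobs into one stream with a seekable index. This is a toy illustration, not the repo's actual striping format: the `stripe_binaries` function, the dict layout, and the byte payloads are all hypothetical.

```python
import io

def stripe_binaries(frames, out_stream):
    """Concatenate per-frame binary blobs into one aggregated stream,
    returning an (offset, size) index per frame so a viewer can seek
    to any frame without parsing the whole file.

    `frames` is a list of dicts mapping extension -> bytes, e.g.
    {"vtx": b"...", "idx": b"...", "bc7": b"..."}.
    """
    index = []
    for frame in frames:
        entry = {}
        for ext, blob in frame.items():
            offset = out_stream.tell()
            out_stream.write(blob)
            entry[ext] = {"offset": offset, "size": len(blob)}
        index.append(entry)
    return index

# Hypothetical usage: two frames of made-up vertex/index/texture data.
frames = [
    {"vtx": b"\x00" * 12, "idx": b"\x01" * 6, "bc7": b"\x02" * 16},
    {"vtx": b"\x03" * 12, "idx": b"\x04" * 6, "bc7": b"\x05" * 16},
]
buf = io.BytesIO()
index = stripe_binaries(frames, buf)
```

A real viewer would load the index once, then issue ranged reads into the aggregate for whichever frame it needs.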
I'm on mobile, but from what I gathered earlier, vertex data is saved, right?
Why that route and not a geometry shader using a texture? That would deal with sync issues and would be easier to build third-party viewers for, I imagine.
Is saving and loading raw vertex data the most promising direction to go?
Since Facebook is pretty much the market leader (and with Oculus Quest I imagine that will grow), I'm interested in the approach you have found to give the best results.
@Kjos we can also generate RGBD outputs as top-bottom representations, like this:
[image: rgbd]
<https://user-images.githubusercontent.com/7462577/66524844-72075800-eaa8-11e9-99fe-e5ea20ad00a8.jpg>
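A top-bottom RGBD frame stacks the color image above the depth image in one video frame, so a player recovers both by slicing the frame in half vertically. A minimal sketch, assuming a flat row-major buffer (the function name and toy data are illustrative, not from the repo):

```python
def split_top_bottom(frame, height, width):
    """Split a top-bottom RGBD frame into its color (top half) and
    depth (bottom half). `frame` is a flat row-major list of pixel
    values: color occupies rows [0, height/2), depth the rest."""
    half = height // 2
    color = frame[: half * width]
    depth = frame[half * width :]
    return color, depth

# Toy 4x2 "frame": rows 0-1 are color, rows 2-3 are depth.
frame = [1, 1, 2, 2, 9, 9, 8, 8]
color, depth = split_top_bottom(frame, height=4, width=2)
```

The appeal of this layout is that any ordinary video codec and player can carry it; only the final consumer needs to know about the split.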
Ah I see, that's good to know. Is the depth single channel to lower the file size? Why not pack it into the RGB channels? The Facebook Unity capture SDK did that, for instance.
(Note: I'm not familiar with the support from RGBD players on Quest, Go, and PC going forward.)
We chose the top-bottom format for RGBD frames because we saw a handful of existing players that were using it.
The depth is single channel because it only contains distance information. It could be embedded as the alpha channel of an RGBA image if the player expects that.
We can also generate depth maps in PFM format to keep floating-point precision.
So, the bottom line is that we can generate a lot of different formats, and the code can be easily extended to support others too.
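The PFM format mentioned above is simple enough to sketch: a grayscale PFM is a `Pf` header, the dimensions, a scale whose sign encodes endianness, then raw 32-bit floats stored bottom row first. This writer is an illustrative sketch of the file format, not code from the pipeline:

```python
import io
import struct

def write_pfm_gray(stream, rows):
    """Write a single-channel depth map as a grayscale PFM ("Pf") file,
    preserving full 32-bit float precision. `rows` is a list of rows,
    top row first; PFM stores rows bottom-to-top, and a negative scale
    marks little-endian data."""
    height = len(rows)
    width = len(rows[0])
    stream.write(f"Pf\n{width} {height}\n-1.0\n".encode("ascii"))
    for row in reversed(rows):  # PFM raster order: bottom row first
        stream.write(struct.pack(f"<{width}f", *row))

# Toy 2x2 depth map in meters.
buf = io.BytesIO()
write_pfm_gray(buf, [[0.5, 1.0], [2.0, 4.0]])
data = buf.getvalue()
```

Because the samples are raw IEEE-754 floats, no quantization happens on save, unlike an 8-bit channel in a video frame.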
Sorry, I meant packing the depth as RGB so it spans 24 bits instead of 8, if that wasn't clear.
Also, is depth stored linearly or non-linearly?
Sorry about the confusion. You can save the depth as PFM and get 32-bit floating-point accuracy, so no precision is lost. The depth is stored as a disparity map (disparity = 1 / depth), so it is non-linear in depth space.
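Storing disparity rather than depth means equal disparity steps give finer depth resolution near the camera and coarser resolution far away, which matches where viewers are most sensitive to error. A toy quantizer illustrating this (the function names, depth range, and 8-bit mapping are assumptions, not the pipeline's exact encoding):

```python
def depth_to_disparity(depth):
    """Disparity is the reciprocal of depth."""
    return 1.0 / depth

def quantize_disparity(depth, min_depth=0.3, max_depth=100.0, levels=256):
    """Map a depth to an 8-bit code via disparity. Code 0 is the
    farthest representable depth, levels-1 the nearest."""
    d_min = depth_to_disparity(max_depth)  # smallest disparity
    d_max = depth_to_disparity(min_depth)  # largest disparity
    t = (depth_to_disparity(depth) - d_min) / (d_max - d_min)
    return round(t * (levels - 1))
```

Note how a depth step from 0.5 m to 1.0 m consumes far more codes than a step from 50 m to 100 m: that is the non-linearity in depth space the comment above describes.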
I couldn't find any examples or relevant code about how the resulting RGB-D videos and images are packed.
In the past I've seen Facebook use the lower half of the video to pack 24-bit depth into the RGB channels (or YUV, for that matter).
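The 24-bit packing described above spreads one depth value across three 8-bit channels. A minimal sketch of one such layout, assuming a normalized depth in [0, 1] and a simple high/mid/low byte split (actual packings vary between SDKs):

```python
def pack_depth_rgb(depth01):
    """Pack a normalized depth in [0, 1] into three 8-bit channels,
    giving 24-bit precision instead of 8."""
    v = min(max(int(round(depth01 * 0xFFFFFF)), 0), 0xFFFFFF)
    return (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF  # R, G, B

def unpack_depth_rgb(r, g, b):
    """Recover the normalized depth from the three channels."""
    return ((r << 16) | (g << 8) | b) / 0xFFFFFF
```

One caveat with this scheme in video: chroma subsampling and lossy compression can corrupt the low-order bytes, which is why some packings reserve the luma channel for the most significant bits.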