JavaFX Support #97
I have not yet considered JavaFX. The 3D capabilities of JavaFX are very limited: It supports geometry and Phong materials, and... that's about it. (One important aspect of glTF 2.0 was the standardization of the PBR material model, and this would not be supported by JavaFX.)

However, one could make a case that JavaFX could be used for models with "simple" material properties, like CAD data. In this case, translating certain elements of the glTF model could be feasible, and about 80% of the work here could probably be done in one weekend. But... then you'll have to think about animations, morphing, vertex skinning, and (particularly) the PBR materials, and it's very likely that not all of this can be supported (no matter how much effort you put into that). It might be interesting, as a proof of concept, but the results would be limited to a few specific use cases (e.g. CAD data or plain, textured geometry).
The results would indeed be very limited, but it could also be quick to get to that baseline level. For my own use case, yes, a CAD-like display is what I'm looking to do, since it is meant to position very low-poly models with very low-resolution textures for use in augmented reality. I am going to try to get a quick 50% solution running right away, and then I might have a few implementation questions about DefaultRenderedGltfModel.
If I had to give it a shot, I'd really just plainly traverse the model structure.
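To make the "plainly traverse" idea concrete, here is a minimal sketch of such a depth-first traversal. The `SceneNode` type is a hypothetical stand-in (in JglTF, `NodeModel` from `de.javagl.jgltf.model` would play this role); a real converter would create one JavaFX `Group` per node and attach the node's local transform to it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class Traverse {
    // Hypothetical stand-in for a glTF node: a name and child nodes.
    // In JglTF, NodeModel (with its mesh references and transform)
    // would be the real type here.
    static class SceneNode {
        final String name;
        final List<SceneNode> children = new ArrayList<>();
        SceneNode(String name) { this.name = name; }
    }

    // Depth-first traversal: visit the node itself, then recurse into
    // its children. This is the shape of a GltfModel -> Group converter.
    static void visit(SceneNode node, Consumer<SceneNode> action) {
        action.accept(node);
        for (SceneNode child : node.children) {
            visit(child, action);
        }
    }

    public static void main(String[] args) {
        SceneNode root = new SceneNode("root");
        SceneNode arm = new SceneNode("arm");
        arm.children.add(new SceneNode("hand"));
        root.children.add(arm);
        List<String> order = new ArrayList<>();
        visit(root, n -> order.add(n.name));
        System.out.println(order); // [root, arm, hand]
    }
}
```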
I've been moving slowly, but I have it loading meshes and some preliminary material conversion. I haven't done textures yet -- those are next. The main challenge I had was that you have to manually rebuild the mesh array: the array() method on the float and int buffers doesn't return a plain array that can be directly ingested by JavaFX. The array layout is identical; it just needs to be recreated using the buffers' get() method. Also, you are definitely correct that "instancing" meshes is a problem. The scenegraph-y JavaFX system gives errors any time a node is repeated, including a mesh -- a challenge I haven't yet figured a way around.
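The manual rebuild described above can be sketched with plain java.nio (no JavaFX or glTF types needed). The helper name `toArray` is my own illustration, not code from this project; the point is that `array()` only works for writable heap-backed buffers, so the data is copied out with `get()` instead.

```java
import java.nio.FloatBuffer;

public class BufferCopy {
    // Rebuild a plain float[] from a FloatBuffer. FloatBuffer.array()
    // throws for read-only or direct buffers (common for glTF data),
    // so the elements are copied out one by one with absolute get().
    static float[] toArray(FloatBuffer buffer) {
        FloatBuffer b = buffer.duplicate(); // leave the caller's position untouched
        b.rewind();
        float[] result = new float[b.remaining()];
        for (int i = 0; i < result.length; i++) {
            result[i] = b.get(i);
        }
        return result;
    }

    public static void main(String[] args) {
        // A read-only buffer, where array() would throw:
        FloatBuffer positions =
            FloatBuffer.wrap(new float[] { 0f, 1f, 2f }).asReadOnlyBuffer();
        float[] copy = toArray(positions);
        System.out.println(copy.length + " " + copy[2]); // 3 2.0
    }
}
```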
I had a quick look at the code. All this may be OK, and the first goal will be to make something that works correctly (and think about "optimizations", like avoiding unnecessary copies, later). One approach could be:

```java
AccessorFloatData pa = (AccessorFloatData) primitive.getAttributes().get("POSITION").getAccessorData();
ObservableFloatArray pData = jfxMesh.getPoints();
int n = pa.getTotalNumComponents();
pData.resize(n);
for (int i = 0; i < n; i++) {
    pData.set(i, pa.get(i));
}
```

NOTE: This is untested and NOT a "recommendation". In fact, using the "bulk" operations (instead of calling get() element by element) might be preferable.

EDIT: If your code is available somewhere publicly, I might actually have a look and try it out (even though I cannot make any promises...)
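As an aside on the "bulk" operations mentioned in the note above, here is a sketch using plain java.nio: `FloatBuffer.get(float[])` transfers the whole remaining range in one call instead of n individual reads. The helper name `toArrayBulk` is my own; on the JavaFX side, `ObservableFloatArray.setAll(float...)` could presumably then take the resulting array in one call as well.

```java
import java.nio.FloatBuffer;
import java.util.Arrays;

public class BulkCopy {
    // Bulk variant of the element-wise copy: the relative get(float[])
    // call copies everything between position and limit in one step.
    static float[] toArrayBulk(FloatBuffer buffer) {
        FloatBuffer b = buffer.duplicate(); // don't disturb the caller's position
        b.rewind();
        float[] result = new float[b.remaining()];
        b.get(result); // single bulk transfer
        return result;
    }

    public static void main(String[] args) {
        FloatBuffer positions = FloatBuffer.wrap(new float[] { 1f, 2f, 3f, 4f });
        System.out.println(Arrays.toString(toArrayBulk(positions))); // [1.0, 2.0, 3.0, 4.0]
    }
}
```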
Thank you for the advice to reduce the excessive copying. I gave it a try and it works well. The same approach works for transferring the faces into JavaFX. Now, I'll try to get the textures to work, then get back to instancing. The JavaFX MeshView seems to allow Mesh instancing, but it seems finicky and thinly documented. I also reviewed this code to see how they approached the conversions and reached out to the author, with no response yet: https://github.com/NekoLvds/GLTFImporter

Our code is not publicly available, but the app is: https://www.runningreality.org . Most users use our web version, where we use GWT to drive three.js to show historical maps and AR scenes with glTF models. However, the editor is in the desktop app, so the desktop app needs to show the same glTF models with enough fidelity to do basic layout on the map, i.e. to arrange infantry and cannons on a historical battlefield. The 3D features are present in the app and web versions now, but not made too obvious while they are still experimental.

With AR on a mobile device, extremely low-poly models and <100kb textures are required, so JavaFX having low-fidelity 3D for an editor of low-fidelity AR content is fine.
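Since JavaFX rejects adding the same Node twice, "instancing" there effectively means one shared mesh referenced by many lightweight view nodes. The sketch below shows that pattern with hypothetical stand-in types (in JavaFX, `TriangleMesh` would be the shared data and `MeshView` the per-instance node); the names `MeshData` and `MeshInstance` are my own illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class Instancing {
    // Stand-in for the heavyweight, shareable part (JavaFX's TriangleMesh
    // would play this role): vertex data that exists once in memory.
    static class MeshData {
        final float[] points;
        MeshData(float[] points) { this.points = points; }
    }

    // Stand-in for the lightweight scene-graph node (JavaFX's MeshView):
    // each instance carries its own transform but shares the mesh data.
    static class MeshInstance {
        final MeshData mesh;
        final double tx, ty, tz;
        MeshInstance(MeshData mesh, double tx, double ty, double tz) {
            this.mesh = mesh; this.tx = tx; this.ty = ty; this.tz = tz;
        }
    }

    public static void main(String[] args) {
        MeshData soldier = new MeshData(new float[] { 0f, 0f, 0f });
        List<MeshInstance> regiment = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            // Three distinct nodes, one shared copy of the vertex data.
            regiment.add(new MeshInstance(soldier, i * 2.0, 0, 0));
        }
        System.out.println(regiment.get(0).mesh == regiment.get(2).mesh); // true
    }
}
```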
After a short look, that looks like a cool project (I'll probably have a closer look later - I'm somewhat involved in ~"maps and geospatial data visualization", in the context of https://cesium.com/why-cesium/3d-tiles/ ).

As for the editor: On the one hand, I'd really like to add a proper renderer to JglTF. On the other hand, it may turn out that JglTF itself is more used for creating/managing/modifying glTF assets. And on the third hand, there are a million possibilities to extend the JglTF functionality (and... people only have two hands, which is why there is hardly any progress in this area).

Is that desktop-based editor already using any 3D framework at all? I know that there are some libraries out there that claim to target Web as well as Desktop and Mobile/Android. For example, I had a short look at 'libGDX' at one point, and it seemed relatively easy to use, and quick websearches reveal that they do also support glTF (e.g. https://github.com/mgsx-dev/gdx-gltf#demo-and-gallery ). There are probably others as well. Maybe you can leverage some existing functionality here (considering the possible quirks of JavaFX for instancing, or its limitations for the materials).

Of course, from what you said, the focus does not seem to be on the latest-and-greatest rendering features and fidelity, but rather on the editing functionality for placing entities on the map. There's probably a trade-off between spending time creating a simple renderer, and using an existing renderer and spending that time improving the editor. But of course, you have to make the call here (and whatever ~"UI framework" is used for the editor: integrating another one may also require considerable effort, so maybe a basic JavaFX renderer is the most time-efficient solution here).
I was just reading the technical specs of the 3D Tiles format myself! A 3D map is clearly the future, but for the moment I'm still using a 2D map. The app uses a custom 2D engine that dates to pre-2007, and the web uses GWT driving OpenLayers.

My main challenge is that RR is a history project with historical data, not a GIS project. The data is all temporal, only some of it is geographic, and all of it carries explicit uncertainty data. So, there is no practical way to generate pre-rendered tiles, whether raster, vector, or 3D. Still, I'm preparing to head toward a full 3D map and want to be "as compatible as possible" with standards like the 3D Tiles spec. Unifying around glTF is a critical part of that. Trying to shoehorn some initial 3D capabilities into an existing hybrid Swing/JavaFX UI is a step to bootstrap us, but I don't want to over-invest in that for when we shift to a full 3D framework.

The main focus is on the historical side: Lay out a battle in an editor, with high uncertainty in the explicit data, and interpolators that then make reasonable inferences (pick this glTF for 1800s infantry, this glTF for 300s infantry, and this glTF for 400 BC infantry). Then deploy that explicit data and interpolated data uniformly across all devices, including a 400px-wide AR display on the 3G phone of a tourist standing on that battlefield.

I am incredibly supportive of your library, because loading glTF is going to be the key functionality to make Java a player in AR / digital twins, especially with 3D Tiles also locking in on a glTF approach.
Lifting an application from 2D to 3D certainly requires careful planning and a careful analysis of the requirements and goals. For the "RunningReality" site, it may not even be important enough to justify the possible efforts. At least, there may not be an immediate use of 3D.
That's true. There have been some considerations and experiments for ~"time-dynamic 3D tiles", e.g. in https://cesium.com/blog/2018/11/08/weather-prediction-data-time-series-and-3d-tiles/ . But there is no "out of the box" solution. And even if one came up with "some simple solution", the specific context of looooong historical timescales and uncertainty could make it hard to apply that "simple" solution to your case.

There certainly are things that are "fixed" in some way. For example, the actual terrain (geometry) data (e.g. https://sandcastle.cesium.com/index.html?src=Cesium%20World%20Terrain.html ) will not really change over time. What does change is... "the textures" (you don't want the "texture" showing a city that just wasn't there in the year 1750...). And what also changes is the visualization of the "regions" (countries, kingdoms...). But the latter would probably not be part of the 3D Tiles themselves, but rather polygons that are put on top of the map (roughly like https://sandcastle.cesium.com/index.html?src=CZML%20Polygon%20-%20Intervals%2C%20Availability.html ).
I would love to migrate to Cesium in the future. Does your involvement mean there might be a Java version of Cesium in the future? It would be great to have a seamless multi-platform experience instead of a separate tech stack for desktop and mobile. I did actually consider simply loading glTFs in JavaFX using a JavaFX browser and three.js, which would simplify the tech needed for the app, but at the expense of making editing 3D data much harder.

Many projects have tried to do world history with a GIS tech stack, where you just tuck the temporal information into the GIS metadata. Unfortunately, I've found that doesn't really scale well, and it breaks the links to critical reference citations. You need a data model that prioritizes temporal data as the most important dimension. You must model the Parthenon as the Parthenon, not as a render-optimized series of polygons in a tile with other structures and some Parthenon metadata. And you must model events, such as the Venetians firing on the Parthenon and causing an explosion, as historical events, not as sets of GIS data. Then you use the model data (which is only as precise as can be documented) to drive a GIS system as a renderer (where you infer the Parthenon on fire with a fire symbol), but the GIS system is secondary, not primary. This is how RR drives OpenLayers, and it could drive Cesium the same way. But it means on-the-fly rendering, not pre-computed.

Terrain is fixed-"ish", but not fixed over these timescales. Many cities, especially around the Mediterranean and in the Fertile Crescent, have risen and fallen with shoreline and river course changes. We treat terrain as fixed-ish with a combo of raster tiles and temporal vector adjustments. (Also, for fun, we built in an Easter egg for pre-history with tectonic plate motions, with help from the GPlates team.)

For RR, 3D would be more than cool. As a teaching tool, the ability to really experience history means eventually getting down to street level and showing people. Still, for now, we have to get the big stuff right, like national borders and city evolution and road and rail. But history is what happens on the map; it isn't the map itself. :)
You certainly know that migrating an existing ("old") application to a new platform will require careful evaluation. Far too often, some third party solution accomplishes 90% of what has been built in-house, but that last 10% can be what brings the actual value in the respective context.
No.
That would indeed be great. That "seamless integration" is the crux, and I've recently been investigating some options here (in some gray area between my "work" and my "pet projects"...). And I also had a short look at different ways of integrating CesiumJS in a desktop application. I did not yet actively test the JavaFX browser - maybe it's a convenient way to achieve what would be the alternative, namely trying to sneak WebKit into a desktop application.
That's related to that 10%: You apparently spent many years creating, polishing, and curating the data, and developing the appropriate data model and functionalities. A naively "idealistic" view is that ~"the renderer should be 'pluggable'", and just receive ~"the state of the world" for a given time and translate that into ~"something that can be rendered (in 2D or 3D)". But... there has to be a common understanding, because of the backward channel of a user clicking on something and having to look up some information in the underlying data model.
They use GWT. We also use GWT, so I am quite familiar with it. It uses the Google "Closure" compiler to transpile Java into JavaScript. It also allows "interop" wrappers that give a Java API to a JavaScript library and a Java API to the DOM. We use it so there is a single Java historical data model across the desktop app, server, and browser version of RR. Then we can have transpiled Java drive OpenLayers and three.js for the web rendering. If only there were an equivalent way to transpile JavaScript libraries into Java to drive Swing/JavaFX for things like cesium.js or three.js. I was hoping Java Nashorn would go that way, but then it didn't.
FEATURE REQUEST: JavaFX's 3D capabilities aren't as full-featured as JOGL or LWJGL, but they can support a simple display of a simple 3D model. Have you considered adding support to "render" a GltfModel to JavaFX? I was looking into adapting the code from DefaultRenderedGltfModel and associated classes to try to create a rudimentary de.javagl.jgltf.model.GltfModel -> javafx.scene.Group converter.