
Add tanks and temples dataset loader #653

Merged: 157 commits, Dec 3, 2023.
Changes shown from 139 commits.
392f8b1
Add files via upload
johnwlambert Jul 4, 2023
0977efb
Rename tanks_and_temples_loader.py to gtsfm/loader/tanks_and_temples_…
johnwlambert Jul 4, 2023
4d65e30
Add minimal Tanks and Temples Barn data files for unit tests
johnwlambert Jul 4, 2023
5cfb8fc
Rename tests/data/Barn.json to tests/data/tanks_and_temples_barn/Barn…
johnwlambert Jul 4, 2023
a4b8775
Rename tests/data/Barn_trans.txt to tests/data/tanks_and_temples_barn…
johnwlambert Jul 4, 2023
77b9178
Update tanks_and_temples_loader.py
johnwlambert Jul 4, 2023
7038d18
Rename tests/data/Barn_COLMAP_SfM.log to tests/data/tanks_and_temples…
johnwlambert Jul 4, 2023
117fdc6
Create test_tanks_and_temples_loader.py
johnwlambert Jul 4, 2023
324fc98
Update test_tanks_and_temples_loader.py
johnwlambert Jul 4, 2023
432daa9
Update tanks_and_temples_loader.py
johnwlambert Jul 8, 2023
581c6cb
Update tanks_and_temples_loader.py
johnwlambert Jul 8, 2023
d274689
Update benchmark.yml
johnwlambert Jul 9, 2023
346a7fe
Update execute_single_benchmark.sh
johnwlambert Jul 9, 2023
e8efe4d
Update download_single_benchmark.sh
johnwlambert Jul 9, 2023
40418ec
Update execute_single_benchmark.sh
johnwlambert Jul 9, 2023
1aadc03
Update download_single_benchmark.sh
johnwlambert Jul 9, 2023
888246d
Add correspondence generator for synthetic data
Jul 9, 2023
7b80409
run on CI
Jul 10, 2023
6efbeb0
run in CI
Jul 10, 2023
af1d26e
run w/ lookahead of 4
Jul 10, 2023
8fe66bf
use 700 synthetic points
Jul 10, 2023
be50f6e
Test vectorization
Jul 11, 2023
888f878
Remove vectorized code
Jul 11, 2023
8d3b339
flake8 fixes
Jul 11, 2023
b6fdffb
flake8 fixes
Jul 11, 2023
b3a652c
style fix
Jul 11, 2023
1e5b58e
fix image order
Jul 11, 2023
52b1140
Fix config comment
Jul 14, 2023
e0186d0
add unit test to make sure images are sorted as expected
Jul 14, 2023
1703165
loransac 0.5 px, no 2-view BA
Jul 14, 2023
cb31830
measure 2 view errors w/ synthetic
Jul 14, 2023
00c5d4f
dont compute error for None two view geometry
Jul 14, 2023
6687f8a
fix resolution error
Jul 15, 2023
5824974
fix resolution error
Jul 15, 2023
c70e937
fix resolution error
Jul 15, 2023
750e147
remove print statements
Jul 15, 2023
d4802dc
fix resolution error
Jul 15, 2023
cbb68c5
remove print statements
Jul 15, 2023
9062f12
fix resolution to 1080
Jul 15, 2023
52ee748
Remove 2-view error computation from the synthetic corr generator
Jul 16, 2023
968d16a
fix shonan capitalization
Jul 16, 2023
1ba2dc2
remove print statements from multiview opt
Jul 16, 2023
6faf44a
Add SO(3) check
Jul 16, 2023
c7864be
Fix capitlization in keypt agg
Jul 16, 2023
98c8d0c
fix capitalization in keypoint agg base
Jul 16, 2023
39b3cf8
fix 2-view estimator capitalization
Jul 16, 2023
5554638
Fix Sim(3) alignment loading in loader
Jul 16, 2023
6a331d4
improve unit test
Jul 16, 2023
25c0f68
flake8 fixes
Jul 16, 2023
2f3e9e2
flake8 fixes
Jul 16, 2023
574ac81
Add option for unit testing to use only K of N images, e.g. 3 of 410 …
Jul 16, 2023
38fbe1d
fix docstrings
Jul 16, 2023
f81667e
fix tanks and temples unit test
Jul 16, 2023
897923a
make some parts of T & T loader optional for unit testing w/o large f…
Jul 16, 2023
ef74582
flake8 fixes
Jul 16, 2023
f59d10c
Clean up T & T unit tests
Jul 16, 2023
bf4cbbf
flake8 cleanup
Jul 16, 2023
eadd6ee
back to 700 correspondences
Jul 16, 2023
dd3d3c9
clean up test comments
Jul 16, 2023
97304d3
Increase mesh resolution, with alpha=0.1 in mesh reconstruction, inst…
Jul 16, 2023
68a8fa1
fix input type as str not Path (via cast) for T& T loader test
Jul 16, 2023
96de130
add 3 tanks and temples images as test data
Jul 16, 2023
51ed829
save shonan input
Jul 16, 2023
cfb46fe
cast i1, i2 to int, as int64 not json serializable
Jul 17, 2023
f366590
200 3d landmarks
Jul 17, 2023
b3d0d9e
run tanks & temples on wildcat
Jul 18, 2023
46dd6f9
Log other connected components
Jul 18, 2023
9b26318
fix docstring capitalization
Jul 18, 2023
6fabe5d
dont redownload T&T
Jul 18, 2023
8ce2a4c
run with low Shonan sigma on CI
Jul 18, 2023
78e4e1d
turn off wildcat
Jul 18, 2023
62c99d2
sample using poisson disk
Jul 18, 2023
e658341
Update tanks_and_temples_loader.py
johnwlambert Jul 18, 2023
279623e
poisson sampling
Jul 19, 2023
420cfb4
add pose auc
Jul 22, 2023
8aa9f9f
add pose AUC
Jul 22, 2023
d6ebb47
add test_mesh fn on T & T
Jul 22, 2023
ea235cd
Merge branch 'master' of https://github.com/borglab/gtsfm into tanks-…
Jul 22, 2023
afd83e2
update T & T test paths
Jul 22, 2023
8fd61c1
flake8 fix
Jul 22, 2023
9b0be98
transition away from deprecated micromamba command
Jul 22, 2023
59bf16b
add image_filenames() method
Jul 22, 2023
44f02f7
remove activate-environment arg to setup-micromamba
Jul 22, 2023
6b5dc0e
fewer retrieval pairs and no_grad for netvald
Jul 22, 2023
92fc731
fix typo
Jul 22, 2023
614f682
lookahead 3
Jul 22, 2023
aa9618c
use logging instead of print
Jul 25, 2023
793e51c
Merge branch 'master' of https://github.com/borglab/gtsfm into tanks-…
Jul 25, 2023
09bc83a
run tanks and temples on self hosted
Jul 25, 2023
7254c29
run T&T on eagle
Jul 25, 2023
39504fe
remove rotation quaternion logging
Jul 25, 2023
ee861ec
tune shonan sigma to 0.01
Jul 25, 2023
9d7596d
decrease uncertainty on poses to 0.01
Jul 25, 2023
c3fab34
use logger_utils instead of getLogger() since unregistered
Jul 25, 2023
ed69f72
Merge branch 'master' of https://github.com/borglab/gtsfm into tanks-…
Jul 25, 2023
8d6cf88
fix logger formatting
Jul 25, 2023
640ad75
shonan sigma 0.1
Jul 25, 2023
6021108
Sample random 3d points. This sampling must occur only once, to avoid…
Jul 28, 2023
b7d9dc3
fix merge conflict
Jul 28, 2023
5778d1f
fix typo
Jul 28, 2023
d0c9e67
fix arg typo
Jul 28, 2023
5529914
fix typo
Jul 28, 2023
a60732d
use isinstance() instead of type()
Aug 4, 2023
38971bd
Style fixes
Aug 4, 2023
f781d32
astrovision account for 2 new return args, and style fixes
Aug 4, 2023
eb2dc0c
Add metrics group / report for Retriever
Aug 4, 2023
29cc2c3
style fixes on retriever
Aug 4, 2023
cb801a3
docstring fixes on open3d vis utils, and add fn to viz GT alongisde
Aug 4, 2023
ee6ba95
style fixes and use new scene data loader for colmap format data that…
Aug 4, 2023
0cbee24
flake8 fix
Aug 4, 2023
8f5a78b
Style fixes on images.py
Aug 4, 2023
aa75137
support bin and txt scene data, and read out point_cloud and rgb poin…
Aug 4, 2023
e005be2
Always save pose auc plots
Aug 4, 2023
634f220
save pre-ba summary
Aug 4, 2023
75d92a6
fix merge conflict
Aug 4, 2023
511c361
improve name of metric
Aug 4, 2023
68f75aa
Merge branch 'master' of https://github.com/borglab/gtsfm into tanks-…
Aug 4, 2023
78d7282
Remove custom shonan covariances
Aug 5, 2023
0a74180
use points instead of spheres for rendering speed
Aug 5, 2023
7fb8630
fix merge conflict
Aug 6, 2023
6d3ca41
remove stale print statement
Aug 6, 2023
ea532a9
remove dataset download from self-hosted runner
Aug 6, 2023
2e4eda5
Resolve merge conflict
Sep 5, 2023
34a0358
revert changes to self-hosted runner
Sep 5, 2023
f3fd910
Fix formatting
Sep 5, 2023
480a9f9
fix merge conflict
Nov 16, 2023
0043d37
fix merge conflict
Nov 25, 2023
a429482
update data paths
Nov 26, 2023
706ef7e
fix merge conflicts
Nov 26, 2023
783344b
flake8 fixes
Nov 26, 2023
4f9005c
fix duplicated imports
Nov 26, 2023
c084bd1
add more visualization functionality
Nov 26, 2023
845f35c
fix typo
Nov 26, 2023
e032f49
python black reformat
Nov 26, 2023
0237101
get intrinsics from EXIF
Nov 26, 2023
303b695
add back CI path
Nov 26, 2023
e564993
improve viz script
Nov 26, 2023
c79c21c
run synthetic in CI
Nov 26, 2023
306893b
update retriever to image_pairs_generator
Nov 26, 2023
5c9e26d
improve comments
Nov 26, 2023
c2a3ee6
move algorithm outside of loader
Nov 26, 2023
3ca4559
move algorithm outside of loader
Nov 26, 2023
6e6b691
fix flake8
Nov 26, 2023
032606a
fix flake8
Nov 26, 2023
5448dee
fix docs and missing arg
Nov 27, 2023
a8b6532
add todo
Nov 27, 2023
bc16d0d
fix
Nov 27, 2023
7195956
sequential retriever
Nov 28, 2023
6e97ff6
improve ValueError message
Nov 29, 2023
4abf136
update to not use netvlad in synthetic front end
Nov 29, 2023
395ff3e
remove synthetic tanks and temples from the CI
Dec 2, 2023
f3f8926
revert CI files
Dec 2, 2023
c3a9b57
clean up dead code
Dec 3, 2023
4afbfa5
remove unncessary function
Dec 3, 2023
4850f1f
clean up config
Dec 3, 2023
6ae33ae
5k pts
Dec 3, 2023
c0a14a7
reformat python black
Dec 3, 2023
8 changes: 8 additions & 0 deletions .github/scripts/download_single_benchmark.sh
@@ -67,9 +67,15 @@ function download_and_unzip_dataset_files {
WGET_URL1=https://github.com/johnwlambert/gtsfm-datasets-mirror/releases/download/gerrard-hall-100/gerrard-hall-100.zip
ZIP_FNAME=gerrard-hall-100.zip

elif [ "$DATASET_NAME" == "tanks-and-temples-barn-410" ]; then
# Tanks and Temples Dataset, "Barn" scene.
WGET_URL1=https://github.com/johnwlambert/gtsfm-datasets-mirror/releases/download/tanks-and-temples-barn/Tanks_and_Temples_Barn_410.zip
ZIP_FNAME=Tanks_and_Temples_Barn_410.zip

elif [ "$DATASET_NAME" == "south-building-128" ]; then
WGET_URL1=https://github.com/johnwlambert/gtsfm-datasets-mirror/releases/download/south-building-128/south-building-128.zip
ZIP_FNAME=south-building-128.zip

fi

# Download the data.
@@ -140,6 +146,8 @@ function download_and_unzip_dataset_files {
elif [ "$DATASET_NAME" == "south-building-128" ]; then
unzip south-building-128.zip

elif [ "$DATASET_NAME" == "tanks-and-temples-barn-410" ]; then
unzip -qq Tanks_and_Temples_Barn_410.zip
fi
}
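The new `tanks-and-temples-barn-410` branch follows the script's existing name-to-URL dispatch pattern. As a hedged illustration, here is a standalone sketch of that dispatch; `resolve_dataset` is a hypothetical helper (not a function in the script), while the URLs and zip filenames are the ones from the diff above:

```shell
#!/bin/bash
# Sketch of the DATASET_NAME -> (download URL, zip filename) dispatch used by
# download_single_benchmark.sh. `resolve_dataset` is hypothetical; the URLs
# are copied from the diff.
resolve_dataset() {
  local DATASET_NAME=$1
  case "$DATASET_NAME" in
    tanks-and-temples-barn-410)
      # Tanks and Temples Dataset, "Barn" scene.
      WGET_URL1=https://github.com/johnwlambert/gtsfm-datasets-mirror/releases/download/tanks-and-temples-barn/Tanks_and_Temples_Barn_410.zip
      ZIP_FNAME=Tanks_and_Temples_Barn_410.zip
      ;;
    south-building-128)
      WGET_URL1=https://github.com/johnwlambert/gtsfm-datasets-mirror/releases/download/south-building-128/south-building-128.zip
      ZIP_FNAME=south-building-128.zip
      ;;
    *)
      echo "Unknown dataset: $DATASET_NAME" >&2
      return 1
      ;;
  esac
  echo "$ZIP_FNAME"
}

# The real script then downloads and unzips, roughly: wget $WGET_URL1 && unzip -qq $ZIP_FNAME
resolve_dataset tanks-and-temples-barn-410  # prints Tanks_and_Temples_Barn_410.zip
```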

9 changes: 9 additions & 0 deletions .github/scripts/execute_single_benchmark.sh
@@ -29,6 +29,9 @@ elif [ "$DATASET_NAME" == "notre-dame-20" ]; then
elif [ "$DATASET_NAME" == "gerrard-hall-100" ]; then
IMAGES_DIR=gerrard-hall-100/images
COLMAP_FILES_DIRPATH=gerrard-hall-100/colmap-3.7-sparse-txt-2023-07-27
elif [ "$DATASET_NAME" == "tanks-and-temples-barn-410" ]; then
DATASET_ROOT="Tanks_and_Temples_Barn_410"
SCENE_NAME="Barn"
elif [ "$DATASET_NAME" == "south-building-128" ]; then
IMAGES_DIR=south-building-128/images
#COLMAP_FILES_DIRPATH=south-building-128/colmap-official-2016-10-05
@@ -77,4 +80,10 @@ elif [ "$LOADER_NAME" == "astrovision" ]; then
--max_resolution ${MAX_RESOLUTION} \
${SHARE_INTRINSICS_ARG} \
--mvs_off

elif [ "$LOADER_NAME" == "tanks-and-temples" ]; then
python gtsfm/runner/run_scene_optimizer_synthetic_tanks_and_temples.py \
--config_name ${CONFIG_NAME}.yaml \
--dataset_root $DATASET_ROOT \
--scene_name $SCENE_NAME
fi
1 change: 1 addition & 0 deletions .github/scripts/execute_single_benchmark_self_hosted.sh
@@ -58,4 +58,5 @@ elif [ "$LOADER_NAME" == "astrovision" ]; then
--max_frame_lookahead $MAX_FRAME_LOOKAHEAD \
--max_resolution ${MAX_RESOLUTION} \
${SHARE_INTRINSICS_ARG}

fi
3 changes: 2 additions & 1 deletion .github/workflows/benchmark.yml
@@ -26,7 +26,8 @@ jobs:
[sift, gerrard-hall-100, 15, jpg, wget, colmap-loader, 760, true],
[lightglue, gerrard-hall-100, 15, jpg, wget, colmap-loader, 760, true],
[sift, south-building-128, 15, jpg, wget, colmap-loader, 760, true],
[lightglue, south-building-128, 15, jpg, wget, colmap-loader, 760, true],
[lightglue, south-building-128, 15, jpg, wget, colmap-loader, 760, true],
[synthetic_front_end, tanks-and-temples-barn-410, 4, jpg, wget, tanks-and-temples, 1080, true],
]
defaults:
run:
102 changes: 102 additions & 0 deletions gtsfm/configs/synthetic_front_end.yaml
@@ -0,0 +1,102 @@
# Synthetic front-end configuration specifically for the Tanks & Temples dataset.

SceneOptimizer:
  _target_: gtsfm.scene_optimizer.SceneOptimizer
  save_gtsfm_data: True
  save_two_view_correspondences_viz: False
  save_3d_viz: True
  pose_angular_error_thresh: 5 # degrees

  image_pairs_generator:
    _target_: gtsfm.retriever.image_pairs_generator.ImagePairsGenerator
    global_descriptor:
      _target_: gtsfm.frontend.cacher.global_descriptor_cacher.GlobalDescriptorCacher
      global_descriptor_obj:
        _target_: gtsfm.frontend.global_descriptor.netvlad_global_descriptor.NetVLADGlobalDescriptor
    retriever:
      _target_: gtsfm.retriever.joint_netvlad_sequential_retriever.JointNetVLADSequentialRetriever
      num_matched: 2
      min_score: 0.2
      max_frame_lookahead: 3

  # retriever:

Contributor: Remove?
Collaborator (author): Removed, thanks.

  #   _target_: gtsfm.retriever.sequential_retriever.SequentialRetriever
  #   max_frame_lookahead: 4

  # retriever:
  #   _target_: gtsfm.retriever.netvlad_retriever.NetVLADRetriever
  #   num_matched: 50
  #   min_score: 0.3

  correspondence_generator:
    _target_: gtsfm.frontend.correspondence_generator.synthetic_correspondence_generator.SyntheticCorrespondenceGenerator
    # dataset_root: /Users/johnlambert/Downloads/Tanks_and_Temples_Barn_410
    # dataset_root: /usr/local/gtsfm-data/Tanks_and_Temples_Barn_410
    dataset_root: /home/runner/work/gtsfm/gtsfm/Tanks_and_Temples_Barn_410 # Path for CI.
Contributor: I'd prefer this being loaded from the dataset root passed from the terminal, but it's not an easy fix. So we should be fine with it.

    scene_name: Barn

  two_view_estimator:
    _target_: gtsfm.two_view_estimator.TwoViewEstimator
    bundle_adjust_2view: False
    eval_threshold_px: 4 # in px
    ba_reproj_error_thresholds: [0.5]
    bundle_adjust_2view_maxiters: 100

    verifier:
      _target_: gtsfm.frontend.verifier.loransac.LoRansac
      use_intrinsics_in_verification: True
      estimation_threshold_px: 0.5 # for H/E/F estimators

    triangulation_options:
      _target_: gtsfm.data_association.point3d_initializer.TriangulationOptions
      mode:
        _target_: gtsfm.data_association.point3d_initializer.TriangulationSamplingMode
        value: NO_RANSAC

    inlier_support_processor:
      _target_: gtsfm.two_view_estimator.InlierSupportProcessor
      min_num_inliers_est_model: 15
      min_inlier_ratio_est_model: 0.1

  multiview_optimizer:
    _target_: gtsfm.multi_view_optimizer.MultiViewOptimizer

    # comment out to not run
    view_graph_estimator:
      _target_: gtsfm.view_graph_estimator.cycle_consistent_rotation_estimator.CycleConsistentRotationViewGraphEstimator
      edge_error_aggregation_criterion: MEDIAN_EDGE_ERROR

    rot_avg_module:
      _target_: gtsfm.averaging.rotation.shonan.ShonanRotationAveraging
      # Use a very low value.
      two_view_rotation_sigma: 0.1

    trans_avg_module:
      _target_: gtsfm.averaging.translation.averaging_1dsfm.TranslationAveraging1DSFM
      robust_measurement_noise: True
      projection_sampling_method: SAMPLE_INPUT_MEASUREMENTS

    data_association_module:
      _target_: gtsfm.data_association.data_assoc.DataAssociation
      min_track_len: 2
      triangulation_options:
        _target_: gtsfm.data_association.point3d_initializer.TriangulationOptions
        reproj_error_threshold: 10
        mode:
          _target_: gtsfm.data_association.point3d_initializer.TriangulationSamplingMode
          value: RANSAC_SAMPLE_UNIFORM
        max_num_hypotheses: 100
      save_track_patches_viz: False

    bundle_adjustment_module:
      _target_: gtsfm.bundle.bundle_adjustment.BundleAdjustmentOptimizer
      reproj_error_thresholds: [10, 5, 3] # for (multistage) post-optimization filtering
      robust_measurement_noise: True
      shared_calib: False
      cam_pose3_prior_noise_sigma: 0.01
      calibration_prior_noise_sigma: 1e-5
      measurement_noise_sigma: 1.0

  # # comment out to not run
  # dense_multiview_optimizer:
  #   _target_: gtsfm.densify.mvs_patchmatchnet.MVSPatchmatchNet
Contributor: I am wondering if we can just generate the matches a priori and use something like a cache to just load the matches from disk.

Collaborator: I was also thinking this would be the best approach, as this is more for debugging and will likely only be used by us.

Collaborator (author): I see -- if we wish to use the Tanks and Temples loader to get GT poses to measure pose error, what's the difference between computing matches offline vs. online?

Collaborator: Because the synthetic matching frontend is only used for debugging, and I'm not sure we should merge it into the main repo.

Collaborator (author): @ayushbaid @travisdriver wanted to revisit this -- I would prefer to keep this as-is and make the code self-contained, so that we only need to run one command, instead of having to write two new scripts (one to generate all the correspondences beforehand, and then a new one to accept saved correspondences).
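The `_target_` keys in the config above follow Hydra-style instantiation: each block names a class, and the remaining keys become its constructor arguments, recursively. As a hedged, minimal sketch of that convention (a hypothetical re-implementation, not Hydra's or GTSFM's actual code), demonstrated with a stdlib class standing in for a GTSFM module:

```python
import importlib


def instantiate(cfg: dict):
    """Build the object named by `_target_`, passing remaining keys as kwargs.

    Nested dicts that themselves carry a `_target_` are instantiated first.
    Sketch only; Hydra's real `instantiate` handles far more (positional args,
    partial instantiation, interpolation, etc.).
    """
    kwargs = {}
    for key, value in cfg.items():
        if key == "_target_":
            continue
        kwargs[key] = instantiate(value) if isinstance(value, dict) and "_target_" in value else value
    module_path, class_name = cfg["_target_"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(**kwargs)


# A stdlib class stands in for e.g. gtsfm.two_view_estimator.TwoViewEstimator:
frac = instantiate({"_target_": "fractions.Fraction", "numerator": 1, "denominator": 3})
```

In the real pipeline, Hydra resolves the whole `SceneOptimizer` block this way, which is why each `_target_` must be an importable dotted path.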

@@ -0,0 +1,119 @@
"""Correspondence generator that creates synthetic keypoint correspondences using a 3d mesh.

Authors: John Lambert
"""
import tempfile
from typing import Dict, List, Tuple

from dask.distributed import Client, Future
import numpy as np
import open3d

from gtsfm.common.keypoints import Keypoints
from gtsfm.common.types import CAMERA_TYPE
from gtsfm.frontend.correspondence_generator.correspondence_generator_base import CorrespondenceGeneratorBase
from gtsfm.frontend.correspondence_generator.keypoint_aggregator.keypoint_aggregator_base import KeypointAggregatorBase
from gtsfm.frontend.correspondence_generator.keypoint_aggregator.keypoint_aggregator_dedup import (
KeypointAggregatorDedup,
)
from gtsfm.frontend.correspondence_generator.keypoint_aggregator.keypoint_aggregator_unique import (
KeypointAggregatorUnique,
)
from gtsfm.loader.loader_base import LoaderBase
from gtsfm.loader.tanks_and_temples_loader import TanksAndTemplesLoader


class SyntheticCorrespondenceGenerator(CorrespondenceGeneratorBase):
    """Pair-wise synthetic keypoint correspondence generator."""

    def __init__(self, dataset_root: str, scene_name: str, deduplicate: bool = True) -> None:
        """
        Args:
            dataset_root: Path to where the Tanks & Temples dataset is stored.
            scene_name: Name of the scene from the Tanks & Temples dataset.
            deduplicate: Whether to de-duplicate, within a single image, the detections received from each
                image pair.
        """
        self._dataset_root = dataset_root
        self._scene_name = scene_name
        self._aggregator: KeypointAggregatorBase = (
            KeypointAggregatorDedup() if deduplicate else KeypointAggregatorUnique()
        )

    def generate_correspondences(
        self,
        client: Client,
        images: List[Future],
        image_pairs: List[Tuple[int, int]],
        num_sampled_3d_points: int = 500,
    ) -> Tuple[List[Keypoints], Dict[Tuple[int, int], np.ndarray]]:
        """Apply the correspondence generator to generate putative correspondences.

        Args:
            client: Dask client, used to execute the front-end as futures.
            images: List of all images, as futures.
            image_pairs: Indices of the pairs of images to estimate two-view pose and correspondences for.
            num_sampled_3d_points: Number of 3d points to sample from the mesh surface.

        Returns:
            List of keypoints, with one entry for each input image.
            Putative correspondences as indices of keypoints (N,2), for pairs of images (i1,i2).
        """
        dataset_root = self._dataset_root
        scene_name = self._scene_name

        img_dir = f"{dataset_root}/{scene_name}"
        poses_fpath = f"{dataset_root}/{scene_name}_COLMAP_SfM.log"
        lidar_ply_fpath = f"{dataset_root}/{scene_name}.ply"
        colmap_ply_fpath = f"{dataset_root}/{scene_name}_COLMAP.ply"
        ply_alignment_fpath = f"{dataset_root}/{scene_name}_trans.txt"
        bounding_polyhedron_json_fpath = f"{dataset_root}/{scene_name}.json"
        loader = TanksAndTemplesLoader(
Contributor: This will have to be pinned to the machine with the input worker, right? Hence I was suggesting we create the correspondences on disk beforehand. But fine with it given the time constraint.

            img_dir=img_dir,
            poses_fpath=poses_fpath,
            lidar_ply_fpath=lidar_ply_fpath,
            ply_alignment_fpath=ply_alignment_fpath,
            bounding_polyhedron_json_fpath=bounding_polyhedron_json_fpath,
            colmap_ply_fpath=colmap_ply_fpath,
        )

        mesh = loader.reconstruct_mesh()

        # Sample random 3d points. This sampling must occur only once, to avoid clusters from repeated sampling.
        pcd = mesh.sample_points_uniformly(number_of_points=num_sampled_3d_points)
        pcd = mesh.sample_points_poisson_disk(number_of_points=num_sampled_3d_points, pcl=pcd)
        sampled_points = np.asarray(pcd.points)

        # TODO(jolambert): File Open3d bug to add pickle support for TriangleMesh.
        open3d_mesh_path = tempfile.NamedTemporaryFile(suffix=".obj").name
        open3d.io.write_triangle_mesh(filename=open3d_mesh_path, mesh=mesh)

        loader_future = client.scatter(loader, broadcast=False)
Contributor: We could just avoid futures for this file, I guess?

Collaborator (author): Hmm, wouldn't we lose all parallelization benefits then?

Contributor: Yes, but since we are just loading from disk, we would not lose much in terms of time, I think.


        def apply_synthetic_corr_generator(
            loader_: LoaderBase,
            camera_i1: CAMERA_TYPE,
            camera_i2: CAMERA_TYPE,
            open3d_mesh_fpath: str,
            points: np.ndarray,
        ) -> Tuple[Keypoints, Keypoints]:
            return loader_.generate_synthetic_correspondences_for_image_pair(
                camera_i1, camera_i2, open3d_mesh_fpath, points
            )

        pairwise_correspondence_futures = {
            (i1, i2): client.submit(
                apply_synthetic_corr_generator,
                loader_future,
                loader.get_camera(index=i1),
                loader.get_camera(index=i2),
                open3d_mesh_path,
                sampled_points,
            )
            for i1, i2 in image_pairs
        }

        pairwise_correspondences: Dict[Tuple[int, int], Tuple[Keypoints, Keypoints]] = client.gather(
            pairwise_correspondence_futures
        )

        keypoints_list, putative_corr_idxs_dict = self._aggregator.aggregate(keypoints_dict=pairwise_correspondences)
        return keypoints_list, putative_corr_idxs_dict
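The heavy lifting above is delegated to the loader's `generate_synthetic_correspondences_for_image_pair`, which ray-casts against the mesh. As a hedged illustration of the core idea, here is a numpy-only sketch (hypothetical helper, not GTSFM code): project shared 3d world points into two cameras and pair up the ones visible in both. Visibility here is just an in-front/in-bounds check, with no mesh occlusion test.

```python
import numpy as np


def synthetic_correspondences(points_w, K, wTc1, wTc2, image_size):
    """Project shared 3d points into two cameras; jointly visible points become correspondences.

    Args:
        points_w: (N,3) world points.
        K: (3,3) shared camera intrinsics.
        wTc1, wTc2: (4,4) camera-to-world poses for the two cameras.
        image_size: (height, width) in pixels.

    Returns:
        (M,2) keypoints in image 1, (M,2) keypoints in image 2, and (M,2) correspondence
        indices (the i-th keypoint in image 1 matches the i-th keypoint in image 2).
    """
    def project(wTc):
        cTw = np.linalg.inv(wTc)  # world -> camera transform
        p_c = (cTw[:3, :3] @ points_w.T + cTw[:3, 3:]).T
        in_front = p_c[:, 2] > 0  # point must be in front of the camera
        uv_h = (K @ p_c.T).T
        uv = uv_h[:, :2] / uv_h[:, 2:3]  # perspective division
        h, w = image_size
        in_bounds = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        return uv, in_front & in_bounds

    uv1, vis1 = project(wTc1)
    uv2, vis2 = project(wTc2)
    both = np.where(vis1 & vis2)[0]
    corr_idxs = np.stack([np.arange(len(both))] * 2, axis=1)
    return uv1[both], uv2[both], corr_idxs
```

The real loader additionally rejects points whose line of sight to either camera is blocked by the mesh, which is why it needs the serialized Open3D mesh on each worker.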