Stars
React + Next.js template for research websites (for PhD students, researchers, etc.)
Official repo for paper "Structured 3D Latents for Scalable and Versatile 3D Generation".
Differentiable ODE solvers with full GPU support and O(1)-memory backpropagation.
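This description matches rtqichen/torchdiffeq. As a quick orientation (not taken from that repo's README), a minimal sketch of how an adjoint-based solve is typically driven; the toy vector field and shapes are illustrative assumptions:

```python
import torch
from torchdiffeq import odeint_adjoint as odeint  # adjoint method gives O(1)-memory backprop

class ODEFunc(torch.nn.Module):
    """Learnable vector field f(t, y) defining dy/dt (toy example)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2)
        )

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
y0 = torch.randn(16, 2, requires_grad=True)  # batch of initial states
t = torch.linspace(0.0, 1.0, 10)             # integration time points
yt = odeint(func, y0, t)                     # solution, shape (10, 16, 2)
yt[-1].sum().backward()                      # gradients computed via the adjoint ODE, not by storing the solver trajectory
```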
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
HunyuanVideo: A Systematic Framework For Large Video Generation Model
Open-source and strong foundation image recognition models.
A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes practical examples for both text and image modalities.
Implementation of XFeat (CVPR 2024). Do you need robust and fast local feature extraction? You are in the right place!
fabio-sim / LightGlue-ONNX
Forked from cvg/LightGlue. ONNX-compatible LightGlue: Local Feature Matching at Light Speed. Supports TensorRT, OpenVINO
Code for MegaSynth: Scaling Up 3D Scene Reconstruction with Synthesized Data (CVPR 2025)
A generative world for general-purpose robotics & embodied AI learning.
Doppelgangers++: Improved Visual Disambiguation with Geometric 3D Features
[CVPR 2025] Original implementation of "3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes"
Unifying 3D Mesh Generation with Language Models
This is the official code release for our work, Denoising Vision Transformers.
Create beautiful diagrams just by typing notation in plain text.
Official implementation of "ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis"
pySLAM is a visual SLAM pipeline in Python for monocular, stereo and RGBD cameras. It supports many modern local and global features, different loop-closing methods, and a volumetric reconstruction pipeline.
GIM: Learning Generalizable Image Matcher From Internet Videos (ICLR 2024 Spotlight)
ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance.
[Image and Vision Computing (Vol.147 Jul. '24)] Interactive Natural Image Matting with Segment Anything Models
DN-Splatter + AGS-Mesh: Depth and Normal Priors for Gaussian Splatting
[CAAI AIR'24] Bilateral Reference for High-Resolution Dichotomous Image Segmentation