From c8336cfcaa27e01d3480da24d53ed23a95dc996a Mon Sep 17 00:00:00 2001
From: etaisella <74829220+etaisella@users.noreply.github.com>
Date: Mon, 6 May 2024 13:36:19 +0300
Subject: [PATCH] Update index.html

---
 index.html | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/index.html b/index.html
index 555a76e..48bcb57 100644
--- a/index.html
+++ b/index.html
@@ -37,8 +37,8 @@
-SPiC·E
-Structural Priors in 3D Diffusion Models using Cross-Entity Attention
+Spice·E
+Structural Priors in 3D Diffusion using Cross-Entity Attention
@@ -271,14 +271,14 @@

Abstract

However, time-consuming optimization procedures are required for synthesizing each sample, hindering their potential for democratizing 3D content creation. Conversely, 3D diffusion models now train on million-scale 3D datasets, yielding high-quality text-conditional 3D
-samples within seconds. In this work, we present SPiC·E - a neural network that adds structural guidance
+samples within seconds. In this work, we present Spice·E - a neural network that adds structural guidance
to 3D diffusion models, extending their usage beyond text-conditional generation. At its core, our framework introduces a cross-entity attention mechanism that allows for multiple entities (in particular, paired input and guidance 3D shapes) to interact via their internal representations within the denoising network. We utilize this mechanism for learning task-specific structural priors in 3D diffusion models from auxiliary guidance shapes. We show that our approach supports a variety of applications, including 3D stylization, semantic shape editing and text-conditional abstraction-to-3D, which transforms primitive-based abstractions
-into highly-expressive shapes. Extensive experiments demonstrate that SPiC·E achieves SOTA performance over
+into highly-expressive shapes. Extensive experiments demonstrate that Spice·E achieves SOTA performance over
these tasks while often being considerably faster than alternative methods. Importantly, this is accomplished without tailoring our approach for any specific task.

@@ -356,7 +356,7 @@

How does it work?

our proposed cross-entity attention mechanism (in red). This mechanism mixes their latent representations by carefully combining their Query functions, allowing for learning task-specific structural priors while preserving the model's generative capabilities.

-🔍 During inference, SPiC·E receives a guidance shape in addition to a target text prompt, enabling the generation of 3D shapes
+🔍 During inference, Spice·E receives a guidance shape in addition to a target text prompt, enabling the generation of 3D shapes
(represented as either a neural radiance field or a signed texture field) conditioned on both high-level text directives and low-level structural constraints.

📋 See our paper for more details on our cross-entity attention mechanism and how we apply it for incorporating structural priors
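For readers who want a concrete picture of the cross-entity attention idea described above, the sketch below shows one plausible way such a block could be wired in PyTorch. It is an illustration only, under assumed names and shapes (CrossEntityAttention, x_tokens, g_tokens, the residual update are all hypothetical); the actual Spice·E formulation, including how the entities' projections are combined, is given in the paper and code release.

# Minimal, assumption-based sketch of a cross-entity attention block in PyTorch.
# Not the Spice·E implementation: the class name, tensor shapes and the residual
# update are illustrative choices made for this example only.
import torch
import torch.nn as nn


class CrossEntityAttention(nn.Module):
    """Lets the tokens of the shape being denoised attend to a guidance shape's tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x_tokens: torch.Tensor, g_tokens: torch.Tensor) -> torch.Tensor:
        # x_tokens: latent tokens of the shape being denoised, shape (B, N, dim)
        # g_tokens: latent tokens of the guidance shape,        shape (B, M, dim)
        # Queries come from the denoised entity; keys/values span both entities,
        # so structural information from the guidance shape can flow into the update.
        context = torch.cat([x_tokens, g_tokens], dim=1)
        out, _ = self.attn(x_tokens, context, context)
        return x_tokens + out  # residual update keeps the base model's pathway intact


# Example usage: 1024 input-shape tokens and 1024 guidance tokens of width 512.
block = CrossEntityAttention(dim=512)
x = torch.randn(2, 1024, 512)
g = torch.randn(2, 1024, 512)
y = block(x, g)  # (2, 1024, 512)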