<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link href='https://fonts.googleapis.com/css?family=Noto+Sans' rel='stylesheet'>
<link href='https://fonts.googleapis.com/css?family=Indie+Flower' rel='stylesheet'>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="google-site-verification" content="xdvJxvo39Ei0nahgmgXGp9DCslFea8wH789x6mmAY-A" />
<meta property="og:site_name" content="Vox-E" />
<meta property="og:type" content="video.other" />
<meta property="og:title" content="Vox-E: Text-guided Voxel Editing of 3D Objects" />
<meta property="og:description" content="We use voxel grids and 2D diffusion to make local and global textual edits to 3D shapes" />
<meta property="og:url" content="https://tau-vailab.github.io/Vox-E/" />
<meta property="og:image" content="https://tau-vailab.github.io/Vox-E/images/voxe_sample.png" />
<meta property="article:publisher" content="https://tau-vailab.github.io/Vox-E/" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Vox-E: Text-guided Voxel Editing of 3D Objects" />
<meta name="twitter:description" content="We use voxel grids and 2D diffusion to make local and global textual edits to 3D shapes" />
<meta name="twitter:url" content="https://tau-vailab.github.io/Vox-E/" />
<meta name="twitter:image" content="https://tau-vailab.github.io/Vox-E/images/voxe_sample.png" />
<title>Vox-E: Text-guided Voxel Editing of 3D Objects</title>
<!-- <link rel="icon" href="../pics/wis_logo.jpg">-->
<link rel="icon" href="images/browser_icon.png">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/css/bulma.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/swiper@9/swiper-bundle.min.css">
<link href="style.css" rel="stylesheet" type="text/css">
</head>
<body>
<div class="page-container" >
<script src="https://cdn.jsdelivr.net/npm/swiper@9/swiper-bundle.min.js"></script>
<!-- title -->
<h1 class="ourh1" align="center">Vox-E</h1>
<h2 class="ourh2" align="center">Text-guided Voxel Editing of 3D Objects</h2>
<!-- authors and affiliations -->
<section class="authors_block">
<div class="authors" align="center">
<span class="author-block"><a href="https://etaisella.github.io/" target="_blank">Etai Sella</a><sup>1</sup>,</span>
<span class="author-block"><a href="https://www.linkedin.com/in/gal-fiebelman-b923031b4/" target="_blank">Gal Fiebelman</a><sup>1</sup>,</span>
<span class="author-block"><a href="https://phogzone.com/" target="_blank">Peter Hedman</a><sup>2</sup>,</span>
<span class="author-block"><a href="https://www.elor.sites.tau.ac.il/" target="_blank">Hadar Averbuch-Elor</a><sup>1</sup></span>
</div>
<div class="affiliations" align="center">
<span class="author-block"><sup>1</sup>Tel Aviv University, </span>
<span class="author-block"><sup>2</sup>Google Research</span>
</div>
</section>
<!-- authors and affiliations -->
<section class="conference">
<div class="conference" align="center">
<span class="conference-block">ICCV 2023</span>
</div>
</section>
<!-- link buttons -->
<div class="column has-text-centered">
<div class="publication-links" align="center">
<!-- arxiv link -->
<span class="link-block">
<a href="https://arxiv.org/abs/2303.12048/" class="paper-link" style="display: inline-block">
<button class="button">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>arXiv</span>
</button>
</a>
</span>
<!-- Github Link. -->
<span class="link-block">
<a href="https://github.com/TAU-VAILab/Vox-E" style="display: inline-block">
<button class="button">
<span class="icon">
<i class="fa fa-github"></i>
</span>
<span>Code</span>
</button>
</a>
</span>
<!-- Supp Link. -->
<span class="link-block">
<a href="supp/index.html" style="display: inline-block">
<button class="button">
<span class="icon">
<i class="fa fa-plus-square"></i>
</span>
<span>Supplementary Material</span>
</button>
</a>
</span>
</div>
</div>
<!-- slider -->
<!-- Slider main container -->
<div class="swiper">
<!-- Additional required wrapper -->
<div class="swiper-wrapper">
<!-- Slides -->
<!-- Slide 1 -->
<div class="swiper-slide">
<table class="slide-table" width="100%" align="center">
<tbody>
<tr>
<th colspan="2" width="90%" class="prompt_title_local" align="center">"A kangaroo on rollerskates"</th>
</tr>
<tr>
<td colspan="2">
<video loop autoplay muted playsinline width="95%" class="result-video">
<source src="./videos/kangaroo_rollerskates_small.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr>
<td width="45%">
<div align="center" class="video-label-slides">Initial</div>
</td>
<td width="41%">
<div align="center" class="video-label-slides">Edited </div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Slide 2 -->
<div class="swiper-slide">
<table class="slide-table" width="100%" align="center">
<tbody>
<tr>
<th colspan="2" width="90%" class="prompt_title" align="center">"A dog in low-poly video game style"</th>
</tr>
<tr>
<td colspan="2">
<video loop autoplay muted playsinline width="95%" class="result-video">
<source src="./videos/dog_lowpoly_small.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr>
<td width="45%">
<div align="center" class="video-label-slides">Initial</div>
</td>
<td width="41%">
<div align="center" class="video-label-slides">Edited </div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Slide 3 -->
<div class="swiper-slide">
<table class="slide-table" width="100%" align="center">
<tbody>
<tr>
<th colspan="2" width="90%" class="prompt_title_local" align="center">"An alien wearing a tuxedo"</th>
</tr>
<tr>
<td colspan="2">
<video loop autoplay muted playsinline width="95%" class="result-video">
<source src="./videos/alien_tux_small.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr>
<td width="45%">
<div align="center" class="video-label-slides">Initial</div>
</td>
<td width="41%">
<div align="center" class="video-label-slides">Edited </div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Slide 4 -->
<div class="swiper-slide">
<table class="slide-table" width="100%" align="center">
<tbody>
<tr>
<th colspan="2" width="90%" class="prompt_title" align="center">"A black swan"</th>
</tr>
<tr>
<td colspan="2">
<video loop autoplay muted playsinline width="95%" class="result-video">
<source src="./videos/duck_black_swan_small.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr>
<td width="45%">
<div align="center" class="video-label-slides">Initial</div>
</td>
<td width="41%">
<div align="center" class="video-label-slides">Edited </div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Slide 5 -->
<div class="swiper-slide">
<table class="slide-table" width="100%" align="center">
<tbody>
<tr>
<th colspan="2" width="90%" class="prompt_title_local" align="center">"A cactus in a pot"</th>
</tr>
<tr>
<td colspan="2">
<video loop autoplay muted playsinline width="95%" class="result-video">
<source src="./videos/ficus_cactus_small.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr>
<td width="45%">
<div align="center" class="video-label-slides">Initial</div>
</td>
<td width="41%">
<div align="center" class="video-label-slides">Edited </div>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- If we need pagination -->
<div class="swiper-pagination"></div>
<!-- If we need navigation buttons -->
<div class="swiper-button-prev"></div>
<div class="swiper-button-next"></div>
</div>
<!-- intro -->
<section class="into-paragraph-section" width="100%">
<div class="intro-container has-text-justified">
<div class="intro-paragraph">
<p>
Given multi-view images of an object, our technique generates volumetric edits from target text prompts, enabling
significant changes in geometry and appearance while faithfully preserving the input object.
Objects can be edited either globally or locally (local edits are highlighted in <span class="prompt_title_local">green</span>).
</p>
</div>
</div>
</section>
<!-- abstract -->
<section class="abstract-section" width="100%">
<div class="abstract-container has-text-justified">
<hr>
<h2 align="center">Abstract</h2>
<p class="has-text-justified">
Large-scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts.
This generative power has more recently been leveraged to perform text-to-3D synthesis.
In this work, we present a technique that harnesses the power of latent diffusion models for editing existing 3D objects.
Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it.
To guide the volumetric representation to conform to a target text prompt, we follow unconditional text-to-3D methods and optimize a Score Distillation Sampling (SDS) loss.
However, we observe that combining this diffusion-guided loss with an image-based regularization loss that encourages the representation not to deviate too strongly from the input object
is challenging, as it requires achieving two conflicting goals while viewing only structure-and-appearance coupled 2D projections.
Thus, we introduce a novel volumetric regularization loss that operates directly in 3D space, utilizing the explicit nature of our 3D representation to enforce correlation between the global structure of the original and edited object.
Furthermore, we present a technique that optimizes cross-attention volumetric grids to refine the spatial extent of the edits.
Extensive experiments and comparisons demonstrate the effectiveness of our approach in creating a myriad of edits that cannot be achieved by prior work.
</p>
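The Score Distillation Sampling (SDS) objective mentioned above can be pictured with a minimal, self-contained sketch. This is not the paper's implementation: `mock_denoiser`, the noise schedule, and the update loop are illustrative stand-ins, and the "render" is a plain array rather than an actual volume rendering.

```python
import numpy as np

rng = np.random.default_rng(0)

def sds_grad(render, denoiser, t, w=1.0):
    """Toy SDS gradient: noise the render at level t, ask the (mock)
    diffusion model to predict that noise, and return the difference.
    As in SDS, the denoiser is treated as a constant (no backprop
    through it)."""
    eps = rng.standard_normal(render.shape)               # injected noise
    noisy = np.sqrt(1.0 - t) * render + np.sqrt(t) * eps
    eps_pred = denoiser(noisy, t)                         # model's noise estimate
    return w * (eps_pred - eps)                           # d(loss)/d(render)

def mock_denoiser(noisy, t):
    # Stand-in for a text-conditioned diffusion model that believes
    # the clean image should be the constant 0.5 everywhere.
    return (noisy - np.sqrt(1.0 - t) * 0.5) / np.sqrt(t)

render = rng.random((8, 8, 3))      # stand-in for one rendered view
for _ in range(200):                # gradient descent on the render
    render -= 0.05 * sds_grad(render, mock_denoiser, t=0.5)

print(round(float(render.mean()), 3))   # the render drifts toward 0.5
```

In the full method this gradient would flow from rendered views back into the feature grid's parameters; here it is applied to the image directly just to show the mechanics.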
<div class="swipe_vid_container" align="center" width="100%">
<video loop autoplay muted playsinline width="60%" class="swipe_video">
<source src="./videos/swipe_mid_qual.mp4" type="video/mp4">
</video>
</div>
</div>
</section>
<!-- method -->
<section class="method-section" width="100%">
<div class="method-container">
<hr>
<h2 align="center">How does it work?</h2>
<div class="im_container has-text-justified" width="90%" align="center">
<img align="center" src="images/overview_official.png" alt="Overview" width="100%">
</div>
<p class="has-text-justified">
Given a set of posed multi-view images of an object (e.g., the dog shown above), we first model the
scene as a voxel grid in which each voxel contains learned features (the "Initial Feature Grid" above).
We then perform text-guided object editing by carefully applying a score distillation loss to an edited voxel grid.
Our key idea is to regularize this process in 3D space by encouraging structural similarity between the
edited grid and the initial grid. To further refine the spatial extent of the edits, we introduce an optional
refinement stage that uses 2D cross-attention maps, which roughly capture the regions associated with the target
edit, and lifts them to volumetric grids. These 3D cross-attention grids then serve as a signal for a binary
volumetric segmentation algorithm that splits the reconstructed volume into edited and non-edited regions,
allowing us to merge the features of the two grids and better preserve regions that should not be affected
by the textual edit.
</p>
<div class="attn-grid-vid-container">
</div>
</div>
</section>
<!-- BibTex -->
<section class="bib-section" width="100%">
<div class="bib-container">
<hr>
<h2 align="center">BibTeX</h2>
<div class="code-container">
<code>
@misc{sella2023voxe, <br>
  title={Vox-E: Text-guided Voxel Editing of 3D Objects}, <br>
  author={Etai Sella and Gal Fiebelman and Peter Hedman and Hadar Averbuch-Elor}, <br>
  year={2023}, <br>
  eprint={2303.12048}, <br>
  archivePrefix={arXiv}, <br>
  primaryClass={cs.CV} <br>
}
</code>
</div>
</div>
</section>
<!-- Acknowledgements -->
<!-- <section class="ack-section" width="100%"> -->
<!-- <div class="ack-container"> -->
<!-- <hr> -->
<!-- <h2 align="center">Acknowledgements</h2> -->
<!-- <p> -->
<!-- We would like to acknowledge such and such for this and that. -->
<!-- </p> -->
<!-- </div> -->
<!-- </section> -->
<p><br>
</p>
<p> </p>
<p> </p>
<p> </p>
</div>
<script>
const swiper = new Swiper('.swiper', {
autoplay: {
delay: 4000,
},
// Optional parameters
speed: 1000,
loop: true,
// If we need pagination
pagination: {
el: '.swiper-pagination',
},
// Navigation arrows
navigation: {
nextEl: '.swiper-button-next',
prevEl: '.swiper-button-prev',
},
});
</script>
</body></html>