<!DOCTYPE html>
<html lang="en">
<head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>IRISformer</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- <link rel="icon" type="image/png" href="./index_files/icon.png"> -->
<link rel="stylesheet" href="./index_files/bootstrap.min.css">
<link rel="stylesheet" href="./index_files/font-awesome.min.css">
<link rel="stylesheet" href="./index_files/codemirror.min.css">
<link rel="stylesheet" href="./index_files/app.css">
<link rel="stylesheet" href="./index_files/bootstrap.min(1).css">
<script type="text/javascript" async="" src="./index_files/analytics.js"></script>
<script type="text/javascript" async="" src="./index_files/analytics(1).js"></script>
<script async="" src="./index_files/js"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-110862391-3');
</script>
<script src="./index_files/jquery.min.js"></script>
<script src="./index_files/bootstrap.min.js"></script>
<script src="./index_files/codemirror.min.js"></script>
<script src="./index_files/clipboard.min.js"></script>
<script src="./index_files/app.js"></script>
</head>
<body>
<div class="container" id="main">
<div class="row">
<h1 class="col-md-12 text-center">
<b>IRISformer</b>: Dense Vision Transformers <br>for Single-Image Inverse Rendering in Indoor Scenes
<br />
<small>
CVPR 2022 (<b>Oral presentation</b>)
</small>
<br />
</h1>
</div>
<div class="row">
<div class="col-md-12 text-center">
<ul class="list-inline">
<li>
<a href="https://jerrypiglet.github.io/">
Rui Zhu
</a><sup>1</sup>
</li>
<li>
<a href="https://sites.google.com/a/eng.ucsd.edu/zhengqinli">
Zhengqin Li
</a><sup>1</sup>
</li>
<li>
<a href="https://scholar.google.com/citations?user=vfeeYI0AAAAJ&hl=en">
Janarbek Matai
</a><sup>2</sup>
</li>
<li>
<a href="http://www.porikli.com/">
Fatih Porikli
</a><sup>2</sup>
</li>
<li>
<a href="https://cseweb.ucsd.edu/~mkchandraker/">
Manmohan Chandraker
</a><sup>1</sup>
</li>
</ul>
</div>
</div>
<div class="row">
<div class="col-md-12 text-center">
<ul class="list-inline">
<li>
<sup>1</sup>UC San Diego
</li>
<li>
<sup>2</sup>Qualcomm AI Research
</li>
</ul>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2 text-center">
<ul class="nav nav-pills nav-justified">
<li>
<a href="https://arxiv.org/pdf/to-be-released.pdf">
<img src="./index_files/paper.png" height="120px"><br>
<h4><strong>Paper</strong></h4>
</a>
</li>
<li>
<a href="https://www.youtube.com/watch?v=Wy3P4LivzAc">
<img src="./index_files/youtube_icon_dark.png" height="120px"><br>
<h4><strong>Technical Video</strong></h4>
</a>
</li>
<li>
<a href="https://github.com/ViLab-UCSD/IRISformer">
<img src="./index_files/github_pad.png" height="120px"><br>
<h4><strong>Source Code</strong></h4>
</a>
</li>
</ul>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
   
</h3>
<!-- <video id="v0" width="100%" autoplay="" loop="" muted="" controls="">
<source src="https://www.youtube.com/watch?v=Wy3P4LivzAc" type="video/mp4">
</video> -->
<!-- <iframe width="100%" height="100%" src="https://www.youtube.com/embed/Wy3P4LivzAc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> -->
<div class="video-container">
<iframe width="560" height="315" src="https://www.youtube.com/embed/Wy3P4LivzAc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Overview
</h3>
<figure>
<img src="./index_files/teaser.png" class="img-responsive" alt="overview">
<figcaption>
<i>Given a single real-world image, <b>IRISformer</b> simultaneously infers material (albedo and roughness), geometry (depth and normals), and spatially-varying lighting of the scene. These estimates enable virtual object insertion, where we demonstrate high-quality photorealistic renderings under challenging lighting conditions compared to previous work (Li et al., 2021). The learned attention is also visualized for selected patches, indicating the benefits of global attention for reasoning about distant interactions.</i>
</figcaption>
</figure>
<br>
<p class="text-justify">
Indoor scenes exhibit significant appearance variations due to myriad interactions between arbitrarily diverse object shapes, spatially-changing materials, and complex lighting. Shadows, highlights, and inter-reflections caused by visible and invisible light sources require reasoning about long-range interactions for inverse rendering, which seeks to recover the components of image formation, namely, shape, material, and lighting. <br>
In this work, our intuition is that the long-range attention learned by transformer architectures is ideally suited to solve longstanding challenges in single-image inverse rendering. We demonstrate this with a specific instantiation of a dense vision transformer, <b>IRISformer</b>, which excels at both the single-task and multi-task reasoning required for inverse rendering. Specifically, we propose a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness and lighting from a single image of an indoor scene. Our extensive evaluations on benchmark datasets demonstrate state-of-the-art results on each of the above tasks, enabling applications like object insertion and material editing in a single unconstrained real image, with greater photorealism than prior work.
</p>
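<p class="text-justify">
As a rough illustration of the paragraph above (a hypothetical sketch, not the authors' released code; the module sizes, head structure, and the 12-channel lighting stand-in are assumptions), the snippet below shows how a dense vision transformer with one shared encoder and a lightweight decoder head per modality could predict depth, normals, albedo, roughness, and lighting maps in a single forward pass, with global attention over all image patches.
</p>
<pre><code>
# Hypothetical multi-task sketch in PyTorch: one shared ViT-style encoder,
# one small convolutional head per inverse-rendering modality.
import torch
import torch.nn as nn

class TinyDenseTransformer(nn.Module):
    def __init__(self, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # One head per modality; channel counts are illustrative only
        # (the lighting output is kept to a coarse 12 channels here).
        self.heads = nn.ModuleDict({
            'depth': self._head(dim, 1), 'normals': self._head(dim, 3),
            'albedo': self._head(dim, 3), 'roughness': self._head(dim, 1),
            'lighting': self._head(dim, 12),
        })

    def _head(self, dim, out_ch):
        return nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Upsample(scale_factor=self.patch, mode='bilinear', align_corners=False),
            nn.Conv2d(dim, out_ch, 1))

    def forward(self, img):
        tok = self.embed(img)                               # B, dim, H/p, W/p
        b, c, h, w = tok.shape
        tok = self.encoder(tok.flatten(2).transpose(1, 2))  # global attention over all patches
        feat = tok.transpose(1, 2).reshape(b, c, h, w)
        return {name: head(feat) for name, head in self.heads.items()}

preds = TinyDenseTransformer()(torch.randn(1, 3, 256, 256))
print({k: tuple(v.shape) for k, v in preds.items()})
</code></pre>
<p class="text-justify">
The real IRISformer decoders and lighting representation are substantially richer; the sketch only illustrates the core idea that one encoder with global attention can feed all five per-pixel outputs at once.
</p>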
</div>
</div>
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Highlights
</h3>
<p class="text-justify">
<ul>
<li>
Given a single real-world image, inferring <b>material</b> (albedo and roughness), <b>geometry</b> (depth and normals), and <b>spatially-varying lighting</b> of the scene.
</li>
<li>
Two-stage model design with <b>Transformer-based</b> encoder-decoders (see the configuration sketch after this list):
<ol style="list-style-type: lower-alpha; padding-bottom: 0;">
<li style="margin-left:0em"><b>multi-task setting</b>: sharing encoder-decoders with smaller model;</li>
<li style="margin-left:0em; padding-bottom: 0;"><b>single-task setting</b>: independent encoder-decoders with better accuracy.</li>
</ol>
</li>
<li>
Demonstrating the benefits of <b>global attention</b> to reason about long-range interactions.
</li>
<li>
<b>State-of-the-art results</b> in:
<ol style="list-style-type: lower-alpha; padding-bottom: 0;">
<li style="margin-left:0em">per-pixel inverse rendering tasks on OpenRooms dataset;</li>
<li style="margin-left:0em">albedo estimation on IIW dataset;</li>
<li style="margin-left:0em; padding-bottom: 0;">object insertion on natural image datasets.</li>
</ol>
</li>
</ul>
</p>
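<p class="text-justify">
To make the two settings above concrete, here is a hypothetical wiring sketch (the function names, layer choices, and 4-channel placeholder heads are illustrative assumptions, not the released training code): the multi-task setting shares one encoder-decoder across all outputs and is therefore smaller, while the single-task setting builds an independent encoder-decoder per modality and trades size for accuracy.
</p>
<pre><code>
# Hypothetical sketch of the two IRISformer-style settings (PyTorch).
import torch.nn as nn

MODALITIES = ('depth', 'normals', 'albedo', 'roughness', 'lighting')

def make_encoder_decoder(dim=256):
    # Stand-in for one dense transformer encoder-decoder.
    return nn.Sequential(nn.Conv2d(3, dim, 16, 16), nn.GELU(), nn.Conv2d(dim, dim, 1))

def build(setting='multi', dim=256):
    if setting == 'multi':
        # Multi-task: one shared backbone plus a thin head per modality (smaller model).
        shared = make_encoder_decoder(dim)
        return nn.ModuleDict({m: nn.Sequential(shared, nn.Conv2d(dim, 4, 1)) for m in MODALITIES})
    # Single-task: an independent encoder-decoder per modality (better accuracy, more parameters).
    return nn.ModuleDict({m: nn.Sequential(make_encoder_decoder(dim), nn.Conv2d(dim, 4, 1)) for m in MODALITIES})

print(sum(p.numel() for p in build('multi').parameters()),
      sum(p.numel() for p in build('single').parameters()))
</code></pre>
<p class="text-justify">
Printing the parameter counts shows the shared variant of this sketch is roughly one fifth the size of the five independent branches, which is the size-versus-accuracy trade-off the list above refers to.
</p>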
</div>
</div>
<!-- <div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Download
</h3>
<p class='text-justify'>To download the dataset, please send your request to the email <a>[email protected]</a>. A download link will be sent to you once the dataset is released.
</p>
</div>
</div> -->
<div class="row">
<div class="col-md-8 col-md-offset-2">
<h3>
Acknowledgments
</h3>
We acknowledge support from NSF CAREER 1751365, NSF IIS 2110409, and NSF CHASE-CI, generous support from Qualcomm, as well as gifts from Adobe and a Google Research Award. The website template was borrowed from <a href="http://mgharbi.com/">Michaël Gharbi</a>.
<p></p>
</div>
</div>
</div>
</body></html>