<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ARTNet</title>
<style>
body {
font-family: Arial, sans-serif;
margin: 0;
padding: 0;
background-color: #f5f5f5;
text-align: center;
}
/* Flex container for title and image */
.title-container {
display: flex;
justify-content: center;
align-items: center;
margin-top: 20px;
}
h1 {
font-size: 36px;
color: #000000; /* Adjust to match the colors of the lung image */
margin-right: 15px;
}
h2 {
font-size: 20px;
margin-bottom: 0;
}
.authors {
margin-bottom: 20px;
}
.authors a {
color: #0066cc;
text-decoration: none;
margin-right: 15px;
}
.authors a:hover {
text-decoration: underline;
}
.gif-container img {
width: 70%;
max-width: 600px;
border: 2px solid #ccc;
margin-bottom: 20px;
}
.abstract {
margin: 0 auto;
width: 80%;
background-color: white;
padding: 20px;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}
/* Video container */
.videos {
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
margin: 20px auto;
width: 80%;
}
.videos video {
width: 80%;
height: auto;
border: 2px solid #ccc;
margin-bottom: 20px;
}
.intro-video video {
width: 80%;
height: auto;
border: 2px solid #ccc;
margin-bottom: 20px;
}
/* Image container */
.images {
margin: 20px auto;
width: 80%;
}
.images img {
width: 50%;
height: auto;
border: 2px solid #ccc;
margin-bottom: 20px;
}
/* Logo styling */
.logo {
width: 100px;
height: auto;
}
</style>
</head>
<body>
<!-- Title and logo on one line -->
<div class="title-container">
<h1>ARTNet</h1>
<img class="logo" src="art_lung_3.png" alt="ARTNet Logo">
</div>
<h2>An Artifact-Resilient Translation Network for Endoluminal Navigation</h2>
<div class="authors">
<a>Junyang Wu</a>
<a>Minghui Zhang</a>
<a>Fangfang Xie</a>
<a>Jiayuan Sun</a>
<a>Yun Gu</a>
<a>Guang-zhong Yang</a>
<br>Shanghai Jiao Tong University
</div>
<!-- Enlarged intro video -->
<div class="intro-video">
<video controls>
<source src="intro_video.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
<!-- Abstract Section -->
<div class="abstract">
<h1>Abstract</h1>
<p>
Domain adaptation, which bridges the distributions across different modalities, plays a crucial role in multimodal medical image analysis. In endoscopic imaging, combining pre-operative data with intra-operative imaging is important for surgical planning and navigation. However, existing domain adaptation methods are hampered by distribution shift caused by in vivo artifacts, necessitating robust techniques for aligning noisy patient endoscopic videos with clean virtual images reconstructed from pre-operative tomographic data for pose estimation during surgical guidance.
This paper presents an Artifact-Resilient image Translation method (ARTNet) for this purpose. The method incorporates a novel “local-global” translation framework and a noise-resilient feature extraction strategy. The former decouples the image translation process into a local step for feature denoising and a global step for style transfer. For feature extraction, a new contrastive learning strategy is proposed that extracts noise-resilient features for establishing robust correspondence across domains. Detailed validation on both public and in-house clinical datasets demonstrates improved performance compared to the current state of the art.
</p>
</div>
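As a concrete illustration of the decoupled “local-global” idea (not the authors' implementation), the sketch below uses a 3×3 median filter as a stand-in for the learned local denoiser and first-order statistics matching as a stand-in for learned global style transfer; all function names and parameters are assumptions for illustration only:

```python
import statistics

def local_denoise(img):
    """Local step: 3x3 median filter over interior pixels, a stand-in
    for learned suppression of in vivo artifacts (highlights, bubbles)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[y][x] for y in (i - 1, i, i + 1)
                                for x in (j - 1, j, j + 1)]
            out[i][j] = statistics.median(window)
    return out

def global_restyle(img, target_mean, target_std):
    """Global step: match the mean and standard deviation of the clean
    virtual-image domain, a stand-in for learned global style transfer."""
    flat = [v for row in img for v in row]
    mean = statistics.mean(flat)
    std = statistics.pstdev(flat) or 1.0  # avoid division by zero
    return [[(v - mean) / std * target_std + target_mean for v in row]
            for row in img]

def translate(img, target_mean=0.5, target_std=0.1):
    # Decoupled translation: denoise locally first, then restyle globally.
    return global_restyle(local_denoise(img), target_mean, target_std)
```

Running `translate` on an image with a bright artifact spike removes the spike in the local step before the global step maps intensities into the target domain's statistics, mirroring the paper's two-stage decomposition.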
<!-- Video section -->
<div class="videos">
<h1>Watch a 4-minute overview video</h1>
<video controls>
<source src="Supplementary Movie 1.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
<!-- More videos -->
<div class="videos">
<h1>ARTNet enables depth estimation without real depth labels</h1>
<video controls>
<source src="depth_video2.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<h1>ARTNet enables accurate pose estimation</h1>
<video controls>
<source src="video_demo_with_filter.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
<!-- Image display section -->
<div class="images">
<h1>Methods</h1>
<p>
The main workflow of the proposed method highlights two primary components: a “local-global” translation framework and a noise-resilient feature extraction strategy.
The former decouples the image translation process into a local step for feature denoising and a global step for style transfer. For feature extraction, a new contrastive learning strategy is proposed that extracts noise-resilient features for establishing robust correspondence across domains.
</p>
<img src="method.png" alt="Methods diagram">
</div>
</body>
</html>