<!DOCTYPE html>
<!-- saved from url=(0022)http://localhost:6419/ -->
<html lang="en" class="gr__localhost"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Resume</title>
<link rel="icon" href="http://localhost:6419/__/grip/static/favicon.ico">
<link rel="stylesheet" href="./profile_files/site-aa41a245cc5c75e6a9cc3d5dc3b187e5.css">
<link rel="stylesheet" href="./profile_files/github-576363b813d46ddb96ed5901a4933f28.css">
<link rel="stylesheet" href="./profile_files/frameworks-9b5314213e37056ed87b0418056c4f2c.css">
<link rel="stylesheet" href="./profile_files/profile.css">
<link rel="stylesheet" href="./profile_files/octicons.css">
<style>
/* Page tweaks */
.preview-page {
margin-bottom: 64px;
}
/* User-content tweaks */
.timeline-comment-wrapper > .timeline-comment:after,
.timeline-comment-wrapper > .timeline-comment:before {
content: none;
max-width: 920px;
}
/* User-content overrides */
.discussion-timeline.wide {
width: 920px;
margin-right: auto;
margin-left: auto;
}
</style>
</head>
<body data-gr-c-s-loaded="true">
<div class="page">
<div id="preview-page" class="preview-page" data-autorefresh-url="/__/grip/refresh/">
<div role="main" class="main-content">
<div class="container new-discussion-timeline experiment-repo-nav">
<div class="repository-content">
<div class="issues-listing" data-pjax="">
<div id="show_issue" class="js-issues-results">
<div id="discussion_bucket" class="clearfix">
<div class="timeline-comment-wrapper js-comment-container">
<div class="comment timeline-comment js-comment js-task-list-container is-task-list-enabled">
<div class="comment-content">
<div class="edit-comment-hide">
<div class="comment-body markdown-body markdown-format js-comment-body" id="grip-content">
<h2>
<a id="user-content-profile" class="anchor" href="http://localhost:6419/#profile" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Profile</h2>
<div class="row">
<div class="left_column">
<p>
<a href="./profile_files/portrait.jpg" target="_blank" rel="noopener noreferrer"><img src="./profile_files/portrait.jpg" alt="portrait" width="200" style="max-width:100%; margin-left:20px"></a>
</p>
</div>
<div class="right_column">
<p style="font-size:20px">표정우(Jungwoo Pyo)</p>
<p style="font-size:15px">
1994.05.12<br />
E-mail: [email protected]<br />
H.P. : +82-10-2681-1137<br />
Github: <a href="https://github.com/jw-pyo">https://github.com/jw-pyo</a><br />
Homepage: <a href="https://jw-pyo.github.io">https://jw-pyo.github.io</a><br />
<b>* I will soon complete my master's degree and am looking for a company to join as a technical research personnel (전문연구요원, alternative military service).</b><br /><br />
Live As Greedy: just as a greedy algorithm reaches the global optimum by repeatedly choosing the local optimum, I believe that doing my best where I am now will eventually bring me to my goal. With this motto, I am an engineer who enjoys learning new things.</p>
</div>
</div>
<h2>
<a id="user-content-education" class="anchor" href="http://localhost:6419/#education" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Education</h2>
<ul>
<li>2017.03 - 2020.02, M.S. in Electrical & Computer Engineering, Seoul National University (In-memory Database Lab)</li>
<li>2013.03 - 2017.02, B.S. in Electrical & Computer Engineering, Seoul National University</li>
</ul>
<h2>
<a id="user-content-career--internship" class="anchor" href="http://localhost:6419/#career--internship" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Career & Internship</h2>
<ul>
<li>2018.03 - 2019, Member of Decipher, the Seoul National University blockchain society (1st and 2nd cohorts)</li>
<li>2016.07 - 2016.08, SDK part intern, PMO Office, SW Center, CTO Division, LG Electronics</li>
</ul>
<h2>
<a id="user-content-projects" class="anchor" href="http://localhost:6419/#projects" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Projects</h2>
<ul>
<li><strong>2019, An Attention-Based Speaker Naming Method for Online Adaptation in Non-Fixed Scenarios (to be presented at the <em>AAAI 2020 Workshop (WICRS)</em>)</strong></li>
</ul>
<p align="center">
<a href="./profile_files/speaker_naming.png" target="_blank" rel="noopener noreferrer"><img src="./profile_files/speaker_naming.png" alt="speaker naming" width="450" height="200" style="max-width:100%;"></a>
</p>
<p align="center">
<a href="./profile_files/speaker_naming_architecture.png" target="_blank" rel="noopener noreferrer"><img src="./profile_files/speaker_naming_architecture.png" alt="speaker naming architecture" width="450" height="200" style="max-width:100%;"></a>
</p>
<p align="center">
<em> ▲ Overall architecture of attention-based speaker naming method</em>
</p>
<ul><ul>
<li>Performed the speaker naming task: localizing speakers' faces in movies and dramas and identifying each speaker's ID using face-voice feature embeddings</li>
<li>Unlike existing gradient-based methods for speaker naming, proposed a method that identifies target data through a linear combination of similarities between prior knowledge and the target data, computed by an attention module without any gradient update process</li>
<li>Existing methods require ample training data and long training times to build a model; the proposed method maintains a comparable level of accuracy while greatly reducing model training time (10x-100x)</li>
<li>The model can be built with less training data (5 face-voice pairs per ID) than gradient-based methods require</li>
<li>Enables online adaptation: new data obtained after the model is deployed can be added to the attention module and used as part of the knowledge base</li>
</ul></ul>
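<p>The attention-based identification described above can be sketched roughly as follows. This is an illustrative NumPy toy, not the paper's actual implementation; the function name, embedding dimensions, and dot-product similarity are assumptions.</p>

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_identify(query, keys, labels, num_ids):
    """Identify a speaker by attending over stored (embedding, ID) pairs.

    query:  (d,) face-voice embedding of the target segment
    keys:   (n, d) embeddings already in the knowledge base
    labels: (n,) integer speaker ID for each stored embedding
    """
    sims = keys @ query            # similarity to each prior example
    weights = softmax(sims)        # attention weights, no gradient step
    one_hot = np.eye(num_ids)[labels]
    scores = weights @ one_hot     # linear combination of prior labels
    return int(scores.argmax())

# Online adaptation: new face-voice pairs are simply appended to
# `keys`/`labels` after deployment, with no retraining.
```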
<ul>
<li><strong>2019, Multi-Domain Networks for Object Detection in Road Environment</strong></li>
</ul>
<p align="center">
<a href="./profile_files/example_mdnet.png" target="_blank" rel="noopener noreferrer"><img src="./profile_files/example_mdnet.png" alt="mdnet" width="450" height="200" style="max-width:100%;"></a>
</p>
<ul><ul>
<li>Built a multi-domain object detection model that recognizes vehicles on the road, using YOLOv3 as the backbone network</li>
<li>Split the network into multiple branches that share backbone weights, and trained each branch on a different road condition (weather, time of day)</li>
<li>Used the <a href="https://bair.berkeley.edu/blog/2018/05/30/bdd/" rel="nofollow">BDD100k</a> dataset</li>
<li>Improved mAP over the baseline (single-branch network)</li>
<li>data preprocessing: Python, core: Python (PyTorch)</li>
<li>demo link: <a href="https://youtu.be/CZ_VfzbysHA" rel="nofollow">https://youtu.be/CZ_VfzbysHA</a>
</li>
<li>
<p align="center">
<a href="./profile_files/mAP_mdnet.png" target="_blank" rel="noopener noreferrer"><img src="./profile_files/mAP_mdnet.png" alt="mAP comparison between multi-domain network and single-domain network" width="450" style="max-width:100%;"></a>
</p>
<p align="center">
<em> ▲ mAP comparison between multi-domain network and single-domain network</em>
</p>
</li>
</ul></ul>
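<p>A minimal PyTorch sketch of the shared-backbone, per-domain-branch idea: the tiny convolutional backbone below stands in for YOLOv3, and the class name, domain labels, and output shapes are illustrative assumptions.</p>

```python
import torch
import torch.nn as nn

class MultiDomainDetector(nn.Module):
    """Shared feature extractor with one detection head per road condition."""

    def __init__(self, domains=("clear_day", "rainy", "night"), feat_dim=32):
        super().__init__()
        # Shared backbone (a stand-in for YOLOv3's feature extractor)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # One branch per domain; only the matching branch is used per sample
        self.heads = nn.ModuleDict({
            d: nn.Conv2d(feat_dim, 5, 1) for d in domains  # (x, y, w, h, obj)
        })

    def forward(self, images, domain):
        feats = self.backbone(images)      # weights shared across all domains
        return self.heads[domain](feats)   # domain-specific prediction

model = MultiDomainDetector()
out = model(torch.randn(2, 3, 64, 64), "night")
```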
<ul>
<li>
<strong>2018.08.24 - 08.25, Queryable blockchain: 2nd prize at the Zhongan Insurance hackathon in Shanghai, China</strong><a href="https://jw-pyo.github.io/blockchain,/databases/2018/08/26/zhongan_hackathon.html">(post)</a>
</li>
</ul>
<ul><ul>
<li>demo link: <a href="https://youtu.be/EzoG1hWP9eA" rel="nofollow">https://youtu.be/EzoG1hWP9eA</a>
</li>
<li>core(blockchain): C++11, network & client: Python, JavaScript</li>
</ul></ul>
<ul>
<li><strong>2017, Crowd path visualization of commuting-hour traffic using smartcard data</strong></li>
</ul>
<ul><ul>
<li>Used one full day of Seoul metropolitan-area transit-card tagging data from 2016.06.17(Sat) (about 10 million rows)</li>
<li>Visualized the direction and volume of passengers traveling by subway and bus</li>
<li>Results can be filtered by time and tagging count</li>
<li>data processing: Python, visualization: Tableau</li>
<li><a href="https://public.tableau.com/profile/.3518#!/vizhome/bus_v0_2/1?publish=yes" rel="nofollow">subway demo</a></li>
<li><a href="https://public.tableau.com/profile/.3518#!/vizhome/bus_v0_2_onlybus/1?publish=yes" rel="nofollow">bus demo</a></li>
<p align="center">
<a href="./profile_files/smartcard_subway.png" target="_blank" rel="noopener noreferrer"><img src="./profile_files/smartcard_subway.png" alt="example of visualization for smartcard:subway" width="450" style="max-width:100%;"></a>
</p>
<p align="center">
<em> ▲ example of crowd's subway trajectory visualization based on smartcard data </em>
</p>
</ul></ul>
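<p>The filtering and aggregation step behind such a visualization might look like the following pandas sketch; the column names and toy rows are assumptions, not the actual dataset schema.</p>

```python
import pandas as pd

# Toy stand-in for the smartcard tagging log (real data: ~10M rows, one day)
taps = pd.DataFrame({
    "mode":   ["subway", "bus", "subway", "subway"],
    "hour":   [8, 8, 18, 8],
    "origin": ["Gangnam", "Jamsil", "Seoul Stn", "Gangnam"],
    "dest":   ["Seoul Stn", "Gangnam", "Gangnam", "Seoul Stn"],
})

# Filter to the morning rush by mode, then count passengers per
# origin-to-destination flow: the table a dashboard would visualize.
rush = taps[(taps["hour"] == 8) & (taps["mode"] == "subway")]
flows = (rush.groupby(["origin", "dest"]).size()
             .reset_index(name="count")
             .sort_values("count", ascending=False))
```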
<h2>
<a id="user-content-skills--experiences" class="anchor" href="http://localhost:6419/#skills--experiences" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Skills & Experiences</h2>
<ul>
<li>Python(Tensorflow, Pytorch)</li>
<li>C, C++</li>
<li>RDBMS: PostgreSQL, MySQL</li>
<li>Blockchain: Ethereum, Quorum, Solidity</li>
</ul>
<h2>
<a id="user-content-publications" class="anchor" href="http://localhost:6419/#publications" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Publications</h2>
<ul>
<li>Jungwoo Pyo, Joohyun Lee, Youngjune Park, Tien-Cuong Bui, Sang Kyun Cha, <a href="http://arxiv.org/abs/1912.00649"><b>An Attention-Based Speaker Naming Method for Online Adaptation in Non-Fixed Scenarios</b></a>, AAAI 2020 Workshop on Interactive and Conversational Recommendation Systems (WICRS), Feb 2020, New York, USA.</li>
<li><a href="https://medium.com/decipher-media/zero-knowledge-proof-chapter-1-introduction-to-zero-knowledge-proof-zk-snarks-6475f5e9b17b">An article on zero-knowledge proofs (zk-SNARKs)</a></li>
</ul></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div> </div>
</div><script>
function showCanonicalImages() {
var images = document.getElementsByTagName('img');
if (!images) {
return;
}
for (var index = 0; index < images.length; index++) {
var image = images[index];
if (image.getAttribute('data-canonical-src') && image.src !== image.getAttribute('data-canonical-src')) {
image.src = image.getAttribute('data-canonical-src');
}
}
}
function scrollToHash() {
if (location.hash && !document.querySelector(':target')) {
var element = document.getElementById('user-content-' + location.hash.slice(1));
if (element) {
element.scrollIntoView();
}
}
}
function autorefreshContent(eventSourceUrl) {
var initialTitle = document.title;
var contentElement = document.getElementById('grip-content');
var source = new EventSource(eventSourceUrl);
var isRendering = false;
source.onmessage = function(ev) {
var msg = JSON.parse(ev.data);
if (msg.updating) {
isRendering = true;
document.title = '(Rendering) ' + document.title;
} else {
isRendering = false;
document.title = initialTitle;
contentElement.innerHTML = msg.content;
showCanonicalImages();
}
}
source.onerror = function(e) {
// Error events carry no readyState; inspect the EventSource itself
if (source.readyState === EventSource.CLOSED && isRendering) {
isRendering = false;
document.title = initialTitle;
}
}
}
window.onhashchange = function() {
scrollToHash();
}
window.onload = function() {
scrollToHash();
}
showCanonicalImages();
var autorefreshUrl = document.getElementById('preview-page').getAttribute('data-autorefresh-url');
if (autorefreshUrl) {
autorefreshContent(autorefreshUrl);
}
</script>
</body></html>