<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Ayush Tewari</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Ayush Tewari is a Postdoc at MIT">
<link rel="stylesheet" href="css/main.css">
</head>
<body>
<table>
<tbody>
<tr>
<td class="bio">
<h1 class="name">Ayush Tewari</h1>
<div class="email">[email protected]</div>
<p>
{# <i>I will join the <a href="https://www.eng.cam.ac.uk/">Department of Engineering</a> at the <a href="https://www.cam.ac.uk/">University of Cambridge</a> as an Assistant Professor in 2025 and will be looking for PhD students. #}
{# Join me to understand and build models of visual perception. Feel free to reach out for more details. Apply <a href="https://www.postgraduate.study.cam.ac.uk/courses/directory/egegpdpeg">here</a>. </i><br/><br/> #}
I am an assistant professor at the University of Cambridge. I was previously a postdoctoral researcher at MIT CSAIL with <a href="http://billf.mit.edu">Bill Freeman</a>, <a href="http://web.mit.edu/cocosci/josh.html">Josh Tenenbaum</a>, and <a href="https://www.vincentsitzmann.com/">Vincent Sitzmann</a>.
I completed my Ph.D. at the <a href="https://mpi-inf.mpg.de/departments/visual-computing-and-artificial-intelligence/">Max Planck Institute for Informatics</a> with <a href="https://www.mpi-inf.mpg.de/~theobalt/">Christian Theobalt</a>.
<!--I received an <a href="https://www.mpg.de/prizes/otto-hahn-medal">Otto Hahn Medal</a> from the Max Planck Society for my PhD.-->
My research interests lie in visual perception. I develop methods that infer rich structured representations of the visual world from images and videos, much like the mental models humans infer to interact with and navigate their surroundings.
Check out some <a href="#talks">recent talks</a> for an overview. <br/><br/>
{{ metadata | safe }} <a href="#publications" >[Publications]</a> <a href="#talks" >[Talks]</a> </p>
</td>
<td class="photo">
<img src="assets/photo.png" alt="Photo of Ayush Tewari" width="100%"> <br/>
</td>
</tr>
</tbody>
</table>
<h2 id="publications">Publications</h2>
<ul class="publications">
{% for pub in publications %}
<li class="pub">
<table>
<tbody>
<tr>
<td class="pub-photo">
{%- if 'mp4' in pub.teaser %}
<video width="100%" autoplay loop muted>
<source src="{{ pub.teaser }}" type="video/mp4" />
</video>
{%- else -%}
<img src="{{ pub.teaser }}" alt="{{ pub.title }}" width="100%">
{%- endif -%}
</td>
<td>
<span class="pub-title">{{ pub.title }}</span><br/>
{{ pub.conference | safe }} <br/>
<span>{{ pub.authors | safe }}</span><br/>
{{ pub.data | safe }}<br/>
</td>
</tr>
</tbody>
</table>
</li>
{% endfor %}
</ul>
<h2 id="talks">Talks</h2>
<ul class="talks">
{% for talk in talks %}
<li class="talk">
<span class="talk-title">{{ talk.title }}</span>, {{ talk.venue | safe }}<br/>
{{ talk.data | safe }}<br/>
</li>
{% endfor %}
</ul>
</body>
</html>