---
layout: page
title: Opportunities
og-description: Job, Funding, and other opportunities in the AI safety space.
nav-menu: true
order: 4
---
<!-- Main -->
<div id="main" class="alt">
<section class="bg-gray">
<div class="inner">
<h3 id="ai_safety"><a href="aisafety">Learn more context on large-scale risks from advanced AI</a></h3>
<p>These opportunities focus on AI safety work aimed at preventing loss of human control of very capable AI systems. <b>To maximize your eligibility for these opportunities, we recommend gaining context on the perspectives of this subfield</b>, e.g. by <a href="aisafety">skimming pertinent AI safety papers</a>.</p>
<a href="aisafety" class="button button-white button-right button-special" style="z-index: 2">AI Safety Papers</a>
</div>
</section>
<section>
<div class="inner">
<h1>Opportunities</h1>
<p><i>Last updated: 01/20/25</i></p>
<!-- <p style="margin-left:5%"><i>Opportunities relevant to <a href="aisafety">reducing large-scale risks from advanced AI</a>.</i></p> -->
<!-- <div id="opportunities_headsup" class="box box-blue special">
<p>These opportunities are relevant to reducing large-scale risks from advanced AI, with focus on AI safety work aimed at preventing loss of human control of very capable AI systems. <b>To maximize your eligibility for these opportunities, we recommend gaining familiarity with the context and perspectives of this subfield</b>, either by closely reading the grantmakers' webpages or skimming some AI safety papers.</p>
<a href="aisafety" class="button button-white">AI Safety Papers</a>
</div>
-->
<!-- <h3 id="research_opportunities">Jobs and Studentships (Research and Engineering)</h3>
<h4>Academic Roles</h4>
<div class="iframe-container">
<div class="iframe-dark-mode-inverter"> </div>
<iframe id="academic" onload="iframeLoaded('iframe_loading_spinner_academic')" src="https://airtable.com/embed/appWAkbSGU6x8Oevt/shrlJFISJfJkqHIZx?viewControls=on" frameborder="0" onmousewheel="" width="100%" height="533" style="background: transparent; border: 1px solid #ccc;"></iframe>
<div id="iframe_loading_spinner_academic" class="iframe-loading">
{% include loading_spinner.html %}
</div>
</div>
<p class="text-right" style="margin-top: 0.3em;"><i>Want to update or remove your own details in the table? Email <a href="mailto:[email protected]">[email protected]</a></i></p>
-->
<h3 id="job-opportunities">Job Opportunities</h3>
<ul>
<!-- <li class="expandable">OpenAI</li>
<ul>
<li><a href="https://openai.com/safety/safety-systems">Safety Systems Team</a> (<a href="https://openai.com/careers/search?c=safety-systems">roles</a>)</li>
<li><a href="https://openai.com/safety/preparedness">Preparedness Team</a> (<a href="https://openai.com/careers/search?c=preparedness">roles</a>)</li>
<li><a href="https://openai.com/blog/introducing-superalignment">Superalignment Team</a> (<a href="https://openai.com/careers/search?c=alignment">roles</a>)</li>
<li>Security Team</li>
<li>Policy Research Team</li>
<li>Trustworthy AI Team</li>
</ul> -->
<li class="expandable">Anthropic (<a href="https://www.anthropic.com/careers#open-roles">roles</a>)</li>
<ul>
<li><a href="https://www.anthropic.com/news/frontier-threats-red-teaming-for-ai-safety">Frontier Red Team</a></li>
<li><a href="https://www.anthropic.com/news/anthropics-responsible-scaling-policy">Responsible Scaling Policy Team</a></li>
<li><a href="https://www.alignmentforum.org/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic">Alignment Stress Testing Team</a></li>
<li>Interpretability Team</li>
<li>Dangerous Capability Evaluations Team</li>
<li>Assurance Team</li>
<li>Security Team</li>
</ul>
<li class="expandable">Google DeepMind (<a href="https://deepmind.google/about/careers/#open-roles">roles</a>)</li>
<ul>
<li>AI Safety and Alignment Team (Bay Area)</li>
<li>Scalable Alignment Team</li>
<li>Frontier Model and Governance Team</li>
<li>Mechanistic Interpretability Team</li>
<li><a href="https://deepmind.google/about/responsibility-safety/#:~:text=To%20empower%20teams%20to%20pioneer,and%20collaborations%20against%20our%20AI">Responsibility & Safety Team</a></li>
</ul>
<li><a href="https://far.ai/">FAR.AI</a> (<a href="https://far.ai/jobs">roles</a>)</li>
<li><a href= "https://www.redwoodresearch.org/">Redwood Research </a>(<a href="https://www.redwoodresearch.org/careers">roles</a>)</li>
<li class="expandable">UK AI Safety Institute (AISI) (<a href="https://www.aisi.gov.uk/careers#open-roles">roles</a>)</li>
<ul>
<li>In particular, the Autonomous Systems Team's <a href="https://boards.eu.greenhouse.io/aisi/jobs/4468756101">Engineering Residency</a> (though note that this role has some citizenship restrictions)</li>
</ul>
<li><a href="https://metr.org">Model Evaluations and Threat Research (METR)</a> (<a href="https://hiring.metr.org/">roles</a>)</li>
<li><a href="https://www.apolloresearch.ai/">Apollo Research</a> (<a href="https://www.apolloresearch.ai/careers">roles</a>)</li>
<li class="expandable">RAND (<a href="https://www.rand.org/jobs/technology-security-policy-fellows.html">roles</a>)</li>
<ul>
<li> Note that RAND's Technology and Security Policy Fellowship is not just for policy research; ML engineers, software engineers with either infrastructure or front-end experience, and technical program managers are also encouraged to apply via this Fellowship.
</li>
</ul>
<li class="expandable">Postdoctoral Positions and PhDs</li>
<ul>
<li>You can use the <a href="https://airtable.com/appWAkbSGU6x8Oevt/shr70kvYK7xlrPr5s">filtered view</a> of our database to find professors with open positions of any seniority, or the <a href="connections">unfiltered view</a> to find potential collaborators.</li>
<li>We'd also like to highlight the Centre for Human-Compatible AI's <a href="https://humancompatible.ai/jobs">Research Fellowship and Research Collaborator</a> positions.</li>
</ul>
<!-- <li class="expandable" data-toggle="closed_orgs"><i>Currently Closed Opportunities</i></li>
<ul>
<li><a href="https://www.safe.ai/">Center for AI Safety (CAIS)</a> (<a href="https://safe.ai/careers">roles</a>)</li>
<li><a href="https://www.redwoodresearch.org/">Redwood Research</a> (<a href="https://www.redwoodresearch.org/careers">roles</a>)</li>
<li><a href="https://palisaderesearch.org/">Palisade Research</a> (<a href="https://palisaderesearch.org/work">roles</a>)
</ul> -->
</ul>
<h3 id="funding">Funding Opportunities</h3>
<ul>
<li class="expandable">Open Philanthropy</li>
<ul>
<!--<li><a href="https://www.openphilanthropy.org/rfp-llm-impacts/">Request for proposals: studying and forecasting the real-world impacts of systems built from LLMs</a></li>-->
<li><a href="https://www.openphilanthropy.org/request-for-proposals-ai-governance/">Request for proposals: AI governance</a> (In the "technical governance" section, examples include: compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms. See also the <a href="/opportunities#governance">Governance and Policy section</a> below) </li>
<li><a href="https://www.openphilanthropy.org/career-development-and-transition-funding/">Career development and transition funding</a></li>
<li><a href="https://www.openphilanthropy.org/open-philanthropy-course-development-grants/">Course development grants</a></li>
<li><a href="https://www.openphilanthropy.org/funding-for-work-that-builds-capacity-to-address-risks-from-transformative-ai/"> Funding for work that builds capacity to address risks from transformative AI</a></li>
<!-- <li><a href="https://www.openphilanthropy.org/how-to-apply-for-funding/">How to Apply for Funding</a></li> -->
</ul>
<li><a href="https://aisfund.org/funding-opportunities/">AI Safety Fund: RFP for Bio- and Cyber- Security and AI</a> <i>(deadline: 01/20/25)</i></li>
<li><a href="https://www.mlsafety.org/safebench">SafeBench Competition</a> (<i>deadline: 2/25/2025; $250k in prizes</i>)</li>
<li><a href="https://new.nsf.gov/funding/opportunities/secure-trustworthy-cyberspace-satc">NSF Secure and Trustworthy Cyberspace Grants</a></li>
<li><a href ="https://foresight.org/ai-safety/">Foresight Institute: Grants for Security, Cryptography & Multipolar Approaches to AI Safety <i>(quarterly applications)</i></a></li>
<li><a href="https://funds.effectivealtruism.org/funds/far-future">Long-Term Future Fund</a> <i>(deadline: 02/15/25, rolling)</i></li>
<!-- https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_role_type%5D%5B0%5D=Funding -->
<li class="expandable" data-toggle="closed_funding"><i>Currently Closed Funding Opportunities</i></li>
<ul>
<li><a href="https://www.anthropic.com/news/a-new-initiative-for-developing-third-party-model-evaluations">Anthropic Model Evaluation Initiative</a><i> (accepting EOIs for their next round)</i></li>
<li class="expandable"><a href="https://www.aria.org.uk/programme-safeguarded-ai/">ARIA's Safeguarded AI Program</a> <i>(accepting EOIs for their next round)</i></li>
<ul>
<li>Safeguarded AI aims to provide quantitative safety guarantees for AI. Their current funding round is for demonstrations that AI systems with such guarantees are useful and profitable in safety-critical contexts (e.g. optimising energy networks, clinical trials, or telecommunications).</li>
</ul>
<li><a href="https://www.cooperativeai.com/contests/concordia-2024">Cooperative AI Foundation Concordia Contest 2024</a></li>
<li><a href="https://www.cooperativeai.com/grants/cooperative-ai">Cooperative AI Foundation Research Grants</a></li>
<li><a href="https://futureoflife.org/grant-program/phd-fellowships/">Future of Life Institute: PhD Fellowships</a></li>
<li><a href="https://futureoflife.org/grant-program/postdoctoral-fellowships/">Future of Life Institute Postdoctoral Fellowships</a> <i>(deadline: 01/06/25)</i></li>
<li><a href="https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/">Future of Life Institute: How to Mitigate AI-driven Power Concentration</a</li>
<li><a href="https://www.schmidtsciences.org/safe-ai/">Schmidt Sciences: Safety Assurance through Fundamental Science in Emerging AI</a></li>
<li><a href="https://www.openphilanthropy.org/rfp-llm-benchmarks/">Open Philanthropy Request for proposals: benchmarking LLM agents on consequential real-world tasks</a></li>
<li class="expandable">Survival and Flourishing Fund (SFF): <a href="https://survivalandflourishing.fund/sff-2024-applications">Grant Round</a> with additional <a href="https://survivalandflourishing.fund/sff-freedom-and-fairness-tracks">Freedom and Fairness tracks</a></li>
<!-- and <a href="https://survivalandflourishing.fund/speculation-grants">Speculation Grants</a> -->
<ul>
<li> Note: SFF makes <a href="https://survivalandflourishing.fund/faq#does-sff-have-an-indirect--overhead-rate-limit-for-grants-to-universities">grants to universities</a>. Otherwise, SFF requires 501(c)(3) status (i.e. your nonprofit is a 501(c)(3) or you have a fiscal sponsor that is).</li>
<!-- ALSO: https://jaan.info/xrisk/ -->
</ul>
<li><a href="https://cset.georgetown.edu/wp-content/uploads/FRG-Call-for-Research-Ideas-Expanding-the-Toolkit-for-Frontier-Model-Releases.pdf">Call for Research Ideas: Expanding the Toolkit for Frontier Model Releases</a> from CSET</li>
<li>OpenAI: <a href="https://openai.smapply.org/prog/agentic-ai-research-grants/">Research into Agentic AI Systems</a>, <a href="https://openai.com/blog/superalignment-fast-grants">Superalignment Fast Grants</a>, <a href="https://openai.com/blog/openai-cybersecurity-grant-program">OpenAI Cybersecurity Grants</a> (<i>assumed closed</i>)</li>
<li>NSF: <a href="https://new.nsf.gov/funding/opportunities/safe-learning-enabled-systems">Safe Learning-Enabled Systems</a> and <a href="https://new.nsf.gov/funding/opportunities/responsible-design-development-deployment">Responsible Design, Development, and Deployment of Technologies</a></li>
<li>Center for Security and Emerging Technology (CSET): <a href="https://cset.georgetown.edu/foundational-research-grants/">Foundational Research Grants</a></li>
</ul>
<br>
<h3 class="expandable" id="compute">Compute Opportunities</h3>
<ul>
<!--<li><a href="https://www.safe.ai/compute-cluster">Center for AI Safety Compute Cluster</a></li> -->
<li><a href="https://ndif.us/start.html">National Deep Inference Fabric (NDIF)</a>, can request early access to a research computing project for interpretability research</li>
<li><a href="https://txt.cohere.com/c4ai-research-grants/">Cohere for AI</a>, subsidized access to APIs</li>
</ul>
<br>
<h3 class="expandable" id="visitor-programs">AI Safety Programs / Fellowships / Residencies / Collaborations</h3>
<ul>
<li class="expandable">UK AI Safety Institute</li>
<ul>
<li><a href="https://www.aisi.gov.uk/academic-engagement">Academic Engagement:</a> research collaborations and workshops targeted at academics.</li>
<li><a href="https://boards.eu.greenhouse.io/aisi/jobs/4468756101">6-month residency with the Autonomous Systems Team</a> (note that this role has some citizenship restrictions).</li>
</ul>
<li class ="expandable">Constellation <i>(extended visits and residencies at an AI safety organization in Berkeley)</i></li>
<ul>
<li><a href="https://www.constellation.org/programs/visiting-fellows">Visiting Fellows:</a> a 3-6 month (unpaid) visit at the Constellation office (Berkeley, CA) for researchers, engineers, entrepreneurs, and other professionals working on their <a href="https://www.constellation.org/research">focus areas</a>. Applications open for the winter cohort (beginning January 6th). </li>
<li><a href="https://www.constellation.org/programs/residency">Residencies:</a> a year-long salaried position ($100K-$300K) for experienced researchers, engineers, entrepreneurs, and other professionals to pursue self-directed work on one of Constellation's <a href="https://www.constellation.org/research">focus areas</a> in the Constellation office (Berkeley, CA).</li>
<li><a href="https://airtable.com/appEr4IN5Kkzu9GLq/shrRJhQiMx0I6QsSb">Workshops:</a> Constellation also expect to offer 1-2 day intensive workshops for experts working in or transitioning into their <a href="https://www.constellation.org/research">focus areas</a>.</li>
</ul>
<li><a href="https://aisafetyfellowship.org/">Impact Academy's Global AI Safety Fellowship</a> <i>(note that though applications are open, they will not be reviewed until June/July)</i></li>
<!-- <li class="expandable" data-toggle="mats_description"><a href="https://www.matsprogram.org/">MATS Winter Program</a> ( Deadline: 10/6/24</i>)</li>
<ul>
<li>The <a href="https://www.matsprogram.org/">ML Alignment & Theory Scholars (MATS)</a> Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment and safety. The Winter Program will run Jan 6 - Mar 14, 2025. Follow the instructions on the <a href="https://www.matsprogram.org/">MATS homepage</a> to apply.</li>
</ul> -->
<!--<li id="spar"><a href="https://supervisedprogramforalignment.org/">Supervised Program for Alignment Research (SPAR) Spring Program <i>(deadline: 01/08/25)</i></a></li>-->
</ul>
<!-- <li class="expandable" data-toggle="closed_programs"><i>Currently Closed Programs</i></li> -->
<!-- <li class="expandable" data-toggle="astra_description"><a href="https://www.constellation.org/programs/astra-fellowship">Astra Fellowship</a> at Constellation (<i>for researchers</i>)</li>
<ul>
<li>The Astra Fellowship pairs fellows with experienced advisors to collaborate on a two or three month AI safety research project. Fellows will be part of a cohort of talented researchers working out of the Constellation offices in Berkeley, CA, allowing them to connect and exchange ideas with leading AI safety researchers.</li>
</ul> -->
<!-- <li class="expandable" data-toggle="lasr_description">LASR (London AI Safety Research) Labs (<i>deadline: 4/24, for graduate students</i>)</li> -->
<!-- <li class="https://supervisedprogramforalignment.org/">SPAR</li>
-->
<br>
<h3 class="expandable" id="workshops">Workshops and Community</h3>
<ul>
<li><a href="https://airtable.com/appK578d2GvKbkbDD/pagkxO35Dx2fPrTlu/form">Expression of Interest form</a> for FAR.AI's <a href="https://www.alignment-workshop.com/"> Alignment Workshop</a>. Recordings from the previous workshop are also available <a href="https://www.alignment-workshop.com/vienna-2024">on the website</a>.</li>
<li><a href="https://airtable.com/appEr4IN5Kkzu9GLq/shrRJhQiMx0I6QsSb">Expression of Interest form</a> for Constellation Workshops. <a href="https://www.constellation.org/">Constellation</a> expects to offer 1–2 day intensive workshops for people working in or transitioning into their <a href="https://www.constellation.org/focus-areas">focus areas</a>.</li>
<li><a href="https://airtable.com/appZOFcnymTfcv9ml/shr9KxPh3RQqd7QfL">Expression of Interest form</a> for events by the <a href="https://aisecurity.forum/">AI Security Forum</a></li>
<li class="expandable" data-toggle="past_workshops"><i>Past Workshops</i></li>
<ul>
<li class="expandable">NeurIPS 2024</li>
<ul>
<li><a href="https://solar-neurips.github.io/">Socially Responsible Language Modelling Research (SoLaR)</a></li>
<li><a href="https://evaleval.github.io/">Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI</a></li>
<li><a href="https://redteaming-gen-ai.github.io/">Red Teaming GenAI: What Can We Learn from Adversaries?</a></li>
<li><a href="https://interpretable-ai-workshop.github.io/">Interpretable AI: Past, Present and Future</a></li>
<li><a href="https://safegenaiworkshop.github.io/">Safe Generative AI</a></li>
</ul>
<li class="expandable" data-toggle="ICLR">ICLR 2024</li>
<ul>
<li><a href="https://www.mlsafety.org/events/iclr-social">ML Safety Social</a> hosted by the Center for AI Safety</li>
<li><a href="https://set-llm.github.io/">Secure and Trustworthy Large Language Models</a></li>
<li><a href="https://agiworkshop.github.io">How Far Are We From AGI?</a></li>
<li><a href="https://iclr-r2fm.github.io/">Reliable and Responsible Foundation Models</a></li>
<li><a href="https://sites.google.com/view/me-fomo2024">ME-FoMo: Mathematical and Empirical Understanding of Foundation Models</a></li>
<!--https://pml-workshop.github.io/iclr24/-->
</ul>
<li class="expandable" data-toggle="AW">Alignment Workshops <i>(recordings available)</i></li>
<ul>
<li><b><a href="https://www.alignment-workshop.com/bay-area-2024">Bay Area Alignment Workshop (October 2024)</a></b></li>
<li><a href="https://www.alignment-workshop.com/vienna-2024">Vienna Alignment Workshop (July 2024)</a></li>
<li><a href="https://www.alignment-workshop.com/nola-2023">New Orleans Alignment Workshop (Dec 2023)</a></li>
<li><a href="https://www.alignment-workshop.com/sf-2023">San Francisco Alignment Workshop 2023 (Feb 2023)</a></li>
</ul>
<li><a href="https://sites.google.com/mila.quebec/scaling-laws-workshop/">Neural Scaling & Alignment: Towards Maximally Beneficial AGI Workshop Series (2021-2023)</a></li>
<li><a href="https://sites.google.com/mila.quebec/hlai-2023-boston/home">Human-Level AI: Possibilities, Challenges, and Societal Implications (June 2023)</a></li>
<li><a href="https://futuretech.mit.edu/workshop-on-ai-scaling-and-its-implications#:~:text=The%20FutureTech%20workshop%20on%20AI,a%20range%20of%20key%20tasks%3F">Workshop on AI Scaling and its Implications (Oct 2023)</a></li>
</ul>
<br>
<li class="expandable" data-toggle="academia">Researchers working in AI safety</li>
<ul>
<li><a href="connections">Arkose's Database of AI Safety Professionals</a></li>
<li><a href="https://futureoflife.org/about-us/our-people/ai-existential-safety-community/">AI Existential Safety Community</a> from Future of Life Institute</li>
<li>See speakers from the Alignment Workshop series (<a href="https://www.alignment-workshop.com/sf-2023">SF 2023</a>, <a href="https://www.alignment-workshop.com/nola-2023">NOLA 2023</a>)</li>
</ul>
<li><a href="https://www.aisafety.com/communities">AISafety.com's List of AI Safety Communities</a></li>
<li class="expandable" data-toggle="china">Interested in working in China?</li>
<ul>
<li>Contact <a href="https://concordia-ai.com/">Concordia AI 安远AI</a></li>
<li><a href="https://idais.ai/">International Dialogues on AI Safety</a></li>
<li><a href="https://alignmentsurvey.com/">AI Alignment: A Comprehensive Survey</a></li>
<li>Newsletters: <a href="https://aisafetychina.substack.com/">AI Safety in China</a>, <a href="https://chinai.substack.com/about">ChinAI Newsletter</a></li>
</ul>
</ul>
<h3 class="expandable" id="open-source">Open Source Projects</h3>
<ul>
<li><a href="https://inspect.ai-safety-institute.org.uk/">Inspect (an evaluations framework)</a></li>
<li><a href="https://vivaria.metr.org/">Vivaria (a capability elicitation and evaluation framework)</a></li>
<li><a href="https://blog.eleuther.ai/autointerp/">SAE AutoInterp (an interpretability tool)</a>
</ul>
<br>
<h3 class="expandable" id="jobs">Job Board</h3>
<ul class="width-100">
<div class="iframe-container">
<div class="iframe-dark-mode-inverter"> </div>
<iframe id="job_board" onload="iframeLoaded('iframe_loading_spinner_job_board')" src="https://airtable.com/embed/appQwyPiYo4egdPR5/shr6n4WqjnmmoTpMf?backgroundColor=grayLight&viewControls=on" frameborder="0" onmousewheel="" width="100%" height="533" style="background: transparent; border: 1px solid #ccc;"></iframe>
<div id="iframe_loading_spinner_job_board" class="iframe-loading">
{% include loading_spinner.html %}
</div>
</div>
<p class="text-right" style="margin-top: 0.3em;"><i>Filtered from the <a href="https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Bcompany_data%5D%5B0%5D=Highlighted%20organisations&refinementList%5Btags_exp_required%5D%5B0%5D=Mid%20%285-9%20years%20experience%29&refinementList%5Btags_exp_required%5D%5B1%5D=Multiple%20experience%20levels&refinementList%5Btags_exp_required%5D%5B2%5D=Senior%20%2810%2B%20years%20experience%29&refinementList%5Btags_skill%5D%5B0%5D=Data&refinementList%5Btags_skill%5D%5B1%5D=Engineering&refinementList%5Btags_skill%5D%5B2%5D=Research&refinementList%5Btags_skill%5D%5B3%5D=Software%20engineering">80,000 Hours Job Board</a></i></p>
</ul>
<h4 class="expandable" id="alternative_opportunities">Alternative Technical Opportunities</h4>
<ul>
<li class="expandable" data-toggle="theoretical_research"><b>Theoretical Research</b></li>
<ul>
<li><a href="https://www.alignment.org/theory/">Alignment Research Center</a> <a href="https://www.alignment.org/hiring/">(roles)</a></li>
<!-- <li><a href="https://intelligence.org/careers/">Machine Intelligence Research Institute (MIRI)</a> </li> -->
</ul>
<li class="expandable" data-toggle="information_security"><b>Information Security</b></li>
<ul>
<li><a href="https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_exp_required%5D%5B0%5D=Mid%20%285-9%20years%20experience%29&refinementList%5Btags_exp_required%5D%5B1%5D=Multiple%20experience%20levels&refinementList%5Btags_exp_required%5D%5B2%5D=Senior%20%2810%2B%20years%20experience%29&refinementList%5Btags_skill%5D%5B0%5D=Information%20security">Information Security roles</a></li>
<li><a href="https://80000hours.org/career-reviews/information-security/">Overview from a security engineer at Google</a></li>
<li><a href="https://www.linkedin.com/in/jason-clinton-475671159/">Jason Clinton</a>'s recommended <a href="https://www.google.com/books/edition/Building_Secure_and_Reliable_Systems/Kn7UxwEACAAJ?hl=en&kptab=getbook">upskilling book</a></li> <!--<a href="https://forum.effectivealtruism.org/posts/zxrBi4tzKwq2eNYKm/ea-infosec-skill-up-in-or-make-a-transition-to-infosec-via">EA Infosec: skill up in or make a transition to infosec via this book club</a>-->
</ul>
<li><a href="https://jobs.80000hours.org/?query=Forecasting&refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy"><b>Forecasting</b></a> (see especially <a href="https://epochai.org/careers">Epoch</a>)</li>
<li><a href="https://jobs.80000hours.org/?refinementList%5Btags_area%5D%5B0%5D=AI%20safety%20%26%20policy&refinementList%5Btags_exp_required%5D%5B0%5D=Mid%20%285-9%20years%20experience%29&refinementList%5Btags_exp_required%5D%5B1%5D=Multiple%20experience%20levels&refinementList%5Btags_exp_required%5D%5B2%5D=Senior%20%2810%2B%20years%20experience%29&refinementList%5Btags_skill%5D%5B0%5D=Software%20engineering"><b>Software Engineering</b></a></li>
<li class="expandable" id="governance"><b>AI Governance and Policy</b></li>
<ul>
<p>AI governance is focused on developing global norms, policies, and institutions to increase the chances that advanced AI is beneficial for humanity.</p>
<li class="expandable"><a href="https://horizonpublicservice.org/2025-horizon-fellowship-cohort-applications/">The Horizon Fellowship</a> <i> (Deadline 08/30/2024)</i></li>
<ul>
<li>The Horizon Fellowship places experts in emerging technologies in federal agencies, congressional offices, and think tanks in Washington, DC for up to two years.</li>
</ul>
<li class="expandable"><a href="https://jobs.lever.co/futureof-life/fdde100d-8f61-409c-aa53-a89e9b6b6d13">Future of Life Institute: AI Compute Security & Governance Technical Program Manager</a></li>
<ul>
<li>The Future of Life Institute is looking to hire someone with experience in both hardware engineering and project management to lead a new initiative in technical AI governance. </li>
</ul>
<!--<li> <a href="https://www.governance.ai/post/winter-fellowship-2025">Center for the Governance of AI (GovAI) Winter Fellowship 2025</a><i> (Deadline 08/11/2024)</i></li>-->
<li class="expandable"><a href="https://airtable.com/appqcwYjLEfQEGL72/shrQunWR8lZjsFdMa">Summer Webinar Series on Careers in Emerging Technology Policy</a> (mid-July - end of August)</a></li>
<ul>
<li>The series is designed to help individuals interested in federal AI and biosecurity policy decide if they should pursue careers in these fields. Each session features experienced policy practitioners who will discuss what it’s like to work in emerging technology policy and provide actionable advice on how to get involved. Some of the sessions will be useful for individuals from all fields and career stages, while others are focused on particular backgrounds and opportunities. You may choose to attend all or only some of the sessions.</li>
</ul>
<li><a href="https://www.agisafetyfundamentals.com/ai-governance-curriculum">AI Governance Curriculum</a> by BlueDot Impact</li>
<li><a href="https://emergingtechpolicy.org/areas/ai-policy/">AI Policy Resources</a> by Emerging Technology Policy Careers</li>
<li class="expandable expanded">Several organizations working in the space:</li>
<ul>
<li><a href="https://www.longtermresilience.org/">Center for Long-Term Resilience (CLTR)</a></li>
<li><a href="https://www.rand.org/topics/science-technology-and-innovation-policy.html">RAND's Technology and Security Policy work</a></li>
<li><a href="https://www.horizonpublicservice.org/">Horizon Institute for Public Service</a></li>
<li><a href="https://www.iaps.ai/">Institute for AI Policy and Strategy</a></li>
<li><a href="https://cset.georgetown.edu/">Center for Security and Emerging Technology (CSET)</a></li>
<li>Frontier AI Task Force</li>
<li><a href="https://www.governance.ai/">Center for the Governance of AI (GovAI)</a></li>
<li>Industry AI Governance teams</li>
<li><a href="https://www.aipolicy.us/about">Center for AI Policy</a></li>
<!-- Center for AI Safety -->
</ul>
</ul>
</ul>
<!--
<h3>Example of expandable lists</h3>
<li class="expandable" data-toggle="theoretical_research">Theoretical research</li>
<li class="expandable expanded" data-toggle="theoretical_research">Theoretical</li>
-->
</div>
</section>
<script>
// Hide the loading spinner overlay once its associated iframe has finished loading.
function iframeLoaded(spinnerID) {
document.getElementById(spinnerID).style.display = 'none';
}
</script>
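<!--
A minimal sketch (kept in a comment so it is not rendered) of how the iframeLoaded()/spinner pattern used for the embeds above could be reused for an additional embed. The iframe src and element IDs below are placeholders, not real Airtable views:

<div class="iframe-container">
<div class="iframe-dark-mode-inverter"> </div>
<iframe id="example_embed" onload="iframeLoaded('iframe_loading_spinner_example_embed')" src="https://airtable.com/embed/EXAMPLE_VIEW_ID" frameborder="0" width="100%" height="533" style="background: transparent; border: 1px solid #ccc;"></iframe>
<div id="iframe_loading_spinner_example_embed" class="iframe-loading">
{% include loading_spinner.html %}
</div>
</div>

Each embed pairs an iframe, whose onload handler passes the matching spinner ID, with a spinner div; iframeLoaded() then hides that spinner once the iframe has loaded.
-->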
{% if false %}
<!-- This entire commented-out section is wrapped in an if-statement so it doesn't get sent to users.-->
<!--
{% if false %}
<section id="two" class="bg-gray">
<div class="inner">
<div id="book_a_call"><h2>Book a call</h2>
<p>If you're interested in working in AI alignment and advanced AI safety, please book a call with <a href="https://vaelgates.com">Vael Gates</a>, who leads this project and conducted the <a href=interviews>interviews</a> as part of their postdoctoral work with Stanford University.</p>
<form id="book_call_form" method="post" action="#">
<div class="row uniform">
<div class="6u 12u$(xsmall)">
<input type="text" name="name" id="name" value="" placeholder="Name" />
</div>
<div class="6u$ 12u$(xsmall)">
<input type="email" name="email" id="email" value="" placeholder="Email" />
</div>
<div class="12u$">
<div class="select-wrapper">
<select name="interest" id="interest">
<option value=""> Interested in talking about... </option>
<option value="AI Alignment Research or Engineering">AI Alignment Research or Engineering</option>
<option value="AI Alignment Governance">AI Alignment Governance</option>
<option value="Other">Other (please specify below)</option>
</select>
</div>
</div>
<div class="12u$">
<textarea name="message" id="message" placeholder="Enter your message" rows="6"></textarea>
</div>
<div class="12u$">
<ul class="actions">
<li>
<button class="button" id="send_form_button">
<div class="button-progress-bar"></div>
<div class="button-text">Send Message</div>
</button>
</li>
</ul>
</div>
</div>
</form>
</div>
</div>
</section>
{% endif %}
</div>
{% if false %}
<script src="{{ "assets/js/book_call.js" | absolute_url }}" type="module"></script>
<script>
window.contactEmail = "{{site.email}}"
</script>
{% endif %}
-->
{% endif %}