Update opportunities.html
victoriaBrook committed Sep 9, 2024
1 parent e956356 · commit e616d20
Showing 1 changed file with 2 additions and 6 deletions.
opportunities.html: 8 changes (2 additions & 6 deletions)
@@ -20,7 +20,7 @@ <h3 id="ai_safety"><a href="aisafety">Learn more context on large-scale risks fr
 <section>
 <div class="inner">
 <h1>Opportunities</h1>
-<p><i>Last updated: 09/06/24</i></p>
+<p><i>Last updated: 09/09/24</i></p>
 <!-- <p style="margin-left:5%"><i>Opportunities relevant to <a href="aisafety">reducing large-scale risks from advanced AI</a>.</i></p> -->


@@ -72,7 +72,7 @@ <h3>Job Opportunities</h3>
 <li>Mechanistic Interpretability Team</li>
 <li><a href="https://deepmind.google/about/responsibility-safety/#:~:text=To%20empower%20teams%20to%20pioneer,and%20collaborations%20against%20our%20AI">Responsibility & Safety Team</a></li>
 </ul>
-<li><a href="https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute">UK AI Safety Institute (AISI)</a> (<a href="https://www.aisi.gov.uk/careers#open-roles">roles</a>)</li>
+<li><a href="https://www.aisi.gov.uk/">UK AI Safety Institute (AISI)</a> (<a href="https://www.aisi.gov.uk/careers#open-roles">roles</a>)</li>
 <li class="expandable">RAND (<a href="https://www.rand.org/jobs/technology-security-policy-fellows.html">roles</a>)</li>
 <ul>
 <li> Note that RAND's Technology and Security Policy Fellowship is not just for policy research; ML engineers, software engineers with either infrastructure or front-end experience, and technical program managers are also encouraged to apply via this Fellowship.
@@ -109,10 +109,6 @@ <h3 id="funding">Funding Opportunities</h3>
 <li><a href="https://www.cooperativeai.com/grants/cooperative-ai">Cooperative AI Research Grants</a> (<i>deadline: 10/6</i>)</li>
 <li><a href="https://taskdev.metr.org/bounty/">METR Evaluation Task Bounty</a></i> (<i>related: <a href="https://metr.github.io/autonomy-evals-guide/">METR's Autonomy Evaluation Resources</a></i>)</li>
 <li><a href="https://www.mlsafety.org/safebench">SafeBench Competition</a> (<i>deadline: 2/25/2025; $250k in prizes</i>)</li>
-<li class="expandable"><a href="https://www.catalyze-impact.org/apply">Catalyze Impact's AI Safety Incubation Program </a><i>(deadline: 9/3)</i></li>
-<ul>
-<li>Catalyze Impact are providing co-founder matching, mentoring, and funding to new organisations working on technical AI safety.</li>
-</ul>
 <li class="expandable"><a href="https://www.anthropic.com/news/a-new-initiative-for-developing-third-party-model-evaluations">Anthropic Model Evaluation Initiative</a></li>
 <ul>
 <li>Note that though proposals are welcome, they will not be assessed until the round 1 proposals are processed (date TBD)</li>
