Update opportunities.html
victoriaBrook committed Aug 12, 2024
1 parent f25f3ac commit df97d1e
Showing 1 changed file with 12 additions and 9 deletions.
opportunities.html (21 changes: 12 additions & 9 deletions)
@@ -20,7 +20,7 @@ <h3 id="ai_safety"><a href="aisafety">Learn more context on large-scale risks fr
<section>
<div class="inner">
<h1>Opportunities</h1>
- <p><i>Last updated: 08/01/24</i></p>
+ <p><i>Last updated: 08/12/24</i></p>
<!-- <p style="margin-left:5%"><i>Opportunities relevant to <a href="aisafety">reducing large-scale risks from advanced AI</a>.</i></p> -->


@@ -59,7 +59,7 @@ <h3 id="research_opportunities">Job Opportunities (Research and Engineering)</h3
<li>Mechanistic Interpretability Team</li>
<li><a href="https://deepmind.google/about/responsibility-safety/#:~:text=To%20empower%20teams%20to%20pioneer,and%20collaborations%20against%20our%20AI">Responsibility & Safety Team</a></li>
</ul>
<li><a href="https://www.nist.gov/aisi">US AI Safety Institute (AISI)</a><a href="https://www.usajobs.gov/job/800760000"> (roles)</a><i> (Deadline: 08/06/2024)</i></li>
<li><a href="https://www.nist.gov/aisi">US AI Safety Institute (AISI)</a><a href="https://www.usajobs.gov/job/800760000"> (roles)</a><i> (Deadline: 08/12/2024)</i></li>
<li><a href="https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute">UK AI Safety Institute (AISI)</a> (<a href="https://www.aisi.gov.uk/careers#open-roles">roles</a>)</li>
<li class="expandable">RAND (<a href="https://www.rand.org/jobs/technology-security-policy-fellows.html">roles</a>)</li>
<ul>
@@ -93,6 +93,10 @@ <h3 id="funding">Funding Opportunities</h3>
<li><a href="https://www.cooperativeai.com/grants/cooperative-ai">Cooperative AI Research Grants</a> (<i>deadline: 10/6</i>)</li>
<li><a href="https://taskdev.metr.org/bounty/">METR Evaluation Task Bounty</a></i> (<i>related: <a href="https://metr.github.io/autonomy-evals-guide/">METR's Autonomy Evaluation Resources</a></i>)</li>
<li><a href="https://www.mlsafety.org/safebench">SafeBench Competition</a> (<i>deadline: 2/25/2025; $250k in prizes</i>)</li>
<li class="expandable"><a href="https://www.anthropic.com/news/a-new-initiative-for-developing-third-party-model-evaluations">Anthropic Model Evaluation Initiative</a></li>
<ul>
<li>Note that though proposals are welcome, they will not be assessed until the round 1 proposals are processed (date TBD)</li>
</ul>
<li><a href="https://aisfund.org/grant-process/">AI Safety Fund</a> <a href="https://www.frontiermodelforum.org/updates/ai-safety-fund-initiates-first-round-of-research-grants/">via the Frontier Model Forum</a></li>
<li><a href="https://new.nsf.gov/funding/opportunities/secure-trustworthy-cyberspace-satc">NSF Secure and Trustworthy Cyberspace Grants</a></li>
<li><a href ="https://foresight.org/ai-safety/">Foresight Institute: Grants for Security, Cryptography & Multipolar Approaches to AI Safety</a></li>
@@ -102,9 +106,8 @@ <h3 id="funding">Funding Opportunities</h3>

<li class="expandable" data-toggle="closed_funding"><i>Currently Closed Funding Opportunities</i></li>
<ul>
<li><a href="https://www.openphilanthropy.org/rfp-llm-benchmarks/">Open Philanthropy Request for proposals: benchmarking LLM agents on consequential real-world tasks</a> (<i>deadline: 7/26</i>)</li>
<li><a href="https://www.anthropic.com/news/a-new-initiative-for-developing-third-party-model-evaluations">Anthropic Model Evaluation Initiative</a></li>
<li><a href="https://www.aria.org.uk/programme-safeguarded-ai/">ARIA's Safeguarded AI Program</a> (aimed at quantitative safety guarantees, accepting proposals)</li>
<li><a href="https://www.openphilanthropy.org/rfp-llm-benchmarks/">Open Philanthropy Request for proposals: benchmarking LLM agents on consequential real-world tasks</a></li>
<li><a href="https://www.aria.org.uk/programme-safeguarded-ai/">ARIA's Safeguarded AI Program</a> (aimed at quantitative safety guarantees)</li>
<li class="expandable">Survival and Flourishing Fund (SFF): <a href="https://survivalandflourishing.fund/sff-2024-applications">Grant Round</a> with additional <a href="https://survivalandflourishing.fund/sff-freedom-and-fairness-tracks">Freedom and Fairness tracks</a></li>
<!-- and <a href="https://survivalandflourishing.fund/speculation-grants">Speculation Grants</a> -->
<ul>
@@ -138,9 +141,9 @@ <h3 class="expandable" id="visitor-programs">AI Safety Programs / Fellowships /
<ul>
<li><a href="https://www.constellation.org">Constellation</a> is offering year-long salaried positions ($100K-$180K) at their office (Berkeley, CA) for experienced researchers, engineers, entrepreneurs, and other professionals to pursue self-directed work on one of Constellation's <a href="https://www.constellation.org/focus-areas">focus areas</a><a href="https://airtable.com/appEr4IN5Kkzu9GLq/shr3LgseSOaRxA2mQ">Apply here</a>. See here for <a href="https://www.constellation.org/programs/residency">more details</a>.</li>
</ul>
<li class="expandable" data-toggle="mats_description">MATS Winter Program (<i>deadline: 8/1/24, for graduate students</i>)</li>
<li class="expandable" data-toggle="mats_description">MATS Winter Program (<i>Neel Nanda and Arthur Conmy's streams only. Deadline: 8/30/24. Aimed primarily at students</i>)</li>
<ul>
- <li>The <a href="https://www.matsprogram.org/">ML Alignment & Theory Scholars (MATS)</a> Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment and safety. We also connect them with the Berkeley alignment research community. Our Winter Program will run from early Jan, 2025. <a href="https://airtable.com/appPxJ0QMqR7TElYU/pagRPwHQtcN8L0vIE/form">Apply here</a>.</li>
+ <li>The <a href="https://www.matsprogram.org/">ML Alignment & Theory Scholars (MATS)</a> Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment and safety. We also connect them with the Berkeley alignment research community. Our Winter Program will run from early Jan, 2025. General applications have now closed for the Winter 2024-5 cohort, but you can still apply to Neel Nanda or Arthur Conmy's stream until August 30th. Follow the instructions on the <a href="https://www.matsprogram.org/">MATS homepage</a> to apply.</li>
</ul>
<li id="spar"><a href="https://supervisedprogramforalignment.org/">Supervised Program for Alignment Research (SPAR) Fall Program</a></li>
<!-- <li class="expandable" data-toggle="closed_programs"><i>Currently Closed Programs</i></li> -->
@@ -168,7 +171,7 @@ <h3 class="expandable" id="workshops">Workshops and Community</h3>
<li><a href="https://sites.google.com/view/me-fomo2024">ME-FoMo: Mathematical and Empirical Understanding of Foundation Models</a></li>
<!--https://pml-workshop.github.io/iclr24/-->
<!--</ul> -->
<li><a href="https://airtable.com/appK578d2GvKbkbDD/pagkxO35Dx2fPrTlu/form">Future events interest form</a> for the Alignment Workshop Series (previous: <a href="https://www.alignment-workshop.com/nola-2023">Dec 2023</a>, <a href="https://www.alignment-workshop.com/sf-2023">Feb 2023</a>)</li>
<!--<li><a href="https://airtable.com/appK578d2GvKbkbDD/pagkxO35Dx2fPrTlu/form">Future events interest form</a> for the Alignment Workshop Series (previous: <a href="https://www.alignment-workshop.com/nola-2023">Dec 2023</a>, <a href="https://www.alignment-workshop.com/sf-2023">Feb 2023</a>)</li>-->
<li><a href="https://airtable.com/appEr4IN5Kkzu9GLq/shrRJhQiMx0I6QsSb">Future events interest form</a> for Constellation Workshops. Constellation expects to offer 1–2 day intensive workshops for people working in or transitioning into their <a href="https://www.constellation.org/focus-areas">focus areas</a>.</li>
</ul>
<li class="expandable" data-toggle="past_workshops"><i>Past Workshops</i></li>
@@ -252,7 +255,7 @@ <h4 class="expandable" id="alternative_opportunities">Alternative Technical Oppo
<ul>
<li>The Horizon Fellowship places experts in emerging technologies in federal agencies, congressional offices, and thinktanks in Washington DC for up to two years.</li>
</ul>
<li> <a href="https://www.governance.ai/post/winter-fellowship-2025">Center for the Governance of AI (GovAI) Winter Fellowship 2025</a><i> (Deadline 08/11/2024)</i></li>
<!--<li> <a href="https://www.governance.ai/post/winter-fellowship-2025">Center for the Governance of AI (GovAI) Winter Fellowship 2025</a><i> (Deadline 08/11/2024)</i></li>-->
<li class="expandable"><a href="https://airtable.com/appqcwYjLEfQEGL72/shrQunWR8lZjsFdMa">Summer Webinar Series on Careers in Emerging Technology Policy</a> (mid-July - end of August)</a></li>
<ul>
<li>The series is designed to help individuals interested in federal AI and biosecurity policy decide if they should pursue careers in these fields. Each session features experienced policy practitioners who will discuss what it’s like to work in emerging technology policy and provide actionable advice on how to get involved. Some of the sessions will be useful for individuals from all fields and career stages, while others are focused on particular backgrounds and opportunities. You may choose to attend all or only some of the sessions.</li>
