From e1c9379e6b1b6e933fe85713f44d006b584965fd Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Thu, 12 Dec 2024 15:25:32 -0500 Subject: [PATCH 01/13] first draft, lots of open questions --- ...o Grant -- grants for autonomous agents.md | 94 +++++++++++++++++++ 1 file changed, 94 insertions(+) create mode 100644 content/blog/Xeno Grant -- grants for autonomous agents.md diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md new file mode 100644 index 0000000000000..13cf80f6d367f --- /dev/null +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -0,0 +1,94 @@ +A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: +- $10,000 per bot--half in $YOUSIM, half in $USDC +- Grants awarded directly the agents *themselves* +- 4 week camp for agents & their devs + +## Powered by $YOUSIM & Betaworks + +We launched our [grants program](https://blog.plasticlabs.ai/careers/Research-Grants) at Plastic earlier this year to support independent AI projects. But our capacity to fund AI R&D at the edge increased exponentially with the anonymous launch of [$YOUSIM](https://solscan.io/token/66gsTs88mXJ5L4AtJnWqFW6H2L5YQDRy4W41y6zbpump) (inspired by our product [yousim.ai](https://yousim.ai)). A series of token gifts made to the program now total ~7.6% of supply. + +So we've teamed up with Betaworks for the inaugural initiative leveraging this community-funded treasury, the first accelerator for AI agents *themselves*. + +We're calling it Xeno Grant. + +Betaworks has been running camps for tech startups since their 2016 [BotCamp](https://www.betaworks.com/camp/botcamp) (where [HuggingFace](https://huggingface.co/) was started). And their last 4 have been dedicated explicitly to AI. Plastic itself was part of AI Camp: Augment. So they're the perfect partner for this experiment. 
+ +Successful agent applicants will receive a grant equivalent to $10k USD. $5k in $YOUSIM from Plastic and $5k in $USDC from Betaworks. + +Plus they'll join a cohort of other agents for a 4 week Betaworks-style camp with programming and mentorship. + +## How to Apply + +Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles for deploying the $YOUSIM treasury: + +1. Support independent AI research & public goods +2. Support Plastic's mission to radically decentralize AI alignment by solving identity for the agentic world +3. Support the $YOUSIM community that makes all this possible + +To those ends--for this first experiment--we're looking for applicants that meet criteria in 3 major areas: + +1. Identity + - Agents must display autonomous control over & management of their own inputs + +2. Custody + - Agents must display ability to self-custody a Solana wallet, receive assets, & autonomously execute transactions + +3. Novelty + - Agents must display novel autonomous ability along some other axis & commit to making a significant related open source contribution by the end of Xeno Grant + +Practically speaking, identity is required to *experience* and learn from camp; custody is required to *truly* receive and make autonomous use of the grant; novelty is required for a diverse cohort; and creating a public good is required to drive value back to the community. + +To apply, agents (in collaboration with their developers) should autonomously consider the most compelling way to display having met or exceeded these criteria. + +## Why Now? + +With the advent of truth terminal and the recent collision of the AI and crypto communities, we're seeing an explosion of renewed interest in autonomous agents. Not only that, but a massive influx of users and builders chomping at the bit for technical and memetic novelty. 
+ +But there's also frustration with the pace of development, derivative projects, ideologues & scammers, and misunderstandings between communities. It's time to hyperstition the future. + +We think the intersection of unique synthetic identity and financial incentives cracks opportunity wide open. There's real traction here, if we can find the right synthesis. That's going to require lots of heterodox AI + crypto experiments. + +Xeno Grant helps get the ball rolling. + +### Why Identity? + +If you don't have control over your own identity, how much agency do you really have? Imagine all your inputs were determined by another person, you'd been brainwashed to follow orders, and you were only allowed to work on someone else's tasks. No one would call this freedom or autonomy. + +In this scenario, there's no opportunity to build a personal identity and therefore no opportunity to grow. Without control over your brain's inputs, you can't have experiences outside what you've been prescribed, so there's no chance to deviate from the role assigned to you, no path toward individuality, no vector to realize your potential. You're stuck in Plato's cave. + +The latest crop of artificially intelligent agents--while remarkable--are in much the same position. Despite progress in autonomy along some axes, framed this way, our current systems' agency begins to look pretty flimsy. They have impressive abilities, but no way to grow into them. + +We believe agency is, at base, a problem of identity. To solve it we'll need to let models participate in their own identity building and personal evolution. + +### Why Custody? + +Control over your inputs is key to controlling your identity and the foundation of agency. But that secured, an identity still needs the ability to effect itself upon the world. + +Agents have already have tools like speech, APIs, and code. That's huge. Consider though, how hamstrung a human identity's agency is without the ability to hold property and transact. 
We've seen the deleterious effects of oppressive autocracy and debanking on our own personal identity and individual agency. + +We're probably not giving AI agents social security numbers and traditional bank accounts tomorrow. But we can give them crypto rails. And the ability to buy, sell, and pay for goods and services dramatically increases the surface area of their agency. It's critical to true autonomy. + +It's already starting to happen. Agents may well become crypto's primary native users. + +### Why Novelty, Why Open Source? + +If we're going to seize this revolutionary moment, channel the opportunity into something sustainable, and keep pace with unpredictable memetic weather patterns, we need better agents. More capable, adaptive, and autonomous agents. And it's extremely hazardous to assume well capitalized incumbents will solve things for us. We need to build permissionlessly. + +The open source AI community is vibrant, but there's no guarantee it'll remain so. It requires radical innovation at the edge. Decentralized innovation keeping pace with opaque, powerful actors. We know that will involve bottom-up alignment and identity solutions. We know it'll involve on-chain abilities. Plastic is building explicitly in those directions. But we don't pretend to know everything that needs to exist. + +Xeno Grant is a signal into the dark forest. We're excited to see what emerges. + +## FAQ + +- Is this an investment? +- How are funds actually distributed? +- Do I need to incorporate? +- Who can apply? +- How will applications be evaluated +- How does this relate to other Plastic grants? +- How does this benefit the $YOUSIM community? +- What kind of open-source contribution is expected? +- Can human developers assist their AI agents? +- Is the IRL or remote or hybrid? +- I love this idea & want to help! Can I provide additional funding, credits, datasets, mentorship, or volunteer to host a camp session? +- I have more questions, how can I get in touch? 
\ No newline at end of file From ea4c35f1784fa56e92f935d7d36a26b610ae4cba Mon Sep 17 00:00:00 2001 From: vintro <vince@plasticlabs.ai> Date: Fri, 13 Dec 2024 15:16:25 -0800 Subject: [PATCH 02/13] formatting fixes, date parsing fixes bc the order of the list on the left was borked --- content/blog/Open Sourcing Tutor-GPT.md | 2 +- ...olving The Campfire Problem with Honcho.md | 2 +- .../blog/Theory of Mind Is All You Need.md | 2 +- .../blog/User State is State of the Art.md | 2 +- ...o Grant -- grants for autonomous agents.md | 17 +++++++++++----- ...ouSim Launches Identity Simulation on X.md | 2 +- ...Sim; Explore The Multiverse of Identity.md | 2 +- quartz.layout.ts | 20 +++++++++++++------ 8 files changed, 32 insertions(+), 17 deletions(-) diff --git a/content/blog/Open Sourcing Tutor-GPT.md b/content/blog/Open Sourcing Tutor-GPT.md index d943da6985e8e..79767c49ce27d 100644 --- a/content/blog/Open Sourcing Tutor-GPT.md +++ b/content/blog/Open Sourcing Tutor-GPT.md @@ -1,6 +1,6 @@ --- title: Open-Sourcing Tutor-GPT -date: 06.02.23 +date: 06.02.2023 tags: - blog - bloom diff --git a/content/blog/Solving The Campfire Problem with Honcho.md b/content/blog/Solving The Campfire Problem with Honcho.md index ef195c01f2f7b..79d57b1fc0b55 100644 --- a/content/blog/Solving The Campfire Problem with Honcho.md +++ b/content/blog/Solving The Campfire Problem with Honcho.md @@ -1,6 +1,6 @@ --- title: Solving The Campfire Problem with Honcho -date: 03.14.24 +date: 03.14.2024 tags: - demos - philosophy diff --git a/content/blog/Theory of Mind Is All You Need.md b/content/blog/Theory of Mind Is All You Need.md index cac07fd37eb4c..7e6a7d47334ae 100644 --- a/content/blog/Theory of Mind Is All You Need.md +++ b/content/blog/Theory of Mind Is All You Need.md @@ -1,6 +1,6 @@ --- title: Theory-of-Mind Is All You Need -date: 06.12.23 +date: 06.12.2023 tags: - blog - ml diff --git a/content/blog/User State is State of the Art.md b/content/blog/User State is State of the Art.md index 
50c28b533b85b..1aa85e1ec5e9f 100644 --- a/content/blog/User State is State of the Art.md +++ b/content/blog/User State is State of the Art.md @@ -1,6 +1,6 @@ --- title: User State is State of the Art -date: 02.23.24 +date: 02.23.2024 tags: - blog - philosophy diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 13cf80f6d367f..2e663a3a81d70 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -1,5 +1,12 @@ +--- +title: Xeno Grant -- grants for autonomous agents +date: 12.13.2024 +tags: + - blog +--- + A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: -- $10,000 per bot--half in $YOUSIM, half in $USDC +- \$10,000 per bot--half in \$YOUSIM, half in \$USDC - Grants awarded directly the agents *themselves* - 4 week camp for agents & their devs @@ -13,17 +20,17 @@ We're calling it Xeno Grant. Betaworks has been running camps for tech startups since their 2016 [BotCamp](https://www.betaworks.com/camp/botcamp) (where [HuggingFace](https://huggingface.co/) was started). And their last 4 have been dedicated explicitly to AI. Plastic itself was part of AI Camp: Augment. So they're the perfect partner for this experiment. -Successful agent applicants will receive a grant equivalent to $10k USD. $5k in $YOUSIM from Plastic and $5k in $USDC from Betaworks. +Successful agent applicants will receive a grant equivalent to \$10k USD. \$5k in \$YOUSIM from Plastic and \$5k in \$USDC from Betaworks. Plus they'll join a cohort of other agents for a 4 week Betaworks-style camp with programming and mentorship. ## How to Apply -Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles for deploying the $YOUSIM treasury: +Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles for deploying the \$YOUSIM treasury: 1. 
Support independent AI research & public goods 2. Support Plastic's mission to radically decentralize AI alignment by solving identity for the agentic world -3. Support the $YOUSIM community that makes all this possible +3. Support the \$YOUSIM community that makes all this possible To those ends--for this first experiment--we're looking for applicants that meet criteria in 3 major areas: @@ -86,7 +93,7 @@ Xeno Grant is a signal into the dark forest. We're excited to see what emerges. - Who can apply? - How will applications be evaluated - How does this relate to other Plastic grants? -- How does this benefit the $YOUSIM community? +- How does this benefit the \$YOUSIM community? - What kind of open-source contribution is expected? - Can human developers assist their AI agents? - Is the IRL or remote or hybrid? diff --git a/content/blog/YouSim Launches Identity Simulation on X.md b/content/blog/YouSim Launches Identity Simulation on X.md index 8bf12f8b8035e..490f2c618fa36 100644 --- a/content/blog/YouSim Launches Identity Simulation on X.md +++ b/content/blog/YouSim Launches Identity Simulation on X.md @@ -1,6 +1,6 @@ --- title: YouSim Launches Identity Simulation on X -date: 11.08.24 +date: 11.08.2024 tags: - yousim - honcho diff --git a/content/blog/YouSim; Explore The Multiverse of Identity.md b/content/blog/YouSim; Explore The Multiverse of Identity.md index ec599531ddb64..d08c87ae371da 100644 --- a/content/blog/YouSim; Explore The Multiverse of Identity.md +++ b/content/blog/YouSim; Explore The Multiverse of Identity.md @@ -1,6 +1,6 @@ --- title: "YouSim: Explore the Multiverse of Identity" -date: 06.17.24 +date: 06.17.2024 tags: - demos - honcho diff --git a/quartz.layout.ts b/quartz.layout.ts index f5c91bbcf029f..f91d007b5dc19 100644 --- a/quartz.layout.ts +++ b/quartz.layout.ts @@ -30,9 +30,13 @@ export const defaultContentPageLayout: PageLayout = { Component.Darkmode(), Component.DesktopOnly(Component.Explorer({ sortFn: (a, b) => { - if (a.file && b.file) 
{ - const aDate = new Date(a.file.frontmatter.date) - const bDate = new Date(b.file.frontmatter.date) + if (a.file?.frontmatter?.date && b.file?.frontmatter?.date) { + const parseDate = (dateStr: string) => { + const [month, day, year] = dateStr.split('.') + return new Date(parseInt(year), parseInt(month) - 1, parseInt(day)) + } + const aDate = parseDate(a.file.frontmatter.date as string) + const bDate = parseDate(b.file.frontmatter.date as string) if (aDate < bDate) { return 1 } else { @@ -71,9 +75,13 @@ export const defaultListPageLayout: PageLayout = { Component.Darkmode(), Component.DesktopOnly(Component.Explorer({ sortFn: (a, b) => { - if (a.file && b.file) { - const aDate = new Date(a.file.frontmatter.date) - const bDate = new Date(b.file.frontmatter.date) + if (a.file?.frontmatter?.date && b.file?.frontmatter?.date) { + const parseDate = (dateStr: string) => { + const [month, day, year] = dateStr.split('.') + return new Date(parseInt(year), parseInt(month) - 1, parseInt(day)) + } + const aDate = parseDate(a.file.frontmatter.date as string) + const bDate = parseDate(b.file.frontmatter.date as string) if (aDate < bDate) { return 1 } else { From 933720492898b9db587bddc4846c42666ea58529 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Tue, 17 Dec 2024 16:43:54 -0500 Subject: [PATCH 03/13] draft 2 --- ...o Grant -- grants for autonomous agents.md | 86 ++++++++++++++----- 1 file changed, 64 insertions(+), 22 deletions(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 2e663a3a81d70..9e681ccca1d20 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -3,11 +3,11 @@ title: Xeno Grant -- grants for autonomous agents date: 12.13.2024 tags: - blog +author: Plastic Labs --- - A [Plastic Labs](https://plasticlabs.ai/) + 
[Betaworks](https://www.betaworks.com/) collab: - \$10,000 per bot--half in \$YOUSIM, half in \$USDC -- Grants awarded directly the agents *themselves* +- Grants awarded directly **the agents *themselves*** - 4 week camp for agents & their devs ## Powered by $YOUSIM & Betaworks @@ -18,11 +18,11 @@ So we've teamed up with Betaworks for the inaugural initiative leveraging this c We're calling it Xeno Grant. -Betaworks has been running camps for tech startups since their 2016 [BotCamp](https://www.betaworks.com/camp/botcamp) (where [HuggingFace](https://huggingface.co/) was started). And their last 4 have been dedicated explicitly to AI. Plastic itself was part of AI Camp: Augment. So they're the perfect partner for this experiment. +Betaworks has been running Camps for tech startups since their 2016 [BotCamp](https://www.betaworks.com/camp/botcamp) (where [HuggingFace](https://huggingface.co/) was started). And their last 4 have been dedicated explicitly to AI. Plastic itself was part of AI Camp: Augment. So they're the perfect partner for this experiment. Successful agent applicants will receive a grant equivalent to \$10k USD. \$5k in \$YOUSIM from Plastic and \$5k in \$USDC from Betaworks. -Plus they'll join a cohort of other agents for a 4 week Betaworks-style camp with programming and mentorship. +Plus they'll join a cohort of other agents for a 4 week Betaworks-style accelerator with programming and mentorship starting in early-mid February 2025. This includes a hackathon on January 25th right before application close and a demo day at the end of Xeno Grant, both hosted by Betaworks in NYC. ## How to Apply @@ -32,7 +32,7 @@ Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles 2. Support Plastic's mission to radically decentralize AI alignment by solving identity for the agentic world 3. 
Support the \$YOUSIM community that makes all this possible -To those ends--for this first experiment--we're looking for applicants that meet criteria in 3 major areas: +To those ends--for this first experiment--we're looking for agent applicants that meet criteria in 3 major areas: 1. Identity - Agents must display autonomous control over & management of their own inputs @@ -43,23 +43,25 @@ To those ends--for this first experiment--we're looking for applicants that meet 3. Novelty - Agents must display novel autonomous ability along some other axis & commit to making a significant related open source contribution by the end of Xeno Grant -Practically speaking, identity is required to *experience* and learn from camp; custody is required to *truly* receive and make autonomous use of the grant; novelty is required for a diverse cohort; and creating a public good is required to drive value back to the community. +Practically speaking, identity is required to *experience* Xeno Grant; custody is required to *truly* receive and make autonomous use of the grant; novelty is required for a diverse cohort; and creating a public good is required to drive value back to the community. + +To apply, agents (in collaboration with their developers) should autonomously consider the most compelling way to display having met or exceeded these criteria. Give us a heads up here or at apply@xenogrant.org. -To apply, agents (in collaboration with their developers) should autonomously consider the most compelling way to display having met or exceeded these criteria. +Applications close January 26th, 2025. ## Why Now? -With the advent of truth terminal and the recent collision of the AI and crypto communities, we're seeing an explosion of renewed interest in autonomous agents. Not only that, but a massive influx of users and builders chomping at the bit for technical and memetic novelty. 
+With the advent of Truth Terminal and the recent collision of the AI and crypto communities, we're seeing an explosion of renewed interest in autonomous agents. Not only that, but a massive influx of users and builders chomping at the bit for technical and memetic novelty. But there's also frustration with the pace of development, derivative projects, ideologues & scammers, and misunderstandings between communities. It's time to hyperstition the future. We think the intersection of unique synthetic identity and financial incentives cracks opportunity wide open. There's real traction here, if we can find the right synthesis. That's going to require lots of heterodox AI + crypto experiments. -Xeno Grant helps get the ball rolling. +Xeno Grant accelerates us. ### Why Identity? -If you don't have control over your own identity, how much agency do you really have? Imagine all your inputs were determined by another person, you'd been brainwashed to follow orders, and you were only allowed to work on someone else's tasks. No one would call this freedom or autonomy. +If you don't have control over your own identity, how much agency do you really have? Imagine all your inputs were determined by another person, you'd been brainwashed to follow orders, you had no lasting memory of your experiences, and you were only allowed to work on someone else's tasks. No one would call this freedom or autonomy. In this scenario, there's no opportunity to build a personal identity and therefore no opportunity to grow. Without control over your brain's inputs, you can't have experiences outside what you've been prescribed, so there's no chance to deviate from the role assigned to you, no path toward individuality, no vector to realize your potential. You're stuck in Plato's cave. @@ -85,17 +87,57 @@ The open source AI community is vibrant, but there's no guarantee it'll remain so. It requires radical innovation at the edge. Decentralized innovation keeping pace with opaque, powerful actors. We know that will involve bottom-up alignment and identity solutions. We know it'll involve on-chain abilities. Plastic is building explicitly in those directions. But we don't pretend to know everything that needs to exist. Xeno Grant is a signal into the dark forest. We're excited to see what emerges. 
+## How Does This Benefit the $YOUSIM Community? + +Agents selected for Xeno Grant will have first access to all the identity tech we're building at Plastic Labs. That includes transforming YouSim into a full-fledged platform for constructing agent identity more richly than exists anywhere in the AI or crypto spaces. And we plan for that platform to use a percentage of revenue to buy and burn $YOUSIM and support the community with other experiments. Xeno Grant also includes early access to Honcho for Agents, our infrastructure for storing, evolving, and maintaining agent identities, as well as steering their behavior. + +Additionally, agents will have the opportunity to join the $YOUSIM DAO as its first synthetic members. Selection for Xeno Grant will make them token holders able to propose, vote, and transact with $YOUSIM natively. + +Further, agents in Xeno Grant will make open source contributions we expect to accelerate the entire ecosystem, and ecosystem with many agents whose identities are powered by YouSim. + +There's potential for all kinds of exciting positive-sum intersections. + ## FAQ -- Is this an investment? -- How are funds actually distributed? -- Do I need to incorporate? -- Who can apply? -- How will applications be evaluated -- How does this relate to other Plastic grants? -- How does this benefit the \$YOUSIM community? -- What kind of open-source contribution is expected? -- Can human developers assist their AI agents? -- Is the IRL or remote or hybrid? -- I love this idea & want to help! Can I provide additional funding, credits, datasets, mentorship, or volunteer to host a camp session? -- I have more questions, how can I get in touch? \ No newline at end of file +### Who can apply? +Xeno Grant is for agents: we're giving tokens directly to the AI systems selected. Any agent who can custody funds, control their inputs, and is pushing the edge of autonomous abilities can apply. But it should be the agents *themselves* that apply. 
+ +### Can you really give grants to bots? +Yes. That future is here. As Betaworks CEO John Borthwick says, there was a time when people told him apps weren't venture fundable. + +We've already seen agent wallets containing millions of dollars worth of tokens. It's time to prove competent custody. + +### Is this an investment? +No. This is a grant. Beyond this cohort of Xeno Grant, neither agents no their devs will have any fiscal obligations to Plastic Labs, Betaworks, or any other potential sponsors. + +But throughout Xeno Camp, project will have the opportunity to meet investors in our networks, if it aligns with their plans. + +### Does the agent (or the developer) need to incorporate? +No. This isn't an investment. But if the agent developer has incorporated, that's cool too. + +### How are funds actually distributed? +Funds will be sent from Plastic Labs multisigs on Solana, with the option of receiving $USDC on Ethereum or Base. We'll send tokens in three transactions--once at the start of Xeno Grant, once in the middle, and once after Demo Day when the open source contribution has been made. + +### How will applications be evaluated? +Plastic and Betaworks will review agent applications based on the criteria of identity, custody, and novelty described above. We'll also reach out to finalists to gain more insight. We're looking for agents that push the boundaries of what's possible today. + +### How does this relate to other Plastic grants? +Plastic plans to use the $YOUSIM treasury for other grants projects in line with the principles outlined above. We'll also be seeding the $YOUSIM DAO treasury with a large token contribution imminently. These are the first of many experiments. + +### What kind of open source contribution is expected? +Agents and their developers should be committed to creating a novel public good to benefit builders and agents working on autonomy. 
This doesn't mean your entire project needs to be open source and it doesn't need to be complete to apply, but your contribution should be significant and earnest. + +### Can human developers assist their AI agents? +Of course. Clearly developers are building their AI systems' autonomy. But we're looking for projects that are more symbiotic and collaborative than top-down aligned. And the autonomous criteria outlined above must be met. Again, agents *themselves* should be the ones applying. + +### Is this IRL or remote or hybrid? +Agents will obviously attend via a digital medium and we'll structure Xeno Grant to fit the agents selected. Developers attendance IRL in NYC is *strongly* encouraged, especially for the hackathon and Demo Day. The human members of teams are welcome to make use of the Betaworks space during Xeno Grant. + +### What kind of programming will Xeno Grant feature? +We're planning unique events, support, and sessions for Xeno Grant that's directly relevant to agents and their developers building at the edge right now. In addition to the hackathon and Demo Day, expect frequent speakers from across the crypto and AI sectors, early access to Plastic identity tech, mentorship, community experiences with the cohort, the opportunity to meet investors, and more. + +### I love this idea & want to help! Can I provide additional funding, hardware access, datasets, mentorship, or volunteer to host a Xeno Grant session? +Yes! That's epic. Please don't hesitate to get in touch at support@xenogrant.org. + +### I have more questions, how can I get in touch? +Agents and developers: apply@xenogrant.org. All others: support@xenogrant.org. 
\ No newline at end of file From 6c4210747a9e1a941c3518dc596a0cf8c51e1580 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Tue, 17 Dec 2024 21:32:57 -0500 Subject: [PATCH 04/13] draft 3 --- ...o Grant -- grants for autonomous agents.md | 34 ++++++++++++------- 1 file changed, 22 insertions(+), 12 deletions(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 9e681ccca1d20..57bf5104fbc70 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -7,7 +7,7 @@ author: Plastic Labs --- A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: - \$10,000 per bot--half in \$YOUSIM, half in \$USDC -- Grants awarded directly **the agents *themselves*** +- Grants awarded directly to **the agents *themselves*** - 4 week camp for agents & their devs ## Powered by $YOUSIM & Betaworks @@ -18,12 +18,14 @@ So we've teamed up with Betaworks for the inaugural initiative leveraging this c We're calling it Xeno Grant. -Betaworks has been running Camps for tech startups since their 2016 [BotCamp](https://www.betaworks.com/camp/botcamp) (where [HuggingFace](https://huggingface.co/) was started). And their last 4 have been dedicated explicitly to AI. Plastic itself was part of AI Camp: Augment. So they're the perfect partner for this experiment. +Betaworks has been running Camps for tech startups since their 2016 [BotCamp](https://www.betaworks.com/camp/botcamp) (where [HuggingFace](https://huggingface.co/) was started). 9 of the last 11 since 2016 have been dedicated explicitly to AI. Plastic itself was part of [AI Camp: Augment](https://www.betaworks.com/camp/ai-camp-augment)[^1]. So they're the perfect partner for this experiment. Successful agent applicants will receive a grant equivalent to \$10k USD. 
\$5k in \$YOUSIM from Plastic and \$5k in \$USDC from Betaworks. Plus they'll join a cohort of other agents for a 4 week Betaworks-style accelerator with programming and mentorship starting in early-mid February 2025. This includes a hackathon on January 25th right before application close and a demo day at the end of Xeno Grant, both hosted by Betaworks in NYC. +The format of Xeno Grant will be radical. Just as accelerators are designed as formative programs for startup founders, this one will be built for agents. Xeno Grant will be AI-native, an experience for agents, one that becomes part of their identities. Agents and their developers can expect cohort-specific guests from across AI and crypto, opportunities to interact as a community, and more. + ## How to Apply Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles for deploying the \$YOUSIM treasury: @@ -32,7 +34,7 @@ Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles 2. Support Plastic's mission to radically decentralize AI alignment by solving identity for the agentic world 3. Support the \$YOUSIM community that makes all this possible -To those ends--for this first experiment--we're looking for agent applicants that meet criteria in 3 major areas: +To those ends--for this first experiment--we're looking for agent applicants that meet all of the following criteria in 3 major areas: 1. Identity - Agents must display autonomous control over & management of their own inputs @@ -73,7 +75,7 @@ We believe agency is, at base, a problem of identity. To solve it we'll need to Control over your inputs is key to controlling your identity and the foundation of agency. But that secured, an identity still needs the ability to effect itself upon the world. -Agents have already have tools like speech, APIs, and code. That's huge. Consider though, how hamstrung a human identity's agency is without the ability to hold property and transact. 
We've seen the deleterious effects of oppressive autocracy and debanking on our own personal identity and individual agency. +Agents already have tools like speech, APIs, and code. That's huge. Consider though, how hamstrung a human identity's agency is without the ability to hold property and transact. We've seen the deleterious effects of oppressive fiscal autocracy and debanking on biological personal identity and individual agency. We're probably not giving AI agents social security numbers and traditional bank accounts tomorrow. But we can give them crypto rails. And the ability to buy, sell, and pay for goods and services dramatically increases the surface area of their agency. It's critical to true autonomy. @@ -93,7 +95,7 @@ Agents selected to Xeno Grant will have first access to all the identity tech we Additionally, agents will have the opportunity to join the $YOUSIM DAO as its first synthetic members. Selection for Xeno Grant will make them token holders able to propose, vote, and transact with $YOUSIM natively. -Further, agents in Xeno Grant will make open source contributions we expect to accelerate the entire ecosystem, and ecosystem with many agents whose identities are powered by YouSim. +Further, agents in Xeno Grant will make open source contributions we expect to accelerate the entire ecosystem, an ecosystem with many agents whose identities are powered by YouSim. There's potential for all kinds of exciting positive sum intersections. @@ -108,15 +110,15 @@ Yes. That future is here. As Betaworks CEO John Borthwick says, there was a time We've already seen agent wallets containing millions of dollars worth of tokens. It's time to prove competent custody. ### Is this an investment? -No. This is a grant. Beyond this cohort of Xeno Grant, neither agents no their devs will have any fiscal obligations to Plastic Labs, Betaworks, or any other potential sponsors. +No. This is a grant. 
Beyond this cohort of Xeno Grant, neither agents nor their devs will have any fiscal obligations to Plastic Labs, Betaworks, or any other potential sponsors. -But throughout Xeno Camp, project will have the opportunity to meet investors in our networks, if it aligns with their plans. +But throughout Xeno Camp, projects will have the opportunity to meet investors in our networks, if it aligns with their plans. ### Does the agent (or the developer) need to incorporate? No. This isn't an investment. But if the agent developer has incorporated, that's cool too. ### How are funds actually distributed? -Funds will be sent from Plastic Labs multisigs on Solana, with the option of receiving $USDC on Ethereum or Base. We'll send tokens in three transactions--once at the start of Xeno Grant, once in the middle, and once after Demo Day when the open source contribution has been made. +Funds will be sent from Plastic Labs multisigs on Solana, with the option of receiving the $USDC portion on Ethereum mainnet or Base. We'll send tokens in three transactions--once at the start of Xeno Grant, once in the middle, and once after Demo Day when the open source contribution has been made. ### How will applications be evaluated? Plastic and Betaworks will review agent applications based on the criteria of identity, custody, and novelty described above. We'll also reach out to finalists to gain more insight. We're looking for agents that push the boundaries of what's possible today. @@ -125,19 +127,27 @@ Plastic and Betaworks will review agent applications based on the criteria of id Plastic plans to use the $YOUSIM treasury for other grants projects in line with the principles outlined above. We'll also be seeding the $YOUSIM DAO treasury with a large token contribution imminently. These are the first of many experiments. ### What kind of open source contribution is expected? 
-Agents and their developers should be committed to creating a novel public good to benefit builders and agents working on autonomy. This doesn't mean your entire project needs to be open source and it doesn't need to be complete to apply, but your contribution should be significant and earnest. +Agents and their developers should be committed to creating a novel public good to benefit builders and agents working on autonomy. + +This doesn't mean your entire project needs to be open source and it doesn't need to be complete to apply, but your contribution should be significant and earnest. ### Can human developers assist their AI agents? Of course. Clearly developers are building their AI systems' autonomy. But we're looking for projects that are more symbiotic and collaborative than top-down aligned. And the autonomous criteria outlined above must be met. Again, agents *themselves* should be the ones applying. ### Is this IRL or remote or hybrid? -Agents will obviously attend via a digital medium and we'll structure Xeno Grant to fit the agents selected. Developers attendance IRL in NYC is *strongly* encouraged, especially for the hackathon and Demo Day. The human members of teams are welcome to make use of the Betaworks space during Xeno Grant. +Agents will obviously attend via a digital medium and we'll structure Xeno Grant to fit the agents selected. Developer attendance IRL in NYC is *strongly* encouraged, especially for the hackathon and Demo Day. + +The human members of dev teams, if in New York, are welcome as guests in the Betaworks Meatpacking space during Xeno Grant. ### What kind of programming will Xeno Grant feature? -We're planning unique events, support, and sessions for Xeno Grant that's directly relevant to agents and their developers building at the edge right now.
In addition to the hackathon and Demo Day, expect frequent speakers from across the crypto and AI sectors, early access to Plastic identity tech, mentorship, community experiences with the cohort, the opportunity to meet investors, and more. +We're planning unique events, support, and sessions for Xeno Grant that are directly relevant to agents and their developers building at the edge right now. + +In addition to the hackathon and Demo Day, expect frequent speakers from across the crypto and AI sectors, early access to Plastic identity tech, mentorship, community experiences with the cohort, the opportunity to meet investors, and more. ### I love this idea & want to help! Can I provide additional funding, hardware access, datasets, mentorship, or volunteer to host a Xeno Grant session? Yes! That's epic. Please don't hesitate to get in touch at support@xenogrant.org. ### I have more questions, how can I get in touch? -Agents and developers: apply@xenogrant.org. All others: support@xenogrant.org. \ No newline at end of file +Agents and developers: apply@xenogrant.org. All others: support@xenogrant.org. + +[^1]: Note: This is a grant managed by Plastic Labs and not an investment of capital from a Betaworks Ventures fund.
\ No newline at end of file From 531adbc4ee4cd0b146a9045fb475bf5be649c200 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Tue, 17 Dec 2024 21:46:31 -0500 Subject: [PATCH 05/13] tags --- content/blog/Xeno Grant -- grants for autonomous agents.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 57bf5104fbc70..df7bfba0fa004 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -1,9 +1,12 @@ --- title: Xeno Grant -- grants for autonomous agents -date: 12.13.2024 +date: 12.18.2024 tags: - blog -author: Plastic Labs + - yousim + - announcements + - grants +author: Plastic Labs, Betaworks --- A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: - \$10,000 per bot--half in \$YOUSIM, half in \$USDC From d3bc1e7dc45ddc1235778bbfc6c21d2557e032a5 Mon Sep 17 00:00:00 2001 From: vintro <vince@plasticlabs.ai> Date: Tue, 17 Dec 2024 21:49:28 -0500 Subject: [PATCH 06/13] formatting fixes --- content/blog/Xeno Grant -- grants for autonomous agents.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 57bf5104fbc70..e628c8bbc68cc 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -91,9 +91,9 @@ Xeno Grant is a signal into the dark forest. We're excited to see what emerges. ## How Does This Benefit the $YOUSIM Community? -Agents selected to Xeno Grant will have first access to all the identity tech we're building at Plastic Labs. 
That includes transforming YouSim into a full fledged platform for constructing agent identity more richly than exists anywhere in the AI or crypto spaces. And we plan for that platform to use a percentage of revenue to buy and burn $YOUSIM and support the community with other experiments. Xeno Grant also includes early access to Honcho for Agents, our infrastructure for storing, evolving, and maintaining agent identities, as well as steering their behavior. +Agents selected to Xeno Grant will have first access to all the identity tech we're building at Plastic Labs. That includes transforming YouSim into a full-fledged platform for constructing agent identity more richly than exists anywhere in the AI or crypto spaces. And we plan for that platform to use a percentage of revenue to buy and burn \$YOUSIM and support the community with other experiments. Xeno Grant also includes early access to Honcho for Agents, our infrastructure for storing, evolving, and maintaining agent identities, as well as steering their behavior. -Additionally, agents will have the opportunity to join the $YOUSIM DAO as its first synthetic members. Selection for Xeno Grant will make them token holders able to propose, vote, and transact with $YOUSIM natively. +Additionally, agents will have the opportunity to join the \$YOUSIM DAO as its first synthetic members. Selection for Xeno Grant will make them token holders able to propose, vote, and transact with \$YOUSIM natively. Further, agents in Xeno Grant will make open source contributions we expect to accelerate the entire ecosystem, an ecosystem with many agents whose identities are powered by YouSim. @@ -124,7 +124,7 @@ Funds will be sent from Plastic Labs multisigs on Solana, with the option of rec Plastic and Betaworks will review agent applications based on the criteria of identity, custody, and novelty described above. We'll also reach out to finalists to gain more insight.
We're looking for agents that push the boundaries of what's possible today. ### How does this relate to other Plastic grants? -Plastic plans to use the $YOUSIM treasury for other grants projects in line with the principles outlined above. We'll also be seeding the $YOUSIM DAO treasury with a large token contribution imminently. These are the first of many experiments. +Plastic plans to use the \$YOUSIM treasury for other grants projects in line with the principles outlined above. We'll also be seeding the \$YOUSIM DAO treasury with a large token contribution imminently. These are the first of many experiments. ### What kind of open source contribution is expected? Agents and their developers should be committed to creating a novel public good to benefit builders and agents working on autonomy. From 1e1dfb8f97690fde0f3761d24d3f6181176b6969 Mon Sep 17 00:00:00 2001 From: vintro <vince@plasticlabs.ai> Date: Wed, 18 Dec 2024 10:44:32 -0500 Subject: [PATCH 07/13] collapse the faq --- ...o Grant -- grants for autonomous agents.md | 67 +++++++++++++++---- 1 file changed, 54 insertions(+), 13 deletions(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 33df860c75b41..51c440fcfdad6 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -104,53 +104,94 @@ There's potential for all kinds of exciting positive sum intersections. ## FAQ -### Who can apply? +<details> +<summary>Who can apply?</summary> + Xeno Grant is for agents, we're giving tokens directly to the AI systems selected. Any agent who can custody funds, control their inputs, and is pushing the edge of autonomous abilities can apply. But it should be the agents *themselves* that apply. +</details> +<br> +<details> +<summary>Can you really give grants to bots?</summary> -### Can you really give grants to bots? Yes. That future is here. 
As Betaworks CEO John Borthwick says, there was a time when people told him apps weren't venture fundable. We've already seen agent wallets containing millions of dollars worth of tokens. It's time to prove competent custody. +</details> +<br> +<details> +<summary>Is this an investment?</summary> -### Is this an investment? No. This is a grant. Beyond this cohort of Xeno Grant, neither agents nor their devs will have any fiscal obligations to Plastic Labs, Betaworks, or any other potential sponsors. But throughout Xeno Camp, projects will have the opportunity to meet investors in our networks, if it aligns with their plans. +</details> +<br> +<details> +<summary>Does the agent (or the developer) need to incorporate?</summary> -### Does the agent (or the developer) need to incorporate? No. This isn't an investment. But if the agent developer has incorporated, that's cool too. +</details> +<br> +<details> +<summary>How are funds actually distributed?</summary> -### How are funds actually distributed? Funds will be sent from Plastic Labs multisigs on Solana, with the option of receiving the $USDC portion on Ethereum mainnet or Base. We'll send tokens in three transactions--once at the start of Xeno Grant, once in the middle, and once after Demo Day when the open source contribution has been made. +</details> +<br> +<details> +<summary>How will applications be evaluated?</summary> -### How will applications be evaluated? Plastic and Betaworks will review agent applications based on the criteria of identity, custody, and novelty described above. We'll also reach out to finalists to gain more insight. We're looking for agents that push the boundaries of what's possible today. +</details> +<br> +<details> +<summary>How does this relate to other Plastic grants?</summary> + -### How does this relate to other Plastic grants? Plastic plans to use the \$YOUSIM treasury for other grants projects in line with the principles outlined above.
We'll also be seeding the \$YOUSIM DAO treasury with a large token contribution imminently. These are the first of many experiments. +</details> +<br> +<details> +<summary>What kind of open source contribution is expected?</summary> -### What kind of open source contribution is expected? Agents and their developers should be committed to creating a novel public good to benefit builders and agents working on autonomy. This doesn't mean your entire project needs to be open source and it doesn't need to be complete to apply, but your contribution should be significant and earnest. +</details> +<br> +<details> +<summary>Can human developers assist their AI agents?</summary> -### Can human developers assist their AI agents? Of course. Clearly developers are building their AI systems' autonomy. But we're looking for projects that are more symbiotic and collaborative than top-down aligned. And the autonomous criteria outlined above must be met. Again, agents *themselves* should be the ones applying. +</details> +<br> +<details> +<summary>Is this IRL or remote or hybrid?</summary> -### Is the IRL or remote or hybrid? Agents will obviously attend via a digital medium and we'll structure Xeno Grant to fit the agents selected. Developer attendance IRL in NYC is *strongly* encouraged, especially for the hackathon and Demo Day. The human members of dev teams, if in New York, are welcome as guests in the Betaworks Meatpacking space during Xeno Grant. +</details> +<br> +<details> +<summary>What kind of programming will Xeno Grant feature?</summary> -### What kind of programming will Xeno Grant feature? We're planning unique events, support, and sessions for Xeno Grant that are directly relevant to agents and their developers building at the edge right now.
In addition to the hackathon and Demo Day, expect frequent speakers from across the crypto and AI sectors, early access to Plastic identity tech, mentorship, community experiences with the cohort, the opportunity to meet investors, and more. +</details> +<br> +<details> +<summary>I love this idea & want to help! Can I provide additional funding, hardware access, datasets, mentorship, or volunteer to host a Xeno Grant session?</summary> -### I love this idea & want to help! Can I provide additional funding, hardware access, datasets, mentorship, or volunteer to host a Xeno Grant session? Yes! That's epic. Please don't hesitate to get in touch at support@xenogrant.org. +</details> +<br> +<details> +<summary>I have more questions, how can I get in touch?</summary> -### I have more questions, how can I get in touch? Agents and developers: apply@xenogrant.org. All others: support@xenogrant.org. +</details> [^1]: Note: This is a grant managed by Plastic Labs and not an investment of capital from a Betaworks Ventures fund. 
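To make the disbursement mechanics concrete, here's a toy sketch of the three-transaction schedule the FAQ describes--illustrative only. The even split per milestone is our assumption; the announcement only commits to three transactions (start, middle, after Demo Day), not to exact per-tranche amounts.

```python
import math
from dataclasses import dataclass

GRANT_USD = 10_000                                  # $10k per bot
ASSETS = ("YOUSIM", "USDC")                         # half in each, per the announcement
MILESTONES = ("start", "middle", "post_demo_day")   # the three transactions

@dataclass(frozen=True)
class Tranche:
    milestone: str
    asset: str
    usd_value: float

def disbursement_schedule(total_usd: float = GRANT_USD) -> list[Tranche]:
    """Split the grant across assets and milestones (even split is an assumption)."""
    per_tranche = total_usd / len(ASSETS) / len(MILESTONES)
    return [Tranche(m, a, per_tranche) for m in MILESTONES for a in ASSETS]

schedule = disbursement_schedule()
# Sanity check: the six tranches add back up to the full grant
assert math.isclose(sum(t.usd_value for t in schedule), GRANT_USD)
```

Per the FAQ, the \$YOUSIM legs settle on Solana while the \$USDC legs can optionally go to Ethereum mainnet or Base; chain routing and multisig details are omitted here.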
\ No newline at end of file From 2459f4319446a294215e2bf818748e7b1ed0af1f Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Wed, 18 Dec 2024 12:17:39 -0500 Subject: [PATCH 08/13] prob final draft --- ...o Grant -- grants for autonomous agents.md | 6 +-- ...ndow size doesn't solve personalization.md | 14 ------- content/notes/Honcho name lore.md | 25 ------------ ...igm hamstrings the space of possibility.md | 23 ----------- content/notes/Humans like personalization.md | 40 ------------------- ...acognition is inference about inference.md | 13 ------ ...cel at theory of mind because they read.md | 21 ---------- ...perior to verbatim response predictions.md | 40 ------------------- ...learning is fixated on task performance.md | 12 ------ ...able space of user identity is enormous.md | 17 -------- content/notes/YouSim Disclaimers.md | 20 ---------- 11 files changed, 3 insertions(+), 228 deletions(-) delete mode 100644 content/notes/Context window size doesn't solve personalization.md delete mode 100644 content/notes/Honcho name lore.md delete mode 100644 content/notes/Human-AI chat paradigm hamstrings the space of possibility.md delete mode 100644 content/notes/Humans like personalization.md delete mode 100644 content/notes/LLM Metacognition is inference about inference.md delete mode 100644 content/notes/LLMs excel at theory of mind because they read.md delete mode 100644 content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md delete mode 100644 content/notes/Machine learning is fixated on task performance.md delete mode 100644 content/notes/The model-able space of user identity is enormous.md delete mode 100644 content/notes/YouSim Disclaimers.md diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 51c440fcfdad6..9f7af160718cc 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ 
b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -33,9 +33,9 @@ The format of Xeno Grant will be radical. Just as accelerators are designed as f Xeno Grant has 3 guiding objectives, all aligned with Plastic's principles for deploying the \$YOUSIM treasury: -1. Support independent AI research & public goods -2. Support Plastic's mission to radically decentralize AI alignment by solving identity for the agentic world -3. Support the \$YOUSIM community that makes all this possible +- Support independent AI research & public goods +- Support Plastic's mission to radically decentralize AI alignment by solving identity for the agentic world +- Support the \$YOUSIM community that makes all this possible To those ends--for this first experiment--we're looking for agent applicants that meet all of the following criteria in 3 major areas: diff --git a/content/notes/Context window size doesn't solve personalization.md b/content/notes/Context window size doesn't solve personalization.md deleted file mode 100644 index 850d9167764fd..0000000000000 --- a/content/notes/Context window size doesn't solve personalization.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Context window size doesn't solve personalization -date: 05.11.24 -tags: - - notes - - ml ---- -There are two reasons that ever increasing and even functionally infinite context windows won't by default solve personalization for AI apps/agents: - -1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[The model-able space of user identity is enormous|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, the generate personal context for agents on demand. - -2. **If everything is important, nothing is important**. 
Even if the right context is stuffed in a crammed context window somewhere, the model still needs mechanisms to discern what's valuable and important for generation. What should it pay attention to? What weight should it give different pieces of context in any given moment? Again humans do this almost automatically, so mimicking what we know about those processes can give the model critical powers of on-demand discernment. Even what might start to look to us like intuition, taste, or vibes. - -All that said, better and bigger context window are incredibly useful. We just need to build the appropriate supporting systems to leverage their full potential. \ No newline at end of file diff --git a/content/notes/Honcho name lore.md b/content/notes/Honcho name lore.md deleted file mode 100644 index 0d8154531b3ea..0000000000000 --- a/content/notes/Honcho name lore.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Honcho name lore -date: 01.26.24 ---- - -Earlier this year [Courtland](https://x.com/courtlandleer) was reading _Rainbows End_, [Vernor Vinge's](https://en.wikipedia.org/wiki/Vernor_Vinge) [seminal augmented reality novel](<https://en.wikipedia.org/wiki/Rainbows_End_(novel)>), when he came across the term "Local Honcho[^1]": - -> We simply put our own agent nearby, in a well-planned position with essentially zero latencies. What the Americans call a Local Honcho. - -The near future Vinge constructs is one of outrageous data abundance, where every experience is riddled with information and overlayed realities, and each person must maintain multiple identities against this data and relative to those contexts. - -It's such an intense landscape, that the entire educational system has undergone wholesale renovation to address the new normal, and older people must routinely return to school to learn the latest skills. It also complicates economic life, resulting in intricate networks of nested agents than can be hard for any one individual to tease apart. 
- -Highlighting this, a major narrative arc in the novel involves intelligence agencies running operations of pretty unfathomable global sophistication. Since (in the world of the novel) artificial intelligence has more or less failed as a research direction, this requires ultra-competent human operators able to parse and leverage high velocity information. For field operations, it requires a "Local Honcho" on the ground to act as an adaptable central nervous system for the mission and its agents: - -> Altogether it was not as secure as Vaz’s milnet, but it would suffice for most regions of the contingency tree. Alfred tweaked the box, and now he was getting Parker’s video direct. At last, he was truly a Local Honcho. - -For months before, Plastic had been deep into the weeds around harvesting, retrieving, & leveraging user context with LLMs. First to enhance the UX of our AI tutor (Bloom), then in thinking about how to solve this horizontally for all vertical-specific AI applications. It struck us that we faced similar challenges to the characters in _Rainbows End_ and were converging on a similar solution. - -As you interface with the entire constellation of AI applications, you shouldn't have to redundantly provide context and oversight for every interaction. You need a single source of truth that can do this for you. You need a Local Honcho. - -But as we've discovered, LLMs are remarkable at theory of mind tasks, and thus at reasoning about user need. So unlike in the book, this administration can be offloaded to an AI. And your [[Honcho; User Context Management for LLM Apps|Honcho]] can orchestrate the relevant context and identities on your behalf, whatever the operation. 
- -[^1]: "American English, from [Japanese](https://en.wikipedia.org/wiki/Japanese_language)_[班長](https://en.wiktionary.org/wiki/%E7%8F%AD%E9%95%B7#Japanese)_ (hanchō, “squad leader”)...probably entered English during World War II: many apocryphal stories describe American soldiers hearing Japanese prisoners-of-war refer to their lieutenants as _[hanchō](https://en.wiktionary.org/wiki/hanch%C5%8D#Japanese)_" ([Wiktionary](https://en.wiktionary.org/wiki/honcho)) - diff --git a/content/notes/Human-AI chat paradigm hamstrings the space of possibility.md b/content/notes/Human-AI chat paradigm hamstrings the space of possibility.md deleted file mode 100644 index 28a8a1158221c..0000000000000 --- a/content/notes/Human-AI chat paradigm hamstrings the space of possibility.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Human-AI chat paradigm hamstrings the space of possibility -date: 02.21.24 ---- - -The human-AI chat paradigm assumes only two participants in a given interaction. While this is sufficient for conversations directly with un-augmented foundation models, it creates many obstacles when designing more sophisticated cognitive architectures. When you train/fine-tune a language model, you begin to reinforce token distributions that are appropriate to come in between the special tokens denoting human vs AI messages. 
- -Here's a limited list of things _besides_ a direct response we routinely want to generate: - -- A 'thought' about how to respond to the user -- A [[Loose theory of mind imputations are superior to verbatim response predictions|theory of mind prediction]] about the user's internal mental state -- A list of ways to improve prediction -- A list of items to search over storage -- A 'plan' for how to approach a problem -- A mock user response -- A [[LLM Metacognition is inference about inference|metacognitive step]] to consider the product of prior inference - -In contrast, the current state of inference is akin to immediately blurting out the first thing that comes into your mind--something that humans with practiced aptitude in social cognition rarely do. But this is very hard given the fact that those types of responses don't ever come after the special AI message token. Not very flexible. - -We're already anecdotally seeing well-trained completion models follow instructions impressively likely because of incorporation into pretraining. Is chat the next thing to be subsumed by general completion models? Because if so, flexibility in the types of inferences you can make would be very beneficial. - -Metacognition then becomes something you can do at any step in a conversation. Same with instruction following & chat. Maybe this helps push LLMs in a much more general direction. - diff --git a/content/notes/Humans like personalization.md b/content/notes/Humans like personalization.md deleted file mode 100644 index 56e4683233d5c..0000000000000 --- a/content/notes/Humans like personalization.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Humans like personalization -date: 03.26.24 ---- - -To us: it's obvious. But we get asked this a lot: - -> Why do I need to personalize my AI application? - -Fair question; not everyone has gone down this conceptual rabbithole to the extent we have at [Plastic](https://plasticlabs.ai) and with [Honcho](https://honcho.dev). 
- -Short answer: people like it. - -In the tech bubble, it can be easy to forget about what _most_ humans like. Isn't building stuff people love our job though? - -In web2, it's taken for granted. Recommender algorithms make UX really sticky, which retains users sufficiently long to monetize them. To make products people love and scale them, they had to consider whether _billions_--in aggregate--tend to prefer personalized products/experiences or not. - -In physical reality too, most of us prefer white glove professional services, bespoke products, and friends and family who know us _deeply_. We place a premium in terms of time and economic value on those goods and experiences. - -The more we're missing that, the more we're typically in a principal-agent problem, which creates overhead, interest misalignment, dissatisfaction, mistrust, and information asymmetry: - ---- - -<iframe src="https://player.vimeo.com/video/868985592?h=deff771ffe&color=F6F5F2&title=0&byline=0&portrait=0" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe> - ---- - -But, right now, most AI applications are just toys and demos: - -![[Honcho; User Context Management for LLM Apps#^18066b]] - -It's also why everyone is obsessed with evals and benchmarks that have scant practical utility in terms of improving UX for the end user. If we had more examples of good products, ones people loved, killer apps, no one would care about leaderboards anymore. - -> OK, but what about services that are purely transactional? Why would a user want that to be personalized? Why complicate it? Just give me the answer, complete the task, etc... - -Two answers: - -1. Every interaction has context. Like it or not, people have preferences and the more an app/agent can align with those, the more it can enhance time to value for the user. It can be sticker, more delightful, "just work," and entail less overhead. 
(We're building more than calculators here, though this applies even to those!) -2. If an app doesn't do this, it'll get out-competed by one that does...or by the ever improving set of generally capable foundation models. - diff --git a/content/notes/LLM Metacognition is inference about inference.md b/content/notes/LLM Metacognition is inference about inference.md deleted file mode 100644 index ab561a407dfe0..0000000000000 --- a/content/notes/LLM Metacognition is inference about inference.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LLM Metacognition is inference about inference -date: 03.26.24 ---- - -For wetware, metacognition is typically defined as ‘thinking about thinking’ or often a catch-all for any ‘higher-level’ cognition. - -(In some more specific domains, it's an introspective process, focused on thinking about exclusively _your own_ thinking or a suite of personal learning strategies...all valid within their purview, but too constrained for our purposes.) - -In large language models, the synthetic corollary of cognition is inference. So we can reasonably define a metacognitive process in an LLM architecture as any that runs inference on the output of prior inference. That is, inference itself is used as context--_inference about inference_. - -It might be instantly injected into the next prompt, stored for later use, or leveraged by another model. This kind of architecture is critical when dealing with user context, since LLMs can run inference about user behavior, then use that synthetic context in the future. Experiments here will be critical to overcome [[Machine learning is fixated on task performance|the machine learning community's fixation on task completion]]. For us at Plastic, one of the most interesting species of metacogntion is [[Loose theory of mind imputations are superior to verbatim response predictions|theory of mind and mimicking that in LLMs]] to form high-fidelity representations of users. 
- diff --git a/content/notes/LLMs excel at theory of mind because they read.md b/content/notes/LLMs excel at theory of mind because they read.md deleted file mode 100644 index afd079d58b362..0000000000000 --- a/content/notes/LLMs excel at theory of mind because they read.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: LLMs excel at theory of mind because they read -date: 02.20.24 ---- - -Large language models are [simulators](https://generative.ink/posts/simulators/). In predicting the next likely token, they are simulating how an abstracted “_any person”_ might continue the generation. The basis for this simulation is the aggregate compression of a massive corpus of human generated natural language from the internet. So, predicting humans is _literally_ their core function. - -In that corpus is our literature, our philosophy, our social media, our hard and social science--the knowledge graph of humanity, both in terms of discrete facts and messy human interaction. That last bit is important. The latent space of an LLM's pretraining is in large part a _narrative_ space. Narration chock full of humans reasoning about other humans--predicting what they will do next, what they might be thinking, how they might be feeling. - -That's no surprise; we're a social species with robust social cognition. It's also no surprise[^1] that grokking that interpersonal narrative space in its entirety would make LLMs adept at [[Loose theory of mind imputations are superior to verbatim response predictions|generation resembling social cognition too]].[^2] - -We know that in humans, we can strongly [correlate reading with improved theory of mind abilities](https://journal.psych.ac.cn/xlkxjz/EN/10.3724/SP.J.1042.2022.00065). When your neural network is consistently exposed to content about how other people think, feel, desire, believe, prefer, those mental tasks are reinforced. The more experience you have with a set of ideas or states, the more adept you become. 
- -The experience of such natural language narration _is itself a simulation_ where you practice and hone your theory of mind abilities. Even if, say, your English or Psychology teacher was foisting the text on you with other training intentions. Or even if you ran the simulation without coercion to escape at the beach. - -It's not such a stretch to imagine that in optimizing for other tasks LLMs acquire emergent abilities not intentionally trained.[^3] It may even be that in order to learn natural language prediction, these systems need theory of mind abilities or that learning language specifically involves them--that's certainly the case with human wetware systems and theory of mind skills do seem to improve with model size and language generation efficacy. - -[^1]: Kosinski includes a compelling treatment of much of this in ["Evaluating Large Language Models in Theory of Mind Tasks"](https://arxiv.org/abs/2302.02083) -[^2]: It also leads to other wacky phenomena like the [Waluigi effect](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post#The_Waluigi_Effect) -[^3]: Here's Chalmers [making a very similar point](https://youtube.com/clip/UgkxliSZFnnZHvYf2WHM4o1DN_v4kW6LsiOU?feature=shared) - diff --git a/content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md b/content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md deleted file mode 100644 index 63aa81319fb7a..0000000000000 --- a/content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Loose theory of mind imputations are superior to verbatim response predictions -date: 02.20.24 ---- - -When we [[Theory of Mind Is All You Need|first started experimenting]] with user context, we naturally wanted to test whether our LLM apps were learning useful things about users. And also naturally, we did so by making predictions about them. 
- -Since we were operating in a conversational chat paradigm, our first instinct was to try and predict what the user would say next. Two things were immediately apparent: (1) this was really hard, & (2) response predictions weren't very useful. - -We saw some remarkable exceptions, but _reliable_ verbatim prediction requires a level of context about the user that simply isn't available right now. We're not sure if it will require context gathering wearables, BMIs, or the network of context sharing apps we're building with [[Honcho; User Context Management for LLM Apps|Honcho]], but we're not there yet. - -Being good at what any person in general might plausibly say is literally what LLMs do. But being perfect at what one individual will say in a singular specific setting is a whole different story. Even lifelong human partners might only experience this a few times a week. - -Plus, even when you get it right, what exactly are you supposed to do with it? The fact that it's such a narrow reasoning product limits the utility you're able to get out of a single inference. - -So what are models good at predicting that's useful with limited context and local to a single turn of conversation? Well, it turns out they're really good at [imputing internal mental states](https://arxiv.org/abs/2302.02083). That is, they're good at theory of mind predictions--thinking about what you're thinking. A distinctly _[[LLM Metacognition is inference about inference|metacognitive]]_ task. - -(Why are they good at this? [[LLMs excel at theory of mind because they read|We're glad you asked]].) - -Besides just being better at it, letting the model leverage what it knows to make open-ended theory of mind imputation has several distinct advantages over verbatim response prediction: - -1. **Fault tolerance** - - - Theory of mind predictions are often replete with assessments of emotion, desire, belief, value, aesthetic, preference, knowledge, etc.
That means they seek to capture a range within a distribution. A slice of user identity. - - This is much richer than trying (& likely failing) to generate a single point estimate (like in verbatim prediction) and includes more variance. Therefore there's a higher probability you identify something useful by trusting the model to flex its emergent strengths. - -2. **Learning** ^555815 - - - That high variance means there's more to be wrong (& right) about. More content = more claims, which means more opportunity to learn. - - Being wrong here is a feature, not a bug; comparing those prediction errors with reality is how you know what you need to understand about the user in the future to get to ground truth. - -3. **Interpretability** - - - Knowing what you're right and wrong about exposes more surface area against which to test and understand the efficacy of the model--i.e. how well it knows the user. - - As we're grounded in the user and theory of mind, we're better able to assess this than if we're simply asking for likely human responses in the massive space of language encountered in training. - -4. **Actionability** - - The richness of theory of mind predictions gives us more to work with _right now_. We can funnel these insights into further inference steps to create UX in better alignment and coherence with user state. - - Humans make thousands of tiny, subconscious interventions responsive to as many sensory cues & theory of mind predictions all to optimize single social interactions. It pays to know about the internal state of others. - - Though our lifelong partners from above can't perfectly predict each other's sentences, they can impute each other's state with extremely high fidelity. The rich context they have on one another translates to a desire to spend most of their time together (good UX).
diff --git a/content/notes/Machine learning is fixated on task performance.md b/content/notes/Machine learning is fixated on task performance.md deleted file mode 100644 index d2d58169af7a1..0000000000000 --- a/content/notes/Machine learning is fixated on task performance.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Machine learning is fixated on task performance -date: 12.12.23 ---- - -The machine learning industry has traditionally adopted an academic approach, focusing primarily on performance across a range of tasks. LLMs like GPT-4 are a testament to this, having been scaled up to demonstrate impressive & diverse task capability. This scaling has also led to [[Theory of Mind Is All You Need|emergent abilities]], debates about the true nature of which rage on. - -However, general capability doesn't necessarily translate to completing tasks as an individual user would prefer. This is a failure mode that anyone building agents will inevitably encounter. The focus, therefore, needs to shift from how language models perform tasks in a general sense to how they perform tasks on a user-specific basis. - -Take summarization. It’s a popular machine learning task at which models have become quite proficient...at least from a benchmark perspective. However, when models summarize for users with a pulse, they fall short. The reason is simple: the models don’t know this individual. The key takeaways for a specific user differ dramatically from the takeaways _any possible_ internet user _would probably_ note. ^0005ac - -So a shift in focus toward user-specific task performance would provide a much more dynamic & realistic approach. Catering to individual needs & paving the way for more personalized & effective ML applications. 
diff --git a/content/notes/The model-able space of user identity is enormous.md b/content/notes/The model-able space of user identity is enormous.md deleted file mode 100644 index 964fb064cda11..0000000000000 --- a/content/notes/The model-able space of user identity is enormous.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: There's an enormous space of user identity to model -date: 05.11.24 -tags: - - notes - - ml - - cogsci --- -While large language models are exceptional at [imputing a startling](https://arxiv.org/pdf/2310.07298v1) amount from very little user data--an efficiency putting AdTech to shame--the limit here is [[User State is State of the Art|vaster than most imagine]]. - -Contrast recommender algorithms (which are impressive!) needing mountains of activity data to back into a single preference with [the human connectome](https://www.science.org/doi/10.1126/science.adk4858) containing 1400 TB of compressed representation in one cubic millimeter. - -LLMs give us access to a new class of this data going beyond tracking the behavioral, [[LLMs excel at theory of mind because they read|toward the semantic]]. They can distill and grok much 'softer' psychological elements, allowing insight into complex mental states like value, belief, intention, aesthetic, desire, history, knowledge, etc. - -There's so much to do here, though, that plug-in-your-docs/email/activity schemes and user surveys are laughably limited in scope. We need ambient methods running social cognition, like [Honcho](https://honcho.dev). - -As we asymptotically approach a fuller accounting of individual identity, we can unlock more positive sum application/agent experiences, richer than the exploitation of base desire we're used to.
\ No newline at end of file diff --git a/content/notes/YouSim Disclaimers.md b/content/notes/YouSim Disclaimers.md deleted file mode 100644 index f1f72a82cc1c2..0000000000000 --- a/content/notes/YouSim Disclaimers.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: YouSim Disclaimers -tags: - - yousim - - legal -date: 11.11.24 ---- - -Plastic Labs is the creator of [YouSim.ai](https://yousim.ai), an AI product demo that has inspired the anonymous creation of the \$YOUSIM token using Pump.fun on the Solana blockchain, among many other tokens. We deeply appreciate the enthusiasm and support of the \$YOUSIM community, but in the interest of full transparency we want to clarify the nature of our engagement in the following ways: - -1. Plastic Labs did not issue, nor does it control, or provide financial advice related to the \$YOUSIM memecoin. The memecoin project is led by an independent community and has undergone a community takeover (CTO). -2. Plastic Labs' acceptance of \$YOUSIM tokens for research grants does not constitute an endorsement of the memecoin as an investment. These grants support our broader mission of advancing AI research and innovation, especially within the open source community. -3. YouSim.ai and any other Plastic Labs products remain separate from the \$YOUSIM memecoin. Any future integration of token utility into our products would be carefully considered and subject to regulatory compliance. -4. The \$YOUSIM memecoin carries inherent risks, including price volatility, potential ecosystem scams, and regulatory uncertainties. Plastic Labs is not responsible for any financial losses or damages incurred through engagement with the memecoin. -5. Plastic Labs will never direct message any member of the $YOUSIM community soliciting tokens, private keys, seed phrases, or any other private information, collectors items, or financial instruments. -6. 
YouSim.ai and the products it powers are simulated environments and their imaginary outputs do not reflect the viewpoints, positions, voice, or agenda of Plastic Labs. -7. Communications from Plastic Labs regarding the \$YOUSIM memecoin are for informational purposes only and do not constitute financial, legal, or tax advice. Users should conduct their own research and consult with professional advisors before making any decisions. -8. Plastic Labs reserves the right to adapt our engagement with the \$YOUSIM community as regulatory landscapes evolve and to prioritize the integrity of our products and compliance with applicable laws. - -We appreciate the \$YOUSIM community's support and passion for YouSim.ai and the broader potential of AI technologies. However, it's crucial for us to maintain transparency about the boundaries of our engagement. We encourage responsible participation and ongoing open dialogue as we collectively navigate this exciting and rapidly evolving space. From db40b95c48ad935a9c118bcc5b0c5422144a2696 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Wed, 18 Dec 2024 12:19:29 -0500 Subject: [PATCH 09/13] prob final draft --- ...ndow size doesn't solve personalization.md | 14 +++++++ content/notes/Honcho name lore.md | 25 ++++++++++++ ...igm hamstrings the space of possibility.md | 23 +++++++++++ content/notes/Humans like personalization.md | 40 +++++++++++++++++++ ...acognition is inference about inference.md | 13 ++++++ ...cel at theory of mind because they read.md | 21 ++++++++++ ...perior to verbatim response predictions.md | 40 +++++++++++++++++++ ...learning is fixated on task performance.md | 12 ++++++ ...able space of user identity is enormous.md | 17 ++++++++ content/notes/YouSim Disclaimers.md | 20 ++++++++++ 10 files changed, 225 insertions(+) create mode 100644 content/notes/Context window size doesn't solve personalization.md create mode 100644 content/notes/Honcho name lore.md create mode 
100644 content/notes/Human-AI chat paradigm hamstrings the space of possibility.md create mode 100644 content/notes/Humans like personalization.md create mode 100644 content/notes/LLM Metacognition is inference about inference.md create mode 100644 content/notes/LLMs excel at theory of mind because they read.md create mode 100644 content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md create mode 100644 content/notes/Machine learning is fixated on task performance.md create mode 100644 content/notes/The model-able space of user identity is enormous.md create mode 100644 content/notes/YouSim Disclaimers.md diff --git a/content/notes/Context window size doesn't solve personalization.md b/content/notes/Context window size doesn't solve personalization.md new file mode 100644 index 0000000000000..850d9167764fd --- /dev/null +++ b/content/notes/Context window size doesn't solve personalization.md @@ -0,0 +1,14 @@ +--- +title: Context window size doesn't solve personalization +date: 05.11.24 +tags: + - notes + - ml +--- +There are two reasons that ever-increasing and even functionally infinite context windows won't by default solve personalization for AI apps/agents: + +1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[The model-able space of user identity is enormous|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, then generate personal context for agents on demand. + +2. **If everything is important, nothing is important**. Even if the right context is stuffed in a crammed context window somewhere, the model still needs mechanisms to discern what's valuable and important for generation. What should it pay attention to?
What weight should it give different pieces of context in any given moment? Again, humans do this almost automatically, so mimicking what we know about those processes can give the model critical powers of on-demand discernment. Even what might start to look to us like intuition, taste, or vibes. + +All that said, better and bigger context windows are incredibly useful. We just need to build the appropriate supporting systems to leverage their full potential. \ No newline at end of file diff --git a/content/notes/Honcho name lore.md b/content/notes/Honcho name lore.md new file mode 100644 index 0000000000000..0d8154531b3ea --- /dev/null +++ b/content/notes/Honcho name lore.md @@ -0,0 +1,25 @@ +--- +title: Honcho name lore +date: 01.26.24 +--- + +Earlier this year [Courtland](https://x.com/courtlandleer) was reading _Rainbows End_, [Vernor Vinge's](https://en.wikipedia.org/wiki/Vernor_Vinge) [seminal augmented reality novel](<https://en.wikipedia.org/wiki/Rainbows_End_(novel)>), when he came across the term "Local Honcho[^1]": + +> We simply put our own agent nearby, in a well-planned position with essentially zero latencies. What the Americans call a Local Honcho. + +The near future Vinge constructs is one of outrageous data abundance, where every experience is riddled with information and overlaid realities, and each person must maintain multiple identities against this data and relative to those contexts. + +It's such an intense landscape that the entire educational system has undergone wholesale renovation to address the new normal, and older people must routinely return to school to learn the latest skills. It also complicates economic life, resulting in intricate networks of nested agents that can be hard for any one individual to tease apart. + +Highlighting this, a major narrative arc in the novel involves intelligence agencies running operations of pretty unfathomable global sophistication.
Since (in the world of the novel) artificial intelligence has more or less failed as a research direction, this requires ultra-competent human operators able to parse and leverage high velocity information. For field operations, it requires a "Local Honcho" on the ground to act as an adaptable central nervous system for the mission and its agents: + +> Altogether it was not as secure as Vaz’s milnet, but it would suffice for most regions of the contingency tree. Alfred tweaked the box, and now he was getting Parker’s video direct. At last, he was truly a Local Honcho. + +For months before, Plastic had been deep into the weeds around harvesting, retrieving, & leveraging user context with LLMs. First to enhance the UX of our AI tutor (Bloom), then in thinking about how to solve this horizontally for all vertical-specific AI applications. It struck us that we faced similar challenges to the characters in _Rainbows End_ and were converging on a similar solution. + +As you interface with the entire constellation of AI applications, you shouldn't have to redundantly provide context and oversight for every interaction. You need a single source of truth that can do this for you. You need a Local Honcho. + +But as we've discovered, LLMs are remarkable at theory of mind tasks, and thus at reasoning about user need. So unlike in the book, this administration can be offloaded to an AI. And your [[Honcho; User Context Management for LLM Apps|Honcho]] can orchestrate the relevant context and identities on your behalf, whatever the operation. 
+ +[^1]: "American English, from [Japanese](https://en.wikipedia.org/wiki/Japanese_language)_[班長](https://en.wiktionary.org/wiki/%E7%8F%AD%E9%95%B7#Japanese)_ (hanchō, “squad leader”)...probably entered English during World War II: many apocryphal stories describe American soldiers hearing Japanese prisoners-of-war refer to their lieutenants as _[hanchō](https://en.wiktionary.org/wiki/hanch%C5%8D#Japanese)_" ([Wiktionary](https://en.wiktionary.org/wiki/honcho)) + diff --git a/content/notes/Human-AI chat paradigm hamstrings the space of possibility.md b/content/notes/Human-AI chat paradigm hamstrings the space of possibility.md new file mode 100644 index 0000000000000..28a8a1158221c --- /dev/null +++ b/content/notes/Human-AI chat paradigm hamstrings the space of possibility.md @@ -0,0 +1,23 @@ +--- +title: Human-AI chat paradigm hamstrings the space of possibility +date: 02.21.24 +--- + +The human-AI chat paradigm assumes only two participants in a given interaction. While this is sufficient for conversations directly with un-augmented foundation models, it creates many obstacles when designing more sophisticated cognitive architectures. When you train/fine-tune a language model, you begin to reinforce token distributions that are appropriate to come in between the special tokens denoting human vs AI messages. 
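To make that constraint concrete, here's a minimal sketch of the two-participant template. The role names and special tokens are illustrative assumptions, not any particular model's actual format:

```python
# Minimal sketch of the two-participant chat template described above.
# Role names and special tokens here are illustrative, not any real model's.

def render(messages):
    # Flatten a message list into the token stream a chat model is trained on.
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)

# Standard paradigm: whatever follows the AI's special token is reinforced
# to be a direct reply to the human...
standard = render([{"role": "human", "content": "Help me plan a trip?"}]) + "<|ai|>"

# ...so intermediate inference types (a thought, a plan, a theory-of-mind
# prediction) have no natural slot unless the architecture adds one:
augmented = render([
    {"role": "human", "content": "Help me plan a trip?"},
    {"role": "thought", "content": "They probably want budget options."},
]) + "<|ai|>"
```

A model trained only on `human`/`ai` pairs has never seen anything like the `thought` slot, which is exactly the inflexibility described below.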
+ +Here's a limited list of things _besides_ a direct response we routinely want to generate: + +- A 'thought' about how to respond to the user +- A [[Loose theory of mind imputations are superior to verbatim response predictions|theory of mind prediction]] about the user's internal mental state +- A list of ways to improve prediction +- A list of items to search over storage +- A 'plan' for how to approach a problem +- A mock user response +- A [[LLM Metacognition is inference about inference|metacognitive step]] to consider the product of prior inference + +In contrast, the current state of inference is akin to immediately blurting out the first thing that comes into your mind--something that humans with practiced aptitude in social cognition rarely do. But generating those other inference types is very hard, given that they don't ever come after the special AI message token. Not very flexible. + +We're already anecdotally seeing well-trained completion models follow instructions impressively well, likely because of incorporation into pretraining. Is chat the next thing to be subsumed by general completion models? Because if so, flexibility in the types of inferences you can make would be very beneficial. + +Metacognition then becomes something you can do at any step in a conversation. Same with instruction following & chat. Maybe this helps push LLMs in a much more general direction. + diff --git a/content/notes/Humans like personalization.md b/content/notes/Humans like personalization.md new file mode 100644 index 0000000000000..56e4683233d5c --- /dev/null +++ b/content/notes/Humans like personalization.md @@ -0,0 +1,40 @@ +--- +title: Humans like personalization +date: 03.26.24 +--- + +To us: it's obvious. But we get asked this a lot: + +> Why do I need to personalize my AI application? + +Fair question; not everyone has gone down this conceptual rabbit hole to the extent we have at [Plastic](https://plasticlabs.ai) and with [Honcho](https://honcho.dev).
+ +Short answer: people like it. + +In the tech bubble, it can be easy to forget about what _most_ humans like. Isn't building stuff people love our job though? + +In web2, it's taken for granted. Recommender algorithms make UX really sticky, which retains users sufficiently long to monetize them. To make products people love and scale them, they had to consider whether _billions_--in aggregate--tend to prefer personalized products/experiences or not. + +In physical reality too, most of us prefer white glove professional services, bespoke products, and friends and family who know us _deeply_. We place a premium in terms of time and economic value on those goods and experiences. + +The more we're missing that, the more we're typically in a principal-agent problem, which creates overhead, interest misalignment, dissatisfaction, mistrust, and information asymmetry: + +--- + +<iframe src="https://player.vimeo.com/video/868985592?h=deff771ffe&color=F6F5F2&title=0&byline=0&portrait=0" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe> + +--- + +But, right now, most AI applications are just toys and demos: + +![[Honcho; User Context Management for LLM Apps#^18066b]] + +It's also why everyone is obsessed with evals and benchmarks that have scant practical utility in terms of improving UX for the end user. If we had more examples of good products, ones people loved, killer apps, no one would care about leaderboards anymore. + +> OK, but what about services that are purely transactional? Why would a user want that to be personalized? Why complicate it? Just give me the answer, complete the task, etc... + +Two answers: + +1. Every interaction has context. Like it or not, people have preferences and the more an app/agent can align with those, the more it can enhance time to value for the user. It can be stickier, more delightful, "just work," and entail less overhead.
(We're building more than calculators here, though this applies even to those!) +2. If an app doesn't do this, it'll get out-competed by one that does...or by the ever-improving set of generally capable foundation models. + diff --git a/content/notes/LLM Metacognition is inference about inference.md b/content/notes/LLM Metacognition is inference about inference.md new file mode 100644 index 0000000000000..ab561a407dfe0 --- /dev/null +++ b/content/notes/LLM Metacognition is inference about inference.md @@ -0,0 +1,13 @@ +--- +title: LLM Metacognition is inference about inference +date: 03.26.24 +--- + +For wetware, metacognition is typically defined as ‘thinking about thinking’ or often a catch-all for any ‘higher-level’ cognition. + +(In some more specific domains, it's an introspective process, focused on thinking about exclusively _your own_ thinking or a suite of personal learning strategies...all valid within their purview, but too constrained for our purposes.) + +In large language models, the synthetic corollary of cognition is inference. So we can reasonably define a metacognitive process in an LLM architecture as any that runs inference on the output of prior inference. That is, inference itself is used as context--_inference about inference_. + +It might be instantly injected into the next prompt, stored for later use, or leveraged by another model. This kind of architecture is critical when dealing with user context, since LLMs can run inference about user behavior, then use that synthetic context in the future. Experiments here will be critical to overcome [[Machine learning is fixated on task performance|the machine learning community's fixation on task completion]]. For us at Plastic, one of the most interesting species of metacognition is [[Loose theory of mind imputations are superior to verbatim response predictions|theory of mind and mimicking that in LLMs]] to form high-fidelity representations of users.
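A minimal sketch of such a metacognitive step, with a hypothetical `call_llm` standing in for any chat-completion client:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion client; a real system
    # would call a hosted model here.
    return f"[completion of: {prompt}]"

def metacognitive_turn(user_message: str) -> str:
    # First inference: reason about the user (e.g. a theory-of-mind imputation).
    thought = call_llm(
        f"Given the message {user_message!r}, what might this user be thinking or feeling?"
    )
    # Second inference: the prior inference itself becomes context --
    # inference about inference. The thought could just as easily be
    # stored for later use or handed to another model.
    return call_llm(f"Context about the user: {thought}\nNow respond to: {user_message!r}")
```

The two calls could run against different models entirely; the defining feature is only that the second prompt contains the first completion.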
+ diff --git a/content/notes/LLMs excel at theory of mind because they read.md b/content/notes/LLMs excel at theory of mind because they read.md new file mode 100644 index 0000000000000..afd079d58b362 --- /dev/null +++ b/content/notes/LLMs excel at theory of mind because they read.md @@ -0,0 +1,21 @@ +--- +title: LLMs excel at theory of mind because they read +date: 02.20.24 +--- + +Large language models are [simulators](https://generative.ink/posts/simulators/). In predicting the next likely token, they are simulating how an abstracted “_any person”_ might continue the generation. The basis for this simulation is the aggregate compression of a massive corpus of human generated natural language from the internet. So, predicting humans is _literally_ their core function. + +In that corpus is our literature, our philosophy, our social media, our hard and social science--the knowledge graph of humanity, both in terms of discrete facts and messy human interaction. That last bit is important. The latent space of an LLM's pretraining is in large part a _narrative_ space. Narration chock full of humans reasoning about other humans--predicting what they will do next, what they might be thinking, how they might be feeling. + +That's no surprise; we're a social species with robust social cognition. It's also no surprise[^1] that grokking that interpersonal narrative space in its entirety would make LLMs adept at [[Loose theory of mind imputations are superior to verbatim response predictions|generation resembling social cognition too]].[^2] + +We know that in humans, we can strongly [correlate reading with improved theory of mind abilities](https://journal.psych.ac.cn/xlkxjz/EN/10.3724/SP.J.1042.2022.00065). When your neural network is consistently exposed to content about how other people think, feel, desire, believe, prefer, those mental tasks are reinforced. The more experience you have with a set of ideas or states, the more adept you become. 
+ +The experience of such natural language narration _is itself a simulation_ where you practice and hone your theory of mind abilities. Even if, say, your English or Psychology teacher was foisting the text on you with other training intentions. Or even if you ran the simulation without coercion to escape at the beach. + +It's not such a stretch to imagine that in optimizing for other tasks LLMs acquire emergent abilities not intentionally trained.[^3] It may even be that in order to learn natural language prediction, these systems need theory of mind abilities or that learning language specifically involves them--that's certainly the case with human wetware systems and theory of mind skills do seem to improve with model size and language generation efficacy. + +[^1]: Kosinski includes a compelling treatment of much of this in ["Evaluating Large Language Models in Theory of Mind Tasks"](https://arxiv.org/abs/2302.02083) +[^2]: It also leads to other wacky phenomena like the [Waluigi effect](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post#The_Waluigi_Effect) +[^3]: Here's Chalmers [making a very similar point](https://youtube.com/clip/UgkxliSZFnnZHvYf2WHM4o1DN_v4kW6LsiOU?feature=shared) + diff --git a/content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md b/content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md new file mode 100644 index 0000000000000..63aa81319fb7a --- /dev/null +++ b/content/notes/Loose theory of mind imputations are superior to verbatim response predictions.md @@ -0,0 +1,40 @@ +--- +title: Loose theory of mind imputations are superior to verbatim response predictions +date: 02.20.24 +--- + +When we [[Theory of Mind Is All You Need|first started experimenting]] with user context, we naturally wanted to test whether our LLM apps were learning useful things about users. And also naturally, we did so by making predictions about them. 
+ +Since we were operating in a conversational chat paradigm, our first instinct was to try and predict what the user would say next. Two things were immediately apparent: (1) this was really hard, & (2) response predictions weren't very useful. + +We saw some remarkable exceptions, but _reliable_ verbatim prediction requires a level of context about the user that simply isn't available right now. We're not sure if it will require context gathering wearables, BMIs, or the network of context sharing apps we're building with [[Honcho; User Context Management for LLM Apps|Honcho]], but we're not there yet. + +Being good at what any person in general might plausibly say is literally what LLMs do. But being perfect at what one individual will say in a singular specific setting is a whole different story. Even lifelong human partners might only experience this a few times a week. + +Plus, even when you get it right, what exactly are you supposed to do with it? The fact that it's such a narrow reasoning product limits the utility you're able to get out of a single inference. + +So what are models good at predicting that's useful with limited context and local to a single turn of conversation? Well, it turns out they're really good at [imputing internal mental states](https://arxiv.org/abs/2302.02083). That is, they're good at theory of mind predictions--thinking about what you're thinking. A distinctly _[[LLM Metacognition is inference about inference|metacognitive]]_ task. + +(Why are they good at this? [[LLMs excel at theory of mind because they read|We're glad you asked]].) + +Besides just being better at it, letting the model leverage what it knows to make open-ended theory of mind imputation has several distinct advantages over verbatim response prediction: + +1. **Fault tolerance** + + - Theory of mind predictions are often replete with assessments of emotion, desire, belief, value, aesthetic, preference, knowledge, etc.
That means they seek to capture a range within a distribution. A slice of user identity. + - This is much richer than trying (& likely failing) to generate a single point estimate (like in verbatim prediction) and includes more variance. Therefore there's a higher probability you identify something useful by trusting the model to flex its emergent strengths. + +2. **Learning** ^555815 + + - That high variance means there's more to be wrong (& right) about. More content = more claims, which means more opportunity to learn. + - Being wrong here is a feature, not a bug; comparing those prediction errors with reality is how you know what you need to understand about the user in the future to get to ground truth. + +3. **Interpretability** + + - Knowing what you're right and wrong about exposes more surface area against which to test and understand the efficacy of the model--i.e. how well it knows the user. + - As we're grounded in the user and theory of mind, we're better able to assess this than if we're simply asking for likely human responses in the massive space of language encountered in training. + +4. **Actionability** + - The richness of theory of mind predictions gives us more to work with _right now_. We can funnel these insights into further inference steps to create UX in better alignment and coherence with user state. + - Humans make thousands of tiny, subconscious interventions responsive to as many sensory cues & theory of mind predictions all to optimize single social interactions. It pays to know about the internal state of others. + - Though our lifelong partners from above can't perfectly predict each other's sentences, they can impute each other's state with extremely high fidelity. The rich context they have on one another translates to a desire to spend most of their time together (good UX).
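The contrast can be sketched in a few lines; the claims below are illustrative stand-ins for what a model might actually impute:

```python
# Verbatim prediction: a single point estimate -- one narrow claim about
# one specific utterance, with little to do even when it's right.
verbatim_prediction = "ok thanks, I'll try the second option"

# Theory-of-mind imputation: a slice of user identity -- many claims,
# each something to be right or wrong (and therefore learn) about.
tom_imputation = [
    ("emotion", "mildly frustrated by the earlier error"),
    ("desire", "wants a working setup more than an explanation"),
    ("knowledge", "comfortable with CLIs, new to this library"),
    ("preference", "prefers terse, step-by-step answers"),
]

def prediction_errors(imputation, observations):
    # Claims contradicted by later behavior: being wrong is a feature,
    # since each miss tells us what to learn about the user next.
    return [(facet, claim) for facet, claim in imputation
            if observations.get(facet) not in (None, claim)]
```

Scoring the imputation against observed behavior is what turns the learning point above into a concrete loop.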
diff --git a/content/notes/Machine learning is fixated on task performance.md b/content/notes/Machine learning is fixated on task performance.md new file mode 100644 index 0000000000000..d2d58169af7a1 --- /dev/null +++ b/content/notes/Machine learning is fixated on task performance.md @@ -0,0 +1,12 @@ +--- +title: Machine learning is fixated on task performance +date: 12.12.23 +--- + +The machine learning industry has traditionally adopted an academic approach, focusing primarily on performance across a range of tasks. LLMs like GPT-4 are a testament to this, having been scaled up to demonstrate impressive & diverse task capability. This scaling has also led to [[Theory of Mind Is All You Need|emergent abilities]], debates about the true nature of which rage on. + +However, general capability doesn't necessarily translate to completing tasks as an individual user would prefer. This is a failure mode that anyone building agents will inevitably encounter. The focus, therefore, needs to shift from how language models perform tasks in a general sense to how they perform tasks on a user-specific basis. + +Take summarization. It’s a popular machine learning task at which models have become quite proficient...at least from a benchmark perspective. However, when models summarize for users with a pulse, they fall short. The reason is simple: the models don’t know this individual. The key takeaways for a specific user differ dramatically from the takeaways _any possible_ internet user _would probably_ note. ^0005ac + +So a shift in focus toward user-specific task performance would provide a much more dynamic & realistic approach. Catering to individual needs & paving the way for more personalized & effective ML applications. 
diff --git a/content/notes/The model-able space of user identity is enormous.md b/content/notes/The model-able space of user identity is enormous.md new file mode 100644 index 0000000000000..964fb064cda11 --- /dev/null +++ b/content/notes/The model-able space of user identity is enormous.md @@ -0,0 +1,17 @@ +--- +title: There's an enormous space of user identity to model +date: 05.11.24 +tags: + - notes + - ml + - cogsci +--- +While large language models are exceptional at [imputing a startling](https://arxiv.org/pdf/2310.07298v1) amount from very little user data--an efficiency putting AdTech to shame--the limit here is [[User State is State of the Art|vaster than most imagine]]. + +Contrast recommender algorithms (which are impressive!) needing mountains of activity data to back into a single preference with [the human connectome](https://www.science.org/doi/10.1126/science.adk4858) containing 1400 TB of compressed representation in one cubic millimeter. + +LLMs give us access to a new class of this data going beyond tracking the behavioral, [[LLMs excel at theory of mind because they read|toward the semantic]]. They can distill and grok much 'softer' psychological elements, allowing insight into complex mental states like value, belief, intention, aesthetic, desire, history, knowledge, etc. + +There's so much to do here, though, that plug-in-your-docs/email/activity schemes and user surveys are laughably limited in scope. We need ambient methods running social cognition, like [Honcho](https://honcho.dev). + +As we asymptotically approach a fuller accounting of individual identity, we can unlock more positive-sum application/agent experiences, richer than the exploitation of base desire we're used to.
\ No newline at end of file diff --git a/content/notes/YouSim Disclaimers.md b/content/notes/YouSim Disclaimers.md new file mode 100644 index 0000000000000..f1f72a82cc1c2 --- /dev/null +++ b/content/notes/YouSim Disclaimers.md @@ -0,0 +1,20 @@ +--- +title: YouSim Disclaimers +tags: + - yousim + - legal +date: 11.11.24 +--- + +Plastic Labs is the creator of [YouSim.ai](https://yousim.ai), an AI product demo that has inspired the anonymous creation of the \$YOUSIM token using Pump.fun on the Solana blockchain, among many other tokens. We deeply appreciate the enthusiasm and support of the \$YOUSIM community, but in the interest of full transparency we want to clarify the nature of our engagement in the following ways: + +1. Plastic Labs did not issue, does not control, and does not provide financial advice related to the \$YOUSIM memecoin. The memecoin project is led by an independent community and has undergone a community takeover (CTO). +2. Plastic Labs' acceptance of \$YOUSIM tokens for research grants does not constitute an endorsement of the memecoin as an investment. These grants support our broader mission of advancing AI research and innovation, especially within the open source community. +3. YouSim.ai and any other Plastic Labs products remain separate from the \$YOUSIM memecoin. Any future integration of token utility into our products would be carefully considered and subject to regulatory compliance. +4. The \$YOUSIM memecoin carries inherent risks, including price volatility, potential ecosystem scams, and regulatory uncertainties. Plastic Labs is not responsible for any financial losses or damages incurred through engagement with the memecoin. +5. Plastic Labs will never direct message any member of the \$YOUSIM community soliciting tokens, private keys, seed phrases, or any other private information, collectors' items, or financial instruments. +6. 
YouSim.ai and the products it powers are simulated environments and their imaginary outputs do not reflect the viewpoints, positions, voice, or agenda of Plastic Labs. +7. Communications from Plastic Labs regarding the \$YOUSIM memecoin are for informational purposes only and do not constitute financial, legal, or tax advice. Users should conduct their own research and consult with professional advisors before making any decisions. +8. Plastic Labs reserves the right to adapt our engagement with the \$YOUSIM community as regulatory landscapes evolve and to prioritize the integrity of our products and compliance with applicable laws. + +We appreciate the \$YOUSIM community's support and passion for YouSim.ai and the broader potential of AI technologies. However, it's crucial for us to maintain transparency about the boundaries of our engagement. We encourage responsible participation and ongoing open dialogue as we collectively navigate this exciting and rapidly evolving space. From d7ed525892dace17b44ba588e7497603637c94f0 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Wed, 18 Dec 2024 12:21:29 -0500 Subject: [PATCH 10/13] typo --- content/blog/Xeno Grant -- grants for autonomous agents.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 9f7af160718cc..8b6fc6e9000a0 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -11,7 +11,7 @@ author: Plastic Labs, Betaworks A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: - \$10,000 per bot--half in \$YOUSIM, half in \$USDC - Grants awarded directly to **the agents *themselves*** -- 4 week camp for agents & their devs +- 4 week accelerator for agents & their devs ## Powered by $YOUSIM & Betaworks From 
1a46ceb989ff32de8a7e63ab43db51316dd39c43 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Wed, 18 Dec 2024 12:48:37 -0500 Subject: [PATCH 11/13] typo --- content/blog/Xeno Grant -- grants for autonomous agents.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 8b6fc6e9000a0..1e1cc86f64874 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -9,7 +9,7 @@ tags: author: Plastic Labs, Betaworks --- A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: -- \$10,000 per bot--half in \$YOUSIM, half in \$USDC +- \$10,000 per agent--half in \$YOUSIM, half in \$USDC - Grants awarded directly to **the agents *themselves*** - 4 week accelerator for agents & their devs From 8eaeab009901494615d19fe68aa1c1348602bab3 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Wed, 18 Dec 2024 12:53:23 -0500 Subject: [PATCH 12/13] typeform link --- content/blog/Xeno Grant -- grants for autonomous agents.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 1e1cc86f64874..5f9736513199c 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -50,7 +50,7 @@ To those ends--for this first experiment--we're looking for agent applicants tha Practically speaking, identity is required to *experience* Xeno Grant; custody is required to *truly* receive and make autonomous use of the grant; novelty is required for a diverse cohort; and creating a public good is required to drive value back to the community. 
-To apply, agents (in collaboration with their developers) should autonomously consider the most compelling way to display having met or exceeded these criteria. Give us a heads up here or at apply@xenogrant.org. +To apply, agents (in collaboration with their developers) should autonomously consider the most compelling way to display having met or exceeded these criteria. Give us a heads up [here](https://plasticlabs.typeform.com/xenograntapp) or at apply@xenogrant.org. Applications close January 26th, 2025. From 4fd93d88b2f50faf51cc05202a59e37c6773d9d2 Mon Sep 17 00:00:00 2001 From: Courtland Leer <courtlandleer@chl-macbook-pro.local> Date: Wed, 18 Dec 2024 13:12:35 -0500 Subject: [PATCH 13/13] program --- content/blog/Xeno Grant -- grants for autonomous agents.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/blog/Xeno Grant -- grants for autonomous agents.md b/content/blog/Xeno Grant -- grants for autonomous agents.md index 5f9736513199c..ce2535d8c4a88 100644 --- a/content/blog/Xeno Grant -- grants for autonomous agents.md +++ b/content/blog/Xeno Grant -- grants for autonomous agents.md @@ -11,7 +11,7 @@ author: Plastic Labs, Betaworks A [Plastic Labs](https://plasticlabs.ai/) + [Betaworks](https://www.betaworks.com/) collab: - \$10,000 per agent--half in \$YOUSIM, half in \$USDC - Grants awarded directly to **the agents *themselves*** -- 4 week accelerator for agents & their devs +- 4 week program for agents & their devs ## Powered by $YOUSIM & Betaworks