From 3b6f78b880636708cbdaf1ac0d3ef7d4f32c34aa Mon Sep 17 00:00:00 2001
From: Courtland Leer <93223786+courtlandleer@users.noreply.github.com>
Date: Thu, 15 Feb 2024 16:12:04 -0500
Subject: [PATCH 1/3] captions
---
content/blog/Memories for All.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/content/blog/Memories for All.md b/content/blog/Memories for All.md
index 4935b5cc4356f..52389cf604caa 100644
--- a/content/blog/Memories for All.md
+++ b/content/blog/Memories for All.md
@@ -42,8 +42,8 @@ As it stands today the space is mostly focused on the (albeit generative) [[The
Every agent interaction can be generated just in time for every person, informed by relevant personal context more substantive than human-to-human sessions. User context will enable disposable agents on the fly across verticals for lower marginal cost than 1:many software paradigms.
-
-(*Here's our co-founder [Vince](https://twitter.com/vintrotweets) talking more about some of those possibilities*)
+
+*Here's our co-founder [Vince](https://twitter.com/vintrotweets) talking more about some of those possibilities*
## "Open vs Closed"
@@ -79,8 +79,8 @@ Today we're releasing a naive adaptation of [research we published late last yea
There's a ton we plan to unpack and implement there, but the key insight we're highlighting today is affording LLMs the freedom and autonomy to decide what's important.
-
-(*If you want to go deeper into the research, [this webinar we did with LangChain](https://www.youtube.com/watch?v=PbuzqCdY0hg&list=PLuFHBYNxPuzrkVP88FxYH1k7ZL5s7WTC8) is a great place to start, as is [the "Violation of Expectations" chain they implemented](https://js.langchain.com/docs/use_cases/agent_simulations/violation_of_expectations_chain)*)
+
+*If you want to go deeper into the research, [this webinar we did with LangChain](https://www.youtube.com/watch?v=PbuzqCdY0hg&list=PLuFHBYNxPuzrkVP88FxYH1k7ZL5s7WTC8) is a great place to start, as is [the "Violation of Expectations" chain they implemented](https://js.langchain.com/docs/use_cases/agent_simulations/violation_of_expectations_chain)*
This release allows you to experiment with all these ideas. We feed messages into an inference asking the model to derive facts about the user, we store those insights for later use, then we ask the model to retrieve this context to augment some later generation.
From e40c37e1e8da2313d913f2af3ca0381dc8a1f251 Mon Sep 17 00:00:00 2001
From: Courtland Leer <93223786+courtlandleer@users.noreply.github.com>
Date: Thu, 15 Feb 2024 16:15:19 -0500
Subject: [PATCH 2/3] another try
---
content/blog/Memories for All.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/blog/Memories for All.md b/content/blog/Memories for All.md
index 52389cf604caa..371e671872460 100644
--- a/content/blog/Memories for All.md
+++ b/content/blog/Memories for All.md
@@ -43,7 +43,7 @@ As it stands today the space is mostly focused on the (albeit generative) [[The
Every agent interaction can be generated just in time for every person, informed by relevant personal context more substantive than human-to-human sessions. User context will enable disposable agents on the fly across verticals for lower marginal cost than 1:many software paradigms.
-*Here's our co-founder [Vince](https://twitter.com/vintrotweets) talking more about some of those possibilities*
+(*Here's our co-founder [Vince](https://twitter.com/vintrotweets) talking more about some of those possibilities*)
## "Open vs Closed"
@@ -80,7 +80,7 @@ Today we're releasing a naive adaptation of [research we published late last yea
There's a ton we plan to unpack and implement there, but the key insight we're highlighting today is affording LLMs the freedom and autonomy to decide what's important.
-*If you want to go deeper into the research, [this webinar we did with LangChain](https://www.youtube.com/watch?v=PbuzqCdY0hg&list=PLuFHBYNxPuzrkVP88FxYH1k7ZL5s7WTC8) is a great place to start, as is [the "Violation of Expectations" chain they implemented](https://js.langchain.com/docs/use_cases/agent_simulations/violation_of_expectations_chain)*
+(*If you want to go deeper into the research, [this webinar we did with LangChain](https://www.youtube.com/watch?v=PbuzqCdY0hg&list=PLuFHBYNxPuzrkVP88FxYH1k7ZL5s7WTC8) is a great place to start, as is [the "Violation of Expectations" chain they implemented](https://js.langchain.com/docs/use_cases/agent_simulations/violation_of_expectations_chain)*)
This release allows you to experiment with all these ideas. We feed messages into an inference asking the model to derive facts about the user, we store those insights for later use, then we ask the model to retrieve this context to augment some later generation.
From 1022e07365952fdb666290d0507a771780c84f82 Mon Sep 17 00:00:00 2001
From: Courtland Leer <93223786+courtlandleer@users.noreply.github.com>
Date: Thu, 15 Feb 2024 16:20:06 -0500
Subject: [PATCH 3/3] idk
---
content/blog/Memories for All.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/content/blog/Memories for All.md b/content/blog/Memories for All.md
index 371e671872460..64b1a04fedbdc 100644
--- a/content/blog/Memories for All.md
+++ b/content/blog/Memories for All.md
@@ -43,6 +43,7 @@ As it stands today the space is mostly focused on the (albeit generative) [[The
Every agent interaction can be generated just in time for every person, informed by relevant personal context more substantive than human-to-human sessions. User context will enable disposable agents on the fly across verticals for lower marginal cost than 1:many software paradigms.
+
(*Here's our co-founder [Vince](https://twitter.com/vintrotweets) talking more about some of those possibilities*)
## "Open vs Closed"
@@ -80,8 +81,10 @@ Today we're releasing a naive adaptation of [research we published late last yea
There's a ton we plan to unpack and implement there, but the key insight we're highlighting today is affording LLMs the freedom and autonomy to decide what's important.
+
(*If you want to go deeper into the research, [this webinar we did with LangChain](https://www.youtube.com/watch?v=PbuzqCdY0hg&list=PLuFHBYNxPuzrkVP88FxYH1k7ZL5s7WTC8) is a great place to start, as is [the "Violation of Expectations" chain they implemented](https://js.langchain.com/docs/use_cases/agent_simulations/violation_of_expectations_chain)*)
+
This release allows you to experiment with all these ideas. We feed messages into an inference asking the model to derive facts about the user, we store those insights for later use, then we ask the model to retrieve this context to augment some later generation.
Check out the [LangChain implementation](https://docs.honcho.dev/how-to/personal-memory/simple-user-memory) and [Discord bot demo](https://discord.gg/plasticlabs).
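
The loop the post describes (derive facts about the user from their messages, store those insights, then retrieve them to augment a later generation) can be sketched in a few lines. This is a minimal illustration only, assuming a generic `llm(prompt) -> str` completion callable and an in-memory list as the store; the helper names and prompts are hypothetical and are not Honcho's or LangChain's actual API.

```python
from typing import Callable, List

def derive_facts(llm: Callable[[str], str], messages: List[str]) -> List[str]:
    """Ask the model to decide which facts about the user are worth keeping."""
    prompt = (
        "From the following user messages, list any facts about the user "
        "worth remembering, one per line:\n" + "\n".join(messages)
    )
    # One fact per non-empty line of the model's reply.
    return [line.strip("- ").strip() for line in llm(prompt).splitlines() if line.strip()]

def store_insights(store: List[str], facts: List[str]) -> None:
    """Persist the derived facts; a real system would use a database, not a list."""
    store.extend(facts)

def augment_generation(llm: Callable[[str], str], store: List[str], query: str) -> str:
    """Let the model pull whichever stored facts it judges relevant into a later response."""
    prompt = (
        "Known facts about the user:\n" + "\n".join(store)
        + "\n\nUsing any facts that are relevant, respond to: " + query
    )
    return llm(prompt)

# Example flow (hypothetical names):
#   store: List[str] = []
#   store_insights(store, derive_facts(llm, session_messages))
#   reply = augment_generation(llm, store, next_user_message)
```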