Merge branch 'main' into Future-Work-minor-updates
nielsleadholm authored Jan 21, 2025
2 parents db25695 + d089d92 commit 807869c
Showing 13 changed files with 79 additions and 82 deletions.
2 changes: 1 addition & 1 deletion .vale.ini
Original file line number Diff line number Diff line change
@@ -2,7 +2,7 @@ StylesPath = .vale/styles
Vocab = TBP

[{docs/*.md,README.md}]
BasedOnStyles = Docs
BasedOnStyles = Vale, Docs

[*.md]
BlockIgnores = (?s) *\[block:embed\].*?\[/block\]
5 changes: 0 additions & 5 deletions .vale/styles/Docs/Programming.yml

This file was deleted.

109 changes: 54 additions & 55 deletions .vale/styles/config/vocabularies/TBP/accept.txt
@@ -1,48 +1,48 @@
\bBluesky\b
neocortex
\bMountcastle\b
\bconfigs\b
\bConfig\b
\bconfig\b
# This file is case-sensitive by default and you can use regex
# see the docs here: https://vale.sh/docs/keys/vocab#file-format
Bluesky
[Nn]eocortex
[a-zA-Z.]+(_[a-zA-Z.]+)+
Mountcastle
[Cc]onfigs?
skillset
\bLMs*\b
\bSMs*\b
\bSDRs*\b
hippocampal
\bheterarchy\b
LMs?
SMs?
SDRs?
[hH]ippocampal
[hH]eterarchy
saccading
\bHTM\b
\bANNs*\b
\bCNNs*\b
\bGNNs*\b
\bDNNs*\b
\bLLMs*\b
\bCPUs*\b
\bTGZ\b
HTM
ANNs?
CNNs?
GNNs?
DNNs?
LLMs?
CPUs?
TGZ|tgz
saccade
colliculus
affordances
subgoals
[Ss]ubgoals
KDTree
resampling
wandb
distractor
parallelization
arxiv
\bNum\b
[aA]rXiv
Num
bioRxiv
worldimages
neocortical
Initializer
[nN]eocortical
[Ii]nitializer
resample
actuations
Calli
Omniglot
[oO]mniglot
iter
Dataloader
kd
thresholding
Thresholding
[Tt]hresholding
multimodal
heterarchical
subfolders
@@ -63,20 +63,19 @@ voxel
voxels
subclassed
youtube
url
URL
fullscreen
favicon
href
dataclass
Pretrained
pretrained
runtimes
[pP]retrained
[Rr]untimes
neurophysiologist
mixin
Conda
Miniconda
[Mm]ixin
[cC]onda
[mM]iniconda
zsh
wandb
[Ww]andb
utils
matplotlib
timeseries
@@ -85,17 +84,17 @@ misclassification
_all_
args
eval
dataloader
[Dd]ata[Ll]oader
docstring
Leyman
Sampath
Shen
Neuromorphic
[nN]euromorphic
interpretability
Walkthrough
presynaptic
subcortically
Pytorch
PyTorch
Guillery
Blekeslee
Felleman
@@ -125,8 +124,8 @@ Aru
Scheinkman
Klukas
Purdy
xyz
readme
xyz|XYZ
[Rr]eadme|README
substep
CLA
unmerged
@@ -139,39 +138,39 @@ repo
rdme
cli
semver
callouts
discretized
discretize
[Cc]allouts
[Dd]iscretized?
discretization
profiler
overconstrained
loopdown
perceptrons
bool
gaussian
[gG]aussian
Cui
learnable
Eitan
Azoff
Sync'ing
subdocuments
sync'd
\bNumpy\b
\bNumenta\b
\bLeadholm\b
\bKalman\b
NumPy|Numpy
Numenta
Leadholm
Kalman
biofilm
\bTolman's\b
\befference\b
\bEfference\b
\bTriaged\b
neuroscientists
Tolman's
[eE]fference
[Tt]riaged
YouTube
GitHub
[nN]euroscientists
Constantinescu
O'Keefe
Nadel
Milner
Goodale
allocentric
[aA]llocentric
Hebbian
Hopfield
Arcimboldo
Arcimboldo
2 changes: 1 addition & 1 deletion docs/contributing/contributing.md
@@ -11,7 +11,7 @@ We appreciate all of your contributions. Below, you will find a list of ways to

There are many ways in which you can contribute to the code. For some suggestions, see the [Contributing Code Guide](ways-to-contribute-to-code.md).

Monty integrates code changes using Github Pull Requests. For details on how Monty uses Pull Requests, please consult the [Contributing Pull Requests](ways-to-contribute-to-code.md) guide.
Monty integrates code changes using GitHub Pull Requests. For details on how Monty uses Pull Requests, please consult the [Contributing Pull Requests](ways-to-contribute-to-code.md) guide.

# Document

13 changes: 8 additions & 5 deletions docs/contributing/documentation.md
@@ -17,12 +17,16 @@ We use [Vale](https://vale.sh/) to lint our documentation. The linting process c

The linting rules are defined in the `/.vale/` directory.

### How to Install and Use Vale
## Adding New Words to the Dictionary

1. **Install Vale**
You can add new words to the dictionary by adding them to `.vale/styles/config/vocabularies/TBP/accept.txt` - for more information see [Vale's documentation](https://vale.sh/docs/keys/vocab#file-format).
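The entries in `accept.txt` are case-sensitive regular expressions, one per line. As a rough sketch of the matching semantics (this is not how Vale itself is implemented, and the example entries and helper are purely illustrative):

```python
import re

# Illustrative vocabulary entries (case-sensitive regex, one per line),
# mirroring the style used in accept.txt.
entries = ["[Cc]onfigs?", "LMs?", "NumPy|Numpy"]

def is_accepted(word: str) -> bool:
    # A word passes the spell check if some entry matches it in full.
    return any(re.fullmatch(pattern, word) for pattern in entries)

print(is_accepted("configs"))  # True: "[Cc]onfigs?" allows either case and an optional "s"
print(is_accepted("LMS"))      # False: entries are case-sensitive, so "LMs" passes but "LMS" does not
```

So `[Cc]onfigs?` accepts `config`, `Config`, `configs`, and `Configs` with a single entry, which is why the diff above collapses several `\b...\b` lines into one pattern.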

## How to Install and Use Vale

1. **Install Vale**
Download Vale from its [installation page](https://vale.sh/docs/install).

2. **Run Vale**
2. **Run Vale**
Use the following command in your terminal to run Vale:

```bash
@@ -35,7 +39,6 @@ The linting rules are defined in the `/.vale/` directory.
✔ 0 errors, 0 warnings and 0 suggestions in 141 files.
```


# Relative Links

Links to other documents should use the standard markdown link syntax, and should be relative to the document's location.
@@ -78,7 +81,7 @@ title: 'New Placeholder Example Doc'
>Please put the title in single quotes and, if applicable, escape any single quotes using two single quotes in a row.
Example: `title: 'My New Doc''s'`

> 🚧 Your title must match the url-safe slug
> 🚧 Your title must match the URL-safe slug
>
>If your title is `My New Doc's` then your file name should be `my-new-docs.md`
4 changes: 2 additions & 2 deletions docs/contributing/guides-for-maintainers/triage.md
@@ -64,7 +64,7 @@ The desired cadence for Pull Request Triage is at least once per business day.
First, review any Pull Requests pending CLA.

> [!NOTE]
>Pending CLA link (is pull request, is open, is not a draft, is not triaged, is pending cla)
>Pending CLA link (is pull request, is open, is not a draft, is not triaged, is pending CLA)
>
> <https://github.com/thousandbrainsproject/tbp.monty/pulls?q=is%3Apr+is%3Aopen+-label%3Atriaged+draft%3Afalse+label%3Acla>
@@ -73,7 +73,7 @@ If the Pull Request CLA check is passing (you may need to rerun the CLA check),
## 2. Triage

> [!NOTE]
> Triage link (is pull request, is open, is not a draft, is not triaged, is not pending cla)
> Triage link (is pull request, is open, is not a draft, is not triaged, is not pending CLA)
>
> <https://github.com/thousandbrainsproject/tbp.monty/pulls?q=is%3Apr+is%3Aopen+-label%3Atriaged+draft%3Afalse+-label%3Acla>
4 changes: 2 additions & 2 deletions docs/contributing/pull-requests.md
@@ -1,7 +1,7 @@
---
title: Pull Requests
---
Monty uses Github Pull Requests to integrate code changes.
Monty uses GitHub Pull Requests to integrate code changes.

# Before Making a Pull Request

@@ -36,7 +36,7 @@ Before submitting a Pull Request, you should set up your development environment
```shell
git push
```
7. [Create a new Github Pull Request from your fork to the official Monty repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork).
7. [Create a new GitHub Pull Request from your fork to the official Monty repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork).
8. Respond to and address any comments on your Pull Request. See [Pull Request Flow](pull-requests/pull-request-flow.md) for what to expect.
9. Once your Pull Request is approved, it will be merged by one of the Maintainers. Thank you for contributing! 🥳🎉🎊

6 changes: 3 additions & 3 deletions docs/contributing/style-guide.md
@@ -21,13 +21,13 @@ We adopted the Google Style for docstrings. For more details, see the [Google Py

## Libraries

### Numpy Preferred Over PyTorch
### NumPy Preferred Over PyTorch

After discovering that torch-to-numpy conversions (and the reverse) were a significant speed bottleneck in our algorithms, we decided to consistently use NumPy to represent the data in our system.
After discovering that PyTorch-to-NumPy conversions (and the reverse) were a significant speed bottleneck in our algorithms, we decided to consistently use NumPy to represent the data in our system.

We still require the PyTorch library since we use it for certain things, such as multiprocessing. However, please use NumPy operations for any vector and matrix operations whenever possible. If you think you cannot work with NumPy and need to use Torch, consider opening an RFC first to increase the chances of your PR being merged.

Another reason we discourage using PyTorch is to add a barrier for deep-learning to creep into Monty. Although we don't have a fundamental issue with contributors using deep learning, we worry that it will be the first thing someone's mind goes to when solving a problem (when you have a hammer...). We want contributors to think intentionally about whether deep-learning is the best solution for what they want to solve. Monty relies on very different principles than those most ML practitioners are used to, and so it is useful to think outside of the mental framework of deep-learning. More importantly, evidence that the brain can perform the long-range weight transport required by deep-learning's cornerstone algorithm - back-propagation - is extremely scarce. We are developing a system that, like the mammalian brain, should be able to use _local_ learning signals to rapidly update representations, while also remaining robust under conditions of continual learning. As a general rule therefore, please avoid Pytorch, and the algorithm that it is usually leveraged to support - back-propagation!
Another reason we discourage using PyTorch is to add a barrier for deep-learning to creep into Monty. Although we don't have a fundamental issue with contributors using deep learning, we worry that it will be the first thing someone's mind goes to when solving a problem (when you have a hammer...). We want contributors to think intentionally about whether deep-learning is the best solution for what they want to solve. Monty relies on very different principles than those most ML practitioners are used to, and so it is useful to think outside of the mental framework of deep-learning. More importantly, evidence that the brain can perform the long-range weight transport required by deep-learning's cornerstone algorithm - back-propagation - is extremely scarce. We are developing a system that, like the mammalian brain, should be able to use _local_ learning signals to rapidly update representations, while also remaining robust under conditions of continual learning. As a general rule therefore, please avoid PyTorch, and the algorithm that it is usually leveraged to support - back-propagation!

You can read more about our views on deep learning in Monty in our [FAQ](../how-monty-works/faq-monty.md#why-does-monty-not-make-use-of-deep-learning).

2 changes: 1 addition & 1 deletion docs/contributing/ways-to-contribute-to-code.md
@@ -18,4 +18,4 @@ There are many ways in which you can contribute to the code. The list below is n

# How To Contribute Code

Monty integrates code changes using Github Pull Requests. To start contributing code to Monty, please consult the [Contributing Pull Requests](pull-requests.md) guide.
Monty integrates code changes using GitHub Pull Requests. To start contributing code to Monty, please consult the [Contributing Pull Requests](pull-requests.md) guide.
@@ -4,7 +4,7 @@ title: Implement & Test GNNs to Model Object Behaviors & States

We would like to test using local functions between nodes of an LM's graph to model object behaviors. In particular, we would like to model how an object evolves over time due to external and internal influences, by learning how nodes within the object impact one another based on these factors. This relates to graph neural networks, and [graph networks more generally](https://arxiv.org/pdf/1806.01261); however, learning should rely on sensory and motor information local to the LM. Ideally, learned relations will generalize across different edges, e.g. the understanding that two nodes are connected by a rigid edge vs. a spring.

As noted, all learning should happen locally within the graph, so although gradient descent can be used, we should not back-propagate error signals through other LMs. Please see our related policy on [using Numpy rather than Pytorch for contributions](../../contributing/style-guide#numpy-preferred-over-pytorch). For further reading, see our discussion on [Modeling Object Behavior Using Graph Message Passing](https://github.com/thousandbrainsproject/monty_lab/tree/main/object_behaviors#implementation-routes-for-the-relational-inference-model) in the Monty Labs repository.
As noted, all learning should happen locally within the graph, so although gradient descent can be used, we should not back-propagate error signals through other LMs. Please see our related policy on [using Numpy rather than PyTorch for contributions](../../contributing/style-guide#numpy-preferred-over-pytorch). For further reading, see our discussion on [Modeling Object Behavior Using Graph Message Passing](https://github.com/thousandbrainsproject/monty_lab/tree/main/object_behaviors#implementation-routes-for-the-relational-inference-model) in the Monty Labs repository.

We have a dataset that should be useful for testing approaches to this task, which can be found in [Monty Labs](https://github.com/thousandbrainsproject/monty_lab/tree/main/object_behaviors).

8 changes: 4 additions & 4 deletions docs/how-monty-works/experiment.md
@@ -14,13 +14,13 @@ In reality an agent interacts continuously with the world and time is not explic

- **monty_step** (model.episode_steps total_steps): number of observations sent to the Monty model. This includes observations that were not interesting enough to be sent to an LM such as off-object observations. It includes both matching and exploratory steps.

- **monty_matching_step** (model.matching_steps): At least one LM performed a matching step (updating its possible matches using an observation). There are also exploratory steps which do not update possible matches and only store an observation in the LMs buffer. These are not counted here.
- **monty_matching_step** (`model.matching_steps`): At least one LM performed a matching step (updating its possible matches using an observation). There are also exploratory steps which do not update possible matches and only store an observation in the LMs buffer. These are not counted here.

- **num_steps** (lm.buffer.get_num_matching_steps): Number of matching steps that a specific LM performed.
- **num_steps** (`lm.buffer.get_num_matching_steps`): Number of matching steps that a specific LM performed.

- **lm_step** (max(num_steps)): Number of matching steps performed by the LM that took the most steps.
- **lm_step** (`max(num_steps)`): Number of matching steps performed by the LM that took the most steps.

- **lm_steps_indv_ts** (lm.buffer\["individual_ts_reached_at_step"\]): Number of matching steps a specific LM performed until reaching a local terminal state. A local terminal state means that a specific LM has settled on a result (match or no match). This does not mean that the entire Monty system has reached a terminal state since it usually requires multiple LMs to have reached a local terminal state. For more details see section [Terminal Condition](doc:evidence-based-learning-module#terminal-condition)
- **lm_steps_indv_ts** (`lm.buffer["individual_ts_reached_at_step"]`): Number of matching steps a specific LM performed until reaching a local terminal state. A local terminal state means that a specific LM has settled on a result (match or no match). This does not mean that the entire Monty system has reached a terminal state since it usually requires multiple LMs to have reached a local terminal state. For more details see section [Terminal Condition](doc:evidence-based-learning-module#terminal-condition)

- **Episode:** putting a single object in the environment and taking steps until a terminal condition is reached, like recognizing the object or exceeding max steps.
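The ordering between these counters can be sketched as follows. The per-LM step counts and method names here are hypothetical stand-ins, not Monty's actual API:

```python
# Hypothetical per-LM matching step counts, standing in for the value of
# lm.buffer.get_num_matching_steps() on each learning module.
num_steps_per_lm = {"lm_0": 12, "lm_1": 9, "lm_2": 15}

# lm_step is the number of matching steps taken by the LM that took the most.
lm_step = max(num_steps_per_lm.values())

# A monty_matching_step occurs whenever at least one LM performs a matching
# step, so the system-wide matching step count is at least lm_step.
assert all(lm_step >= n for n in num_steps_per_lm.values())
print(lm_step)  # 15
```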

2 changes: 1 addition & 1 deletion docs/how-monty-works/faq-monty.md
Original file line number Diff line number Diff line change
@@ -150,7 +150,7 @@ So while there is hierarchy in both CNNs and the human visual system, the former

Concepts from symbolic AI have some similarities to the Thousand Brains Theory, including the importance of discrete entities, and mapping how these are structurally related to one another. However, we believe that it is important that representations are grounded in a sensorimotor model of the world, whereas symbolic approaches typically begin at high levels of abstraction.

However, the approach we are adopting contrasts to some "neuro-symbolic" approaches that have been proposed. In particular, we are not attempting to embed entangled, object-impoverished deep-learning representations within abstract, symbolic spaces. Rather, we believe that object-centric representations using reference frames should be the representational substrate from the lowest-level of representations (vision, touch) all the way up to to abstract concepts (languages, societies, mathematics, etc.). Such a commonality in representation is consistent with the re-use of the same neural hardware (the cortical column) through the human neocortex, from sensory regions to higher-level, "cognitive" regions.
However, the approach we are adopting contrasts to some "neuro-symbolic" approaches that have been proposed. In particular, we are not attempting to embed entangled, object-impoverished deep-learning representations within abstract, symbolic spaces. Rather, we believe that object-centric representations using reference frames should be the representational substrate from the lowest-level of representations (vision, touch) all the way up to abstract concepts (languages, societies, mathematics, etc.). Such a commonality in representation is consistent with the re-use of the same neural hardware (the cortical column) through the human neocortex, from sensory regions to higher-level, "cognitive" regions.

# Applications of Monty

2 changes: 1 addition & 1 deletion docs/how-to-use-monty/common-issues-and-how-to-fix-them.md
@@ -7,7 +7,7 @@ Below we highlight a few issues that often crop up and can present problems when

## Quaternions

Be aware that in Numpy, and in the saved CSV result files, quaternions follow the wxyz format, where "w" is the real component. Thus the identity rotation would be [1, 0, 0, 0]. In contrast however, Scipy.Rotation expects them to be in xyzw format. When operating with quaternions, it is therefore important to be aware of what format you should be using for the particular setting.
Be aware that in Numpy, and in the saved CSV result files, quaternions follow the wxyz format, where "w" is the real component. Thus the identity rotation would be [1, 0, 0, 0]. In contrast however, `Scipy.Rotation` expects them to be in xyzw format. When operating with quaternions, it is therefore important to be aware of what format you should be using for the particular setting.
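The difference between the two conventions is just the position of the real component, so converting between them is a reordering. The helpers below are illustrative sketches, not part of Monty:

```python
# Illustrative helpers for reordering quaternion components between the
# wxyz convention (used in the saved CSV results) and the xyzw convention
# (expected by scipy's Rotation). Not part of Monty itself.
def wxyz_to_xyzw(q):
    """Convert [w, x, y, z] to [x, y, z, w]."""
    w, x, y, z = q
    return [x, y, z, w]

def xyzw_to_wxyz(q):
    """Convert [x, y, z, w] to [w, x, y, z]."""
    x, y, z, w = q
    return [w, x, y, z]

identity_wxyz = [1.0, 0.0, 0.0, 0.0]  # identity rotation in wxyz format
print(wxyz_to_xyzw(identity_wxyz))    # [0.0, 0.0, 0.0, 1.0], the xyzw identity
```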

## XYZ Conventions

