
# Psychosecurity

## Symbols Are Units Of Belief

> "Never attempt to win by force what can be won by deception."

Influence operations are well-documented in history, demonstrating how the dishonest overcame the vulnerable. The deceitful were exceptionally good at turning lies into power, from Alexander the Great's camp appearing larger than it was to the shorting of the London Stock Exchange after Waterloo. Regardless of social, geographic, or chronological differences, this form of storytelling resurfaces time and time again. The deceased are powerless to refute falsehoods about themselves, yet this inability is insufficient to explain historical deceptions. The symbols that make up history are likewise steeped in deception.

The human mind cannot accurately perceive the totality of reality. Our brain's three pounds of meat consume roughly 20% of the body's glucose-derived energy. Both human eyes combined capture an impressive 400 million photons per second, while a standard 60W/2800K lightbulb emits 8.2 quintillion visible-spectrum photons every second. Only about 0.000000005% of the photons emitted by that lightbulb are captured by our eyes. Despite the scant coverage, the information we extract from that photon stream is vivid, compelling, and a major factor in our decision-making. Deception can be achieved by adjusting the frequency, timing, intensity, origin, and distance of that photon stream. (Visit your local magician for more details.) How the mind constructs representations of reality from that stream is of particular interest.
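As a quick sanity check, the fraction of the bulb's photons our eyes capture follows directly from the two rates quoted above:

```python
# Rates taken from the text above; both are order-of-magnitude figures.
eye_rate = 4.0e8    # photons per second captured by both eyes combined
bulb_rate = 8.2e18  # visible-spectrum photons per second from a 60W/2800K bulb

percent_seen = eye_rate / bulb_rate * 100  # expressed as a percentage
print(f"{percent_seen:.1e} %")  # → 4.9e-09 %, i.e. about 0.000000005%
```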

The mind cannot see reality as it is (if it could, it would see only raw atomic behavior). One might argue that the human body should be altered to maximize photon intake, but that would require adjusting the reflectivity of the eyes, altering the noise management of rhodopsin in the eye rods, modifying the visual cortex, and rebalancing every related glycolytic process affected along the way. These biological restrictions already operate at peak efficiency. They provide a best-effort picture to inform your awareness. This necessary deception enables you to navigate reality in order to consume more energy. All perceptions of this reality are constructions of the mind. Those constructions are made up of symbols.

A symbol is a unit of belief. A map (a symbol) can only represent a territory (a symbol) if the relationship between the two is believed. Symbols are constantly created and recalled by experiences, stimuli, memories, thoughts, dreams, instincts, and biochemistry. The fidelity of a symbol is established by determining how closely it resembles reality. These determinations are often influenced by personal choice or convenience, making fidelity an incomplete property of a symbol. The intention of a symbol is assumed to be the conscious purpose behind its usage. However, symbols can be generated from unconscious or faulty biochemical processes (unconscious proprioception, the ringing of tinnitus, etc.). Despite these observations, deceit is commonly thought to be a simple corruption of symbol fidelity and intent. ("Moreover, any realistic account of honesty and deception in humans must be able to account for the fact that information sharing is dominated by unstructured communication involving natural language and a diverse collection of nonverbal cues. However, no study to our knowledge has studied the neural mechanisms of honesty and deception in the context of unstructured communication.")

Techniques, practices, and rituals to prove the veracity of symbols (high fidelity and honest intentions) can be found in legal documents, cultural institutions, commercial activity, academic facilities, scientific disciplines, philosophical definitions of will, and even theology. Organizations have traditionally used symbols without being as influenced by them as the gullible and imaginative people they target. These behaviors make sense from a neuroeconomic viewpoint: humans can alter each other's neural chemistry to aid their own decision-making. Symbols that come from trusted sources will be believed much more than symbols from unstable or unworthy sources. The upkeep of institutions is driven by this self-reinforcing need for high veracity symbols. Organizations can transfer and revoke trust across enormous populations through symbols alone. (A badge of authority, a certification, a holy symbol, a Blue Check, etc.)

Each organization assesses the benefits and costs of its symbols to reduce schism vulnerability. Organizations reduce schism risk by creating hierarchies of secrecy and enforcement cliques to protect symbols, utilizing rituals to establish veracity, and avoiding imposter and threatening symbols. Organizations endure by propagating the symbols they rely on. This propagation allows the organization to sustainably transfer frames of reality to large numbers of people. With the advent of printing presses, radio, television, and the Internet, this transference can be done very rapidly and on a large scale. Organizations then influence a population's behavior to be predictable enough to efficiently capture their resources.

## Psychosecurity Is The Resistance Against Symbols

> "If the mind were so simple to protect, we would be so simple we couldn't."

Modern efforts have been made to protect people from institutions that abuse symbols carelessly. Cryptocurrency uses non-human, brute-force cryptographic puzzles as a lottery to determine who may create the next symbol in a shared ledger. These mechanisms are a psychosecurity response to the problem of communal symbol veracity. Man's defects are numerous, whereas computational encryption's flaws are few. Mathematically sound ledger entries significantly reduce the need for human-driven veracity, which in turn reduces the chances of humans abusing symbols for profit. These entries are high-fidelity and honestly intended symbols. Those symbols transcend belief, transitioning from units of belief to units of truth. Cryptographic puzzles operate as dependable agents of a public ledger and guarantee symbol veracity. Influence operations have no effect on these agents, which means they have perfect psychosecurity.
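The puzzle lottery can be illustrated with a toy proof-of-work sketch. The entry string and difficulty below are arbitrary; real systems use far harder targets and hash full block structures, but the psychosecurity property is the same:

```python
import hashlib

def mine(entry: str, difficulty: int) -> int:
    """Brute-force search for a nonce whose SHA-256 digest of
    (entry + nonce) starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{entry}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(entry: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash: cheap, and it requires no
    trust in the miner -- the math vouches for the symbol."""
    digest = hashlib.sha256(f"{entry}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("ledger entry: Alice pays Bob 5", difficulty=3)
assert verify("ledger entry: Alice pays Bob 5", nonce, difficulty=3)
```

No influence operation can talk `verify` into accepting a tampered entry; the only way to change the ledger's symbols is to redo the work.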

Here's an example of how perfect psychosecurity agents affect human behavior. You're chatting with someone online and find out they're a bot. Do you continue the conversation? If you do, you'll get sloppy context and have no chance of making an impact, along with an increased chance of being influenced by the bot. The bot's intention cannot be swayed by any words you offer. We sense this inequality of influence, and it causes us concern. What if you knew how immune a person was to your influence? If they were more like bots, influencing them would become harder. If they were less like bots, influencing them would feel more attainable. Would you change your opinion of friends, family, influencers, journalists, and authorities if you knew how susceptible they were to influence? What would happen if you learned they had been influenced by a bot?

The agent's developers have very little psychosecurity because they are human. They have influenceable preferences, biases, interests, passions, fears, and pressure points. These blockchain developers can alter the code. They can create forks. They can embrace weaker security protocols and untested methodologies. They will, however, always have the option to create perfect psychosecurity agents that ensure the veracity of new ledger entries. Another example of a perfect psychosecurity agent is the social media bot. Bots labor diligently to generate symbols across a network, and no influence operation can deter them. Such a bot has perfect psychosecurity as well: it is immune to psychological games of deception aimed at undermining its intentions. (A bot's performance can be degraded through influence operations, but its intent cannot be affected.) Perfect psychosecurity agents are much more expensive to alter. Social networks have evolved a collection of haphazard techniques to counter these agents.

The first counter is to undermine the fidelity of the symbols the bots create. This allows a social network to engage in classical institutional approaches to symbol veracity, because it can apply artificial social penalties (like public shaming) to those who believe bot symbols. Undermining the fidelity of bot symbols can only be done relative to the psychosecurity threshold of an audience. That is, a bot whose symbols fall below the psychosecurity threshold of an audience is seen as worthy of termination, while bot symbols that exceed that threshold are seen as friendly and useful, making their removal a threat to the human audience. The second counter is algorithmic behavior detection. Bots have identifiable behavior signatures that can be detected on a social network. The closer a bot mimics human behavior, the more often algorithmic counters will accidentally target and terminate human users. Paradoxically, the closer a bot gets to emulating human grammatical intelligence, the less likely it is to exceed the psychosecurity threshold of a human.

The mind may not have a firewall, but it does have a psychosecurity threshold. If a symbol is a unit of belief, then psychosecurity is the resistance a symbol must overcome to be considered an accurate representation of reality. The higher the psychosecurity, the less likely the symbol will be considered accurate; the lower the psychosecurity, the more likely it will be. Psychosecurity is not a global property of the human mind. A person may have high psychosecurity against eating certain foods but very low psychosecurity against content from their favorite creator. The failure of a symbol to be considered accurate does not eliminate it from a person's mind. A rejected symbol can linger and be repeatedly recalled, giving it the chance to influence the future.

Compared to a human, a bot or a cryptographic puzzle has perfect psychosecurity across all contexts, preventing all psychosecurity failures. The humans those agents depend on remain susceptible to symbols and have an adjustable psychosecurity. Controlling symbol flow, fidelity, intention, origin, and intensity are all important aspects of influence operations, as is adjusting the psychosecurity thresholds of targets. Deploying bots to validate every action a human takes on a social network (known as heavenbanning, the theoretical opposite of shadowbanning) will reduce that human's psychosecurity. Influence operations will also analyze social media behavior to identify which psychosecurity groupings can be targeted, and in what order. The ability to group people by psychosecurity signature can forecast the behavior of institutions, organizations, cults, governance bodies, militaries, corporations, customers, and political factions.
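The threshold-and-lingering dynamic described above can be made concrete with a toy simulation. Every context name, threshold value, and the recall boost below are illustrative assumptions, not measurements:

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    thresholds: dict                                   # per-context, not global
    lingering: list = field(default_factory=list)      # rejected but not erased

    def receive(self, context: str, symbol: str, intensity: float) -> bool:
        """A symbol is believed only if its intensity overcomes the
        context-specific psychosecurity threshold."""
        if intensity >= self.thresholds.get(context, 0.5):
            return True
        # Rejection does not eliminate the symbol; it lingers for recall.
        self.lingering.append((context, symbol, intensity))
        return False

    def recall(self, context: str, boost: float) -> list:
        """Re-expose lingering symbols in a context, modeled as an
        intensity boost that gives rejected symbols another chance."""
        accepted, remaining = [], []
        for ctx, sym, intensity in self.lingering:
            if ctx == context and intensity + boost >= self.thresholds.get(ctx, 0.5):
                accepted.append(sym)
            else:
                remaining.append((ctx, sym, intensity))
        self.lingering = remaining
        return accepted

mind = Mind(thresholds={"diet": 0.9, "favorite_creator": 0.2})
assert mind.receive("favorite_creator", "buy the merch", 0.5)     # believed
assert not mind.receive("diet", "this fad diet works", 0.5)       # rejected, lingers
assert mind.recall("diet", boost=0.5) == ["this fad diet works"]  # recall succeeds
```

A bot in this model is simply a `Mind` whose every threshold is infinite: no intensity ever clears it.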

## A Framework To Create Psychosecurity-First Organizations

> "Religion is a culture of faith; science is a culture of doubt."

A rudimentary way to measure psychosecurity clusters is to gather the social media behavior of known bots. Record their engagement, activity times, who they prefer to interact with, how quickly they respond, their readability preferences, the domains of the URLs they post, their media submissions, and any other non-linguistic or non-cultural factor that helps establish an individual signature. Then do the same for a human and compare the "distance" between the two. The "further" a person's behavior is from a bot's, the lower their psychosecurity. This is a rough example of how to assess psychosecurity levels. Identifying the structure and velocity of psychosecurity clusters can reframe how institutions impact populations. For example, are political parties merely collections of similar psychosecurity clusters? Are cultures simply shared psychosecurity clusters expressing themselves? Are institutions acting on behalf of preferred psychosecurity signatures?
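The distance comparison above can be sketched as follows. The feature names and every numeric value are invented for illustration, not derived from real bot data:

```python
import math

# Hypothetical behavior signature of a known bot (all features invented).
KNOWN_BOT = {"posts_per_day": 240.0, "reply_latency_s": 2.0,
             "active_hours": 24.0, "url_fraction": 0.9}

def signature_distance(a: dict, b: dict) -> float:
    """Euclidean distance between two behavior signatures, with each
    feature normalized so no single unit dominates the sum."""
    total = 0.0
    for key in a:
        scale = max(abs(a[key]), abs(b[key]), 1e-9)
        total += ((a[key] - b[key]) / scale) ** 2
    return math.sqrt(total)

def psychosecurity_proxy(person: dict) -> float:
    """Per the essay's framing: the further a signature sits from the
    bot's, the lower the psychosecurity score (range [0, 1])."""
    max_distance = math.sqrt(len(KNOWN_BOT))  # each normalized term is <= 1
    return max(0.0, 1.0 - signature_distance(person, KNOWN_BOT) / max_distance)

human = {"posts_per_day": 5.0, "reply_latency_s": 600.0,
         "active_hours": 8.0, "url_fraction": 0.1}
print(round(psychosecurity_proxy(human), 2))  # low score: far from the bot
```

A real measurement would need many bot signatures, robust normalization, and validation against observed susceptibility; this only shows the shape of the calculation.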

An open source initiative dedicated to psychosecurity-first solutions would be beneficial. For example, an estimated 82% of all cyber attacks involve exploiting the psychosecurity of a human, meaning that overcoming human psychosecurity thresholds sits at the center of practically every major cyber attack. We would significantly adjust our cybersecurity planning if we understood the psychosecurity clustering of our employees and peers. We would account for their symbol exposure and pair them with the right psychosecurity protections. We could measure how psychosecurity clusters impact cybersecurity costs and risk in a more predictable manner and change how cyber insurance is evaluated.

Applying psychosecurity-first techniques to social media can improve content moderation. Content is commonly removed based on an interpretation of implied intent: it is widely assumed that a poster's intent can be inferred from the content itself, and that assumption is used to justify penalties for violating terms of service. Instead of making an implicit assumption about a user's intent, it could be more effective to assess the poster's psychosecurity at the moment of submission. Other users could then filter content based on their own psychosecurity preferences, potentially improving both moderation and market targeting.
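A minimal sketch of the reader-side filter, assuming the platform attaches a psychosecurity estimate to each post at submission time (all field names and scores are hypothetical):

```python
# Hypothetical feed items; "author_psysec" would come from the platform's
# assessment of the poster at the moment of submission.
posts = [
    {"author": "alice", "text": "measured analysis", "author_psysec": 0.8},
    {"author": "bob",   "text": "reactive hot take", "author_psysec": 0.2},
]

def filter_feed(feed: list, min_psysec: float) -> list:
    """Each reader chooses their own floor instead of the platform
    guessing at the poster's intent."""
    return [post for post in feed if post["author_psysec"] >= min_psysec]

cautious_feed = filter_feed(posts, min_psysec=0.5)  # keeps only alice's post
```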

User experience design can employ psychosecurity-first approaches to measure the ethicality of a design. Designers can also build tools that enable rapid psychosecurity assessments of app designs. Pixel reporting to impartial, non-governmental monitors can collect users' psychosecurity ratings anonymously and offer public certification of ethical compliance. Such reporting could also be used to detect bots, providing a helpful service in exchange for ethical compliance.

Legal teams will be able to use psychosecurity evidence to "know the heart and mind of the offender" in criminal and civil proceedings. Institutions and private actors alike could be recognized, traced, and ordered to pay damages for reckless use of influence operations. This would provide a much-needed non-kinetic dispute resolution mechanism in the face of ever-growing AI-assisted influence operations. By improving the psychosecurity of a population, cyber defenses can qualify for UK-style military offsets.

## Conclusions

Psychosecurity is a novel idea that combines psychographic data with the ethos of personal well-being to assess how vulnerable people are to influence operations. The use of perfect psychosecurity agents for both trust enforcement and distance measurement is prioritized in a psychosecurity-first paradigm. Knowing the psychosecurity measurements of the networks you rely on will greatly influence how you interact with them. The ordinary individual can reason about psychosecurity from the frame of personal responsibility. Institutions can use psychosecurity to better assess their threats and opportunities. Entire symbol spaces can be compiled into a "daily human weather forecast," reducing the efficacy of widespread influence operations. Psychosecurity-first frameworks can be developed through open source initiatives and bring together the insights of the many industries facing cybersecurity, existential, exponential, and epistemic threats.

~Patrick Ryan
Epistemic cowboy and psychosecurity advocate
[email protected]