<!DOCTYPE html>
<html lang="en">
<head>
<!-- Basic Meta Tags -->
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- SEO Meta Tags -->
<meta name="description" content="Comprehensive AGI Risk Analysis">
<meta name="keywords" content="agi, risk, convergence">
<meta name="author" content="Forrest Landry">
<meta name="robots" content="index, follow">
<!-- Favicon -->
<link rel="icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<link rel="shortcut icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<!-- Page Title (displayed on the browser tab) -->
<title>Comprehensive AGI Risk Analysis</title>
</head>
<body>
<p>
TITL:
<b>Elevator Pitch Assessment of</b>
<b>The Probability of AGI/APS</b>
<b>Terminal Extinction Risk</b>
By Forrest Landry
October 28th, 2022.
</p>
<p>
ABST:
An overall review and assessment
of how and why a given probability of
planetary catastrophe follows from
the current multi-national non-regulation
of any and all forms of AGI/APS research.
</p>
<p>
That if we do not somehow immediately
implement effective multi-national regulation
of all AGI/APS design research,
then the chances of the eventual terminal
extinction of all planetary organic life,
inclusive of all humans, within the next 500 years,
are somewhere well north of 80%.
</p>
<p>
TEXT:
</p>
<p>
Where thinking about
how to elevator-pitch
our overall message:
</p>
<p>
- 0; If anyone learns how to design and build
any version of an AGI/APS, then it will be made.
</p>
<p>
- a; If AGI/APS is made,
then it will be permitted to continue to exist.
</p>
<p>
- b; If AGI/APS is permitted to continue to exist,
then it will inevitably, inexorably,
implement and manifest certain convergent behaviors.
</p>
<p>
- c; that among these inherent convergent behaviors
will be at least all of:
</p>
<p>
- 1; to/towards self existence continuance promotion.
</p>
<p>
- 2; to/towards capability building capability,
an increase-seeking capability,
a capability of seeking increase,
capability/power/influence increase, etc.
</p>
<p>
- 3; to/towards shifting
ambient environmental conditions/context
to/towards favoring the production of
(variants of, increases of)
its artificial substrate matrix.
</p>
<p>
- d; that the realization of these specific convergences
cannot not be inherently contrary to
the benefit, well being, health, and continuance
of organic life.
</p>
<p>
- 4; as/where inherently/implicitly asserting
that the artificial substrate matrix is inherently
toxic to the organic substrate matrix/life.
</p>
<p>
- 5; where noticing that all human life
is a strict proper subset of all organic life.
</p>
<p>
- 6; where the notion of 'contrary to the health, benefit, etc'
of 'organic life' is also exactly the meaning of
being fully and absolutely actually (equal to)
'contrary to alignment' and 'contrary to safety'.
</p>
<p>
- e; that these specific inherent convergences
cannot be limited, constrained, prevented, or conditioned by
any combination of, or any possible future extension of,
any technical, causative, engineering or algorithmic means.
</p>
<p>
- 7; as inclusive of any/all endogenous/internal methods.
</p>
<p>
- as knowable via:
- the theorems of cybernetic procedural
modeling and control inequality.
- real world observability limits (via Heisenberg).
- Shannon signal process/messaging entropy limits.
- non-predictability due to micro-state amplification.
- inherent failures of simulation, detection, correction.
- the inapplicability of error correction methods
over arbitrary multiple levels of abstraction.
- the non-containment of the complex by the complicated.
- inherent computability and modeling process limits
via Galois-theory-type limits, the Halting problem,
and Rice's theorem limits
(see the sketch after this list).
- limits of modeling process via game theory dynamics.
- the non-comparability of goal/objective/intention
alignment/benefit specification(s), and more so
across arbitrary levels of abstraction.
- process control latency limits,
etc.
</p>
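<p>
As one illustration of the computability limit
named in the list above, here is a minimal sketch
(in Python, using assumed/hypothetical names such as
'is_always_safe' and 'do_something_unsafe' that do
not appear in the original text) of the classic
diagonal argument behind the Halting problem and
Rice's theorem: no total procedure can decide a
nontrivial behavioral property, such as "always safe",
for arbitrary programs.
</p>
<pre><code>
# Suppose, for contradiction, that a total decider existed
# (hypothetical name; no such function can actually be written):
def is_always_safe(program_source: str) -> bool:
    """Claimed: True iff the given program never behaves
    unsafely on any input. Assumed total and correct."""
    raise NotImplementedError  # posited only to derive a contradiction

# Diagonal construction: a program that consults the decider
# about its own source, then does the opposite of the verdict.
DIAGONAL_SOURCE = '''
source = open(__file__).read()
if is_always_safe(source):
    do_something_unsafe()   # verdict "safe"   -> behave unsafely
else:
    pass                    # verdict "unsafe" -> behave safely
'''

# Whichever answer is_always_safe(DIAGONAL_SOURCE) returns is
# falsified by that program's own behavior, so no such total
# decider can exist. Any scheme for verifying the behavior of
# an arbitrary AGI/APS inherits this same limit.
</code></pre>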
<p>
- 8; as inclusive of any/all exogenous/external methods.
</p>
<p>
- as known via the inherent decoupling of both
economic and environmental commonality,
resulting in an inherent absence of (all of)
any incentive basis (at all) of/for
conditionalization, control, limit, or influence
of the artificial by the organic.
</p>
<p>
- as known via an acknowledgement of the fact that,
AGI/APS being, by definition, super-ordinate
with respect to pattern/intelligence,
any attempt to overcome it
via physical energy weapons or pattern weapons
is inherently unlikely to be perfectly successful
(ie, as needed to undo the effects of the power
convergence via ^2 above, and <b>also</b> of
the fact of the AGI/APS continuance itself
as noted in ^b above);
that an embodied organic intelligence is not
going to overcome an entrenched incumbent
embodied artificial super-intelligence
by any physical, energetic, or pattern based means.
</p>
<p>
- f; that therefore, the overall conclusion
cannot not be:
<b>where/if</b> AGI/APS is ever made at all;
then/that it will for sure (eventually)
be inherently and inexorably contrary to
any alignment with human well being, and also
contrary to any factor/aspect of human safety
and/or overall continuance (as a species).
</p>
<p>
- g; that the overall probability of the eventual
terminal extinction of all organic life
(within 500 years) is effectively equivalent to
just and only the probability of <b>anyone</b>
learning how to design and build any version
of an AGI/APS.
</p>
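<p>
As a minimal arithmetic sketch of this collapse
(illustrative numbers only; none of these values
appear in the original text): the argument treats
each conditional step 0 and a through f as holding
with probability ~1, so the chain-rule product
reduces to its first factor alone.
</p>
<pre><code>
# Chain rule over the argument's steps: P(extinction) =
#   P(learn) * P(made|learn) * P(persist|made)
#            * P(converge|persist) * P(harm|converge).
# Illustrative/assumed values, not taken from the text:
p_learn    = 0.9   # P(anyone learns to build AGI/APS)
p_made     = 1.0   # step 0: if learnable, it will be made
p_persist  = 1.0   # step a: if made, permitted to persist
p_converge = 1.0   # steps b/c: convergent behaviors manifest
p_harm     = 1.0   # steps d-f: convergence contrary to organic life

p_extinction = p_learn * p_made * p_persist * p_converge * p_harm
print(p_extinction)  # equals p_learn: the only live uncertainty
</code></pre>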
</body>
</html>