<!DOCTYPE html>
<html lang="en">
<head>
<!-- Basic Meta Tags -->
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- SEO Meta Tags -->
<meta name="description" content="Comprehensive AGI Risk Analysis">
<meta name="keywords" content="agi, risk, convergence">
<meta name="author" content="Forrest Landry">
<meta name="robots" content="index, follow">
<!-- Favicon -->
<link rel="icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<link rel="shortcut icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<!-- Page Title (displayed on the browser tab) -->
<title>Comprehensive AGI Risk Analysis</title>
</head>
<body>
<p>
TITL:
<b>Right Skepticism</b>
<b>By Forrest Landry</b>,
<b>Oct 7th, 2022</b>.
</p>
<p>
ABST:
</p>
<p>
When/where is it right to be skeptical
about claims of what is "possible in principle"?.
</p>
<p>
TEXT:
</p>
<p>
> You seem very much too concerned with "being right",
> with proof, correctness, clear statement, etc.
> Are you willing to listen with an open mind?
</p>
<p>
No, we do not want to be right --
given the extreme pessimism of our results,
we would very much rather be wrong!
</p>
<p>
Unfortunately, what we find is that
what we must do, ethically speaking,
is to be very sure to check our own results,
<b>and</b> to check that everyone else
is also not making some deep mistakes.
</p>
<p>
Ie; what we care more about
is where there is a risk
of "your" being wrong,
particularly about the possibility
of developing superintelligence
sufficiently safely for the well-being
of the world.
There is a risk of your leading others
to do wrong (world damaging) things too.
</p>
<p>
:jln
> Many impressive feats of technology and science
> were believed to be impossible beforehand.
> You are overconfident.
> Some person might do something
> that is currently believed to be impossible
> (today, from our limited understanding).
> Anything might be shown to be possible
> in the future, and hopefully,
> also achieved/demonstrated too.
> History is replete with such bogus claims
> and then later demonstrated counter-examples.
</p>
<p>
> Therefore, I believe that there is
> no principled basis on which <b>anyone</b>
> can claim that AGI alignment is impossible,
> that AGI is "inherently uncontrollable"
> and that AGI is therefore inherently "unsafe".
</p>
<p>
> No one knows anything about AGI yet,
> and there will always be more evidence.
> Your coming off as "certain"
> is a sign of overconfidence,
> biased judgement, and an inability
> to change one's mind.
> You should listen to, and learn from,
> the many more knowledgeable people
> who have already reviewed these issues.
> - ?; why should anyone read the writings
> of someone so obviously unreasonable
> as yourself?.
</p>
<p>
- where/if someone were to claim
> "someday, someone will make
> a perpetual motion machine",
then I,
and nearly every other reasonable scientist,
would likely simply dismiss that claim
as inherently unreasonable/unprincipled.
- that, we too, would have the heuristic of
> "move on, nothing to see here",
and to go about our own business,
ignoring anything further from that person.
</p>
<p>
- we do this because there <b>is</b> a principled reason
for arguing against
the currently unknown results of
> "the unlimited future cleverness of all future
> engineers, throughout all of future progress" --
it is called the 'Second Law of Thermodynamics'.
</p>
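<p>
- for reference, a minimal sketch of that principle,
in its standard textbook form:
</p>
<pre>
% Second Law of Thermodynamics, for any isolated system:
%   total entropy can never decrease over time.
\Delta S_{\text{isolated}} \ge 0
% Equivalently (Kelvin-Planck): no cyclic process can have, as its
% sole result, the complete conversion of heat into work --
% which is what a "perpetual motion machine" of the second kind
% would require.
</pre>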
<p>
- that the mere claim that
> "we do not (cannot)
> ever absolutely know/predict
> what the future will bring",
and/or
> "that there is always more evidence",
simply are not <b>principles</b>,
and not at all <b>relevant</b>,
in the case of 'perpetual motion' --
(or anything that sufficiently resembles that).
- that this remains the case even if there are
yet (well meaning) (desperate) (less critical) people
(and sometimes investors looking to make a buck)
who can be (and are sometimes easily) deceived
into thinking/believing (by some seductive charlatan,
and/or by some clever, but still deluded, engineer)
that their "infinity zero point energy machine"
will "be unlike anything that has ever come before".
</p>
<p>
- if someone claims that something (everything)
is "just possible in principle" (eventually),
we are naturally going to be skeptical and ask:
"what principle?".
</p>
<p>
- when/where we say/claim "that X is impossible",
on the basis of a clearly stated 'principle P'
we are at least making 'P' explicit,
and explaining why 'P' is relevant,
<b>both</b> via _valid_ reasoning
<b>and</b> _sound_ argument.
- where/for you to simply pretend to counter
that argument by saying/suggesting:
> "your mind is closed to the possibility that
> maybe someone, someday, might find some evidence
> that X is possible, and that therefore,
> you are unreasonable, illogical, closed minded"
> (and so thus we will self justify ignoring you).
is simply to produce neither any actual evidence
nor any actual reference to any real principle.
</p>
<p>
- that the real effort here is <b>not</b> for someone,
on the strength of their in-crowd community-recognized expertise,
to tacitly assert, and thus attempt to assume,
the authority that <b>they</b> get to 'be the judge' --
to be/play _as_if_ they are the skeptical, fair-minded reviewer
of <b>our</b> claims as to the impossibility
of <b>any</b> "Safe AGI" ever, in the long term --
when it is actually the case that <b>we</b>
are to be very skeptical of <b>your</b> claims
and implied assertions, notions, beliefs, etc,
that <b>any</b> notion of "Safe Aligned AGI"
is even possible "in principle".
</p>
<p>
To us, there are clear mathematical truths
that apply to how to model these notions
such that positing the "safe AGI" notion
is a bit like a windmill engineer
claiming they can exceed the Betz limit,
or a motor mechanic saying that they
can beat the Carnot engine efficiency,
or a distributed CS major saying
that they can overcome the CAP theorem.
</p>
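<p>
- as a reminder of the flavor of those bounds
(standard results, stated here only for comparison):
</p>
<pre>
% Betz limit: no wind turbine can capture more than 16/27 of the
% kinetic energy of the wind passing through its swept area:
C_p \le \frac{16}{27} \approx 0.593
% Carnot bound: no heat engine operating between a hot reservoir at
% temperature T_h and a cold reservoir at T_c can exceed:
\eta \le 1 - \frac{T_c}{T_h}
</pre>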
<p>
- ?; on what principled basis
can anyone ever think
that it would <b>ever</b> be possible
to "align" an AGI and/or
make it even "approximately safe"
for any reasonable duration
to any reasonable degree?.
- where leaving aside the issue of
"aligned" to <b>all human</b> well-being,
rather than to just private benefit,
as is so very often tacitly "overlooked".
</p>
<p>
- unless/until someone somewhere produces
even a single reasonable reason suggesting,
on some actual principled basis,
that there is "good reason to hope"
that is not simply based on psych-bias,
false anchoring, belief, speculation,
misapplied and/or misguided metaphors
and/or hype and marketing scams,
we will continue to be skeptical of all
benefit claims of "safe practical AGI".
</p>
<p>
- otherwise, <b>you</b> are going to sound like
someone who claims something unreasonable.
</p>
<p>
:jnu
> Even if something is considered impossible
> via some mathematical model and/or theory,
> (eg; faster-than-light travel),
> it is still inherently irrational to believe
> that it is 100% certain to not be possible.
> No one can be 100% certain about anything.
</p>
<p>
> The productive thing to do
> is to look for edge cases
> and find out how it might
> be possible after all.
</p>
<p>
We disagree -- there are knowable things.
We can be 100% certain that 2 + 2 == 4.
There are no "edge cases" for this --
it is not negotiable, nor a matter of opinion.
</p>
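<p>
- for instance, in a proof assistant such as Lean,
this claim is closed by pure computation,
with no residual uncertainty left to "calibrate":
</p>
<pre>
-- Lean: the equality reduces by computation alone (`rfl`).
example : 2 + 2 = 4 := rfl
</pre>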
<p>
If you have not carefully ensured
that your models are both sound,
and consistent, in an actual worldly way,
then any time/effort you have spent on
trying to correctly "calibrate uncertainty"
is simply wasted.
</p>
<p>
You cannot simply circumvent Rice's Theorem,
the Halting Problem, or the CAP Theorem
by wishful thinking, "uncertainty engineering",
and calibration.
</p>
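<p>
- as a minimal sketch (in Python, with hypothetical names)
of why the Halting Problem in particular
cannot be engineered around:
assume that a total decider existed;
the standard diagonal construction then contradicts it.
</p>
<pre>
# Hypothetical sketch only: suppose, for contradiction, that a total
# decider `halts(program, arg)` existed, returning True exactly when
# `program(arg)` eventually halts.

def halts(program, arg):
    ...  # assumed to exist; the point is that it cannot

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about `program`
    # when it is given its own description as input.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Now consider diagonal(diagonal):
#   if halts(diagonal, diagonal) is True,  then diagonal(diagonal) loops;
#   if it is False,                        then diagonal(diagonal) halts.
# Either way the decider is wrong, so no total `halts` can exist.
# Rice's Theorem generalizes this to every non-trivial semantic
# property of programs.
</pre>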
<p>
Anyone who claims that "at some future point",
we/humanity will have anti-gravity machines,
faster than light travel (warp drives),
and some sort of 'time machines',
is going to seem inherently unreasonable,
and simply saying that "we cannot be 100% sure"
that these sorts of things "cannot happen"
is going to sound a bit flaky --
ie; such a person does not seem to actually have a real
understanding of <b>causation</b> itself
(a concept impacted directly by <b>all</b>
of these three types of "technology").
- that all three are each examples
of near equivalent (structurally speaking)
extraordinary capabilities/functionality,
each/all of which will involve
some real, deep, and unexpected
understandings of the principles of
General Relativity.
</p>
<p>
You are welcome to "negotiate gravity"
on your own time, not on ours.
</p>
<p>
That people might equivocate on the basis
of failing to distinguish Special Relativity
and General Relativity, and/or to not understand
the issues inherently associated with
the concept of "gravitons" as particles,
and why they do not integrate well
into the Standard Model -- all of this
is simply to show evidence of confusion
about a significant number of
key/critical concepts.
</p>
<p>
Until/unless I hear/see some high level
of understanding of actual principle --
rather than just hopeful/hyped speculation --
it is going to be very hard to take
your "skepticism" of us as supplanting
our more reasonable skepticism of you.
</p>
<p>
Moreover, when considering x-risk issues,
to have <b>anyone</b> simply elect to ignore
some "inconvenient" and "non-marketable" truths --
because they are "too busy" or "too important"
to review any actual principled reasons
(ie, the actual modeling, argument, math) supporting
significant safety concerns, ones involving everyone,
merely on the basis of some clearly faulty
personal belief heuristic preferences on their part,
is deeply ethically irresponsible in the worst way.
</p>
<p>
- that such people _may_think_
that they can "elect to make choices"
regarding well-being,
on behalf of all other life on the planet --
all other people currently alive
(and/or those who have yet to be born,
for all of future time) --
is an act at the absolute height of arrogance,
of hubris, of <b>colonialism</b>
of the very worst kind.
- as to actually be morally reprehensible,
while pretending to be an opinionated expert.
</p>
<p>
The issue here is <b>not</b> for you to get to decide
"if we are worth your time, to read and understand";
rather it is for <b>us</b> to decide if you are worth our time,
to talk to and collaborate with, to respect,
in regards to anything which actually matters.
</p>
</body>
</html>