<!DOCTYPE html>
<html lang="en">
<head>
<!-- Basic Meta Tags -->
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- SEO Meta Tags -->
<meta name="description" content="Comprehensive AGI Risk Analysis">
<meta name="keywords" content="agi, risk, convergence">
<meta name="author" content="Forrest Landry">
<meta name="robots" content="index, follow">
<!-- Favicon -->
<link rel="icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<link rel="shortcut icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<!-- Page Title (displayed on the browser tab) -->
<title>Comprehensive AGI Risk Analysis</title>
</head>
<body>
<p>
TITL:
<b>Market Tech X-Risk</b>
<b>By Forrest Landry</b>,
<b>November 1st, 2020</b>.
</p>
<p>
ABST:
Some considerations of concepts
also associated with tech x-risk.
</p>
<p>
TEXT:
</p>
<p>
> Is there some version
> of the 'Paperclip Maximizer'
> that is already running,
> even with the form of narrow AI
> that exists today?
</p>
<p>
Yes, as it turns out.
</p>
<p>
We can consider that 'businesses' and 'markets'
are both things that are semi-intelligent,
and 'further and reproduce themselves'.
Intelligence in that sense means something like
'responsive to the environment in an oriented way'
that leads to 'increase of itself', etc.
Businesses and markets that do not grow
tend not to be reproduced,
whereas those that do grow, are reproduced, etc.
</p>
<p>
Where/if you treat a business
as a type of 'machine' or 'autonomous technology' --
then it may 'use humans'
(and/or other organic life forms)
as 'implementing components',
as part of its 'tech stack'.
Insofar as it is, in itself, 'not human',
then businesses and markets, both,
are 'artificial beings'.
They each can be considered as
'virtualized' beings as well,
though that is a somewhat different discussion.
</p>
<p>
:3zu
> Are you suggesting that a business,
> in the sense of being
> a kind of 'paperclip maximizer'
> is of concern?
> Is that an x-risk in itself?
</p>
<p>
Most people would say "no".
</p>
<p>
What is interesting is that
nearly every single category of x-risk
emerges out of 'side effects'
associated with business and market process --
inclusive of institutional process
(ie, things like governments and universities).
Insofar as unchecked institutional forms
are polluting the world, causing damage
in the form of climate change, etc,
and/or are developing nuclear weapons,
bio-tech weapons, and all manner
of automated drone kill machines, etc,
it can be suggested that the collateral risk
associated with current social/institutional
processes of all types
is maybe more problematic overall,
than it would initially seem.
</p>
<p>
Is it net negative?
That is another question for another day.
</p>
<p>
Even inventing absolutely nothing new,
and just considering the technology
that is already apparent now/today,
there are already some
significant and real concerns,
with regards to x-risk,
from all of these causes.
</p>
<p>
With nuclear weapons tech as already developed,
as already implemented in real devices,
we have the ability to kill
every human on every square inch
of all the land surface of the entire planet Earth.
While this is a serious x-risk,
it can at least be conceived
that life on the planet <b>might</b> recover,
eventually, to some lower
level of complexity, overall, than present,
in the few hundred million years remaining
before the sun gets too hot
for photosynthesis to work anymore.
</p>
<p>
These sorts of issues, however,
are well beyond the scope of this essay.
</p>
<p>
:44c
> Is the X-risk associated with AI
> of a completely different kind
> than the X-risk associated with
> nuclear winter and/or global warming?
</p>
<p>
Yes.
</p>
<p>
This answer is expressive of
the difference between 'thinking about energy',
vs 'thinking in terms of pattern'.
</p>
<p>
Where insofar as there are
'three basic categories of being',
as 'atoms', 'energy', and 'pattern',
then we can see that weapons forms
correspond to these categories as well.
</p>
<p>
Most weapons are energy weapons.
Guns, for example, are kinetic energy weapons,
delivered by way of bullets.
Missiles, explosives, and lasers
are all also energy-based,
and insofar as energy is both
expensive in itself
and expensive to accurately deliver,
all such weapons forms are costly.
</p>
<p>
Atom based weapons are less dramatic,
consisting of things like cement barriers
and chemical poisons.
</p>
<p>
Pattern weapons are the most complex,
and consist of things like bio-tech
and computer viruses, memetic and social
manipulation via propaganda, etc.
</p>
<p>
:4gj
The main things to notice in regards to
these categories are the following:
</p>
<p>
- 1; while atoms and energy (as light)
can have independent existence,
pattern needs to be combined with
at least one of atoms or energy
to be realized as existing.
</p>
<p>
- 2; while atoms remain effective
by mostly not changing, both
energy and pattern have effectiveness
insofar as they do inherently change.
</p>
<p>
The basic observation associated with @2
is expressed in the following aphorism:
</p>
<p>
"Where energy dissipates,
that pattern replicates".
</p>
<p>
This implies completely different
manifestations and implications,
depending on which basis of existence
we are considering.
</p>
<p>
:4mj
Given that AGI/APS, superintelligence, etc,
are all defined primarily in terms of
some sort of inherently abstracted pattern,
this shifts the possibility space completely.
</p>
<p>
When considered in terms of impact,
particularly in terms of its reach in time,
and of its auto-catalysing growth,
rather than just in terms of impact over space,
the x-risk associated with AGI/APS
is far more serious overall,
especially insofar as, once it is developed
and occurs as an extreme outlier event,
it can be strongly and conclusively argued
that life as we know it never recovers --
never has a chance again, ever at all,
in the universe.
</p>
<p>
This seriousness is somewhat compounded
on a more universal scale
when combined with an understanding
of our current planetary life situation.
When considering our own planet's local sun
as a 'main-sequence star',
and when given current stellar models,
and observations of galactic state, etc,
we notice that something like 95%
of all of the stars that will ever be made
in this universe, have already been made.
And that moreover, our own planet is closer
to the end of its 'viable for life conditions'
than it is to its beginning.
Both of these factors combine
to give some sense that <b>if</b> we
'mess up' our current 'one chance at life',
then we may actually mess it up
for this universe as a totality, forever,
rather than just locally on this planet.
</p>
<p>
A lot of that sort of thinking depends on:
</p>
<p>
- 1; how you think about issues like the
'Fermi Paradox', particularly in regards
to the unicity of life on Earth,
in its origin, etc,
as due to a relatively quiet level of
galactic background radiation, events, etc.
</p>
<p>
- 2; how you end up thinking about AGI/APS
re-catalytic uptake/takeoff process and/or
overall stability in the longest time scopes.
</p>
<p>
For example, how you regard the Fermi paradox,
and whether you consider
that the 'Great Barrier of the Past'
is the dominant one,
and/or that it might just be validly considered
that life is just actually
incredibly rare in the universe,
and that therefore, our lonely planet
may have something even more than
a quadrillion quadrillion to one level
of improbability, unicity, and thus also,
of value.
</p>
<p>
This suggests that the questions of concern
may be significantly more than just
an abstract issue of 'safety philosophy'.
</p>
<p>
:4s8
> Are we determining the nature
> of all possibility of life
> in the entire universe,
> for all future time?
</p>
<p>
Something like that, yes.
It is indeterminate whether the stakes
are actually that high right now,
and yet prudence in the space of
important uncertainty
would suggest a precautionary principle
be applied nonetheless.
</p>
<p>
It can at least be confidently asserted
that we do need to be sure to think about
these issues in terms of
the appropriate scales of context --
to consider broader reaches
in time, space, and possibility --
if we are going to be intellectually honest,
and deal with what is needing
to be dealt with, etc.
</p>
<p>
:4v2
> So your concerns about business
> and/or of prestige drivers --
> (as being the 'reason'/basis
> of the 'arms race'
> to "be that guy" that 1st develops AGI,
> and/or that company, or government,
> that gains "huge asymmetric advantage"
> over all other similar institutions, etc) --
> these are real?
</p>
<p>
Note that the notion of 'asymmetric advantage'
as associated with AGI is pure hype.
There is no good reason
to assume that any instance of
any general artificial intelligence
will actually choose to cooperate.
In fact, there are many reasons
to think that any sort of AGI/APS
might not actually cooperate over the long term.
Our not having solved the problem of
AGI alignment is only <b>just one</b> of
the more significant issues.
</p>
<p>
Also, it is to be recognized that,
even with the prestige factor
of being 'that person' or 'that team'
which 'solved the conundrum of consciousness',
it is very unclear as to whether
such benefits will even be received
(and/or be receivable) at all.
Moreover, such benefits, if any,
may potentially be very short lived.
</p>
<p>
On the scales of multiple millennia --
in epochs of future time
on geologically relevant time scales --
the duration of any one human life
is very unimportant;
it does not even register.
It is arrogance in the extreme
for some insignificant peon person
in some age (as it happens, this current one)
to make the choice about the entire future of humanity,
and of all of life itself,
simply on the basis of expanding their own ego.
Such actions/choices,
on the part of AGI/APS technologists,
are so ultimately narcissistic,
so inherently and completely psychopathic,
so anti-social and genocidal,
as to require that any such person
be put in a straitjacket and imprisoned immediately --
for the safety of all the children, forevermore,
for all of time.
</p>
<p>
Such levels of arrogance and colonialism,
and such a total absence of temporal humility,
as driven by short term market
and/or prestige/status gains --
the (feeling of) "it (maybe) looks good
in the quarterly shareholder report" --
reflect a sort of thinking
that is so far out of ethical alignment,
and such a lack of consideration
and/or discernment,
as to be wholly without merit.
</p>
<p>
:4xj
> How do you fix an absence of merit,
> or of care, consideration,
> or conscientiousness?
</p>
<p>
An excellent question,
one that continues to be a topic
of ongoing meditations.
How do we help and encourage others,
other people, and maybe even machines
(if we ever make AGI/superintelligence)
to care, to have care, to be skillful
in a meaningful, appropriate, and relevant way, etc.
How do we actually ensure that humans
actually treat one another well --
or better than that, "excellently",
so that we all can thrive in joy and abundance?
</p>
<p>
> Many of those things
> are defined by market and social forces,
> as things much larger than
> any person or company.
</p>
<p>
Agreed.
Such considerations would need to be
part of a larger effort, in any case.
</p>
<p>
It may be the case that our species
is just smart enough to get ourselves killed,
and not yet smart enough to <b>not</b> actually do so.
</p>
<p>
Consider: we are just barely smart enough
to have discovered how to use tools/technology.
In effect, we have just crossed that threshold.
That the existence and deployment of tech/tools
is a kind of 'phase change' or 'state change'
in the overall matrix of
the process of the planet.
When a substance is at a critical point,
raising or shifting the temperature and/or pressure
even just a tiny bit --
a tenth of a degree --
changes the whole nature of the substance.
Liquids become gases, or maybe solids melt.
</p>
<p>
And when considered internally,
in our brain capacity and biology,
humans change only very slowly.
Both in inherent raw compute capability,
and in terms of our inherent wisdom,
we are basically the same creature
as we all were 4 thousand years ago.
In terms of space (as our overall memory),
and in terms of time (how fast we think),
and in terms of energy (how willing we are
to think and do that which is necessary),
we are still basically the same brains.
</p>
<p>
As such, compared to who people were
living many thousands of years ago,
when tech/tools were just starting,
we are still the very same species.
In terms of evolutionary history,
and geological epochal time,
all that has happened since
the very first tools and tech,
is a mere brief moment.
</p>
<p>
And we are the very first species to do so.
That means, we are literally
the dumbest possible creature
to have developed technology.
Today, we are barely smarter than
the absolute minimum necessary
to develop and deploy tech/tools
in the first place, and hence,
our risk of misusing that tech
is very much greater if that tech
is much stronger than our wisdom.
</p>
<p>
We can therefore also expect that,
if there is any gap at all between
the level of skill, competence, or wisdom
absolutely and inherently necessary
to truly and reliably use that tech safely,
and the lesser level of skill needed to simply
have, use, discover, and deploy that tech,
then we have not yet
developed that level of wisdom.
</p>
<p>
:23a
> Is there a gap between
> our knowledge development rate
> and the rate of increase
> in our inherent capability/wisdom
> to handle that technology?
</p>
<p>
It ends up being a kind of 'rate equation':
how fast can we increase our wisdom,
as compared to how fast technology tends
to increase itself (due to whatever ambient
market forces, where the notion of 'market'
is itself inherently a kind of technology --
a means or a tool of solving problems,
as inclusive of things like food distribution,
and/or obtaining clothing, shelter, mates, etc).
Since the market and institutional process
(where institutions, firms, businesses,
governments, and also the whole legal system,
and monetary system, educational systems, etc,
are all themselves kinds of 'civilization tech')
are the factors driving the demand for new tech,
then it is actually tech
developing itself as tech.
</p>
<p>
Overall, it seems that the answer is 'yes':
there is a real gap between
the level of power implied by tech,
and the level of minimally necessary wisdom
to be able to (long term) wield that tech.
In some philosophy writings (somewhere),
this is known as "the ethical gap".
</p>
<p>
:28a
> How fast does tech develop?
> how fast does human wisdom develop?
</p>
<p>
We can notice Moore's Law, and suggest
that tool and technology capability,
at least in the area of virtual compute,
tends to increase, rather a lot --
doubling even --
every X number of years.
When we look at tech development overall,
it is clear that in the last few hundred years
there has been a near total shift and change --
something like a doubling of overall power
(ie, energy sourcing and processing capability)
at least 16 times.
</p>
<p>
Comparatively speaking, human wisdom,
as limited by organic biology,
would take something like nearly
two million years to <b>double</b>
its own compute capability <b>even once</b>.
</p>
<p>
Therefore, in terms of rate of change,
that technology/tool development process
exceeds human wisdom increase process
by something like a hundred million times --
maybe eight or nine orders of magnitude.
</p>
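<p>
As a rough back-of-envelope sketch of
this rate comparison (not from the essay itself;
the tech doubling time below is an illustrative
assumption, since the text deliberately leaves it
as 'every X number of years'):
</p>
<pre>
# A minimal sketch of the rate comparison above.
# Assumption: a tech/compute doubling time of ~2 years
# (a hypothetical placeholder, not a figure asserted by the essay).
# The essay's own figure: human wisdom, as limited by organic
# biology, takes roughly 2 million years to double even once.

tech_doubling_years = 2.0
wisdom_doubling_years = 2_000_000

# Relative rate of change between the two compounding processes.
rate_ratio = wisdom_doubling_years / tech_doubling_years
print(f"tech compounds ~{rate_ratio:,.0f}x faster than wisdom")

# The essay's estimate of overall power growth:
# 'at least 16' doublings over the last few hundred years.
print(f"16 doublings => a {2 ** 16:,}x increase in overall power")
</pre>
<p>
Note that the exact ratio is very sensitive
to the assumed doubling times;
shorter tech doubling periods push the gap
toward the 'eight or nine orders of magnitude'
figure suggested above.
</p>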
<p>
If it <b>also</b> turns out that the ability
to use significant world changing power
cannot be done safely without also having
world responsible wisdom capability,
then we can expect significant problems --
ones that are particularly existential.
</p>
<p>
Considerations of how and why market processes
(as proxy technology processes)
act as an accelerator
and generating function for
increased degrees, kinds, and severity
of multiple interacting x-risk categories
is the topic for another conversation/essay.
</p>
<p>
Likewise, for all of the social implications
of social media, inter-generational social process,
and the degree to which such companies and practices
are already critically disabling
the necessary human sensemaking needed
to develop the capabilities and wisdom necessary
to deal with these sorts of issues
in the first place --
all of that discussion can be,
(and to some extent is),
considered elsewhere.
</p>
<p>
What <b>is</b> relevant herein this dialogue
is whether or not we can find and develop
effective and efficient means
to increase our necessary level of wisdom
at a rate that is commensurate with
the inherent needs of utilizing tech appropriately
for the long term wellbeing of our shared planet.
</p>
<p>
:3bl
> What sorts of means/processes can we use,
> as a species, to increase our wisdom
> to the necessary extent, in the required time?
</p>
<p>
This in itself is a vast
and critically important question.
</p>
<p>
Unfortunately, there is simply not enough
time or space herein this essay to consider
all of the relevant aspects of that question.
Considerations of how to implement
sane and healthy group process,
as living together in some sort of family,
intentional community, or city state, etc,
are important (and also considered elsewhere).
Considerations of how to do group sense-making
and choice-making, as a form of governance,
are also especially relevant (also elsewhere).
Considerations of how to manage resources,
in the form of time, space, or energy,
are also necessarily considered elsewhere.
</p>
<p>
All of these are critically important questions,
having to do with things like 'economics',
and the 'material physics' and 'energy' and 'compute'
all as 'built into' the substrate of the real world.
Such matters are <b>also</b>, necessarily,
'built into' our very innermost nature,
into the very chemistry and biology
of our own long evolutionary heritage,
and as now manifesting as
anthropology and psychology,
social physics, and so on.
</p>
<p>
Too many topics to consider all of that now.
All that {can be / is} attempted to be
considered herein is/are the question(s) of:
</p>
<p>
> Can we use AGI, APS, robots, machines, and/or
> some sort of artificial superintelligence
> that we somehow (soon?) create
> to do (implement) our wisdom for us?
> Can we do that, and if so, how?
> Can we do it soon enough (to matter)?
> Will it, could that, even work?
</p>
<p>
Unfortunately, a careful examination
of the factors involved in the inter-relationships
between hardware concerns and software concerns
ends up defining a hard result: definitely not!
</p>
<p>
This becomes even more especially apparent
when the limits of compute --
ie, of engineering, physics, math, logic, etc --
are considered in terms of game theory,
process evolution, various forms of
complexity theory, theory of information
and of compute, entropy and energy, etc.
</p>
<p>
Basically, the key difference comes down to
the actual meaning of the term 'artificial',
and the implications that has with respect
to hardware (what brains and bodies are made of),
and not just software (what minds are made of).
What the substrate of a being is made out of
matters a lot --
rather a lot more than was ever expected.
</p>
</body>
</html>