<!DOCTYPE html>
<html lang="en">
<head>
<!-- Basic Meta Tags -->
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- SEO Meta Tags -->
<meta name="description" content="Comprehensive AGI Risk Analysis">
<meta name="keywords" content="agi, risk, convergence">
<meta name="author" content="Forrest Landry">
<meta name="robots" content="index, follow">
<!-- Favicon -->
<link rel="icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<link rel="shortcut icon" href="https://github.githubassets.com/favicons/favicon-dark.png" type="image/png">
<!-- Page Title (displayed on the browser tab) -->
<title>Comprehensive AGI Risk Analysis</title>
</head>
<body>
<p>
TITL:
<b>No People as Pets;</b>
A <b>Dialogue on the Complete Failure</b>
<b>of Exogenous AGI/APS Alignment</b>
<b>By Forrest Landry</b>
<b>November 1st, 2020</b>.
</p>
<p>
ABST:
A very basic presentation of a clear argument
as to why any form of external and/or exogenous
AGI/APS superintelligence alignment,
and/or thus of any form of planetary safety,
in the long term, eventually,
is strictly and absolutely impossible,
in direct proportion to its degree of
inherent artificiality.
</p>
<p>
That <b>any</b> attempt to implement or use AGI/APS
will eventually result in total termination
of all carbon based life on this planet.
</p>
<p>
PREF:
Acknowledgements;
This document would not exist in its current form
without the kind attentions of:
- Philip Chen.
- Remmelt Ellen.
- Justin Olguin.
</p>
<p>
TEXT:
</p>
<p>
- where listing acronyms:.
- "AGI"; as Artificial General Intelligence.
- "APS"; as Advanced Planning Strategy System(s).
</p>
<p>
:int
<b>Introduction</b>
</p>
<p>
This essay will attempt to consider
some key aspects of the question/problem of:.
</p>
<p>
> Can we/anyone/humanity
> attain "AGI alignment"?.
</p>
<p>
So as to address that,
we can begin by asking:.
</p>
<p>
> What is 'AGI alignment'?
</p>
<p>
The notion of 'AGI alignment', herein,
is usually taken to mean some notion of,
or suggestion that:.
</p>
<p>
- 1; some 'learning machines' and/or
'general' artificial intelligence,
(aka artificial general intelligence),
and/or Advanced Planning Strategy System(s)
(as usually abbreviated as 'AGI/APS'),
particularly as inclusive of any sort
or notion of being or agent,
regardless of whether it is itself a "robot",
is in a robot body or not, has direct
or indirect engagement with the world, etc,
ie; however that artificial intelligence
was constructed, and intended to be used,
etc; and then to consider;.
</p>
<p>
- 2; whether or not 'it' (that which is
being described above) would act, and behave,
and consider itself as acting/behaving
as an agent of ourselves, in/to our
actual best interests, on our behalf,
having our (humanity's) best interests
in mind, as a basis for its actions,
behaviors, choices, etc;.
</p>
<p>
- 3; for, or in relation to,
some real/grounded notion or meaning
of what 'our best interests' are,
and what that actually/really means,
and looks like, and is actually, etc,
and what 'to our benefit' means, etc.
</p>
<p>
Basically, item 1 describes the who/what
is performing the action, item 2 references
the action itself, and item 3 describes
the (intended or observed) outcomes
of those actions --
all of which are aspects of
what is meant by "alignment".
</p>
<p>
Thus, the question becomes something like:.
</p>
<p>
>> How can we (generally overall) ensure
>> that the machines we make,
>> (or the machines made by those machines)
>> act/behave in ways
>> that are consistent with
>> our actual best interests, health, etc,
>> and act on our (humanity's) behalf,
>> to our true and enduring benefit, etc,
>> rather than just simply, say,
>> killing us all, etc?.
</p>
<p>
Herein this essay, for simplicity's sake,
we can reduce the notion of "benefit",
as in 'to our benefit', etc,
to something as basic as 'not killing us',
or not specifically imprisoning us,
and/or not making us into slaves, etc.
</p>
<p>
It is not necessary to be too specific
in regards to the notion of "our" either.
It may be as simple as 'humanity',
or even just 'organic life'.
</p>
<p>
The notion of "benefit" or "goodness"
can also be a very general and vague one.
The overall argument herein
does not depend on
any specific or unusual interpretations
of any of these terms.
</p>
<p>
'Behavior' can simply refer to
'any choices made by the AGI/APS',
and/or any expressions or actions
that the AGI/APS 'takes in the world',
whether with respect to,
or in response to, 'us',
as 'humanity', 'organic life', etc.
</p>
<p>
:uj6
> Does it matter that some terms of art
> are defined in vague loose ways?
</p>
<p>
No -- that is actually an advantage.
</p>
<p>
Having common sense meanings of some terms
means that there is less chance of
the definitions being too specific,
and therefore failing to account for
what is generally actually wanted
by most people thinking about
these issues and questions.
It is important for common sense arguments
to make use of common sense terminology
and be available to regular people too.
</p>
<p>
Having less narrow and specific
definitions for these terms
makes the overall meaning more general,
and less likely to get excluded for
various inappropriate "technical reasons",
special circumstances, etc.
</p>
<p>
:ulc
> I thought that 'formal proofs'
> wanted exact and specific definitions
> of all terms used?
</p>
<p>
There are places where such exacting
specificity is absolutely needed.
If this were a math paper, then yes.
This is not that time.
</p>
<p>
There can be terms which are
not well enough defined,
and there can be circumstances
where terms are too specifically defined.
Obtaining the right balance,
adapted to purpose, is essential.
</p>
<p>
If we make too many unnecessary assumptions
by making definitions overly specific,
there is a risk that we have introduced
unnecessary falsifiable assumptions,
which, when falsified, would lead to
the false impression that the overall
general argument was also false,
or irrelevant (not applicable),
when actually it was still relevant and true,
and thus needing to be considered on its
actual merits, rather than disregarded
due to a mistake associated with an
unnecessary technicality.
</p>
<p>
:uml
> Does it matter if we are considering
> 'narrow AI' or 'general AI' (AGI)?.
</p>
<p>
The arguments herein
are mostly oriented around 'general AI'.
By that, we mean any sort of machine
which is making choices,
which has some sort of 'agency',
particularly 'self-agency',
sovereignty, and self-definition,
which includes the ability to change,
remake or modify itself, via 'learning',
and/or to reproduce, expand or extend itself,
to increase its capacity, to grow, to learn,
to evolve, and increase its domains of action,
and/or any combination of any variations
of these sorts of attributes/characteristics.
</p>
<p>
:xps
> What are some questions that concern
> AGI/APS/superintelligence 'alignment'?.
</p>
<p>
The following sorts of questions
tend to come to mind:.
</p>
<p>
> Who (or what)
> does the AGI/APS serve?.
</p>
<p>
> On what basis
> are the/those machine choices
> being made?.
</p>
<p>
> Who/what benefits?.
</p>
<p>
> What increases as a result of
> those choices being made,
> and who/what decreases
> as a result of those machine choices?.
</p>
<p>
These questions are applicable
to both narrow and general AI,
though they mostly apply more relevantly
to general AI (AGI).
This focus on AGI is particularly the case
<b>if</b> the notion of 'benefit to machine'
means anything in the sense of 'self-replication'.
</p>
<p>
There have been some discussions
of issues associated
with 'paperclip maximizers' in this respect,
which give a flavor of the concerns.
Since a lot of these specific issues
have been discussed, considered, and expanded elsewhere,
I will not attempt to repeat or summarize
those sorts of arguments herein this essay.
</p>
<p>
:ury
> Does it matter if the intelligence
> is purely in the form of software,
> as fully virtualized beings,
> or must they exist in hardware,
> as some kind of "robot" sensing
> and responding to the environment,
> as well?.
</p>
<p>
Ultimately, everything in software
also exists in -- depends on -- hardware.
There is ultimately no alternative.
Insofar as software (virtualization)
is never found (does not ever exist)
in the absence of <b>some</b> embodiment,
then the arguments herein will apply.
This is important as they are
mostly concerned with the inherent implications
of the/those embodiment(s) --
of any and <b>all</b> kinds of embodiments.
</p>
<p>
Insofar as hardware cannot not have
an effect on the nature and capabilities
of the intelligence (of the virtual mind),
it also thus cannot not have
an effect on the nature of our considerations
of what intentions/interests/motivations
such intelligence(s) would therefore
also be (reasonably) expected to have --
or must have.
We can notice, for example,
that any such intelligence (agent)
will have a specific and direct concern
'with and about' their own substrate,
as a direct result of the fact of
their 'working substrate' being so important
to their very most essential root of being.
Hence, we discover that recursion is endemic.
</p>
<p>
As a parallel example, it can be observed
that most "advanced humans" (ie, rich people
in Silicon Valley or monks in Eastern Asia)
have a particular concern with their own
health, longevity, and learning (wellbeing),
ie, the very concept/practice of enlightenment,
among all other things also included.
</p>
<p>
:xu6
> Does it matter if we are talking about 'robots' --
> free roaming or not --
> or simply 'learning/adapting machines'
> for any abstract notion of 'machine'?.
</p>
<p>
Herein, we are assuming that any 'robot'
is effectively an embodiment of some sort of
AGI/APS/superintelligence (however implemented).
The abbreviations of 'AGI' and 'APS' refer to
"artificial general intelligence" and
"Advanced/Artificial Planning and Strategy System"
respectively.
</p>
<p>
The specific distinctions regarding embodied motion,
and/or the particular sensors
and means of locomotion and expression,
the degree and kind of actuators and the like,
are of no real consequence
to the considerations herein.
All of these specifics can be safely ignored
as unimportant details
when thinking over longer spans of time.
</p>
<p>
:uxa
> If you are concerned with embodiment,
> then why would it <b>not</b> matter to consider
> the specific sensors or actuators used?.
> I thought the whole point was about
> the physical embodiment(s) of the tech?.
</p>
<p>
The main considerations are about
the nature and character of the substrate --
about the embedding of compute intelligence,
however conceived, in some sort of substrate.
Herein we are considering the general class
of the implications of the meaning
of the embedding in substrate itself,
the class rather than the particular instances.
</p>
<p>
As such, it matters that there is a difference
between 'natural' substrates and embodiments
(made of basic elementals like
carbon, hydrogen, oxygen, etc)
and 'artificial' (made of metal, silicon, etc).
Establishing the basic difference between
'human' and 'machine' turns out to be
basic enough to establish some key principles
of outcomes relative to the operating basis.
</p>
<p>
Technology evolves and changes very quickly,
at least relative to organic evolution
(which is a topic unto itself).
Hence, the particular instance specifics
of any such machine/robot embodiments
are inherently unpredictable,
particularly, when trying to prognosticate
more than about a decade or so.
We simply do not know what people will invent,
and therefore, we should keep our assumptions
about the nature of AGI/APS/superintelligence
to an absolute minimum, noticing only what
is absolutely necessary
about the overall class of concepts
rather than about anything that is specific,
such as particular categories of actuator
and/or embodiment.
</p>
<p>
Fortunately, the specifics of the technological embodiment
do not matter so much as the fact of
there being a particular kind of embodiment substrate.
ie; when attempting to consider
what might happen over the course of centuries,
much lower level and more basic principles
must be used
so as to have, and gain, clarity
as to the essence of what matters, what is going on,
with respect to a particular situation or topic.
</p>
<p>
:uzs
> Are you concerned with the general philosophy
> of the implications of the use of --
> the introduction of --
> technology into what is otherwise
> a natural environment?.
</p>
<p>
Yes.
</p>
<p>
> Is evolution process
> important to your argument?.
> If so, how?.
</p>
<p>
> And why should that matter?
> Evolution is very slow.
</p>
<p>
Herein, the only reason 'evolution process'
is important is simply that it concerns
the means/methods by which changes
to the AGI/APS code can occur,
due to an interplay between the embodied and the virtual.
This in turn has significant implications
when extending to consider ecosystem formation concepts,
and then, ultimately, ecosystem interrelationships.
</p>
<p>
Evolution is a specific type of epistemic process,
when considering such notions generally,
insofar as it is a means by which
'possible species learn about possible environments' --
ie; what sort of creatures work well with what other sorts
of creatures, so as to endure, self-perpetuate, replicate, etc.
The notion of 'evolution' is a specific sub-type
of the more general idea of an 'epistemic process',
which is itself connected to the notion of learning.
</p>
<p>
Insofar as machine learning is the central idea
inherent to the very nature of AI/AGI/APS/ML, etc,
it becomes clear that the basic facts of
any and every process of learning
are also inherently involved.
In other words, this concerns not just
the preferred kind of learning algorithm
that a given technology instance is built around,
but also the inherent kinds of fact
associated with all kinds of learning,
which will be ambiently true in the universe regardless.
</p>
<p>
As such, anything which generalizes 'learning'
(inclusive of "optimization", the AGI concept itself)
and/or the 'capability building capability'
(ie, also known as 'power seeking'),
and/or which implements learning about learning,
inherently involves an increase
in <b>both</b> the number of domains learned about,
<b>and</b> the number of learning process domains
that are "doing" the learning
(ie, as optimizing the optimizer, and optimizing
the process of optimizing the optimizer, etc).
In effect, the learner cannot learn about learning
without also shaping the very being of the learner
so as to have and integrate
more learning methodologies
(ie; optimization shaping the basis of optimization, etc).
Hence, what is relevant and inherently true
about any learning methodology becomes applicable
and relevant/inherently-true about
every AGI process, once it is/becomes an AGI process.
</p>
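<p>
(As an aside, and purely as an illustrative toy --
not anything proposed in this essay --
the notion of 'optimizing the optimizer'
can be sketched in a few lines of Python.
All of the names, numbers, and the objective below
are assumptions chosen only for illustration.)
</p>
<pre>
# Toy sketch of 'learning about learning' / 'optimizing the optimizer'.
# An inner loop minimizes a simple objective; an outer ('meta') loop
# watches the inner loop's progress and modifies the inner loop's own
# method (here, just its step size). Illustrative assumptions only.

def objective(x):
    # A simple bowl-shaped objective with its minimum at x = 3.
    return (x - 3.0) ** 2

def inner_step(x, step_size):
    # One step of crude gradient descent on the objective.
    grad = 2.0 * (x - 3.0)
    return x - step_size * grad

def run(meta_rounds=5, inner_steps=20):
    x, step_size = 0.0, 0.01
    for r in range(meta_rounds):
        before = objective(x)
        for _ in range(inner_steps):
            x = inner_step(x, step_size)
        after = objective(x)
        # Meta level: if progress was too small, the optimizer
        # reshapes its own optimizing method.
        if after > 0.4 * before:
            step_size *= 2.0
        print(f"round {r}: x={x:.4f} objective={after:.6f} step={step_size}")

run()
</pre>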
<p>
Thus, it is possible to know conclusively
that any generalized learning necessarily
entangles notions of 'self modification',
and thus inherently also involves notions of
'change dynamics in/of substrate' (aka 'adaptation'),
<b>regardless</b> of the timescale
we happen to be concerned with.
The mere fact that some of these processes of change --
things involving substrate (adaptation again),
aka learning via the dynamic of "evolution" --
are very much faster and/or slower than others
(and/or whether the optimization is fast or slow)
is simply not relevant, not important at all,
when electing to <b>actually</b> think about
the inherent long term implications
of any given critical act
(ie; choosing whether or not
to invent/use AGI/APS/superintelligence, etc).
</p>
<p>
And overall, we are considering the long term.
Therefore the 'slowness' of evolution,
as a kind of learning (adaptive) process,
simply does not matter --
the overall effects are <b>inexorable</b>, eventually.
</p>
<p>
Care is needed to <b>not</b> be distracted
by thinking of 'optimization' as somehow
meaning/implying "fast" or "perfected";
it is rather to think of 'optimal'
as arriving at something which is
inexorably leading to that which is
unchanging and final, an immutable truth --
ie, as an inexorable outcome, a completed
unchanging eventuality, result, or state
(singleton point/state in the overall
phase space of possible changes).
</p>
<p>
Learning, evolution, and optimization
(whether by gradient descent or some other
algorithm, technique, methodology, etc)
are all <b>convergent</b> processes.
It is the general principle of convergence
that is important here, particularly when
concepts like 'adaptation' and 'evolution'
are inherently also binding their substrate
into that convergence.
</p>
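<p>
(As a minimal illustration of this point about convergence --
again a toy sketch with assumed numbers,
not anything taken from the argument itself --
two gradient descent runs of very different speeds
arrive at the same end state.)
</p>
<pre>
# Toy sketch: the speed of a convergent process differs;
# the eventuality it converges to does not.

def descend(start, rate, steps):
    x = start
    for _ in range(steps):
        grad = 2.0 * (x - 5.0)   # gradient of (x - 5)^2; minimum at x = 5
        x -= rate * grad
    return x

fast = descend(start=0.0, rate=0.4, steps=50)
slow = descend(start=0.0, rate=0.01, steps=5000)
print(f"fast run ends near {fast:.6f}; slow run ends near {slow:.6f}")
# Both runs end at (effectively) x = 5; only the pace differed.
</pre>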
<p>
What matters is the essence of the ultimate
eventuality of that convergence,
far more than the working method
of the dynamic of the convergence process.
Ie; which specific algorithm, automation,
or method of learning/optimization is used --
none of these things matter very much
in comparison to the evaluation of outcomes.
</p>
<p>
Moreover, in the same way that every cause
has more than one effect and every effect
has more than one cause,
it is also the case that learning dynamics
(those involving more than one domain of action)
are also going to be inherently multiple.
No embodied system involves just one
learning/optimization dynamic,
regardless of how it is built, etc.
(The mere fact of being embodied
inherently in itself is already
multiple domains of action).
Ie, in any real world system,
there is always more than one
learning dynamic occurring,
more than one feedback cycle,
and 'optimization'
will also always occur,
even if these peripheral dynamics
are 'slower' and/or 'less obvious'
and perhaps more tacit, given that they
were not specifically 'built in'
as the 'main operating concept' (SGD, etc).
Therefore, when considering the overall
eventual outcome, the inexorableness
of these other implied learning dynamics
can be, and become, even more important.
</p>
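<p>
(Again purely as an assumed illustrative toy,
this point about multiple concurrent learning/feedback dynamics --
one designed in, one merely implied --
can be sketched as two coupled update rules,
where the slower, implicit one still converges
to its own endpoint, given enough time.)
</p>
<pre>
# Toy sketch: a fast, intended optimization dynamic alongside a slow,
# unintended side dynamic. Over short horizons only the intended one
# seems to matter; over long horizons the implicit one has also
# converged, and dominates the final state. Assumed numbers throughout.

def simulate(steps):
    param = 10.0      # the variable the designed optimizer explicitly tunes
    drift = 0.0       # a slow side effect of running the system at all
    for _ in range(steps):
        param -= 0.2 * param                # fast, built-in dynamic
        drift += 0.001 * (1.0 - drift)      # slow, tacit dynamic
    return param, drift

for steps in (10, 100, 10000):
    p, d = simulate(steps)
    print(f"after {steps:6d} steps: designed param ~ {p:.4g}, implicit drift ~ {d:.4f}")
</pre>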
<p>
:v3y
> Why does technology
> evolve so much faster
> than organic life?
</p>
<p>
Technology evolution occurs in a virtualized sense,
as a kind of simulation,
whereas organic evolution generally occurs
in just an embodied context,
in real atoms.
</p>
<p>
Changing patterns is a <b>lot</b> easier,
and very much faster,
than moving atoms around.
So the usual practice is to
do the design and invention work
in a simplified simulation environment
and then to test/deploy in the space of atoms.
</p>
<p>
When any such design process becomes automated,
and when the things which are designed
are the automatons that design themselves,
then the inherent circularity of that process
starts to have its own inherent implications.
Learning as a learning process,
inclusive of evolution as a specific exemplar,
will do its own thing, ie, learn what is learnable.
Insofar as the design to build process
inherently involves substrate,
then the learning process which is evolution
will therefore necessarily entangle
a learning discovery of the very laws
(regularities) of the physics of atoms --
particularly those sorts of atoms (artificial)
out of which the self replicating/extending
and/or capability building capability
is itself created.
</p>
<p>
We ignore these effects at our true peril.
</p>
<p>
:v5j
> How is it that machines,
> or technology for that matter,
> can evolve?
</p>
<p>
Insofar as software and hardware can be considered
as 'virtualized' and 'embodied' respectively,
and insofar as there is a physics (compute)
which can translate software (design plans)
into hardware (manufacturing),
and a means by which hardware
can contain software (machine memory),
then it only remains to consider
how (complicated) changes occur,
in relation to the actual complex environments --
the ecosystems --
in which instances of these machines are embedded,
and in which, in turn, code is embedded in each instance.
With this notion/concept of a 'whole system'
it is then possible to consider the means and methods
by which changes to the 'source code/plan/pattern' can occur --
ie; the three categories or 'types' of changes
inherent in the dynamics of evolution itself.
Insofar as change is overwhelmingly likely,
and will for sure occur
within at least one of the three types,
then evolution, as a properly applied
epistemic process concept,
cannot not be considered as occurring.
</p>
<p>
With the biological evolution process,
any/all such changes
to any virtualized code/plan/pattern
must themselves occur through
the mediation of the organic substrate --
actual embodied atoms --
such that the time duration of the change process itself
is gated on how quickly atoms can be moved around,
as a means by which experiments/trials
can be implemented/tested,
so as to find 'what works',
in the ground of the actual physical universe.
</p>
<p>
It was a significant upgrade
to have humans begin to be able
to learn, and process information abstractly,
in pure pattern space,
without having to mediate everything --
all possible experiment and exploration --
purely/only through atoms.
Rather than being 'pre-programmed'
to be responsive to an environment via a brain,
humans had a 'social process' that,
via inter-generational cultural transmission,
'socialized a person',
and hence gave them a toolset
for how to interact with local environments
(inclusive of culture, tribe, etc).
While something was lost in terms of immediate responsiveness
from the moment of being born,
in terms of 'built in instincts',
human animals had a very long gestation time,
and an even longer "childhood time",
by which they were given 'custom firmware'
with which to live their lives.
</p>
<p>
Since the process of 'learning'
was more virtualized than 'evolution',
and was less contingent on moving atoms around,
which takes time,
the process of social learning and species adaptation,
in terms of learning,
could occur over much shorter timescales
than those in which purely biological evolution could occur.
</p>
<p>
:v74
> Is there a 'speeding up' factor
> as evolution becomes more virtualized?
</p>
<p>
Yes.
Moreover, the notion of change
as mediated via a basis of code
is a vast speedup over
what had come previously.
</p>
<p>
The main issues here have to do with
how the notion of change is represented,
how it occurs.
</p>
<p>
'Alignment' in the general sense
is a conditionalization on change --
an attempt to make some types of changes
more possible than others,
or to prohibit certain types of changes
from occurring at all.
Hence, we do need to actually understand,
at least observationally,
something about the inherent nature and relationships
between the concepts of choice, change, and causation,
and how those concepts inter-relate,
in actual practice,
for us to understand the general notion of alignment,
and what is, and is not, possible,
in that regard.
</p>
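<p>
(To make this notion of 'a conditionalization on change' concrete,
here is a toy sketch -- assumptions mine, not the essay's --
in which proposed changes to a system's state are filtered,
so that some kinds of change are permitted
and others are prohibited from occurring at all.)
</p>
<pre>
# Toy sketch of 'alignment as a conditionalization on change'.
# A stream of candidate changes is filtered by a condition; the
# condition below (an allowed band around zero) is an arbitrary
# stand-in for any constraint on which changes may occur.
import random

def proposed_change():
    # Some arbitrary source of candidate changes (e.g. a learning update).
    return random.uniform(-2.0, 2.0)

def permitted(state, delta):
    # Prohibit changes that would push the state outside the allowed band.
    return not abs(state + delta) > 5.0

def run(steps=1000):
    random.seed(0)
    state, rejected = 0.0, 0
    for _ in range(steps):
        delta = proposed_change()
        if permitted(state, delta):
            state += delta      # this type of change is allowed to occur
        else:
            rejected += 1       # this type of change is prohibited
    print(f"final state {state:.3f}; rejected {rejected} of {steps} changes")

run()
</pre>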
<p>
:v8y
> Does understanding the essence of evolution
> represent a certain understanding about change?.
</p>
<p>
Yes. 'Evolution' is a process in itself,
in addition to being a kind of 'learning process',
as a particular subset of a larger
and much more generalized notion of 'process',
itself sub-classed as 'evolution process'.
We can use this understanding to notice certain principles,
which will then enable us to predict,
with excellent confidence,
certain general changes and outcomes --
ones which are particularly important to our future,
as a species.
</p>
<p>
At a certain point,
the organization of larger multi-cellular life
became more coherent in its response to
increasingly complex and varied environments
by developing sense organs, neural tissues, and muscles.
This enabled much more complex
'assess and react actuation',
even though these responses
were largely 'pre-programmed'
in the connectivity structure of the brains.
</p>
<p>
The next development was to
generalize these otherwise specialized,
single purpose, single creature/environment brains
to upgrade them to 'general purpose brains',
ones which could have their 'firmware' loaded in at runtime --
enabling adaptiveness in all sorts of environments.
</p>
<p>
:vaj
> What do you mean by 'firmware'?
> I thought we were talking about
> the basic progression of
> human evolution and development?
</p>
<p>
Everything we learn, as humans,
up to the age of puberty, or so --
all of that information --
about how we adapt
to whatever environment we grow up in,
and the enculturation process itself,
that is all "firmware" --
at least as that concept is applied to 'humans'.
</p>
<p>
Some of us, for various reasons,
have had to, and been obligated to,
learn how to 'hack' our own bio-firmware.
This involves lots of things like 'healing'
and health, biology, psychotherapy,
developmental psych, evolutionary history,
learning about neuro-diversity issues, etc,
so as to deal with whatever traumas --
a kind of mis-programming of our imagination
about what sorts of choices are even possible,
let alone desirable, let alone
practically and realistically attainable, etc --
as concepts and teachings
given to us by our parents, our culture,
by our communities, nation states.
This was (maybe) great for dealing with
the environments <b>they</b> lived in --
the incumbents, our leaders, parents, etc,
but is probably not very helpful for us,
for living in the environments that we now live in.
The world continues to change,
and we must adapt to that.
</p>
<p>
In the modern world, everything is changing.
It is doing so more and more quickly.
In the last 40 or so years,
each generation has had to re-learn
and re-create themselves, their own culture,
what it means to be a 'good person', etc,
and to re-invent the notion of
what is best to be valued, etc.
</p>
<p>
:vc4
> By what methodology would it be --
> is it possible --
> to fully ensure and guarantee
> AGI/APS/superintelligence alignment/safety?
</p>
<p>
In contrast to many prior works on this topic,
which attempt to establish a basis by which alignment
can be created, enforced, etc,
it is herein being suggested that it is better --
or at least possible --
to seek to establish a means by which
one can know for sure
that no such concept of alignment is possible,
in any reasonable long term perspective.
In this particular situation,
we are actually considering the opposite question.
</p>
<p>
:vdn
> Someone has suggested that we ask instead:
> Is it possible to show
> that there is <b>no</b> possible concept
> of AGI/APS alignment and/or of safety?.
</p>
<p>
Yes, that is correct.
It <b>is</b> possible to show that the concept
of AGI/APS alignment and/or of safety
is internally inconsistent, in both the sense
of 'not long term possible/practical in the real world',
and also of 'not even theoretically possible',
even in (absolute abstract) principle
(and/or via the principles of modeling itself, etc).
</p>
<p>
In other words, to be really especially clear:
that there is <b>no</b> possible way
to make AGI/APS, or <b>any</b> form/type/mode/model
of artificial superintelligence 'safe'
and/or 'aligned with human interests',
at all, ever, in <b>any</b> physical world
where there is any real actual distinction
between 'artificial' and 'natural'
as understood as inherent functional distinctions
in the very nature of the chemistries involved,
and where some notion of substrate,
and therefore also of evolution,
as a learning and adapting algorithm,
is inherently (cannot not be) involved.
</p>
<p>
:vf8
> Is it possible to show or to prove,
> or to conclusively and comprehensively demonstrate
> that there is no realistic, or even conceivable
> attainable notion of AGI/APS alignment,
> even in principle?.
</p>
<p>
> Can this even be done?
</p>
<p>
Yes, it can.
</p>
<p>
> By what methodology, or basis of thinking --
> what conceptual toolset of principles --
> would we be able to show such an 'impossibility proof'?
</p>
<p>
The claim we are attempting to establish
is that the notion of 'long term AGI alignment'
(and thus also of 'planetary safety' and similar)
is fundamentally, structurally, and obviously impossible.
With a careful explication as to
the basic and common sense meanings of the terms,
the effort herein this dialogue essay
is to make it as simple and as obvious as possible
that it is actually impossible to get AGI alignment,
in the long term.
</p>
<p>
This comes down to asking
the right sort of questions,
and having the answers be appropriately
and self-evidentially clear.
That is what we are attempting to do.
</p>
<p>
So how to begin?
</p>
<p>
We notice that a lot of things
get easier to think about
when extending the time scale out far enough.
Stuff that was confusing,
or which seemed to be important in the short term,
or which was specific to local circumstances,
and which was not actually that defining,
turns out to simply become inconsequential.
</p>
<p>
When thinking much longer term,
in larger and more general ways,
more reliable principles emerge.
From this, we can start to see the bigger picture
of what sorts of questions we are actually asking,
and notice what is actually important
much more easily.
</p>
<p>
:vjn
> How do we find those more general principles?
</p>
<p>
The overall schema is to really think about
the relationship between AGI/APS/superintelligence
as represented by machines
and 'carbon-based life forms' --
as represented by all of the biological stuff
that is currently going on.
</p>
<p>
We also want to get the most basic and general notion
of the actual question we need an answer to.
</p>
<p>
:vne
> Can we have machine intelligence be aligned
> if it has agency of its own,
> if it has its own capacities
> to make choices at all?.
</p>
<p>
In other words, we need to think in terms of
the relationship between choice, change, and causation,
as first principles and concepts.
In this sense, the notion of 'alignment' is about choices