
Artificial General Intelligence: Concept, State of the Art, and Future Prospects #10

markroxor opened this issue Jul 24, 2017 · 12 comments

@markroxor

https://intelligence.org/2013/08/11/what-is-agi/
https://pdfs.semanticscholar.org/72e1/4804f9d77ba002ab7f1d3e3a5e238a3a35ca.pdf

Ben Goertzel - Chief Scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; Chairman of AI software company Novamente LLC

@markroxor

Below I consider four operational definitions for AGI, in (apparent) increasing order of difficulty.

@markroxor

The Turing test ($100,000 Loebner prize interpretation)
The exact conditions for winning the $100,000 prize will not be defined until a program wins the $25,000 “silver” prize, which has not yet been done. However, we do know the conditions will look something like this: A program will win the $100,000 if it can fool half the judges into thinking it is human while interacting with them in a freeform conversation for 30 minutes and interpreting audio-visual input.

@markroxor

The coffee test

Goertzel et al. (2012) suggest a (probably) more difficult test — the “coffee test” — as a potential operational definition for AGI:

go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc.

If a robot could do that, perhaps we should consider it to have general intelligence.

@markroxor

The robot college student test

Goertzel (2012) suggests a (probably) more challenging operational definition, the “robot college student test”:

when a robot can enrol in a human university and take classes in the same way as humans, and get its degree, then I’ll [say] we’ve created [an]… artificial general intelligence.

@markroxor

Nils Nilsson, one of AI's founding researchers, once suggested an even more demanding operational definition for "human-level AI" (what I've been calling AGI), the employment test:

Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or "jobs" at which people are employed. I suggest we replace the Turing test by something I will call the "employment test." To pass the employment test, AI programs must… [have] at least the potential [to completely automate] economically important jobs.

@markroxor

As late as 1976, I.J. Good asserted that human-level performance in computer chess was a good signpost for AGI, writing that “a computer program of Grandmaster strength would bring us within an ace of [machine ultra-intelligence].”

But machines surpassed the best human chess players about 15 years ago, and we still seem to be several decades away from AGI.

The surprising success of self-driving cars may offer another lesson in humility. Had I been an AI scientist in the 1960s, I might well have thought that a self-driving car as capable as Google’s driverless car would indicate the arrival of AGI. After all, a self-driving car must act with high autonomy, at high speeds, in an extremely complex, dynamic, and uncertain environment: namely, the real world. It must also (on rare occasions) face genuine moral dilemmas such as the philosopher’s trolley problem. Instead, Google built its driverless car with a series of “cheats” I might not have conceived of in the 1960s — for example by mapping with high precision almost every road, freeway on-ramp, and parking lot in the country before it built its driverless car.


@markroxor

The trolley problem:

“The general form of the problem is this: Person A can take an action which would benefit many people, but in doing so, person B would be unfairly harmed. Under what circumstances would it be morally just for Person A to violate Person B’s rights in order to benefit the group?” Or, as Nicholas Thompson, editor of the New Yorker, put it: “Your driverless car is about to hit a bus; should it veer off a bridge?”

@markroxor

Core AGI hypothesis: the creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability is, at bottom, qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.

@markroxor

The border between AI and advanced algorithmics is often considered unclear. A common joke is that, as soon as a certain functionality has been effectively achieved by computers, it is no longer considered AI. The situation with the ambiguity of "AGI" is certainly no worse than that with the ambiguity of the term "AI" itself.

@markroxor

In subsequent years, psychologists began to question the concept of intelligence as a single, undifferentiated capacity. There were two primary concerns. First, while performance within an individual across knowledge domains is somewhat correlated, it is not unusual for skill levels in one domain to be considerably higher or lower than in another (i.e., intra-individual variability). Second, two individuals with comparable overall performance levels might differ significantly across specific knowledge domains (i.e., inter-individual variability).
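A toy numeric example makes the two kinds of variability concrete. The scores below are hypothetical and purely illustrative: two individuals can have identical overall performance while differing sharply within and between domains.

```python
# Hypothetical per-domain test scores for two individuals (illustrative only).
scores = {
    "alice": {"verbal": 130, "spatial": 95, "numeric": 120},
    "bob":   {"verbal": 100, "spatial": 125, "numeric": 120},
}

def overall(person):
    """Mean score across knowledge domains for one person."""
    vals = scores[person].values()
    return sum(vals) / len(vals)

# Comparable overall performance...
print(overall("alice"))  # 115.0
print(overall("bob"))    # 115.0

# ...yet notable intra-individual spread (max - min within one person),
# and the two differ on every individual domain except "numeric".
spread = {p: max(d.values()) - min(d.values()) for p, d in scores.items()}
print(spread)  # {'alice': 35, 'bob': 25}
```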

@markroxor

markroxor commented Aug 3, 2017

The list is presented as a list of broad areas of capability, each
one then subdivided into specific sub-areas:
• Perception
– Vision: image and scene analysis and understanding
– Hearing: identifying the sounds associated with common objects; understanding which
sounds come from which sources in a noisy environment
– Touch: identifying common objects and carrying out common actions using touch alone
– Crossmodal: Integrating information from various senses
– Proprioception: Sensing and understanding what its body is doing
• Actuation
– Physical skills: manipulating familiar and unfamiliar objects
– Tool use, including the flexible use of ordinary objects as tools
– Navigation, including in complex and dynamic environments
• Memory
– Implicit: Memory the content of which cannot be introspected.
– Working: Short-term memory of the content of current/recent experience (awareness).
– Episodic: Memory of a first-person experience (actual or imagined) attributed to a
particular instance of the agent as the subject who had the experience.
– Semantic: Memory regarding facts or beliefs
– Procedural: Memory of sequential/parallel combinations of (physical or mental) actions,
often habituated (implicit)

• Learning
– Imitation: Spontaneously adopt new behaviors that the agent sees others carrying out
– Reinforcement: Learn new behaviors from positive and/or negative reinforcement
signals, delivered by teachers and/or the environment
– Imitation/Reinforcement
– Interactive verbal instruction
– Learning from written media
– Learning via experimentation
• Reasoning
– Deduction, from uncertain premises observed in the world
– Induction, from uncertain premises observed in the world
– Abduction, from uncertain premises observed in the world
– Causal reasoning, from uncertain premises observed in the world
– Physical reasoning, based on observed “fuzzy rules” of naive physics
– Associational reasoning, based on observed spatiotemporal associations
• Planning
– Tactical
– Strategic
– Physical
– Social
• Attention
– Visual Attention within the agent’s observations of its environment
– Social Attention
– Behavioral Attention
• Motivation
– Subgoal creation, based on the agent’s preprogrammed goals and its reasoning and
planning
– Affect-based motivation
– Control of emotions
• Emotion
– Expressing Emotion
– Perceiving / Interpreting Emotion
• Modeling Self and Other
– Self-Awareness
– Theory of Mind
– Self-Control
– Other-Awareness
– Empathy
• Social Interaction
– Appropriate Social Behavior
– Communication about and oriented toward social relationships
– Inference about social relationships
– Group interactions (e.g. play) in loosely-organized activities
• Communication
– Gestural communication to achieve goals and express emotions
– Verbal communication using English in its life-context
– Pictorial communication regarding objects and scenes
– Language acquisition
– Cross-modal communication
• Quantitative
– Counting sets of objects in its environment
– Simple, grounded arithmetic with small numbers
– Comparison of observed entities regarding quantitative properties
– Measurement using simple, appropriate tools
• Building/Creation
– Physical: creative constructive play with objects
– Conceptual invention: concept formation
– Verbal invention
– Social construction (e.g. assembling new social groups, modifying existing ones)

Leaves no stone unturned. This is exactly what makes a human, human.
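As an aside, the taxonomy above can be captured as a plain data structure for tracking which sub-areas a given system has actually been evaluated on. A minimal sketch (only a subset of areas shown; names abbreviated and purely illustrative, not from the paper):

```python
# A subset of the Goertzel et al. capability taxonomy as a dict mapping
# each broad area to its sub-areas, plus a simple coverage checker.
AGI_CAPABILITIES = {
    "Perception": ["Vision", "Hearing", "Touch", "Crossmodal", "Proprioception"],
    "Actuation":  ["Physical skills", "Tool use", "Navigation"],
    "Memory":     ["Implicit", "Working", "Episodic", "Semantic", "Procedural"],
    "Learning":   ["Imitation", "Reinforcement", "Verbal instruction",
                   "Written media", "Experimentation"],
    "Reasoning":  ["Deduction", "Induction", "Abduction", "Causal",
                   "Physical", "Associational"],
}

def coverage(evaluated):
    """Fraction of taxonomy sub-areas covered by `evaluated`,
    a set of (area, sub_area) pairs."""
    total = sum(len(subs) for subs in AGI_CAPABILITIES.values())
    hits = sum(1 for area, subs in AGI_CAPABILITIES.items()
               for s in subs if (area, s) in evaluated)
    return hits / total

print(coverage({("Memory", "Working"), ("Reasoning", "Deduction")}))
```

Nothing deep, but it shows how sparse most "AGI" evaluations are against the full list.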

@markroxor

The recent work of Legg and Hutter (Legg and Hutter, 2007b) gives a formal definition of general intelligence based on the Solomonoff-Levin prior. Put very roughly, they define intelligence as the average reward-achieving capability of a system, calculated by averaging over all possible reward-summable environments, where each environment is weighted in such a way that more compactly describable programs have larger weights.
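Written out compactly (following the notation of the Legg and Hutter paper), the universal intelligence of an agent π is its complexity-weighted expected reward across all computable environments:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where E is the set of computable reward-summable environments, K(μ) is the Kolmogorov complexity of environment μ (so environments with shorter descriptions receive exponentially larger weights 2^{-K(μ)}), and V_μ^π is the expected total reward agent π achieves in environment μ.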
