ACM Computing Surveys
28(4es), December 1996,
http://www.acm.org/pubs/citations/journals/surveys/1996-28-4es/a4-doyle/. Copyright ©
1996 by the Association for Computing Machinery, Inc. See the permissions statement below.
This article derives from a position statement prepared for the
Workshop on
Strategic Directions in Computing Research.
Cleaving (Unto) Artificial Intelligence
Jon Doyle
Massachusetts Institute of Technology, Laboratory for Computer Science
545 Technology Square, Cambridge, MA 02139-3539, USA
doyle@medg.lcs.mit.edu, http://www.medg.lcs.mit.edu/doyle
Abstract: To survive and thrive, whether
intellectually or as a force in industry, artificial intelligence (AI)
must better identify its unique contributions. Failing this, it faces
disrespect, decay, and dissolution. I believe one obtains a better
identification by dividing AI's subjects of study into the
categories of rational psychology, psychological engineering, and
articulating intelligence than by highlighting AI's characteristic
methods of seeking computational-complexity explanations and
simulating extremely complex semi-numerical models.
Major fields cannot survive without major problems or methods unique
unto themselves. AI's recent and traditional difficulties lie in
lacking such unique problems and methods, at least in the view of
outsiders. The field has never agreed on its own definition, other
than by stipulating that it traditionally encompasses several
subjects, notably ``making intelligent machines'' and ``understanding
intelligence''. These problems, however, do not possess the
uniqueness requisite to a healthy discipline.
- The problem of understanding intelligence sits too close to the older
fields of psychology and ethology (even theology), if taken to mean
understanding intelligences we observe. This problem becomes unique
only when taken to mean understanding possible intelligences other
than those we observe.
- The problem of making intelligent machines sits too close to the more
general activity of making more useful and capable machines, as
reflected in the truism that successful AI systems generally no longer
count as AI. The field has proven most successful at mechanizing
isolated functions of intelligence, but differentiating this activity
from that of building special-purpose machines generally appears
hair-splitting at best and impossible at worst. The problem becomes
unique only when taken to mean making machines that reason.
Even if we abandon these superficial but traditional ``mission
statements'' and look more deeply, we still see that AI shares most of
its major intellectual problems with other fields: understanding
knowledge, reasoning, and rationality with logic, philosophy,
psychology, economics, and sociology; understanding perception,
motion, and manipulation with physiology, anatomy, and psychology;
understanding language with linguistics and philosophy; understanding
planning with economics and operations research. Surely no important
aspect of what makes humans special has remained foreign to other
fields, for---knowledge aside---people today have the same functions
and abilities they have had for thousands of years.
Looking back at the history of the field, the methods of AI appear
more distinctive than the problems addressed.
- The first new method employed by AI consisted of examining
traditional problems by constructing and using symbolic or
semi-numerical models of a complexity and character never seen before.
Traditional modeling methods stretched to consider small sets of
differential equations or linear equations of at most a couple
thousand dimensions, but the nonlinear systems represented by computer
programs of moderate size far outstrip these predecessors in
complexity and potential difficulty of analysis. The name ``complex
information processing'', favored early on by Newell and Simon for AI,
understates the advance that this method represents for the field.
- AI's second new method, inspired by the computational models,
consisted of seeking explanations for psychological phenomena and
justifications for putative psychological structures in terms of
the notion of computational complexity, explaining observed
limitations on
the basis of the difficulty of computing something with a Turing
machine (or equivalent device), and postulating one structure over
another on the basis of computational advantages. The mechanistic
assumptions presupposed by this method may have embroiled AI in
philosophical and religious disputes, but the method has proved
fruitful in specific psychological investigations quite
independent of general suppositions about the nature of man.
One need not search far for the difficulty posed by relying on
methodological distinctions for intellectual survival; most people with
problems will adopt any methods they find useful, and some of the
hidden success of AI has been evident in the degree to which
psychology, philosophy, linguistics, etc., have adopted the modeling and
explanatory approaches championed by AI. But once the methods of AI
diffuse among the fields, what role remains for AI? Its main
reaction to date to this diffusion of method---identifying specific
techniques (rules, frames, what have you) as AI and all others as
not---has been distinctly damaging to its credibility,
especially as the specific mechanisms exploited by AI have precursors
and in some cases independent developments in more traditional fields.
Thus these useful and important methods do not promise to ensure the
existence of AI. Stick with them alone, and the field will continue
to fragment, with other fields absorbing (or ignoring) the fragments.
For these reasons, AI must rethink itself and either identify problems
unique to the field or give up and go home to the traditional fields.
I entertain the possibility that the option of dissolution might prove
right, but believe that AI should first try reviewing its purpose or
purposes. I propose to abandon the expiring patent on the
methods of AI and to find truly unique roles for AI along a cleavage
induced by AI's broad conception of intelligence.
I see AI's most dramatic break with the past not in its computational
methods but in its conception of intelligence divorced from
embodiments in the people and creatures we find around us. Rather
than limit attention to what already exists, AI contemplates what
might, and views human and other extant intelligences as particular
forms of a broader universal. No field before AI has studied the
question of understanding intelligence construed this broadly, though
philosophers and science-fiction writers speculate on special
possibilities. The first step to finding uniqueness in AI lies in
distilling out the problems related to intelligence broadly construed.
Do this, and two natural fields emerge.
- The first field, rational psychology, addresses the
problem of understanding the full range of possible psychologies---by
which I mean possible organizations for minds---and classifying them
according to their structures and properties. Though others might
differ, I view the problem of understanding and
classifying possible psychologies as an essentially mathematical
one. Rational psychology seeks to find the most appropriate
concepts with which to characterize and describe psychologies, so as
to understand the nature of and connections between psychological
concepts. It does not presuppose or require any sort of rationality
of the psychologies under investigation. I adopt the term ``rational
psychology'', an old term used by Kant and James to mean philosophical
psychology, in analogy with rational mechanics, the mathematical or
conceptual investigation of mechanics. While the name has not spread
much, the idea certainly has correspondents in some subsequent
activities. I originally suggested this as the proper conception
of what some call ``cognitive science'', but I later came
to think that term more apt for the broader, not entirely mathematical
subject. More recently, Glymour and others have employed the term
``android epistemology'' to cover all possible psychologies, but that
term, in conflict with its advertised meaning, suggests a much
narrower conception of knowledge alone in human-like beings. I thus
prefer the broader, more descriptive, and more traditional term.
- Psychological engineering addresses the problem of
constructing psychologies that exhibit specified properties, of
finding economical designs for implementing or mechanizing agents with
specified capacities or behaviors. This problem characterizes much
work in AI, and represents the general engineering problem specialized
to psychologies rather than to mechanisms, chemicals, genes,
etc., hence the name psychological engineering.
This cleavage of the field requires only two additional elements to
capture essentially all of AI. Part of AI studies computational
models of human psychologies, but I classify this as a shared subfield
of psychology proper, with its traditional presence within AI an
artifact mainly of the desirability of sharing computers and code. In
the modern computing environment, these tethers loosen daily, and I
expect this part of the field quickly to return to psychology itself,
if indeed it ever really left it. The remaining element, seen most
visibly in the areas of knowledge-based systems and commonsense
knowledge, consists of the activity of articulating
intelligence, of codifying common and refined knowledge and
methods in all topics of human endeavor. Long antedating computers,
this work traditionally goes on in every field, and grows more formal
and explicit over time. I believe AI has significant leverage to
exert on other fields here, by propagating its techniques for formal
representations of knowledge. As long as rational psychology and
psychological engineering remain unique enterprises and continue to
bear fruit, work on articulating intelligence and casting it in new
forms will remain a unique activity of the field.
AI might survive without rethinking itself, but only by virtue of
increasingly hyperbolic advertising, or by the personal longevity of
its adherents. But this sad fate seems unworthy of the gems obscured
by the overburden of old tales about the aims and contributions of AI.
To survive, AI must offer problems and methods other fields can
respect, and it must share knowledge and technique with these fields
when it also shares their problems. The latter transformation of AI,
from intellectual isolate to conceptual trader, has been accelerating
for some time, so its survival reduces to ensuring that it has
something left at the end of this exchange. People may disagree about
whether rational psychology and psychological engineering provide the
right cleavage of AI's gems. But these fields do possess important,
clearly understandable tasks all their own. I propose that AI reset its
work to expose these gems, leave its intellectual parents, and
cleave to this new joint identity.
Acknowledgments: I thank Joseph Schatz for valuable
discussions and MIT for its support over the years. The position
argued here restates one presented in papers of 1982--1994 listed
below. The image of cleaving the field draws inspiration from George
Miller's essay on ``dismembering cognition.''
Bibliography
- Doyle, J., 1982. The foundations of psychology: a logico-computational
inquiry into the concept of mind, Carnegie Mellon University,
Computer Science Department, Report 82-149.
Revised version published in Philosophy and AI: Essays at the
Interface (R. Cummins and J. Pollock, eds.), Cambridge: MIT Press
(1991), 39-77.
- This paper develops the conception of studying all possible
psychologies.
- Doyle, J., 1983. What is rational psychology? toward a modern mental
philosophy, AI Magazine, Vol. 4, No. 3, 50-53.
- This paper reintroduces and explains the field of rational psychology.
- Doyle, J., 1988. Big problems for artificial intelligence,
AI Magazine, Vol. 9, No. 1, 19-22.
- This editorial suggests essentially the same division of AI
as proposed here.
- Doyle, J., 1994. Reasoned assumptions and rational psychology,
Fundamenta Informaticae, Vol. 20, No. 1 (Spring 1994).
- This article provides an example of my own work in rational
psychology.
- Miller, George A., 1986.
Dismembering cognition, in One Hundred Years of Psychological
Research in America (Hulse, S. H. and B. F. Green, Jr.,
eds.). Baltimore: Johns Hopkins University Press, 277-298.
- Miller's essay focuses on how different theories of
psychology divide the subject with different concepts, few
of which cleanly ``divide the subject at its joints''.
Permission to make digital
or hard copies of part or all of this work for personal or classroom
use is granted without fee provided that copies are not made or
distributed for profit or commercial advantage and that copies bear
this notice and the full citation on the first page. Copyrights for
components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, to
republish, to post on servers, or to redistribute to lists, requires
prior specific permission and/or a fee. Request permissions from
Publications Dept, ACM Inc., fax +1 (212) 869-0481, or
permissions@acm.org.