2003-10-12
The Sciences of the Artificial, notes
(book by Herbert Simon, early systems theory.)
#
p8
The reasons objects have boundaries...
You'd expect a continuum between what can be affected and what can't --
things manipulated in order to survive better in the environment. But
manipulation is easier in the internal environment, because it can be done
with the genome; everything outside is phenome and behaviour to manipulate.
So there's a surface tension -- in distance terms -- and the two (internal
and external) move further apart. (Because there's total knowledge of the
inside, whereas the outside requires sense organs.)
And also, we work by predicting behaviours based on other behaviours.
Behaviours are like shapes, and interfaces are always pretty similar.
From Sophie's World, last night [I wrote this paragraph on 2003-10-03], on
Plato: Plato identified the problem of "why do things behave in similar ways?"
-- and said they all obey the same "ideas"/ideals. Bateson in the book
(mind/nature) I just read *also* saw this problem. As did Deleuze (explained in
Delanda VS&IP), and solved it with multiplicities and manifolds. Which all
sounds a bit like the dynamic core of the brain (a universe of consciousness).
#
p13
"""
A bridge, under its usual conditions of service, behaves simply as a relatively
smooth level surface on which vehicles can move. Only when it has been
overloaded do we learn the physical properties of the materials from which it
is built.
"""
Cybernetic statements, or bridges, are wormholes or negative distance between
two points. But to be so they have to be seamless -- that is, in real life you
can cross a bridge without noticing you're leaving a road (within limits).
Online a link between two points, say in a computer program or a web
application, has very tiny limits indeed. Memory conditions, or having to
marshal into XML to make a function call. Telepresence isn't seamless either:
it's an imitation of somebody being around, you'd never get confused about
where the boundary was.
One day our wormholes will be seamless (within limits) and be proper bridges --
at the moment they're just fording rivers.
Also...
"""
Artificiality connotes perceptual similarity but essential difference,
resemblance from without rather than within. In the terms of the previous
section we may say that the artificial object imitates the real by turning the
same face to the outer system, by adapting, relative to the same goals, to
comparable ranges of external tasks. Imitation is possible because distinct
physical systems can be organized to exhibit nearly identical behaviour. The
damped spring and the damped circuit obey the same second-order linear
differential equation; hence we may use either one to imitate the other.
"""
Shapes! I think it's interesting that Simon really pushes the point that
systems can be similar. That's the conduit metaphor all over. So he
concentrates a lot on the similar behaviour of systems, but not enough on the
functions that join levels together, that distinguish shapes (interfaces,
faces) from one another and allow implicature between levels.
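(To convince myself of the spring/circuit point, a sketch of my own -- the parameters are invented, and the only claim is Simon's: matching coefficient ratios in the same second-order linear ODE give matching behaviour.)

```python
# Sketch (mine, not the book's): a damped spring and a damped circuit both
# obey inertia*x'' + damping*x' + stiffness*x = 0. Distinct parameters with
# the same ratios produce the same trace, so either system imitates the other.

def simulate(inertia, damping, stiffness, x0=1.0, dt=1e-3, steps=20000):
    """Integrate inertia*x'' + damping*x' + stiffness*x = 0 (semi-implicit Euler)."""
    x, v = x0, 0.0
    trace = []
    for i in range(steps):
        a = -(damping / inertia) * v - (stiffness / inertia) * x
        v += a * dt
        x += v * dt
        if i % 2000 == 0:
            trace.append(x)
    return trace

# Mass-spring-damper: m = 2.0 kg, c = 0.8 N s/m, k = 8.0 N/m.
spring = simulate(inertia=2.0, damping=0.8, stiffness=8.0)
# Series RLC circuit: L = 1.0, R = 0.4, 1/C = 4.0 -- same ratios, so the
# charge q(t) traces the same damped oscillation as the displacement x(t).
circuit = simulate(inertia=1.0, damping=0.4, stiffness=4.0)

assert all(abs(s - c) < 1e-9 for s, c in zip(spring, circuit))
```

(Same shape, different matter -- which is exactly the "imitation" Simon means.)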
#
p17
"""
Resemblance in behaviour of systems without identity of the inner systems is
particularly feasible if the aspects in which we are interested arise out of
the organization of the parts, independently of all but a few properties
of the individual components.
"""
and
"""
A computer is an organization of elementary functional components in which, to
a high approximation, only the function performed by those components is
relevant to the behaviour of the whole system.
"""
So we model reality into levels, and *then* we build artificial systems that
conform even more strictly to our model.
#
p19
Some nice terms about the "common organizational features" of computers (and he
is indeed studying them like a biologist would an animal, as he said he would).
"""
They almost all can be decomposed into an active processor (Babbage's "Mill")
and a memory (Babbage's "Store") in combination with input and output devices.
"""
(There's an article in this month's IEEE Computer magazine about GPUs -- it
talks about the 'computational engines' on the chip.)
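(A toy of my own to make the Mill/Store split concrete -- the instruction set is invented, but the shape is Simon's point: an active processor cycling over a memory, with input and output devices at the edges.)

```python
# Toy machine (mine): Babbage's Mill (processor), Store (memory), plus I/O.
# A tiny accumulator instruction set, just to show the common organization.

def run(program, memory, inputs):
    """Fetch-decode-execute over the Store, with input and output channels."""
    acc, pc, out = 0, 0, []
    inputs = list(inputs)
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":    acc = memory[arg]      # Store -> Mill
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc      # Mill -> Store
        elif op == "IN":    acc = inputs.pop(0)    # input device
        elif op == "OUT":   out.append(acc)        # output device
        elif op == "HALT":  return out

# Read two numbers, add them via the store, emit the sum.
prog = [("IN", None), ("STORE", 0), ("IN", None),
        ("ADD", 0), ("OUT", None), ("HALT", None)]
print(run(prog, memory=[0] * 4, inputs=[3, 4]))  # -> [7]
```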
#
p34/35
"""
modern economies cannot function well without smoothly operating markets.
...
market processes commend themselves primarily because they avoid placing on a
central planning mechanism a burden of calculation that such a mechanism,
however well buttressed by the largest computers, could not sustain.
"""
Then why don't companies operate using markets? In fact, how could markets even
be interfaced with them? (Actually, Simon says later that there's an internal
environment and an external one, and it's the limit between that says where
hierarchic organisation is needed, and where market.)
But this is the good bit about markets:
"""
-- the computational limits of human beings:
The most significant fact about this system is the economy of knowledge
with which it operates, or how little the individual participants need to know
in order to be able to take the right action.
...
At least under some circumstances, market traders using a very small amount of
mostly local information and extremely simple (and non-optimizing) decision
rules, can balance supply and demand and clear markets.
"""
*Local* information! The distribution of information defines a space.
(There's a bit that follows about what information you need to make fully
rational decisions. Future behaviours of things. But that's computationally
expensive, so alternatively you can use feedback: we don't know when it's going
to snow, but still the snowploughs come out.)
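(A sketch of my own, loosely after Gode and Sunder's "zero-intelligence" traders -- which I think is the result Simon is alluding to. Each trader sees only its own private value or cost, and follows a single non-optimizing rule; trades still clear.)

```python
# Zero-intelligence-style traders (my construction, not from the book):
# *local* information only -- nobody sees anyone else's value or cost,
# or the aggregate supply and demand curves.
import random

random.seed(7)

buyers  = [random.uniform(0, 100) for _ in range(200)]   # willingness to pay
sellers = [random.uniform(0, 100) for _ in range(200)]   # cost of provision

trades = []
for _ in range(20000):
    if not buyers or not sellers:
        break
    b = random.randrange(len(buyers))
    s = random.randrange(len(sellers))
    bid = random.uniform(0, buyers[b])      # rule: never bid above own value
    ask = random.uniform(sellers[s], 100)   # rule: never ask below own cost
    if bid >= ask:
        trades.append((bid + ask) / 2)      # trade at the midpoint
        buyers.pop(b)
        sellers.pop(s)

average_price = sum(trades) / len(trades)
print(len(trades), round(average_price, 1))
```

(With values and costs uniform on [0, 100] the competitive equilibrium price is 50, and the zero-intelligence average tends to land in that neighbourhood -- with no optimizing and almost no knowledge.)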
#
p36
(a system can be steered using feedforward, predicting the future, and feedback
to correct errors of the past...)
"""
However, forming expectations to deal with uncertainty creates its own
problems.
"""
(Destabilising effects, etc. And that's 'expectations' in the Popper sense too.
ie, not just a response pattern, but response-to-the-response and so on.)
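(The two steering modes as a toy of mine: feedforward from an imperfect forecast, feedback mopping up the leftover error. All numbers are invented.)

```python
# Feedforward + feedback steering (my sketch): predict the disturbance
# ahead of time, then correct whatever the forecast missed.

def steer(setpoint, disturbances, forecast_error=0.3, gain=0.5):
    """Track `setpoint` against disturbances we can only partly predict."""
    value, history = 0.0, []
    for d in disturbances:
        predicted = d * (1 - forecast_error)  # feedforward: imperfect forecast
        value += d - predicted                # the unpredicted part hits us
        value += gain * (setpoint - value)    # feedback: correct past error
        history.append(value)
    return history

run = steer(setpoint=10.0, disturbances=[2.0, -1.0, 3.0] * 10)
print(round(run[-1], 2))  # hovers near the setpoint despite the bad forecast
```

(The snowplough case is pure feedback: forecast_error = 1.)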
I mailed this to Tom S, about "moving together" (as opposed to just association
networks) and this section (I've also just read Universe of Consciousness,
about the brain):
The concept of 'moving together' crops up a lot in cybernetics, and also recent
continental philosophy. Coordinated events link things together. Which makes
me think... I remember reading something about epilepsy and the corresponding
lack of consciousness as to do with the hypersynchronised firing of neurons.
Which sounds very much like what happens in a stockmarket crash (or in a
bubble): individually rational subunits (brokers) using information systems
(some shared, some not shared), and a pervasive external environment/value
system (the market figures) somehow synchronise and all sell at once, causing
the crash, but are *still* acting rationally. Which makes me wonder whether
there are lessons that could pass either way -- how can the synchrony be
disrupted? What causes the breakdown in homeostasis?
#
p45
"""
We can summarize our account of the respective roles of markets and
organizations in a modern society as follows: (1) organizations find their
niches wherever constellations of interdependent activities are best carried
out in coordinated fashion in order to remove the need for individuals'
outguessing each other; (2) the human motivation that makes organizations
viable and alleviates the public goods problems that arise when individual
efforts cannot be tied closely to individual rewards is provided by
organizational loyalty and identification; (3) in both organizations and
markets, the bounds on human rationality are addressed by arranging decisions
so that the steps in decision making can depend largely on information that is
locally available to individuals.
"""
#
p79
The footnote...
"""
I may mention in passing that Siklossy's system refutes John Searle's notorious
"Chinese Room Paradox," which purports to prove that a computer cannot
understand language. As Siklossy's program shows, if the room has windows on
the world (which Searle's room doesn't) the system matches words, phrases and
sentences to their meanings by comparing sentences with the scenes they denote.
"""
(actually Siklóssy.)
The point being, perhaps, that the brain works by association. That is, there's a
perception, a sensation in the brain, and the word is associated with that.
Mind you, that Chinese thing is bollocks. One of the cards coming in could be
"assume the word 'not'" in front of all subsequent sentences. The books could
cope with that, sure, but the complexity could not be any less than a brain
itself, in fact an embodied brain (I don't think it's reducible from being
embodied). That 'not' could be 'give me the md5 hash of an MRI of your brain'.
So at a certain point you have to say if the room is going to behave exactly
like a person, it has to *be* a person. Maybe.
#
p88
"""
We can think of the memory as a large encyclopedia or library, the information
stored by topics (nodes), liberally cross-referenced (associational links), and
with an elaborate index (recognition capability) that gives direct access
through multiple entries to the topics. Long-term memory operates like a second
environment, parallel to the environment sensed through the eyes and ears,
through which the problem solver can search and to whose contents he can
respond.
"""
(that last sentence especially. A second environment, like "the remembered
present" (Universe of Consciousness). Or a constructed present. Like a 3d
rendered scene which is half bottom-up constructed (so it can be disassembled
and physics applied to the components) with whatever is not understood added in
by texturemaps. Perceived environment is constructed environment + senses.)
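(The encyclopedia-with-an-index picture, as a toy of my own -- topics as nodes, associational links as cross-references, and a recognition index giving direct access from cues. Topics and cues are invented.)

```python
# A toy associative memory (mine, illustrating Simon's description):
# topic -> (content, cross-references to other topics).
memory = {
    "bridge": ("smooth surface for vehicles", ["road", "river"]),
    "road":   ("route between places",        ["bridge"]),
    "river":  ("flowing water, an obstacle",  ["bridge", "ford"]),
    "ford":   ("shallow crossing place",      ["river"]),
}

# The "elaborate index": multiple cues recognize a topic directly.
index = {"crossing": "bridge", "span": "bridge", "stream": "river",
         "wade": "ford", "highway": "road"}

def recall(cue, hops=1):
    """Enter via the index, then follow associational links outward."""
    topic = index.get(cue, cue)
    found = {topic}
    frontier = [topic]
    for _ in range(hops):
        frontier = [n for t in frontier for n in memory[t][1] if n not in found]
        found.update(frontier)
    return sorted(found)

print(recall("span"))           # -> ['bridge', 'river', 'road']
print(recall("wade", hops=2))   # -> ['bridge', 'ford', 'river']
```

(Searching this structure *is* searching a second environment: the problem solver responds to its contents the way it responds to the sensed one.)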
On a doctor making a diagnosis:
"""
Thus the search is conducted alternately in each of two environments: the
physician's mental library of medical knowledge and the patient's body.
Information gleaned from one environment is used to guide the next step of the
search in the other."""
(all of which sounds much like the pull/push of evolution. Push to an adaptive
feature, pull to make exaptive use of it. Push to create an OS or application
with hooks, pull to move it into the ecosystem and create more niches. Push
with the hierarchy, pull with the meshwork. Organisation and market, too?
Maybe.)
([TO FIND] Somewhere in this book it says that environmental niches are created
by other species, so it just complexifies and doesn't fill up. Not sure where
though, maybe later.)
#
p150
On the design of social systems, and figuring out who to consider:
"""
The architect need not decide if the funds the client wants to spend for a
house would be better spent, from society's standpoint, on housing for
low-income families. The physician need not ask whether society would be better
off if the patient were dead.
Thus the traditional definition of the professional's role is highly compatible
with bounded rationality, which is most comfortable with problems having
clear-cut and limited goals.
"""
"Bounded rationality" sounds a *little* like Tesugen's "constrained universe[s]
of expression". But why is this? I guess it's essential we define an
information space that takes advantage of locality -- or, given that space
exists anyway, that we take advantage of it and go with the flow. It's very
conduit metaphor though, very industrial mindset. A physician being a cog in a
machine. And in fact people *don't* do that. They're principled and have
regard for society: but this is just a *perturbation* to the decision first
suggested by their most local information. Hm.
#
p157
Distant events and discounting the future:
"""
Thus the events and prospective events that enter into our value systems are
all dated, and the importance we attach to them generally drops off sharply
with their distance in time. For the creatures of bounded rationality that we
are, this is fortunate. If our decisions depended equally upon their remote and
their proximate consequences, we could never act but would be forever lost in
thought. By applying a heavy discount factor to events, attenuating them with
their remoteness in time and space, we reduce our problems of choice to a size
commensurate with our limited computing capabilities.
"""
And the psychological consequences of this:
"""
There is a vast literature seeking to explain, none too convincingly, what
determines the time rate of discount used by savers. (In modern times it has
hovered remarkably steadily around 3 percent per annum, after appropriate
adjustment for risk and inflation.) There is also a considerable literature
seeking to determine what the social rate of interest should be -- what
the rate of exchange should be between the welfare of this generation and the
welfare of its descendants.
"""
#
p163
"""
Each step of implementation created a new situation; and the new situation
provided a starting point for fresh design activity.
Making complex designs that are implemented over a long period of time and
continually modified in the course of implementation has much in common with
painting in oil. In oil painting every new spot of pigment laid on the canvas
creates some kind of pattern that provides a continuing source of ideas to the
painter. The painting process is a process of cyclical interaction between
painter and canvas in which current goals lead to new applications of paint,
while the gradually changing pattern suggests new goals.
The Starting Point
The idea of final goals is inconsistent with our limited ability to foretell or
determine the future. The real result of our actions is to establish initial
conditions for the next succeeding course of action. What we call "final" goals
are in fact criteria for choosing the initial conditions that we will leave to
our successors.
"""
#
p169
Conceptions of complexity:
"""
This century has seen recurrent bursts of interest in complexity and complex
systems. An early eruption, after World War I, gave birth to the term "holism,"
and to interest in "Gestalts" and "creative evolution." In a second major
eruption, after World War II, the favorite terms were "information,"
"feedback," "cybernetics," and "general systems." In the current eruption,
complexity is often associated with "chaos," "adaptive systems," "genetic
algorithms," and "cellular automata."
While sharing a concern for complexity, the three eruptions selected different
aspects of the complex for special attention. The post-WWI interest in
complexity, focusing on the claim that the whole transcends the sum of the
parts, was strongly anti-reductionist in flavor. The post-WWII outburst was
rather neutral on the issue of reductionism, focusing on the roles of feedback
and homeostasis (self-stabilization) in maintaining complex systems. The
current interest in complexity focuses mainly on mechanisms that create and
sustain complexity and on analytic tools for describing and analyzing it.
"""
So if the internet is a long echo from the WWII interest (cybernetics) - which,
incidentally, explains why the sudden interest in terms from that time (power
law) because we're discovering the features now that were originally built
(folded) in - what are we going to do in the aftermath of *this* complexity
mindset?
#
p187
(Hierarchies are defined on p184. It's all about control.)
"""
There is one important difference between the physical and biological
hierarchies, on the one hand, and the social hierarchies, on the other. Most
physical and biological hierarchies are described in spatial terms. We detect
the organelles in a cell in the way we detect raisins in a cake -- they are
"visibly" differentiated substructures localized spatially in the larger
structure. On the other hand, we propose to identify social hierarchies not by
observing who lives close to whom but by observing who interacts with whom.
These two points of view can be reconciled by defining hierarchy in terms of
intensity of interaction, but observing that in most biological and physical
systems relatively intense interaction implies relative spatial propinquity.
One of the interesting characteristics of nerve cells and telephone wires is
that they permit very specific strong interactions at great distances. To the
extent that interactions are channeled through specialized communications and
transportation systems, spatial propinquity becomes less determinative of
structure.
"""
(Distance is the half-life of causality! And, clustering is a matter of what
*moves together*. And, the telegraph as a push system, and push as a water ford
rather than a bridge? For things to be *actually* close, for it *actually* to
be a proper wormhole, the wormhole must be extremely high bandwidth, to the
point that things can cross over - within limits - without actually noticing. A
proper wormhole must allow unintended consequences (pull) of the same type
you'd get in the real world.)
#
p189
Entropy, information and the speed of evolution, in the footnote:
"""
entropy is the logarithm of a probability; hence information, the negative of
entropy, can be interpreted as the logarithm of the reciprocal of the
probability -- the "improbability," so to speak. The essential idea in
Jacobson's model is that the expected time required for the system to reach a
particular state is inversely proportional to the probability of the state --
hence it increases exponentially with the amount of information (negentropy) of
the state.
"""
I'm trying to figure out the entropy of a network. The thing is, it's not just
connections. That part is really simple: given the relation above (and that
the network could be radio), you could figure out the entropy. But what about
paths?
Like, a connection defines a vector and the vector points somewhere and that
changes the probability? Especially if there's a whole path. And if paths, then
branes of all dimensions. Hm.
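(The footnote's relation, worked through in a sketch of mine, not the book's model: if I = log2(1/p), and expected waiting time scales as 1/p, then time = 2^I -- exponential in the information of the state.)

```python
# Information as log-improbability, and Jacobson's expected-time relation.
import math

def information_bits(p):
    """I = log2(1/p): the 'improbability' of the state, in bits."""
    return math.log2(1.0 / p)

def expected_time(p, trials_per_second=1.0):
    """Expected time to reach a state of probability p: proportional to 1/p."""
    return 1.0 / (p * trials_per_second)

for bits in (10, 20, 30):
    p = 2.0 ** -bits
    assert abs(information_bits(p) - bits) < 1e-9
    print(bits, expected_time(p))  # waiting time doubles with every added bit
```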
#
p204
On the subject of "near decomposability" (or, shapes. The fact that things offer
a standardised interface which is optimised on either side when two things
interface together a lot):
"""
It is probably true that in social as in physical systems the higher-frequency
dynamics are associated with the subsystems and the lower-frequency dynamics
with the larger systems. It is generally believed, for example, that the
relevant planning horizon of executives is longer, the higher their location in
the organizational hierarchy. It is probably also true that both the average
duration of an interaction between executives and the average interval between
interactions are greater at higher than lower levels.
"""
(that started on p203 where it also said you talk to your friends a lot, but
don't have too many. And gave physical examples too, like atomic and molecular
bonding.)
"""
Intracomponent linkages are generally stronger than intercomponent linkages.
This fact has the effect of separating the high-frequency dynamics of a
hierarchy -- involving the internal structure of the components -- from the
low-frequency dynamics -- involving interaction among components.
"""
Again, this is a very industrial way of putting things, but the point is that
the shape of an interface must take into account the shape of the processes and
the dynamics it's involved in:
"""
An organ performs a specific set of functions, each usually requiring continual
interaction among its component parts (a sequence of chemical reactions, say,
each step employing a particular enzyme for its execution). It draws raw
materials from other parts of the organism and delivers products to other
parts, but these input and output processes depend only in an aggregate way on
what is occurring within each specific organ. Like a business firm in an
economic market, each organ can perform its functions in blissful ignorance of
the detail of activity in other organs,
"""
(I'm not sure really whether I buy this. If the organ was heavier, that would
change things. If it demanded more resources during growth, that would change
things. Surely it's not just spatially local in its effects?)
#
p209
"""
If a complex structure is completely unredundant -- if no aspect of its
structure can be inferred from any other -- then it is its own simplest
description. We can exhibit it, but we cannot describe it by a simpler
structure. The hierarchic structures we have been discussing have a high degree
of redundancy, hence can often be described in economical terms. The redundancy
takes a number of forms, of which I shall mention three:
1. Hierarchic systems are usually composed of only a few different kinds of
subsystems in various combinations and arrangements. A familiar example is the
proteins, their multitudinous variety arising from arrangements of only twenty
different amino acids. Similarly the ninety-odd elements provide all the kinds
of building blocks needed for an infinite variety of molecules. Hence we can
construct our description from a restricted alphabet of elementary terms
corresponding to the basic set of elementary subsystems from which the complex
system is generated.
2. Hierarchic systems are, as we have seen, often nearly decomposable. Hence
only aggregative properties of their parts enter into the description of the
interactions of those parts. A generalization of the notion of near
decomposability might be called the "empty world hypothesis" -- most things are
only weakly connected with most other things; for a tolerable description of
reality only a tiny fraction of all possible interactions needs to be taken
into account. By adopting a descriptive language that allows the absence of
something to go unmentioned, a nearly empty world can be described quite
concisely. Mother Hubbard did not have to check off the list of possible
contents to say that her cupboard was bare.
3. By appropriate "recoding," the redundancy that is present but unobvious in
the structure of a complex system can often be made patent. The commonest
recoding of descriptions of dynamic systems consists in replacing a
description of the time path with a description of a differential law that
generates that path. The simplicity resides in a constant relation between the
state of the system at any given time and the state of the system a short time
later. Thus the structure of the sequence 1 3 5 7 9 11 ... is most simply
expressed by observing that each member is obtained by adding 2 to the previous
one. But this is the sequence that Galileo found to describe the velocity at
the end of successive time intervals of a ball rolling down an inclined plane.
It is a familiar proposition that the task of science is to make use of the
world's redundancy to describe that world simply.
"""
(Here's a thing. When I say 'maximally complex' I don't mean 'completely
unredundant'.)
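(Point 3 as code -- my framing, not Simon's: the sequence 1 3 5 7 9 11 ... stored as an explicit time path versus "recoded" as the law relating each state to the next.)

```python
# Two descriptions of the same structure: the raw path, and the recoding
# as a constant relation between state-now and state-next.

def by_explicit_path(n):
    return [1, 3, 5, 7, 9, 11, 13, 15][:n]   # the time path itself

def by_differential_law(n, start=1, step=2):
    seq, x = [], start
    for _ in range(n):
        seq.append(x)   # the law: each member is the previous one plus 2
        x += step
    return seq

assert by_explicit_path(6) == by_differential_law(6) == [1, 3, 5, 7, 9, 11]
# Galileo's ball: the same recoding describes the velocities at the ends of
# successive equal time intervals on an inclined plane.
```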
(In 1, it certainly seems the case that subsystems fall into fewer identical
classes than aggregate systems: eg plutonium is plutonium, but spiral galaxies
are all different. That seems to be the case in chemistry, in linguistics, in
sociology. Does life best exist at this level (if it's true), or is this
appearance just an illusion based on point of view, like things further away
looking smaller?)
(In 2, the descriptive language Simon posits is one that works on interfaces
rather than knowing the endpoints of each statement/wormhole. It's something
that computers are very bad at because they don't have a conception of locality
or distance: if you had a list of items and you wanted to say they were in a
cupboard you'd have to alter each one so it could state true or false whether
it was in the cupboard or not (ah, computers have trinary logic. They're
inherently 'null' to any question. Whereas real life is more like the game Go:
by deciding on a new question, you can have something fall into the true or
false camp without even knowing what or where it is.) -- this is the problem
object orientation tries to solve, but it falls prey to its own conduit
metaphor problems.)
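(The cupboard, as code -- my sketch of the descriptive language in point 2: absences go unmentioned, and everything defaults to "not present".)

```python
# Exhaustive vs sparse descriptions of Mother Hubbard's cupboard (my toy).
ALL_POSSIBLE_ITEMS = ["bone", "bread", "cheese", "milk", "tea"]

# Exhaustive: every possible item must state true or false.
exhaustive = {item: False for item in ALL_POSSIBLE_ITEMS}

# Sparse: a bare cupboard is just the empty set -- absence unmentioned.
sparse = set()

def in_cupboard(item, description=sparse):
    """Closed-world reading: not listed means not present."""
    return item in description

assert not in_cupboard("bone")
assert all(exhaustive[i] == in_cupboard(i) for i in ALL_POSSIBLE_ITEMS)
```

(The sparse form stays the same size however long the list of possible contents grows -- that's the empty world hypothesis paying off.)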