mental systems



Here I'm taking a cautious approach to modelling, towards improved theories and
computer simulations of the biology of cognitive faculties, or 'mental systems' ...

we use visualizations for the same reasons we use simulations:

to convey our theories, and make them intelligible. we caution
everyone to look carefully at the assumptions that we believe
are behind both, and at the possible influences of our own
cognition upon our production and interpretation of our
simulations and visualizations, which may not correspond to
reality. As always with science, this is not about 'truth';
it's about trying to understand more about a very puzzling world.

but simulations often have other, non-explicit, and completely
unconscious and unknown factors involved in their construction.
that is why such caution is called for.

when a cybernetics or AI person builds a model, it is intended
to help them understand something very complex in the real world.

but once they 'implement' this model in the real world, partially
or 'completely', in software or hardware or both, a few things
happen: 

1. they are now studying their simulation. that can be ok,
as long as it's distinguished from the mental model of the
original system under study.

2. they begin to involve their innate faculties, whether perception
or the unknown faculties that we unconsciously recruit to create ideas,
which then give human-mind-internal meaning to the machine
that we've built. In this way, what we perceive is not what's
going on in the original system -- we simply perceive both.
And we get a certain positive jolt when we recognize an idea
in both. But that idea may be quite wrong, and we've now
fallen into the 'intuition trap': when something seems right,
we accept it -- a habit that has impeded much scientific progress.

So, when we create a feedback system to help us achieve something,
call it goal achievement by the machine, say that we're modelling
the brain, and claim that only such systems would survive natural selection,
we are making one engineering assumption after another, about complex systems
that we've almost stopped thinking about directly.
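To make the engineering flavour of such a claim concrete, here is a minimal sketch of a negative-feedback loop -- the kind of "goal-seeking" machine the paragraph above has in mind. Every name and number in it (the set point, the gain, the update rule) is an invented engineering choice, not a fact about any natural system; that is exactly the point.

```python
# A minimal negative-feedback loop: a "thermostat" that nudges a value
# toward a set point. The set point, the gain, and the update rule are
# all choices made by the builder, not properties of any brain.

def feedback_step(current, setpoint, gain=0.5):
    """Move `current` a fraction (`gain`) of the way toward `setpoint`."""
    error = setpoint - current
    return current + gain * error

value = 0.0
for _ in range(10):
    value = feedback_step(value, setpoint=10.0)

# After ten steps the value has closed most of the gap to the set point.
print(round(value, 3))
```

It is easy to look at this loop, recognize the idea of "pursuing a goal", and feel that positive jolt of recognition -- which is the intuition trap the section describes.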


Greg Bryant
I. first theory:

II. experiments:

* Simplest minimalism visualization
description
* Match the mind-internal meaning in a small set, without any LF-form calculus
description
* unknown experiment 3
description

III. adjusted theory:

IV. adjusted experiments:

Glossary and open terminological questions about cognition:

Association:

Association is, first of all, an idea. That means it has 
a reality in the brain that is sufficiently consistent,
for all human beings, to allow us to discuss it. 

There are various ideas for which the word 'association' is used.

One is a label for a common human conscious experience (see 'conscious')
of 'oh this made me think of that'. There is a school of philosophy
that would call this a 'phenomenological' term, that is, a technical
labelling of a real phenomenon of internal experience.

That said, once labelled, introspection of these phenomena roams quite
far from any further understanding of them, from the perspective
of the natural science of complex systems, and begins to involve nearly
everything that an introspective modern philosopher can think of. Interesting,
and often fruitful in generating more phenomena to study, but largely
incapable of revealing the nature of a 'single experience' (but see 'event').
For that, some kind of cautious experimental approach is more appropriate,
again from the perspective of 'natural science'.

Another important use of 'association' comes with 'associationism',
an idea which is a kind of mental transformation of the phenomenon, creating
serious distance from it by making it an 'element' of some kind of 'network'
from which 'the mind' is constructed. Of course, it is a very intelligible
model, and so has been successful at capturing the imagination and facilitating
engineering and computer design -- but as a theory of natural science, it has
not been successful, or even well-used.
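Part of what makes the associationist model so intelligible is how little machinery it needs. Here is a toy sketch: labelled nodes, weighted links, and a one-step spreading-activation rule. The node labels and weights are invented for illustration; the sketch is offered as an example of a modelling tool, not as a description of anything in the brain.

```python
# A toy "associationist" network: nodes joined by weighted links, with a
# one-step spreading-activation rule. All labels and weights here are
# arbitrary illustrations.

links = {
    "dog":  {"bark": 0.9, "cat": 0.6},
    "cat":  {"milk": 0.5, "dog": 0.6},
    "bark": {"tree": 0.3},
}

def spread(source, activation=1.0):
    """Return the activation each neighbour receives from `source`."""
    return {node: activation * weight
            for node, weight in links.get(source, {}).items()}

print(spread("dog"))  # each neighbour gets activation scaled by its link weight
```

The simplicity is the attraction -- and, as the glossary entry argues, also the danger: nothing in this network tells us whether the mind works anything like it.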

The latter, 'technical' use of association should itself be considered a mental
construct -- a model of a kind of intelligible theory, rather than a model of
the kind of machine the brain actually is. With similar theoretical terms
that allow us to make 'connections' between highly differentiated 'elements'
and complex 'networks', we really should retire the idea that 'associations' are
anything more than a modelling tool, and one that has a tendency to make us
forget what we are studying.

So, what model might we make of these ideas of 'association'?

The phenomenal one is some form of 'thought' that leads to some other 'thought'
in conscious experience. It doesn't include any of the unconscious mechanisms that
the associationists or connectionists would impose on the idea of recall or
reminiscence.

Modelling the imposition of the 'technical' term 'association' on the study of 
real-world mental systems should actually tell us more about the mind than the
theory itself.

First, there is objectification ...

explanatory adequacy

one of the points of Chomsky's minimalist program is, in a sense, a common but
unspoken one in natural science: we are finite creatures, so if we want better
explanatory theories, they need to be increasingly fundamental and simple, so
that we can understand more. This means our tools cannot become our theories ...
and if they are found, by their nature, to add complexity to a theory, we need
to recast our tools and reconsider our theories.