Artificial Intelligence: Decision-Tree of Mind-Design

Posted 26 Nov 2004 at 13:34 UTC by mentifex

This Mentifex AI news article will guide students of artificial intelligence (AI) through the process of developing a theory of mind for AI.

    The method of discovery is the answering of a series of questions at forks in
    the road of major, all-important design decisions. In the absence of any other
    sufficiently advanced AI theory, the Mentifex mind-model serves here as a
    standard of comparison, marked with an "X" at each fork in the road to
    indicate the Mentifex answer to the highest-level questions of mind-design.
    Any enterprising student may therefore improve, alter, or even refute the
    Mentifex design by providing a better answer at any given fork in the road --
    or by altering the road-map to adduce important new decision-points.
    1. What do you do with the sensory inputs?
      [ ] Nothing goes into memory.
        [ ] (If you pick this path, you may elaborate here.)
      [X] Each sense feeds into a sensory memory channel.
      2. Specify a process of integrating sensory input with the rest of the mind.
        [ ] Have each sensory memory channel lead further to a mysterious "CPU" --
        a "central processing unit" (homunculus, anyone?) that begs the question of
        how a mind conceptualizes sensory input and thinks about what it perceives.
          [ ] (Go on to describe the nature of the CPU -- central processing unit.)
        [X] Each sensory memory channel associates sideways to the mechanisms of
        emotion, thought, and free will -- interspersed amid, not terminating, the channels.
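
    The lateral-association choice marked [X] above can be sketched in code. The
    following is a minimal Python illustration of the idea only, not actual
    Mentifex code; all class and method names here are invented for the sketch.
    Each sensory channel keeps its own memory and spreads each percept sideways
    to attached faculties, rather than forwarding it to a central "CPU".

    ```python
    from collections import defaultdict

    class Faculty:
        """Stand-in for a mechanism such as emotion, thought, or free will."""
        def __init__(self, label):
            self.label = label
            self.associations = defaultdict(list)

        def associate(self, channel_name, percept):
            # Record a sideways association from a sensory channel.
            self.associations[channel_name].append(percept)

    class SensoryChannel:
        """One sensory memory channel (e.g. auditory, visual)."""
        def __init__(self, name):
            self.name = name
            self.memory = []    # time-ordered engrams stay in the channel
            self.lateral = []   # sideways links to faculties

        def attach(self, faculty):
            self.lateral.append(faculty)

        def perceive(self, percept):
            self.memory.append(percept)      # input feeds channel memory...
            for faculty in self.lateral:     # ...and associates sideways,
                faculty.associate(self.name, percept)  # not up to a "CPU"

    # Wiring: the auditory channel links laterally to thought and emotion.
    audition = SensoryChannel("auditory")
    thought = Faculty("thought")
    emotion = Faculty("emotion")
    audition.attach(thought)
    audition.attach(emotion)
    audition.perceive("hello")
    ```

    After `perceive("hello")`, the percept sits in the channel's own memory and
    has also been associated sideways into both faculties; no central unit ever
    receives or terminates the input.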
    Each [mentifeX] choice above shows the most fundamental design-decisions of
    the Mentifex theory of cognitivity for artificial intelligence. In the study
    of the mind, it may be easier for students to react against a pre-established
    structure of ideas than to create an original theory or structure that no one
    else has ever thought about -- all the way to publication -- before. Here is
    a chance, then, to react against the Mentifex design and either learn more
    about it or come up with something better by zeroing in on what are, to you,
    its obvious mistakes.
    There are plenty of forums all over the Web for you to disseminate your new ideas,
    although you should of course protest the illegal war against the people of Iraq
    as a matter of American national honor and a higher priority than the study of AI.

You know, posted 27 Nov 2004 at 12:19 UTC by zephc » (Journeyer)

I've tried reading your writings before, and they just read like a slightly more cogent Alex Chiu. Your designs are extremely primitive and lack many necessary parts.

I can't find (admittedly I didn't look very hard), for instance, anything about internal modelling of reality (what we often call Imagination), reasoning backwards from a goal to a present state, and so on.
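
What zephc calls "reasoning backwards from a goal to a present state" is classic backward chaining. A minimal Python sketch of the idea (the rules, facts, and goal names here are invented purely for illustration; nothing like this appears in the Mentifex material, which is the point of the criticism):

```python
# Rules map a goal to the subgoals that would achieve it;
# facts describe the present state of the world.
rules = {
    "have_tea": ["have_hot_water", "have_teabag"],
    "have_hot_water": ["have_kettle", "have_water"],
}
facts = {"have_kettle", "have_water", "have_teabag"}

def provable(goal, rules, facts):
    """Reason backwards: a goal holds if it is already a fact,
    or if every subgoal of a rule for it is itself provable."""
    if goal in facts:
        return True
    subgoals = rules.get(goal)
    if subgoals is None:
        return False
    return all(provable(g, rules, facts) for g in subgoals)

print(provable("have_tea", rules, facts))     # True
print(provable("have_coffee", rules, facts))  # False: no rule, no fact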

Doing it all in JavaScript or Perl will not win you many points, either.

Also, throwing in non sequiturs about the War, etc. just makes you sound like more of a crackpot. (FYI, I'm against the war, but that has nothing to do with an article on AI theory.)

Heh, posted 27 Nov 2004 at 15:31 UTC by wspace » (Journeyer)

Great reply by zephc, well said. A model of reality is essential for "AI minds".

Meds, posted 28 Nov 2004 at 04:40 UTC by ncm » (Master)

Before we can construct an artificial intelligence, we must construct or arrive somehow at the other sort, first. A lifetime of study and experience actually solving real-world problems might prepare one to embark on a first effort to test some basic notions.

Also, we mustn't neglect our meds, must we?

Umm, posted 29 Nov 2004 at 16:25 UTC by salmoni » (Master)


I think there is a bit of a way to go on this. For example, in your first question you don't leave any option for sensory input that doesn't necessarily go into memory. Can I recommend work by Anne Treisman or Donald Broadbent as some basic readings on pre-attentional processing (i.e., stuff that doesn't go into memory)?


Mentifex thanks Salmoni, posted 30 Nov 2004 at 07:01 UTC by mentifex » (Master)

O Master Salmoni!

Your excellent comment above is exactly the sort of enlargement and expatiation intended with the "decision-tree" of mind-design. It was an idea that came to me in the lassitude after the big meal of an American Thanksgiving. Today it further occurred to me to "ramify" (can that be transitive?) the decision tree out into the documentation pages of many of the Mentifex AI Mind modules -- especially in the auditory system.

Sometimes I meet programmers who want to re-implement the basic mentifex-class AI not with an auditory memory channel but with a simple database of all the English words contained in the mind of the AI. Such a database would prevent multiple instantiations of a word over time -- and with them, conceptual learning over time. If I start showing reasons to reject such an approach within the Decision-Tree of Mind-Design, perhaps a lot of re-inventing of the wheel can be nipped in the bud. Grandiose dreamer that I am, I dare to imagine a big wall-chart AI Decision-Tree for AI coders to haggle over and mark up with scratch-outs and write-ins. Anyone who wants to draw up and print out such AI charts -- feel free, because Salmoni here has shown the way. (I go now to certify you as Master! :-)
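
The contrast mentifex draws can be made concrete in a toy Python comparison. This is not Mentifex code; it is only a sketch of the argument, with illustrative names: a flat word database collapses repeated hearings of a word into one entry, while a time-indexed channel keeps each instantiation as its own engram.

```python
# A flat word database: each English word exists exactly once,
# with no record of when or how often it was heard.
word_db = set()
for heard in ["cat", "sat", "cat"]:
    word_db.add(heard)   # the second "cat" collapses into the first

# A time-indexed auditory memory channel: every hearing is a separate
# engram tagged with its time point, so the same word can be
# instantiated again and again over time.
auditory_channel = []
for t, heard in enumerate(["cat", "sat", "cat"]):
    auditory_channel.append((t, heard))

cat_instantiations = [t for t, w in auditory_channel if w == "cat"]
```

After hearing "cat", "sat", "cat", the database holds only two entries, while the channel records "cat" at time points 0 and 2 -- the repeated instantiations that the database design throws away.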
