The Functional Analogy to a Brain

January 12, 2011 By Michael Zeldich

ABSTRACT:
The essence of the concept for designing an artificial system capable of demonstrating reasonable behavior lies in implementing the feature of subjectivity in that system. Incorporating subjectivity will make the architecture of the resulting system (a control unit and the hardware connected to it) non-task-specific.

ARTICLE TEXT:
An artificial control system with a non-task-specific ("cognitive") architecture for forming a subjective artificial system
(A functional analogy to a brain)

The concept
The essence of the concept for designing an artificial system capable of demonstrating reasonable behavior lies in implementing the feature of subjectivity in that system.
Incorporating subjectivity will make the architecture of the resulting system (a control unit and the hardware connected to it) non-task-specific.
"Non-task-specific" has the same meaning here as it does for a human being and all other living beings, which are capable of carrying out a broad enough spectrum of tasks without reprogramming or alteration of the body.

The feature of subjectivity will remove the need to program the functionality of such systems and will let them perform like living beings. The profile and performance of a subjective system will depend on the hardware connected to the control unit, the accessible resources, education, training, and past subjective experience; no programming will be required.

I would like to outline one important consequence of incorporating the feature of subjectivity in an artificial system. In such a system, a properly designed control unit will be capable of automatically executing the functionality built into the system's hardware.

So why, so far, has no one been able to do the same?
To understand why, we have to begin with a brief excursion into the history of the question.
Behaviorism was rejected because of the problem of studying the inner functionality of the living creatures whose behavior was the subject of that science.
So-called cognition surfaced as the alternative to behaviorism and as a scientific basis for further progress. However, this scientific approach does not have a sufficient and relevant factual basis. Moreover, it has more problems than the rejected behaviorism.
In the book "Artificial Intelligence: A Modern Approach", 3rd edition, Stuart Russell, professor of computer science, director of the Center for Intelligent Systems, and holder of the Smith-Zadeh Chair in Engineering, and Peter Norvig, Director of Research at Google Inc., state on page vii that:
"The main unifying theme is the idea of an intelligent agent. We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems."
Here, too, we see no reference to inner functionality.
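The agent abstraction quoted above can be made concrete with a short sketch: an agent is a function that maps percept sequences to actions. The sketch below is illustrative only; the function and rule names are my own, not taken from the book or from any library.

```python
# Minimal sketch of the "agent as a function from percept sequences
# to actions" abstraction. A simple reflex agent records its percept
# history but, by design, reacts only to the most recent percept.

def make_simple_reflex_agent(rules):
    """Return an agent function driven by a fixed rule table."""
    percepts = []  # the agent's full percept sequence

    def agent(percept):
        percepts.append(percept)
        # Map the current percept to an action; fall back to "noop"
        # when no rule applies.
        return rules.get(percept, "noop")

    return agent

# Example: a trivial vacuum-world rule table (illustrative values).
rules = {"dirty": "suck", "clean": "move"}
agent = make_simple_reflex_agent(rules)

print(agent("dirty"))    # suck
print(agent("clean"))    # move
print(agent("unknown"))  # noop
```

Note that exactly what the article criticizes is visible here: the mapping from percepts to actions is specified externally, by the rule table, rather than arising from any inner functionality of the agent itself.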
John Tyndall was stating in 1871: “The passage from the physics of the brain to the corresponding facts of consciousness is unthinkable.”
In that strong statement we can see two problems: the brain is set apart from the body, and up to the present we do not have "the corresponding facts of consciousness."

The first problem makes the functioning of a living creature a mystery, and blind belief in the existence of so-called mental functions turns so-called "cognitive science" into a kind of religious faith.
Today's science has neither an approach, built on a factual basis, for solving the task of designing an artificial system capable of being self-guided, nor a conceptual understanding of what a brain does. Therefore, all attempts to simulate the functionality of a brain in an artificial system on the basis of studying the structure of a brain are doomed to fail.
To succeed in developing an artificial system capable of behaving in a reasonable manner, one should find concise answers to the following questions:
• How can a control unit (brain), isolated from direct access to an environment, manage the behavior of a system (body) in relation to that environment?
• Why do living beings not require programming, at least external programming, in order to survive while interacting with a constantly changing environment?
• What is the content of subjective experience?
• Why do living beings not face the "combinatorial explosion" problem?
These questions are not isolated from each other; the answer to one should not contradict the answers to the rest.
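The "combinatorial explosion" mentioned in the last question can be illustrated with simple arithmetic: an agent that plans by exhaustively enumerating action sequences must examine b^d sequences for b possible actions per step and a planning depth of d. The numbers below are illustrative, not drawn from the article.

```python
# Back-of-the-envelope illustration of the combinatorial explosion:
# exhaustive lookahead over action sequences grows exponentially
# with the planning depth.

def sequences_to_examine(branching_factor: int, depth: int) -> int:
    """Number of action sequences of the given depth."""
    return branching_factor ** depth

# Even modest numbers become intractable quickly.
print(sequences_to_examine(10, 5))   # 100000
print(sequences_to_examine(10, 20))  # 100000000000000000000
```

Ten actions and a twenty-step horizon already yield 10^20 sequences, far beyond brute-force search, which is why the question of how living beings avoid this problem is a substantive one.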
I can offer a general concept for the development of artificial subjective systems capable of subjectively determining their own behavior, and I would like to assemble a team of people, or establish a partnership with an existing entity, for the purpose of building a real system.
The team would convert the general concept into a detailed one; on that basis it will be possible to build a control unit for an existing android (for example), which will turn that android into an artificial subjective system capable of demonstrating reasonable behavior.
(NDA and normal business agreements are mandatory.)

That task could be accomplished in 6-8 months within an established company. Further details can be discussed after we reach an agreement about further development.
Please tell me what you think of this proposal.
Best regards, Michael Zeldich, Independent inventor

Cell: (917) 816-447
Skype: Subjective1
E-mail: [email protected]

