Critical Review 1: Emotions and the Brain with Mark Solms

This critical review, “Critical Review 1: Emotions and the Brain with Mark Solms”, is the first in a series called CRITIQUES OF BRAIN THEORIES, in which I will examine the work of leaders in the fields of neuroscience, psychology, philosophy, and AI.

The idea of critiquing the work of brain experts was partly inspired by the very person whose work I will review first, who once performed a similar challenge of a colleague’s work.

He is none other than the world-renowned and respected psychoanalyst and neuropsychologist Professor Mark Solms. Mark posted a YouTube interview on X (previously Twitter) on 30 October 2024 titled “Emotions and the Brain with Mark Solms”. The interview was conducted by Leanne Whitney (PhD), a depth psychologist and guest host for the YouTube channel ‘New Thinking Allowed with Jeffrey Mishlove’.

Although the interview comprises a high-level discussion aimed at a wide audience, I have decided to capture and compare Mark’s explanations of emotions against the explanations offered by my own brain model, the Xzistor Mathematical Model of Mind.

The final goal of these critiques is to gather and consolidate the best ideas about the working of the brain from the fields of neuroscience, psychology, philosophy and AI, and hopefully spur these fields on towards a better understanding of the brain – a quest that most in these fields support and feel is urgently needed.

Click on the document below to view the Critical Review:

New Paper: Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind

The Xzistor LAB is building a new agent (robot/simulation combo) that we refer to as our ‘Language Learning Infant Agent’.

Our new paper provides a theoretical basis for how artificial agents can develop a language learning capability using artificial emotions as defined by the Xzistor Mathematical Model of Mind. A multi-stage project is proposed to demonstrate how an Xzistor agent will develop a language skill like an infant and then refine this skill towards improved syntax and grammar with further reinforcement learning. The paper provides two appendices covering the mathematical principles of the Xzistor brain model and an explanation of how it could potentially unify behaviorist and structuralist language theories.

Find new paper (preprint) here:

Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind

The paper comes in 3 parts:

Part 1

Main Paper — Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind

Part 2

Appendix A — Mathematical Principles of the Xzistor Brain Model

Part 3

Appendix B — Xzistor Brain Model Unification of Behaviorist and Structuralist Language Theories

QUOTES FROM THE PAPER

“If we want a different AI future, we need to start considering alternative approaches to contemporary generative AI…”

“A multi-stage project is proposed to demonstrate how an artificial Xzistor agent could systematically develop basic language skills…”

“The model’s mathematical framework offers insight into the underpinning logic of the biological brain…could reignite the quest for human-inspired AI.”

“Building an artificial agent with the skills of an infant that can learn to use language to communicate with humans…will be much more than just a demonstrator of the principles of verbal behavior — it could be the start of a new era of Artificial Intelligence (AI).”

“The Xzistor Mathematical Model of Mind provides many of the missing pieces of the puzzle — and comes with a proven safeguard against ‘runaway-intelligence’ rooted in physics.”

Reach out to us if you want to know more!

Xzistor LAB Team

Rat Brain Rebooted!

Why would a rat brain that has learnt to avoid a feeding lever delivering a distasteful salty mixture instantly forget all it has learnt, and joyfully consume the nasty mixture, when it is given a new, very strong desire for salt?

The experiment is described here:

Instant Transformation of Learned Repulsion into Motivational “Wanting”

The Xzistor model can explain this bizarre instantaneous override of reinforced (learnt) behaviour using only Control Theory.

Here is how:

We can design an Xzistor agent with an innate aversion to Strong Salt. All that is required is a homeostatic control loop with Strong Salt as a negative feedback control variable. The Xzistor model calls this a Body UTR (Urgency To Restore) mechanism. The model also explains how artificial emotions can be created based on whether homeostasis is diverging (negative) or being restored (positive). These emotions are created by unique representations placed in the body map of the agent, as if they were part of the body sensory states, but representing the extent to which homeostasis is diverging or being restored rather than originating from body sensors. They are therefore ‘felt’ as if coming from inside the body.
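To make this concrete, here is a minimal sketch, in Java (the language of the LAB’s physical robot code), of what such a Body UTR control loop could look like. All names, and the simple normalisation used, are illustrative assumptions rather than the actual Xzistor code:

    // Minimal sketch of an Xzistor-style Body UTR (Urgency To Restore) loop.
    // All names and the normalisation are illustrative, not the LAB's actual code.
    public class BodyUTR {
        private final double setpoint; // homeostatic target for the control variable
        private double drive;          // error signal in [0, 1]; 0 = full homeostasis
        private double prevDrive;

        public BodyUTR(double setpoint) {
            this.setpoint = setpoint;
        }

        // Read the control variable (e.g. salt concentration) and update the Drive.
        public void update(double controlVariable) {
            prevDrive = drive;
            // Error signal: distance from the setpoint, clipped to [0, 1].
            drive = Math.min(1.0, Math.abs(controlVariable - setpoint));
        }

        // Negative 'felt' emotion while homeostasis is diverging (Deprivation).
        public double deprivationEmotion() {
            return (drive > prevDrive) ? -drive : 0.0;
        }

        // Positive 'felt' emotion while homeostasis is being restored (Satiation),
        // proportional to the rate of recovery.
        public double satiationEmotion() {
            return (drive < prevDrive) ? (prevDrive - drive) : 0.0;
        }
    }

An aversive agent would be given a setpoint of zero salt, so any salt ingestion raises the Drive; the ‘rebooted’ craving discussed later in this post is simply the same loop with a very high salt setpoint.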

Aided by a reflex that makes the Xzistor agent retreat (repulse) from the Strong Salt source, it will quickly learn through reinforcement learning that it should avoid the Salt Source, and the Strong Salt emotions will come to represent avoid or approach states.

The Xzistor model decrees that every Body UTR like this Strong Salt avoidance control loop will also always activate the autonomic Body UTR (basically modeling the autonomic nervous system) so that it acts in concert with the Strong Salt control loop, i.e. if the Strong Salt homeostasis is disturbed (salt is ingested), the autonomic Body UTR will also rise, and as the Strong Salt ingestion is halted and homeostasis is restored, the autonomic Body UTR will fall (calm). This autonomic Body UTR generates its own unique ‘felt’ emotions, modelled as ‘stress emotions’ and ‘nausea emotions’ akin to how it happens in the human body (we feel stress as ‘butterflies in the belly’ and nausea as the urge to ‘vomit from the stomach’).

These autonomic Body UTR responses (or emotions) might be mild, but they play an important role in how the Xzistor agent will memorise this experience, i.e. how the associations will be formed so that they can be used in future to decide what actions to perform. This is because this autonomic Body UTR will be re-activated when Strong Salt memories are recalled – note that the Strong Salt Body UTR or its emotions cannot be re-evoked from merely recognising or recalling related experiences; only the autonomic ‘stress’ and ‘nausea’ emotions can be regenerated. But just thinking about the Strong Salt lever will cause the Xzistor agent to feel actual stress and actual nausea again.

Let’s go back and look at what happened when the Xzistor agent experienced the Strong Salt lever for the first time. It immediately felt the unique negative (aversive) emotions and learnt to avoid it – feeling better as it moved away, and learning that it is better to avoid the Strong Salt lever (operant learning).

How strong was this association that formed in the mind of the agent?

Not very strong. Although the agent is programmed to dislike it, a bad taste will not create a massive aversive state with high autonomic stress and nausea (as we could get if the agent experienced severe pain, heat or fear). For this discussion, let’s assume a 5% stress level and a 10% disgust level were stored as part of the association. The newly formed association will include the visual image of the Strong Salt lever, the taste and smell of the salty mix, and the retreat actions.

Now we have an association in the Association Database (model memory) that will regenerate the stress and nausea when the agent thinks about or observes the Strong Salt lever, or tastes or smells salt. If the agent actually does touch the lever again and tastes the salt, this association will also be ‘recognised’ and ‘re-activated’, causing a compulsion to perform the learnt retreat actions.

Let’s now reboot the Xzistor brain!

Now we reboot the Xzistor agent brain by building in a massive Strong Salt Body UTR – also using salt as the control variable, but this time we design the Body UTR to crave (try to maintain) very high levels of salt. Without salt, the agent goes into homeostatic deficit (the model calls this Deprivation) and feels new, very strong negative emotions.

This control loop is so strong that it will override any other Body UTRs in the agent like hunger, thirst, cold, warm, pain, fatigue, etc.

We now activate this craving for Strong Salt and release the Xzistor agent into the area where the Strong Salt lever is located, and where it can easily be observed visually.

What happens now in the brain of this agent?

We know the Xzistor brain model will always first collect information on all the Body UTR loops and decide which one is the strongest (called the Prime UTR by the model). Let’s assume this Strong Salt craving we have given the agent is REALLY strong and its unique Deprivation emotion states come in at the 95% level.

The brain is now told ‘you need to taste salt as an absolute priority’. Salt! You need salt! Now!

The agent brain now needs to do a search of all associations in the Association Database to see what associations are linked to salt (this is called Threading, and when the brain really needs to narrow down the search to focus on useful results, it is called ‘directed’ Threading).

So, what is Threading looking for in the Association Database?

Salt. The taste sensation (representation of course) of salt.

Will this need for salt be recognised by the Association Database?

Yes, salt has been encountered before. We have an association about encountering the Strong Salt lever before and the agent brain will make the match, and ‘recognise’ the Strong Salt lever.

As it recalls this association – what happens next?

The autonomic stress and nausea states are immediately re-generated from memory and presented to the executive part of the brain, which is tasked with deciding what the agent should do next. The model collects all emotions coming from all the Body UTRs and those coming from recalled memories (called Brain UTRs), combines all the emotional feelings (positive and negative) and, through an adjudication process, decides which is the absolute strongest Body or Brain UTR emotion that should be addressed first. The agent still feels all the other emotions, but will only act on the strongest (most urgent).

The Strong Salt craving emotions (at 95%) come out as the Prime UTR that needs immediate action. They will also cause a strong activation of the autonomic UTR states – say stress at 45% and nausea at 20%. This will totally override the autonomic UTR contribution from the recalled Strong Salt lever association (stress 5%, nausea 10%), which has no chance to compete against the urgent needs of the new Strong Salt craving.

The only contribution this recalled association can and will make is to link the taste of salt to the visual image of the lever – for the rest, it is too weak to trigger an avoidance action in the brain. The best-guess action for the Xzistor agent to resolve its very strong Strong Salt craving is to navigate to the lever that has been associated with salt in the past. (Navigating to the Strong Salt lever will be an approach action sequence learnt through reinforcement learning, e.g. like navigating to other levers, such as a sugar-dispensing lever, to find Satiation.)

And will the horrible salt mix actually taste good?

Yes, external observers without this type of craving might think it must taste horrible to the agent – but the Xzistor agent has been programmed to crave this Strong Salt taste. So as it ingests the salt, the Body UTR will flip from Deprivation to Satiation, and this will trigger a very strong positive emotion. This will be further boosted by strong Satiation (calming) of the autonomic stress and nausea states, and even activation of the Xzistor limbic system model, where other Satiation states are momentarily activated (co-opted) to increase the overall hedonic satisfaction level.

Conclusion:

We thus see that where an Xzistor agent has learnt to avoid a salty-mix feeder, if we give it a strong enough new craving for salt, it will actually use past learning about the visual image and location of a salt source – albeit reinforced as mildly discouraging (aversive) – to immediately guide it to that salt source. And it will derive huge hedonic pleasure from ingesting the nasty salt mix, as this will reset its homeostatic imbalance, the basis of its emotions (feelings)!

The Mathematics behind the Xzistor Brain Model

Originally prompted by a request from Prof. Judea Pearl (@yudapearl) on Twitter, more scientists are now coming forward and asking questions about the mathematics behind the Xzistor Mathematical Model of Mind.

It seems that more and more of them are finding it difficult to fault the model’s underlying principle – that the brain can be viewed as a control system that can be described (principally) using control theory. This approach allows for a powerful way to explain subjective (artificial) Emotions and (artificial) Intelligence. Unlike many of the current incomplete and unimplemented ‘Theory of Mind’ paradigms, this model is evidenced by the convincing emergent behaviours observed in simple simulations and physical robots controlled by Xzistor artificial brains (computer programs).

Skeptical at first, many scientists have now had enough time to consider the fact that any model claiming to emulate the human brain should start with a very basic system (like a baby brain – as Turing told us!) and then grow to full complexity by learning – storing associations, with these experiences given meaning and nuance by tagging them with homeostatic/allostatic emotions.

This is at heart what the Xzistor brain model is.

So, to respond to Prof. Pearl’s interest and request, and to the others who have responded to the YouTube interview with my neuroscientist collaborator Dr. Denise Cook (here), I have decided to offer a meaningful explanation of the mathematics behind the Xzistor model. This is the basic mathematics underpinning the computer programs that ran the early simple human-like simulations and robots – the so-called Xzistor ‘proof-of-concept’ demonstrators.

After doing some reading on Prof. Pearl’s own phenomenal career and achievements – see his personal website here: http://bayes.cs.ucla.edu/jp_home.html – I realised again why he is regarded as one of the founding fathers of AI. I liked the idea that he offered a ‘primer’ to his very advanced inference modelling.

So, I thought I should start with a similar gentle introduction to the mathematics of my model. Then I can go to the full details in mathematical notation after that.

Here is an easy introduction to the mathematics of the Xzistor Mathematical Model of Mind using a Simple Robot Explanation.

I ask only one favour: Do not think the model is too simple to ‘principally’ emulate the human brain. To say it is too simple would be like saying a baby’s brain is too simple to learn and, over time, develop the ability to design jet engines or solve complex arithmetic.

Some important aspects are admittedly a little glossed over in the above slide pack – specifically how truly ‘subjective’ emotions are created by the model. For this it could be helpful to get some wider context around the Xzistor approach to Machine Emotions – here:

The detailed description of the algorithms and the mathematical equations inside them follows here. It was systematically extracted from the rather bulky 500+ page Manifesto of the Xzistor Mathematical Model of Mind, including papers, books, the original patent specifications and the actual code used to drive the simulations (C++ and OpenGL) and physical robots (30 000+ lines of Java code, including comments!).

I. INTRODUCTION

This Xzistor Mathematical Model of Mind describes a method for modeling the human brain. The functional brain model is substrate-independent and was developed to:

1. Provide a principal understanding of the working of the brain, specifically the mechanisms of cognition and emotion.

2. Serve as a basis for a complete cognitive architecture, providing autonomous agents with innate human-like intelligence and emotions.

The model simplifies and serializes the main neurobiological functions of the brain into a single logic loop containing various algorithms. By means of simplifying assumptions, all functions performed as part of these algorithms can be defined in mathematical terms.

II. HIGH LEVEL LOGIC

At the highest level, the Xzistor Concept uses a very simple logic loop to simulate the brain:

1. SENSING (obtain sensor inputs)

2. PLANNING (translate sensor inputs into behavior commands)

3. BEHAVIOURS (perform behavior commands using effectors)

4. Go back to 1. SENSING

Whereas the human brain has the ability to do parallel processing, in most cases it still goes through the same sequential steps and takes time to register a sensory input, compare it with what has been learnt, plan what action would be appropriate, and finally send the effector (motion) commands to the muscles. Tests with Xzistor simulations and physical robots have shown that repeating this logic loop, containing all the required algorithms, at intervals of less than 0.1 seconds (i.e. at 10 Hz or faster) approximates the parallel processing of the human brain adequately to give rise to smooth human-like behaviours in agents.

The diagram below shows the five functional algorithms of the Xzistor brain model and their linking. These algorithms are performed left to right – and repeated – in a constant loop:

1. SENSING ALGORITHM

A Sense translates a physical condition or variable (V) in the environment or body into a corresponding representation (S) in the brain:

Translation Functions for Senses
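\[ S_i = X_{s_i}(V_i), \qquad i = 1, 2, \ldots \]

(a plausible rendering of the translation functions, reconstructed from the notation of the example below)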

The translation functions above refer to any means whereby a sensed environmental variable (V) is changed into a representation (S) that the instantiation of the Xzistor brain model can process. An example would be an optic sensor that takes a sensed optic state (V1) and translates it via a video camera processor (Xs1) into an array of RGB values (S1) on which a digital computer program can perform numerical calculations. For different technologies, different means can be used to achieve this translation function. The only requirement of the model is that there will only ever exist one representation (S) for every unique incoming variable (V) – to a resolution appropriate for the application.

2. DRIVE ALGORITHM

The Drive Algorithm was derived from the bioregulatory processes in the body and brain, most of which reside in the sensory, endocrine and autonomic nervous systems. These mechanisms attempt to maintain homeostasis/allostasis of body states by regulating one or more Control Variables.

A Drive is in most cases part of a negative feedback closed loop control system which alerts the body and brain when a Control Variable is moving out of range. It generates the error signal that indicates to what extent the Control Variable is out of range, and thereby the level of threat it poses to the system.

In later descriptions of the model, Drives are also referred to as Urgency To Restore (UTR) mechanisms, and a distinction is drawn between two types of Drives, namely Body UTRs and Brain UTRs. Body UTRs are Drives that perform homeostatic regulation of a Control Variable (CV) located in the body, while Brain UTRs are Drives that perform homeostatic regulation of states recalled from the Association Database (memory). This aspect will be further explained in the discussion on the Association Algorithm. A Drive translates a Control Variable (CV) error signal into a corresponding representation in the brain, and can be expressed in mathematical terms as follows:

Translation Functions for Drives
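\[ D_i = X_{d_i}\!\left(CV_i,\ ES_{d_i}\right), \qquad ES_{d_i} \in [0, 1] \]

(a plausible rendering, reconstructed from the notation of the example below)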

The translation functions above refer to any means whereby a sensed/measured Control Variable (CV) is changed into a representation that the instantiation of the Xzistor brain model can process. An example would be a digital thermal sensor inside or outside the body of the virtual/physical agent that takes a Control Variable (CV1) reading – temperature in this case – and translates it into a digital error signal (ESd1) between the values 0 and 1. The error signal will depend on how far the reading departs from the setpoint value. The translation function (Xd1) will then use the Control Variable (for identification) and Error Signal (for status) to create the representation of the Drive (D1) that the brain model can process. This is an example of a Body UTR Drive (or just a Body UTR). The Error Signal (between 0 and 1) will allow this Drive to communicate to the brain model the ‘level of urgency’ with which it should be restored to maintain a safe external/internal temperature. This can then be compared with the ‘urgency’ (or UTR value) of other Drives, to determine what actions should be prioritised. The only requirement of the model is that there will only ever exist one Drive representation for every unique incoming Control Variable / Error Signal combination (to a resolution appropriate for the application).

By way of another example, we can consider the simple negative feedback closed loop control system aimed at homeostasis of blood-borne water (H2O) – see figure below. In this case the Drive representation in the brain could be a numerical value, and the brain could be an Xzistor model computer program (executing the logic loop).

As the H2O Drive increases over time, it reaches a Detection Threshold (DT) – also referred to as the activation level – where the signal, which is already identifiable by the brain model, becomes a contender to drive action selection. The H2O Drive will continue to increase as the blood-borne H2O is depleted. The rising part of the curve we will refer to as the Deprivation (DEP) Regime. This regime can be expressed in mathematical terms as:
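\[ \mathrm{DEP}(t) = D(t) \quad \text{while} \quad \frac{dD}{dt} > 0 \ \text{and} \ D(t) \ge DT \]

(a plausible rendering, using the Drive value D(t) and Detection Threshold DT defined in this section)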

The brain creates a dedicated Deprivation Emotion (DE) representation which is somatosensory in nature, meaning it will consciously be ‘felt’ by the brain as if located in part(s) of the body (in humans this state could typically be generated in the insula, where competitive adjudication is performed – comparing activation versus inhibition levels of active Drives – to aid the thalamus in action selection). It will later be explained, as part of the Association Algorithm, why the Deprivation Emotion state DE will have to compete with the Deprivation Emotion states of other Drives, based on Drive strength or ‘urgency’, to be prioritised by the executive part of the brain model, and why it will be viewed by the model as ‘negative’ as it is turned into an avoidance state through operant learning:
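\[ DE(t) = -\,\mathrm{DEP}(t) = -\,D(t), \qquad DE \in [-1, 0] \]

(a plausible rendering, consistent with the implementation note further below that DE is the Drive value multiplied by -1)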

With the ingestion of water, the Drive curve will slope downward. The vertex point where the curve changes direction is of prime importance to the brain model; we will refer to it as the Satiation (SAT) point or Satiation (SAT) Event, and to the declining part of the curve as the Satiation (SAT) Regime. This regime can be expressed in mathematical terms as:
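\[ \mathrm{SAT}(t) = -\frac{dD}{dt} \quad \text{while} \quad \frac{dD}{dt} < 0 \]

(a plausible rendering, following the definition below of SAT as the rate at which the Drive decreases)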

The SAT point is where the brain detects that something in the environment or body is causing the H2O Drive to decrease (or be Satiated), and it needs to store all the information about – and leading up to – this event for future use. In the section on the Association Algorithm we will discuss the storage of ‘Associations’. Every Drive, whether it is a Body UTR Drive or a Brain UTR Drive, is assumed to have a value between 0 and 1, where 0 indicates complete homeostasis and 1 indicates the maximal departure from homeostasis (the most critical or aversive condition).

While DEP is simply the value of the Drive between 0 and 1, SAT is the derivative state given by the rate at which the Drive decreases (the rate at which the Drive curve slopes downward). In the case of SAT, 0 indicates no decrease in Drive, and 1 indicates an instantaneous drop to 0:
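\[ \mathrm{SAT}_t = \frac{D_{t-1} - D_t}{D_{t-1}}, \qquad \mathrm{SAT}_t \in [0, 1] \]

(one convenient discrete-cycle normalisation satisfying both limits – no decrease gives 0, an instantaneous drop to 0 gives 1; this particular form is an illustrative assumption, not prescribed by the model)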

For basic computer program implementations of the model the value of DE can be multiplied by -1 and set between 0 and -1 (i.e. always negative and dependent on the value of D).

The larger the decrease in Drive, or the steeper the downward slope of the Drive curve, the higher the value of SAT will be. In the human brain SAT is a key condition for bioregulatory and cognitive control, and the model uses it as the basis for Reinforcement Learning (RL) and ‘Reward-based Backpropagation’ – these important mechanisms are discussed later. The prime objective of the brain, in terms of the model, always remains to minimize DEP and maximize SAT.

The brain creates a dedicated Satiation Emotion (SE) representation which is somatosensory in nature, meaning it will consciously be ‘felt’ by the brain as if located in parts of the body (this state could typically be generated in the insula and presented to the thalamus for action selection). It will later be explained, as part of the Association Algorithm, why the Satiation Emotion state SE will also have to compete with the Satiation Emotion states of other Drives, based on the rate of Drive strength reduction or ‘recovery’, to be prioritised by the executive part of the brain (e.g. thalamus), and why it will be viewed by the brain as ‘positive’ as it is turned into a pursual (or approach) state through operant learning:
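\[ SE(t) = \left|\,\mathrm{SAT}(t)\,\right|, \qquad SE \in [0, 1] \]

(a plausible rendering, consistent with the implementation note below)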

For basic computer program implementations of the model, the value of the Satiation Emotion (SE) can be turned into an absolute value and limited between 0 and 1 (i.e. always positive and dependent on the rate at which the Drive recovers from its Deprivation regime).

Multiple Drives

The model caters for many Drives being active simultaneously in the body and brain. The figure below shows three different Drives active at the same time:

Each Drive will measure its own Error Signal (ES) based on its own Control Variable (CV). We can define the Total Drive (Dtot) as the sum of the three Drives (i.e. D1, D2 and the Prime Drive) as indicated on the Drive versus time graph in the figure above.

We will refer to the strongest single Drive (highest value between 0 and 1) as the Prime Drive. The total Deprivation suffered between time T1 and T2 will be given by the area under the Dtot curve between T1 and T2 when plotted against time and can be calculated as follows:
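\[ \mathrm{DEP}_{[T_1, T_2]} = \int_{T_1}^{T_2} D_{tot}(t)\, dt \]

(a plausible rendering; Dtot is the sum of the active Drives as defined above)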

In the graph above we can see that the SAT (rate of reduction) achieved by Dtot is less than that of the Prime Drive, because only one of the three Drives is Satiated while the others are still increasing, meaning they are in Deprivation. Sometimes, when there are many Drives to consider, it is convenient to normalize Dtot to a value between 0 and 1 (and multiply by -1), where 0 will indicate no Deprivation and -1 will indicate the maximum total Deprivation the agent is capable of suffering, based on the total number of Drives. To normalize Dtot we do the following:
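\[ \bar{D}_{tot} = -\frac{1}{N} \sum_{i=1}^{N} D_i, \qquad \bar{D}_{tot} \in [-1, 0] \]

(a plausible rendering in which N is the total number of Drives; dividing by N is an illustrative choice that bounds the normalized value at -1)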

Interdependency of Drives

The model accommodates the interdependency of Drives. In the graphic above, for instance, the Control Variable of Drive 1 could have been influenced by the Control Variable of Drive 2. This type of coupling or dependency often exists between biological Drives, and some Drives extensively use other Drives to increase the total Deprivation (DEP) value of Dtot, collectively making it a stronger avoidance state. Some Drives also make use of Reflexes that influence the Control Variables of other Drives, e.g. the Fight-or-Flight Reflex. The F-o-F Reflex is a complex mechanism which, in the human brain, directly activates the Control Variables of numerous Drives simultaneously to rapidly escalate Dtot using the endocrine system and autonomic nervous system mechanisms. Many Drives use the F-o-F Reflex to increase DEP – some at low levels (mild stress) and some at a higher level (severe shock). Pain is modeled as a Drive using body-mapped pain receptors detecting excessive pressure, temperature, shear force and tissue damage as Control Variables, inextricably coupled with the autonomic F-o-F Reflex. Some bioregulatory processes do not have a detectable representation in the brain (detectable by the executive structures of the brain, e.g. thalamus, basal ganglia, etc.). These ‘quiet’ control systems are not deemed Drives by the model, but some do influence Drives (e.g. blood volume (quiet) can influence Thirst levels (a detectable Drive)).

3. REFLEX ALGORITHM

A Reflex is triggered by a Sensory State (S) or a Drive (D), resulting in a representation in the brain, interpretable as a preprogrammed set of motion commands. When triggered by a Sensory state, we have:
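\[ R_i = X_{r_i}(S_i) \]

(a plausible rendering, following the pattern of the Sense and Drive translation functions above)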

When triggered by a Drive state, we have:
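\[ R_j = X_{r_j}(D_j) \]

(a plausible rendering, as above)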

The Reflexes triggered during Deprivation are normally different from those triggered during Satiation. We will group Reflexes into four types:

For the Deprivation (DEP) condition:

1. Involuntary Deprivation Reflex (triggered when the Drive is in Deprivation), e.g. shivering when cold.

2. Learn-modifiable Deprivation Reflex (triggered when the Drive is in Deprivation), e.g. crying when hungry.

And for the Satiation (SAT) condition:

3. Involuntary Satiation Reflex (triggered when the Drive is in Satiation), e.g. prostate contractions.

4. Learn-modifiable Satiation Reflex (triggered when the Drive is in Satiation), e.g. suckling of an infant.

4. ASSOCIATION ALGORITHM

The Association Algorithm uses the representations generated by the other algorithms to store and later re-evoke Associations. Association-storing is achieved by ‘linking’ and ‘storing’ representations which are all present at the same time in the brain.

The representations present in the brain at time = t will be stored in the Association Database as a tuple At of the form:
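\[ A_t = \left( S_t,\ D_t,\ DE_t,\ SE_t,\ R_t,\ M_t \right) \]

(a plausible form gathering the representations defined in this section – Senses, Drives, their Emotions, Reflexes and Motions; the exact membership will depend on the instantiation)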

Association Forming and Updating

With every cycle of the Xzistor logic loop, an Association (Ai) is stored by linking all of the above representations and storing them as a combined entry in the Association Database. For simple digital applications like virtual agent simulations or small robots controlled by an Xzistor computer program, typically 10 Associations can be stored/recovered per second, which will result in smooth movements when the Motion Algorithm later fetches these Associations and executes the learned effector Motions (M) at 10 updates per second (discussed in more detail below).

The Association Algorithm contributes to modelling many important brain functions and effects like learning, daydreaming, sleep dreaming, recalling, recognizing, context, effector motions and problem-solving. How these are achieved by the brain model will be described in Section 8.1. LINKING THE 5 BASIC FUNCTIONAL ALGORITHMS.

Recognition

With every cycle of the Xzistor logic loop, a check is also performed to see if the current incoming representation sets (containing e.g. Si, Di, etc.) have a match in the Association Database (see Anchor States below). A match would mean that the current incoming Association has been ‘recognised’ as part of an existing Association, which the model can now use as a source of information (e.g. re-evoking Emotions around currently perceived objects, or accessing prior learning around effector Motions to avoid/approach these objects). The information from a ‘recognised’ Association will be used and synthesised (combined) with the incoming representations at various points during the Xzistor logic loop, to create an updated version of the Association that will be stored back into the Association Database.

Anchor States

As we have seen, any number of the representations (from 1 to many) forming part of an Association Ai can, when checked against the Association Database, re-evoke an existing Association and make the information contained in its different representations available to the brain model for use. For modeling purposes it is, however, convenient to fix the subset of input representations that, collectively, will re-evoke such an existing Association. This subset of representations, which must all be present at the same time in order to ‘unlock’ or re-evoke an Association (Ai) from the Association Database, we will call the Anchor State (AS); it is a subset of the representations comprising an Association:
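\[ AS_t \subseteq A_t \]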

Typically, a good trade-off between accuracy and processing overhead can be achieved by choosing the Anchor State (AS) to contain only the main Drives and the main Sensory States, i.e.:
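\[ AS_t = \left( D_1, \ldots, D_k,\ S_1, \ldots, S_m \right) \]

(a plausible rendering, with D1…Dk the main Drives and S1…Sm the main Sensory States)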

Impact Factor

Associations will have a specific strength or ‘Impact Factor’ when stored. In the model the Impact Factor IF is a function of three parameters:

  1. EIF, the Emotional Intensity Factor, i.e. the higher of the two absolute values (|Dtot| or |Stot|), representing emotional salience.
  2. ET, the elapsed time since the Association was last recalled (re-evoked).
  3. RR, the number of times re-evoked or ‘reinforcement repetitions’.

Impact Factor: IF = f (EIF, ET, RR)

The IF provides a way to rank Associations by how important they are to the system (in finding Satiation). Associations with high Impact Factors will be stored near the top of the Association Database and accessed first when experience (learning) is sought during problem-solving (discussed later).

Forgetting

Less impactful Associations are not forgotten but stored in a manner whereby they will only be accessed when search criteria are very specific and there is adequate time for the model to Thread through the Association Database (Threading is discussed later and will explain how the Impact Factor of an Association can aid in providing ‘context’ around a real or recalled object or concept).

This provides the system with long-term and short-term memory.

Emotions re-evoked as part of recognising Associations

As a general rule, when an Association is recognised, all of its representations become available as information that can be used during that cycle by the brain model to help determine the most appropriate next behaviour. This does not mean that Emotions are re-evoked as if from their Drive representations, only that their information can be used without regenerating the pseudo-somatosensory representation (the actual feeling). Merely recognising a food source when not hungry will not create a hunger Emotion in humans, and actual Pain cannot be created by simply looking at a cactus that had caused Pain in the past.

There is however one important exception to this rule.

The Xzistor model uses the way it defines Senses, Drives, Emotions and Associations to model the autonomic nervous system (ANS), which in humans comprises the Sympathetic Nervous System (SNS) and Parasympathetic Nervous System (PNS). Simply put, the SNS causes stress, e.g. the Fight or Flight (FoF) Response, and the PNS counters that with a state of relaxation or calm. Apart from the preprogrammed Reflex reactions that trigger the FoF (i.e. the SNS), we find that in the human brain activation of the thirst centre, hunger centre, pain centre, etc. also triggers the SNS, causing a stress state in concert with these Drive states. As all these Drives increase in strength, the SNS response also increases in strength, and as these Drives decrease in strength, the SNS response also decreases in strength. In humans, this SNS response is transferred via the hypothalamus and adrenal glands to the gut (vagus nerve) and then projected to the brain via the brainstem as a visceral somatosensory body state, to areas like the insula where it will create pseudo-somatosensory Emotion representations. Because the SNS and PNS become activated in concert with all other Drives (i.e. SNS activation during the Deprivation phase and PNS activation during the Satiation phase), this becomes another type of Drive, which we will simply call the Stress Drive.

But this Stress Drive has a unique feature in that its Emotions can be regenerated by merely observing or thinking about an object that had been encountered in the past. So as an Association is recognised, the Stress Drive and Stress Emotion representations stored at the moment the Association was formed will be regenerated as actual Emotions (unlike Hunger, Thirst, Pain, etc.). In effect, the representation of the Stress Emotion in the brain, as part of an Association, will act as a Control Variable (CV) to trigger the Stress Drive and the Stress Emotions – this can be a Deprivation Emotion (bad) or a Satiation Emotion (good).

So now, for every cycle, the Stress Drive and associated Emotions are first generated by the net effect of all the other Drives in the system as they are activated, but they are also influenced by the recognition of Stress Drive and Stress Emotion representations in an existing Association. Because there is only one autonomic nervous system, all the above effects will act on the same Stress Drive system and create a consolidated Stress Emotion.

When the Xzistor brain model must determine the next behaviour, it will weigh up the strengths (between 0 and 1) for all the Emotions, including the consolidated Stress Emotion activated by both the other Drives and Stress Emotions from the recognised Associations – and act on the strongest.

To discern between Drives that are triggered by Control Variables in the environment and body, and those triggered by stored Emotion representations in recognised Associations, we will refer to two types of Drives: Body UTRs and Brain UTRs.

Body Urgency To Restore (UTR) mechanisms are Drives with Control Variables in the environment and body, while Brain Urgency To Restore (UTR) mechanisms are Drives with Control Variables in the recognised Association.

For more complex instantiations of the Xzistor brain model, both the Stress Drive and the Nausea Drive (to a lesser degree) can be combined to create a Stress Drive and a Disgust Drive that can be instantly regenerated as objects are recognised in the environment (the academic literature provides evidence of the presence of both stress and nausea representations in the insula). The Emotion representations that also provide information on the strengths of these Brain UTRs (between 0 and 1) are often stronger than the other Body UTRs and will drive the behaviour of the system – meaning the system is acting out of stress or disgust, and not immediate homeostatic needs.

An interesting behaviour that can be generated based on the above is where the agent now just ‘thinks’ about a painful experience, and performs a learned behaviour to avoid getting into the painful situation again – even without experiencing any Pain. This is referred to by the Xzistor brain model as allostatic Emotions (based on Brain UTR Drives) versus the homeostatic Emotions (based on Body UTR Drives).

These mechanisms, supported by the Association Algorithm, contribute to a fully implementable brain model that explains and demonstrates many of the more elusive brain phenomena like recognition, acting out of stress, acting to seek stress relief, acting out of disgust, acting to avoid disgust, acting on the strongest Emotion (whether originating from the body or brain), preference, fear of Hunger, fear of Thirst, fear of Pain, fear of Cold, fear of Fatigue, etc. – as well as learning, planning and problem-solving based on these Emotion states originating in the body and brain. These effects will be explained in more detail in Section 8.1. LINKING THE 5 BASIC FUNCTIONAL ALGORITHMS.

5. MOTION ALGORITHM

The Motion Algorithm will translate any of the following into effector motions (actions):

1. A Reflex input

2. A recognised Phobia (where the Association was preprogrammed)

3. A recognised Association (where the Association originated from learning)

4. Motion commands forced on the system by an external party (e.g. robot tutor)

LINKING THE 5 BASIC FUNCTIONAL ALGORITHMS

The Linking Algorithm performs the integration of all the algorithmic elements that control the interfaces between the above 5 functional algorithms, and is effectively the executive part of the brain model (comparable to many of the functions performed by the thalamus in the human brain). What information is passed between these functional algorithms for every logic loop cycle is crucial to how the Xzistor model provides an agent with human-like functions and effects like sensing, subjective emotions (like feeling hunger, thirst, pain, cold, warm, fatigue, anger, fear, stress, etc.), learning, language development, dreaming (daydreaming and sleep dreaming), thinking (including contextualisation and problem-solving), coordinated goal-driven motions, etc.

The full logic loop will be discussed next. It assumes an implementation comprising compiled computer code, e.g. C++, Java, Swift, etc., driving a physical robot, but could equally apply to equivalent neural network (hardware) systems – or a combination of both (the functional model is therefore means-agnostic). To change this into a simulation, the physical elements of the robot and the environment are simply replaced by virtual models.

1. START ROBOT – The virtual or physical agent is activated with the Xzistor Concept running as its brain.

2. INITIALIZE – Initialize all variables and arrays (e.g. Association Database).

3. TUTOR OVERRIDE – Open tutor control interface of the agent. This allows the tutor to guide the agent during initial learning. Typically, the tutor will take over control of the robot effectors (e.g. motors) and demonstrate a Motion (M) a few times to show the agent how to solve a problem like opening a food source (simulated) when the Hunger Drive has gone high.

4. MAIN LOOP – The main loop is entered which is repeated until the tutor interrupts the program, or power to the system is cut.

5. READ SENSORS – Based on the latest incoming sensory Variables (Vi), the sensory representations (Si) are generated by the Sensing Algorithm, e.g. video, tactile (touch), audio, color, temperatures, shock, accelerometers, etc. For a simple computer program instantiation of the model, all these representations could merely be unique numerical values. Sensory representations can in some cases directly trigger Reflex reactions (R), which could trigger autonomic Stress Drives and Emotions (DE or SE) and/or Motions (M).

6. RECOGNITION (READ BRAIN UTRs) – The representations that are part of the Anchor State (AS) are compared with those in the stored Associations (within the Association Database) to see if any of them correlates and can thus be recognized. If the Anchor State representation is recognized, it will immediately re-evoke the autonomic Stress Drive representation in the recognised Association – negative (stress) or positive (relief) – and the model will be ready and waiting to combine this Brain UTR Drive representation with all the other autonomic Stress Drive representations generated symbiotically by all the other active Drives.

Anchor State representations that are recognised can also trigger Phobia (Brain UTR) reactions which will generate negative autonomic Stress Drive and Emotion representations with associative Motions (M). Phobias are just Associations that are not learned, but pre-programmed into the Association Database effectively creating instinctive negative Emotion representations (fears). Associations with positive autonomic Stress Emotion representations can also be pre-programmed to trigger positive Stress Drives for stress relief or calming, upon recognition.

7. READ BODY UTRs – For the current cycle of the program the Drive Algorithm will obtain the Control Variable (CVi) representations and use them to generate Body UTR Drive representations for all the Body Drives.

8. CREATE EMOTIONS – The Drive Algorithm will use the Body UTR and Brain UTR Drive representations from Steps 6 and 7 above to calculate the positive (SE) and negative (DE) Emotion representations for all the active Drives (both Body UTRs and Brain UTRs).

To obtain a consolidated autonomic Stress Emotion representation the following originating mechanisms exist as part of the model:

  1. Reflex – A Sensory representation (input) directly creates the Stress Emotion (e.g. an instinctive threat object or loud noise for Deprivation (a stress-inducing negative emotion), or the sound of running water for Satiation (a calming positive emotion)).
  2. Phobia – A pre-programmed Association with a negative Stress Emotion is recognised via its Anchor State (e.g. complete darkness creating Deprivation).
  3. Body UTR – Every Body UTR will always create a proportionate negative or positive Stress Emotion representation.
  4. Brain UTR – Recognition of an Association via its Anchor State will regenerate its Stress Emotion representation. Threading through Associations in the Association Database as part of dreaming or thinking will also regenerate the Stress Emotion (negative or positive) of every Association accessed – discussed in Step 13 below.

To arrive at a consolidated autonomic Stress Emotion representation, the highest source of negative autonomic Stress Emotion (Deprivation) from the above list will be used as the prevailing negative Stress Emotion (between 0 and 1). However, if this Deprivation level is decreasing – meaning that the system is experiencing autonomic Stress relief (Satiation) – the highest source of positive autonomic Stress Emotion (Satiation) from the above list will be used.

This provides a consolidated autonomic Stress Emotion (either in Deprivation or Satiation) that can now be compared with the Deprivation and Satiation levels of all other Emotions (generated from Body UTR Drives) that are part of the system during the current cycle.
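In symbols, this adjudication over the four sources listed above could plausibly be written as:

\[ DE_{stress} = \max\left\{ DE_{reflex},\ DE_{phobia},\ DE_{body},\ DE_{brain} \right\} \]

and, while the Deprivation level is falling:

\[ SE_{stress} = \max\left\{ SE_{reflex},\ SE_{phobia},\ SE_{body},\ SE_{brain} \right\} \]

(the source subscripts are introduced here purely for illustration).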

9. SATIATION – A Satiation Event will be registered if the agent was in Deprivation during the previous logic loop cycle of the program and has moved to Satiation in the current cycle. This is the moment the model implements its operant learning protocol, whereby the autonomic Stress Emotion representation (positive because of the Satiation) will also be assigned to the Association that was newly stored or updated during the previous cycle. The effect of this is that recognition of the Anchor State of the previous cycle’s Association is now turned into a Satiation Event – not because it provided homeostatic Satiation (e.g. food, warmth, etc.) but because it will now cause a lowering of the autonomic Stress Drive and Emotions caused by the Drive in question (e.g. the Hunger Drive). For instance, recognising the green door leading to the kitchen will trigger a lowering of the autonomic Stress Drive and Emotions (stress relief) and lead to another Satiation Event. And again, this Satiation Event will, by virtue of the operant learning process, turn the preceding Association into a navigational reward source.

This process is called Reward-based Backpropagation and is how an Xzistor agent learns, through operant learning, to navigate to reward sources from anywhere in its environment. Under dynamic conditions the facial expressions of these agents will show increasing Deprivation (clearly desperate frowns) as they try to find a reward source, while recognition of these en route navigation cues, acting as Satiation sources, will trigger lowered autonomic Stress Emotions (momentary relieved smiles) – which makes for very realistic human-like behaviour.

If already in Satiation (e.g. eating food, or charging its battery, etc.) the agent’s actions will not be interrupted, unless a stronger (more urgent) Body or Brain UTR is registered (e.g. higher value between 0 and 1). This will force the agent to abandon the learned Satiation activity (i.e. Motions) and act on the new higher priority Body or Brain UTR. The program therefore keeps previous cycle information in cache, to be able to see if there had been a move from Deprivation to Satiation in the current cycle.

10. DEPRIVATION – If the agent is not in Satiation, it either means it is in Deprivation (e.g. suffering Hunger, Thirst, Pain or Fear – such as the negative autonomic Stress Emotion triggered when observing a known Pain source) or that no UTR is currently high enough (over the critical activation level) to warrant action. If no Body or Brain UTRs require attention, the agent’s behavior will still revolve around finding Satiation and avoiding Deprivation (this will be explained later).

11. PRIME UTR – The program will compare all the Body and Brain Drive Emotions and confirm whether the current Prime UTR is still the highest Drive in Deprivation, meaning the agent should keep executing the related behaviours (Motions) to minimise Deprivation, or whether the current Prime UTR is still providing the highest Satiation, so that the agent should keep executing the related Satiation-seeking behaviours (Motions). Otherwise a new Prime UTR (Body or Brain) will be selected, which will start driving the agent’s behavior.

The adjudication is performed as follows:

  1. If the Prime Drive (Body or Brain UTR) is in Satiation (i.e. homeostasis/allostasis is being restored), keep performing the learned Motions to restore (lower) the Prime Drive until it falls below its activation level (beneath which the system will be aware of it but not act on it).
  2. If another Drive (Body or Brain UTR) is now offering stronger Satiation, make this Drive the new Prime Drive and switch to performing the learned Motions to restore (lower) this new Prime Drive until it falls below its activation level.
  3. If the Prime Drive (Body or Brain UTR) is in Deprivation (the homeostasis/allostasis deficit is increasing), keep performing the learned Motions that will restore (lower) the Prime Drive and achieve Satiation.
  4. If another Drive (Body or Brain UTR) is now recording higher Deprivation, make this the new Prime Drive and switch to performing the learned Motions that will restore (lower) it and achieve Satiation.

This will confirm the Prime Drive for the current cycle.
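As an illustration only, this adjudication could be sketched in Java roughly as follows; the class and method names are invented for the example:

    import java.util.List;

    // Illustrative sketch of Prime UTR adjudication; names are invented here.
    public final class Adjudicator {
        public static final double ACTIVATION_LEVEL = 0.1; // e.g. 10%

        // Each Body or Brain UTR reports Deprivation and Satiation levels in [0, 1].
        public interface UTR {
            double deprivation();
            double satiation();
        }

        // Returns the UTR that should drive behaviour this cycle, or null if
        // nothing is above its activation level (the agent may then Thread).
        public static UTR selectPrime(List<UTR> utrs) {
            UTR prime = null;
            double strongest = ACTIVATION_LEVEL;
            for (UTR u : utrs) {
                // A UTR competes on whichever signal is stronger for it right
                // now: rising Deprivation or ongoing Satiation.
                double urgency = Math.max(u.deprivation(), u.satiation());
                if (urgency > strongest) {
                    strongest = urgency;
                    prime = u;
                }
            }
            return prime;
        }
    }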

12. THREADING – If all the agent’s Body and Brain UTR Drives are below their selective activation levels there will be no Prime UTR, and the agent will perform Threading, since there are no urgent imbalances to address, i.e. no problems to be solved. A typical activation level could be 0.1 on a range of 0 to 1 (i.e. 10%). In this state the agent will Daydream or learn to obtain Satiation from other sources, e.g. playing games. Activities like Playing might start off as instinctive infant exploration behaviours, or can be learnt behaviours offering Satiation by artificially creating Deprivation (often the mild autonomic Stress Emotion generated during physical games or computer games) that offers moments of intense Satiation (relief). Adults might look to more sophisticated and subtle forms of achieving Satiation, e.g. studying new subjects, watching sports or having conversations involving friendship (bonding) and/or humor.

Daydreaming will be achieved through a process called Threading, whereby the system will recall Associations from the Association Database akin to the human brain’s process of ‘mind wandering’. The criteria for selecting the next Association whilst performing Threading will be similarity in optic Sensory state (mainly) and the value of the Association’s Impact Factor (IF). Based on similarity with the current re-evoked optic Sensory state, e.g. recalling a tutor’s face that provided a high-Satiation food source, a list of Associations will be selected, starting with those with the highest IF – meaning Associations that made a strong emotional impact (good or bad), were often repeated or are very recent. Whilst Daydreaming can still be affected by what is observed in the environment, Sleep Dreaming follows the same process except that effector Motions are disabled, unless strong Sensory input is experienced (tactile, sound, etc.), which will terminate the Sleep Dreaming process (wake the agent up).

13. THINKING – If the agent has performed the Motions to resolve the Deprivation of a Prime UTR many times before, it will quickly recognize the correct environmental cues (Anchor State representations in the Association) as well as the actions (Motions) from the Association Database; e.g. navigating from the kitchen to the battery charging station in the lounge could be a quick, smooth Motion (motor inputs updated every 0.1 seconds). When originally starting out, though, the robot would have bumped into walls and often cried for help from the tutor (crying is a Learn-modifiable Reflex that can be triggered by a high level of Body or Brain UTR Deprivation – typically when Deprivation reaches 0.3 on a scale from 0 to 1, i.e. 30%). Later, all the correct learning will become almost instantaneously available as a quick succession of retrieved Motions from recognised Associations.

If the agent does not recognize its current environment as an area where it had learnt to Satiate Body or Brain UTRs, no learned Motions will exist in the Association Database – and the agent will have to Think (this is triggered by a period of increasing Deprivation with no recognition of Associations with known Motions to perform – say 3 seconds). This the model refers to as ‘directed’ Threading, where the agent now searches for the ‘closest’ Association to fit the UTR and environment (i.e. the closest Anchor State match), and simply ‘tries’ the Association’s stored Motions to see if they work (some applications will use a Tolerance Factor to indicate which Associations have often been prone to prediction errors, i.e. mismatches). As the agent’s Deprivation level increases (for example due to increasing Hunger), the coupled negative autonomic Stress Emotion will increase and the agent will become more rushed to find an Association. Associations chosen from the Association Database might become more random and less accurately filtered – leading to increasingly desperate behaviours to find a food source. The Threading (mind wandering) process is now narrowly ‘directed’ by constantly returning (restarting) the search for a match using the optical images (Sensory representations) of learned food sources (only) as part of a specific Anchor State, and the search becomes focussed on Associations providing Motions to these food sources within that environment. Narrowly ‘directing’ the Threading process in a manner that finds behaviours (Motions) that can solve problems is called Thinking by the model. During Thinking the model will generate the ‘context’ around what is being thought about by recalling relevant Associations. The quickly recalled Associations based on images of the food sources (which could include recollections of helpful navigation cues in the environment) will form the ‘context’ around the situation.

14. ACTION COMMANDS – The program will use Steps 4 to 13 above to arrive at the most appropriate Motion commands for the current cycle, including, where necessary, through the process of Thinking aided by context generation.

These Motions will provide the best estimate from past learning as to what the agent should do, in a specific environment, to reduce Deprivation or maintain and optimise Satiation.

The Satiation Motion commands for the Hunger UTR could be to remain in one position and ingest the food (food intake is normally simulated). Identifying the correct Motion commands (representations) means the program will also consider whether any Reflexes were triggered, and factor in where tutor instructions should override its own decisions.

15. MOTIONS – Here the final Motion commands identified in Step 14 are executed by means of the Motion effectors e.g. motors, actuators, speakers, lights, etc. The Motions of virtual agents will be simulated.

After this step, the program will return to Step 4 above.

KEY EFFECTS GENERATED BY THE FUNCTIONAL BRAIN MODEL UNDER DYNAMIC CONDITIONS

The information passed between the 5 functional algorithms for every logic loop cycle is crucial to how the Xzistor model works, but equally important is the information passed from one cycle to the next cycle.

Reinforcement Learning

The Association stored / updated during the previous logic loop cycle must be available in the current cycle to determine if the Prime Drive (or any other Body or Brain UTR Drive) has changed from being in Deprivation to Satiation. This will indicate that a Satiation (SAT) Event had taken place.

When a SAT Event occurs, meaning an action by the agent is bringing a reduction in the Deprivation the agent is suffering, it is important that the brain model stores to memory the effector motions that were performed at the moment the SAT Event happened. It is equally important that the brain model stores to memory the successful effector motions leading up to the SAT Event. When a Prime Drive is Satiated, the Association preceding the SAT Event (i.e. occurring during the previous cycle of the logic loop) will therefore retrospectively be updated and credited (‘reinforced’) as an Association that, for that Prime Drive and those Sensory representations (environmental cues), offered/informed the correct effector motions that led to the SAT Event. The system will remember that when next it is in that same physical location and experiencing strong Deprivation from the same Prime Drive, it should use that specific Anchor State – i.e. environmental cues (Si) and Prime Drive representation (PD) – to ‘recognise’ this ‘reinforced’ Association and extract the correct effector motions from it (as a best estimate) towards achieving Satiation.

As the physical environment might have changed slightly, it could at times become a ‘trial and error’ effort by the model. If the attempt is successful, the effector motions will again be ‘reinforced’, also for the preceding Association, for future use. The model will, when the Prime Drive (let’s say A) and the Sensory representations (let’s say B) are present, search for an Anchor State match in the Association Database containing A and B – and, if it had provided Satiation before, execute the effector motions of the matched Association. When an accurate Anchor State match is not available, the model will seek the closest Anchor State match and ‘try’ its effector motions to see if they work (in some applications a Tolerance Factor is used to inform the level of accuracy required for a match). If no Anchor State match can be made, the model will resort to Threading to find an Association with potentially helpful effector motions by exploring the ‘context’ around the current Anchor State (discussed below).

Supervised Learning

When Motions (M) are imposed on the agent by an external 'supervisor', and these Motions become part of the Motions leading up to a SAT Event, they will also be reinforced and in future re-evoked when the same SAT source is pursued due to the same Drive. In simple robotic models, tachometers are used to record the effector motions of the regulated electrical motors resulting from tutor interventions. These effector motions are then stored as part of Associations and can be re-generated when the Association is recalled. In more sophisticated models, limb/joint forces and accelerations can also be measured and used as an additional proprioception sense (part of the Anchor State) to aid complex motions and coordinated effector routines.
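A minimal sketch of this recording step, assuming a two-wheeled robot whose tachometer log is a sequence of (left, right) wheel speeds; the field names are hypothetical:

```python
def record_tutor_motions(tachometer_log):
    """Turn tutor-imposed wheel speeds into stored effector motions so the
    same movement can be re-generated when the Association is recalled."""
    return [{"t": t, "left_rpm": left, "right_rpm": right}
            for t, (left, right) in enumerate(tachometer_log)]

# Tutor pushes the robot through a gentle right turn, then stops it.
motions = record_tutor_motions([(120, 120), (140, 90), (0, 0)])
print(motions)          # stored as part of the Association for later replay
```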

Reward-based Backpropagation

As mentioned in Section 5.6, the specific way in which the Xzistor brain model achieves Reinforcement Learning leads to another important effect called Reward-based Backpropagation. Reward-based Backpropagation is based on the manner in which certain states in the human brain linger while new ones are being introduced, allowing cross-modulation (as evidenced in the academic literature).

During Reinforcement Learning, the Association from the previous cycle is rewarded by changing that Association's autonomic Stress Emotion representations from Deprivation to Satiation (proportionally to the level of Satiation generated in the current cycle from the Satiation Event). Based on the current cycle's Prime Drive Satiation Emotion representation (SE), this Prime Drive Emotion must also be turned into a Satiation Emotion representation for the Association formed/updated during the previous cycle. This will tag all the Drive and Sensory representations of the previous cycle's Anchor State with Satiation Emotions that will enable these Anchor States, when 'recognised', to generate a new autonomic Stress Drive Satiation Event (and therefore a learning or reinforcement opportunity).

This will progressively lead to Anchor States (with their effector motions) positioned further and further away from the Prime Drive Satiation Source location becoming recognisable as 'approach' states – leading to a physical navigation path being created towards the Prime Drive Satiation Source. Simple Xzistor robots have successfully demonstrated how they will learn to navigate from any point in their learning confines to a Satiation Source (e.g. food), effectively using autonomic Stress Emotions tagged to environmental cues that encourage 'approach' behaviours. For sophisticated future Xzistor models these navigation paths can include, for instance, solving a complex Hunger-resolving navigational route – learning to drive a car, fetching the keys of the car, putting gasoline in the car, driving to the supermarket, buying food, driving home, cooking food, etc.

In some Xzistor robotic applications a cache of the Associations formed/updated during previous cycles is maintained to add proportional rewards further back in time. This significantly speeds up operant learning.
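The cached, proportional reward can be sketched as follows; the cache depth and decay factor are illustrative assumptions, not model parameters:

```python
# Sketch of Reward-based Backpropagation over a cache of recent Associations:
# when a SAT Event fires, earlier Anchor States get proportionally smaller
# Satiation credit the further back in time they lie.
from collections import deque

CACHE = deque(maxlen=5)          # Associations from the last few cycles

def on_sat_event(sat_level: float, decay: float = 0.5):
    """Tag cached Anchor States with decaying Satiation Emotion values."""
    credit = sat_level
    for assoc in reversed(CACHE):        # most recent cycle first
        assoc["sat"] = max(assoc["sat"], credit)
        credit *= decay                  # proportionally less reward further back

for i in range(4):
    CACHE.append({"anchor": f"state_{i}", "sat": 0.0})
on_sat_event(1.0)
print([round(a["sat"], 2) for a in CACHE])   # [0.12, 0.25, 0.5, 1.0]
```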

Subjective Emotions

As mentioned before, Emotion representations will be cast into pseudo-somatosensory representations. This means these representations will be somatotopically placed within the body – so that the agent will sense an Emotion state as 'inside the boundaries of its body' after the process of learning the boundaries of its body. This happens through interaction with the environment and tactile and pain sensations, leading to the ability to 'locate' sensory representations within the somatotopic map of the body in the brain (i.e. a computational correlate of the cortical homunculus). Emotions (DEs and SEs) are always consciously experienced (felt) by the brain model – as if originating from within the body – because they are constantly presented to the executive part of the model, along with their learned effector compulsions, to determine the next behaviour.

Although the Stress and Nausea Drives can act by being coupled to a Body UTR Drive (e.g. Pain), they can also be re-evoked from autonomic Stress Emotion representations residing in the Association Database (hence these are called Brain UTR Drives). When these autonomic responses occur as part of Body UTR or Brain UTR Drives, they will also lead to pseudo-somatosensory representations that will be somatotopically placed within the body (e.g. humans might experience stress as 'butterflies' in the stomach or nausea as an 'urge to vomit from the stomach'). When there are no strong Body UTRs driving the behaviour of the model, these autonomic Drives will, even when extremely subtle, drive behaviour. As the agent brain learns, new, more complex and nuanced concepts will need to be acquired by the agent to ensure access to Satiation sources. These might attract (e.g. through Reward-based Backpropagation) nuanced and complex Emotion sets (combinations) that will determine the meaning/value these concepts have in the mind of the agent, and lead to 'common sense'.

Threading (Mind Wandering)

The Xzistor brain model uses a Threading mechanism akin to ‘mind wandering’ in the human brain to achieve some important effects. It is helpful to think about Threading as a mechanism whereby the brain model constantly wants to re-evoke Associations from the Association Database. This ‘compulsion’ of the model is resisted when urgent Drives (Body and Brain UTRs) need to be solved as a priority. When no urgent action is required to solve Drives (subjectively felt as Emotions), goal-based effector Motions will stop and Threading will start.

The way Threading works is that it starts with the current Anchor State and then searches the Association Database for Associations with closely correlating Anchor States – firstly re-evoking the ones with the highest correlation and Impact Factor, before searching deeper into the Association Database. The key attribute the next re-evoked Association shares with the input state Association could be its Anchor State or any representation (e.g. visual objects like faces, places, landscapes, artifacts, diagrams, images, words, distinct smells, specific sounds like melodies, or even a Drive, Reflex or Motions representation). Typically, the visual Sensory representation of each Anchor State belonging to the Association will be re-evoked and again 'seen' by the agent – in a manner comparable with Daydreaming in the human brain. These recalled flashed images can be displayed on a screen for the tutor to see when using digital Xzistor robots.

As with the human brain, the brain model will learn to use this random searching mechanism of the Association Database to solve problems. When an answer cannot immediately be found (e.g. the way to navigate to a food source), the agent will pause and allow this involuntary Threading process to start re-evoking Associations, except now it will 'direct' the process by constantly returning to the Anchor State that represents the Satiation Source (e.g. the Sensory representation of a hamburger). This will prevent the Threading process from continuing unhindered and will 'narrowly direct' it to only re-evoke Associations directly related/correlated to the problem (the Anchor State). As the Prime Drive repeatedly flashes this Anchor State before the executive part of the brain model (ensuring that 'directed' Threading is not allowed to lapse into 'undirected' Threading, i.e. mere Daydreaming), it will reset the Threading process back to search for a match with this Anchor State and keep it from wandering off topic. The hope is that searching the 'context' around a Satiation Source in this way could lead to an Association in the Association Database with helpful effector Motions that can aid in solving the problem.
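A toy sketch of this 'directed' Threading follows; the shared-attribute link rule, reset interval and data layout are assumptions for illustration only:

```python
import random

def thread(db, start, goal=None, steps=10, reset_every=3):
    """Hop between Associations that share attributes; when a `goal` Anchor
    State is set, periodically snap back to it so the search cannot drift."""
    current = start
    for step in range(steps):
        if goal is not None and step % reset_every == 0:
            current = goal                          # Prime Drive re-flashes the goal
        links = [a for a in db
                 if a is not current and a["attrs"] & current["attrs"]]
        if not links:
            break
        current = random.choice(links)              # one undirected hop
        yield current

db = [{"name": "hamburger", "attrs": {"food", "kitchen"}},
      {"name": "fridge",    "attrs": {"kitchen", "cold"}},
      {"name": "winter",    "attrs": {"cold", "snow"}}]
for assoc in thread(db, db[0], goal=db[0]):         # directed: stays near 'food'
    print(assoc["name"])
```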

Xzistor robots can be made to sleep and experience Sleep Dreaming, which is just Threading with effector Motions disabled – unless Sensory inputs above a certain threshold are experienced (e.g. loud noises, nudges, etc.). Again, these recalled flashed images during Sleep Dreaming can be displayed on a screen for the tutor to see when using digital Xzistor robots.

Fears

We have seen how Body and Brain UTR Drives can generate Deprivation singly or collectively. Associations that were stored with high DEP values will also generate DEP (albeit reduced) when they are re-evoked, mainly because of the autonomic Stress Drive (and in some applications a modelled Nausea Drive). DEP generated in this way will be referred to as 'Fear of' or just 'Fears'. We refer to the subjective Emotions generated based on the conditions of these Brain UTR Drives (DEP or SAT) as allostatic Emotions, but they can also just be viewed as homeostatic Emotions where the recalled Emotion representation also acts as the Control Variable for the homeostatic control loop. So the autonomic Stress Emotions can be triggered while the agent is experiencing intense Pain (as these are coupled), or when it recalls the Association formed during the painful episode.

Some Fears (by definition also subjective, i.e. somatotopically placed within the brain model's body map) can, when evoked, be stronger than the strongest Emotions from the current Body UTR Drive, which means that, as a Brain UTR Drive, the Fear will take priority over such a Body UTR Drive and its Emotions. One of the first steps the Xzistor brain model takes as it enters the logic loop cycle and re-evokes an Association is to check whether the subjective Fears are stronger than the Body UTR Drives. If so, behaviour will be prioritised by Fears and not Body UTR Drives. The Fears thus act like Drives in that they generate DEP when they are re-evoked. For example, when an agent navigates past a cactus it had previously bumped into, recognising the Association of the cactus will evoke such DEP that the agent's priority will temporarily change to moving away from the cactus, before continuing on its way. And as the agent moves away from the cactus, the reducing DEP will cause a Satiation Event, causing the agent to learn to navigate away from this 'avoidance' state, i.e. the optic representation of the cactus.
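This priority check can be sketched in a few lines; the drive names and values are purely illustrative:

```python
# Sketch of the Fear-priority check at the start of a cycle: if the DEP
# re-evoked from a stored Association exceeds the strongest Body UTR Drive,
# avoidance behaviour wins this cycle.
def select_priority(body_drives: dict, recalled_fear_dep: float):
    """Return which source of Deprivation should steer behaviour this cycle."""
    name, dep = max(body_drives.items(), key=lambda kv: kv[1])
    if recalled_fear_dep > dep:
        return ("avoid", recalled_fear_dep)   # Brain UTR (Fear) takes priority
    return (name, dep)                        # Body UTR Drive keeps priority

print(select_priority({"Hunger": 0.4, "Fatigue": 0.2}, recalled_fear_dep=0.7))
# ('avoid', 0.7): the remembered cactus outweighs moderate Hunger
```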

Instincts & Phobias

Any behaviours that can be achieved by learning can be pre-programmed into the model as 'instinctive' behaviours (merely a set of permanent read-only Associations, or preprogrammed Associations that can be modified through learning). Animals are born with a great number of very complex pre-programmed or 'instinctive' behaviours. The human brain has fewer pre-programmed behaviours and requires much more learning. The animal approach can provide an animal with advanced skills within minutes of being born, aimed at a very specific domain, but very little flexibility to survive in other domains. Humans, on the other hand, need a long time to learn about their environments, but can adapt to different environments by means of goal-based learning and by transferring skills learned in one domain to other domains. The human brain normally comes with some Phobias, which are just pre-programmed Fears (i.e. autonomic Stress Emotions representing Deprivation). These are simply Associations that are already in the brain (Association Database) at birth, which will generate DEP (often by using the FoF Reflex), normally when a specific Sensory state is experienced (e.g. loud noise, sharp pain). Examples: fear of heights, the dark, confined spaces, specific animals (e.g. spiders, snakes), etc.

Base Fears

The human brain quickly learns to fear the unknown. This happens because we learn that it is often in unknown environments that we encounter new threats, i.e. sources of DEP. We also learn to be apprehensive about people because, although people can be a source of SAT, they can also be a major source of DEP. Later we learn about our own mortality and understand there are permanent risks to keeping all our physiological and mental needs satisfied (the actual reason why we fear death). These risks could come from natural disasters, personal injuries, security of income, losing one's shelter (house), health, the suffering of friends and family, public embarrassment, spousal rejection, violent crime and victimisation in the workplace, and a myriad of other modes of misfortune. Even just balancing our bodies involves the fear of falling down. These current fears, and emergent anticipatory anxiety (fear) over the future, become a permanent DEP overhead we carry and constantly generate a 'mood level', mainly as a result of the FoF Reflex being triggered. We learn behaviours to minimise these Base Fears by e.g. focusing the mind on other issues (like playing games/sports), drinking alcohol, listening to music, and escapism like reading books or watching movies. It is very difficult to completely reduce the DEP associated with these fears, and when we get close to a state where we are able to Satiate all physical and mental needs – we call it Euphoria. This can typically be achieved during sexual orgasm or when taking drugs like heroin, morphine and fentanyl. This extreme SAT is achieved when the total Drive Dtot is forced to drop low into the Base Fear regime – temporarily unburdening the brain of these Base Fears.

Body State Override Reflex

The Body State Override Reflex is a mechanism extensively elaborated on in the early Xzistor brain model provisional patent specifications (2002, 2003). The fact that all the motivations and behaviours originate from homeostatic and allostatic control mechanisms creates an opportunity for the brain to 'interfere' with the manner in which these mechanisms process input and output signals. Instead of these mechanisms acting on Variable and Control Variable readings, the brain can override these readings to artificially manipulate the Drives that lead to Emotions. For instance, without Thirst being present in the brain, the brain can artificially create a sense of Thirst when food is being ingested. This is indeed what happens in the biological brain (this encourages sufficient fluid intake while food is being ingested). But this principle can be extended to intervention in all the circuits that create Deprivation or Satiation Emotions from all Drives – leading to sets of artificially induced Emotions that can serve many purposes. For instance, the brain can enhance the Satiation Emotion experienced when eating by artificially triggering Satiation across many other Emotions – other than Hunger Satiation – creating moments of intense Satiation that will be 'subjectively' experienced as highly pleasurable. This could be compared to the limbic system in the mammalian brain, known for its ability to create strong feelings of pleasure (Satiation). This has only been tested in early Xzistor simulations, but will form part of a complete brain model in future Xzistor humanoid robots.

Recognition & ‘Gut Feel’

When we enter a new situation, we will have new Sensory inputs flooding into the brain (e.g. optical states, audio states, tactile states, olfactory states, etc.). Each input state could have its own historic set of Associations it is part of, i.e. the input state forms part of numerous Anchor States and their related Associations. The autonomic SAT and DEP values of all these Associations (for each input state) will be averaged, and again averaged across all input states, to re-evoke an overall autonomic stress emotion (resultant autonomic SAT and DEP values). This will be the immediate resultant emotion we experience when faced with a new situation, even before we have had time to understand the context around it.

If the situation has never before been experienced (no corresponding Anchor States), the same thing will happen but the brain will use the ‘closest correlating’ Associations it can find instead of Associations with precise Anchor State matches.

This enables the brain to quickly judge if a new situation is essentially ‘good’ (approach) or ‘bad’ (avoid).

The brain will immediately continue to re-evoke more Associations based on the current situation, proposing to the brain what effector motions to perform (even if that is to perform no motions). If this initial context, along with the resultant emotion, is generated from matching Associations (experienced before), we will refer to this as Recognition. If it is generated from correlating Associations (never experienced before), we will refer to it as a 'Gut Feel'.

In terms of the brain model, we define the initial context and the resultant emotion as giving ‘meaning’ to a situation.
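A compact sketch of this two-stage averaging, assuming each input state indexes a list of historic Associations with stored SAT/DEP values (the data layout is an assumption for illustration):

```python
from statistics import mean

def appraise(input_states, assoc_index):
    """Return (sat, dep) for a new situation: average over each input state's
    Associations, then average across all input states."""
    per_state = []
    for state in input_states:
        assocs = assoc_index.get(state, [])
        if assocs:
            per_state.append((mean(a["sat"] for a in assocs),
                              mean(a["dep"] for a in assocs)))
    if not per_state:
        return (0.0, 0.0)                       # nothing recognised at all
    return (mean(s for s, _ in per_state), mean(d for _, d in per_state))

index = {"dark_alley": [{"sat": 0.0, "dep": 0.8}],
         "music":      [{"sat": 0.7, "dep": 0.1}]}
print(appraise(["dark_alley", "music"], index))   # (0.35, 0.45) -> lean 'avoid'
```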

Day Dreaming

When the brain has no Body or Brain UTR Drive Emotions to urgently attend to, the Threading mechanism will uninterruptedly re-evoke Associations. As the Sensory input states from the environment cause changes to the Anchor States, they will trigger new Association searches which will lead to new 'Threads'. Autonomic SAT and DEP will be re-evoked along with each Association, as well as Reflexes (e.g. slight FoF Reflex activity). No 'learned' Motions will be executed during this phase because no Drive will require to be Satiated, and the Drive ID will thus not form part of the Anchor State. This phenomenon will be referred to as Day Dreaming. Day Dreaming (by virtue of Threading) will occasionally Thread onto an Association which reminds the brain of something it needs to do, e.g. buy some ingredient to cook a dish. The brain could then re-evoke autonomic stress around the fear of forgetting to buy the ingredient, and this might rise to a level where it interrupts the Day Dreaming to undertake the shopping task (to Satiate the autonomic stress). If the brain is too focused on urgent activities to Day Dream, it can easily forget things. It is when the brain is free of obligations to undertake urgent tasks that this Threading reminds it of what other things it was supposed to do but might be forgetting at the moment.

Thinking

We learn that Threading can sometimes provide answers, or clues as to what action the brain should take to solve problems. This happens by re-evoking Associations of which the Motions (i.e. stored effector actions) can remedy a new problem. We also learn that Threading can be 'directed', i.e. influenced to be more effective as a problem-solving tool by performing certain actions while it is taking place. We learn that the probability of finding a helpful answer from Threading is enhanced if we:

  1. Look at the objects involved in the problem.
  2. Avoid thinking about other things.
  3. Look at details of the objects.
  4. Get different views of the objects.
  5. Touch the objects.
  6. Ignore sensory distractions from the environment.
  7. Ask questions about the object.
  8. Follow learned problem solving techniques.
  9. Look in guides/manuals.
  10. Ask clever people for solutions.
  11. Look on the Internet for information.

The brain automatically increases the efficacy of the Threading process during problem solving. It determines how urgent solving the problem is based on the current Prime Drive level, because the level of Deprivation, i.e. the 'urgency to restore', is contained in the Emotion representation. It 'focuses' the mind by forcing the Threading mechanism to Thread from its current input states (Anchor States), using these as filter criteria, and not to Thread for so long as to completely digress from the current filter criteria. It thereby allows only Associations close to the current problem (input states) to be considered. The more urgent the problem-solving effort becomes (and the higher the Prime Drive Deprivation), the shorter the periods the Threading mechanism will be allowed to search the Association Database before being returned to the current input states. This will cause the brain to narrow its search, i.e. improve its 'focus' or 'concentration'. The brain function of steering the Threading mechanism so as to find appropriate solutions to problems, we will refer to as Thinking.

Individuals with a natural tendency to digress faster from the search topic we can call lateral thinkers. Individuals with a natural tendency to stay close to the search topic we can call logical thinkers. The lateral thinker’s solution might be more creative, but more unproven, whilst the logical thinker’s solution might be less novel, but more conservative.

The model uses the simple relation:

Focus ∝ DEP

and defines Focus to have a value between 0 (no Focus) and 1 (100% Focus), where:

for DEP = 0, Focus = 0, and

for DEP = 1, Focus = 1.
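As a sketch of how this relation could steer the Threading 'leash', assuming a simple linear mapping (the hop counts are illustrative assumptions):

```python
def focus_level(dep: float) -> float:
    """Focus is proportional to DEP, clamped to [0, 1]."""
    return max(0.0, min(1.0, dep))

def thread_leash(dep: float, max_hops: int = 12) -> int:
    """Allowed Threading hops before snapping back to the problem Anchor State."""
    return max(1, round(max_hops * (1.0 - focus_level(dep))))

for dep in (0.0, 0.5, 1.0):
    print(dep, thread_leash(dep))   # 0.0 -> 12 hops, 0.5 -> 6, 1.0 -> 1 (tight focus)
```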

Sleep Dreaming

When we sleep, sleep fatigue Reflexes will shut down limb movement and close the eyes. Because Threading is still constantly trying to re-evoke Associations, this will continue during sleep. Thinking can also occur during sleep if a problem occurs as part of a re-evoked Association or train of Associations (episode). Due to the basic Threading rules, we find that dreams are often related to 'vivid' Associations, i.e. high DEP or SAT, recent and/or often-repeated events (meaning Associations with a high Impact Factor). Strong inherent Fears (DEP) will encourage episodes to be constructed from parts of Associations that represent these Fears. This will lead to visual images being re-evoked that are often referred to as metaphors of these contextualised Fears. Transient observations forming part of one Association can influence parts of another Association whilst dreaming, contorting and modifying it, creating new states never actually experienced before, e.g. if we saw a lion charging in the zoo, we may dream of a horse charging at us. Here the optical transition of the animal rapidly growing in size as it approached becomes an attribute that is applied to an Anchor State object from another Association.

Playing & Humor

When animals are born, it is with a resident set of generic instincts (i.e. pre-programmed read-only Associations). Playing is a Reflex to encourage animals to enhance these instincts with environment-specific or ‘detail’ Associations. Humans play for another reason. When the brain is not preoccupied with strong Drives or Fears, basically all that remain are some low Drives below the Detection Threshold and the Base Fears. The human is still obsessed with finding SAT, even under these conditions. The brain learns that certain events can make the low Drives and Base Fears decrease to cause SAT. More importantly, the brain learns that some activities will bring a temporary increase in Base Fears (normally by means of the FoF Reflex) and then lead to an immediate release i.e. SAT.

By doing things that slightly increase DEP, SAT can be achieved. This is why human games always involve some tension build-up, followed by sudden SAT. Humor also subtly lifts the Base Fears by creating some mental dissonance, or expectation (e.g. related to the fear of the unknown), then uses a punch line to release the tension and produce SAT. Sport is also just a search for SAT, using the above mechanism, but enhanced with DEP from the Aggression Drive, where SAT is associated with violent physical action aimed at beating an adversary. Over the years sport has become less brutal, but we still talk about 'beating' the enemy.

Perhaps the most subtle of these 'entertainment type' activities are the things the brain does for intellectual stimulation or out of curiosity, where the slight fear of the unknown encourages the brain to seek the truth. These 'answers' or insights become 'minor victories' over the fear of the unknown and provide satisfaction or SAT. Even just watching TV forces the brain to escape reality (Base Fears) and also makes the brain witness drama, excitement (e.g. sport, competitions, adventure), intrigue and insight, all creating SAT. The essence of entertainment is thus repeatedly creating mild DEP, followed by SAT.

Balancing & Walking

Balancing and walking are achieved in an interesting manner by the model. First the agent learns not to fall down because this is painful. It learns to avoid the pain associated with falling through Reward-based Backpropagation – meaning it slowly learns that keeping its balance and not falling down will Satiate the fear of pain from falling down. It also learns to stand up straight because other poses are too strenuous, causing fatigue (i.e. muscle pain).

Lastly it learns to walk as a means to locate SAT sources in its environment by means of ‘reinforcement learning’.

Coordination & Optimisation

To have the agent optimize its own efforts, a Fatigue Drive can be introduced. This Drive will create DEP as a function of personal effort e.g. power (which is energy/time).

It can co-generate Pain. So the agent will ‘suffer’ DEP from exerting power and even feel Pain (analogous to muscle pains).

To find SAT associated with this Drive, the brain will learn behaviours to optimise its efforts in terms of DEP, i.e. take short cuts, avoid steep inclines, even avoid navigational routes past objects that cause Fear, and choose navigation routes past objects creating SAT, e.g. music or landscapes where optic images are associated with SAT.
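A small sketch of such a Fatigue Drive, with DEP as a function of power (energy/time); the gain constant and route data are illustrative assumptions:

```python
def fatigue_dep(energy_joules: float, seconds: float, gain: float = 0.01) -> float:
    """DEP grows with average power (energy/time), clamped to [0, 1]."""
    power = energy_joules / max(seconds, 1e-6)
    return min(1.0, gain * power)

def route_cost(route):
    """Total Fatigue DEP of a route; the cheaper route attracts reinforcement."""
    return sum(fatigue_dep(e, t) for e, t in route)

flat  = [(50, 10), (50, 10)]         # (energy, time) per segment
steep = [(200, 10), (200, 10)]
print(route_cost(flat), route_cost(steep))   # 0.1 vs 0.4: prefer the flat route
```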

Language Acquisition

The model allows an agent to develop language (verbal) skills in exactly the same fashion it learns motion (non-verbal) skills. As such, the model views language simply as muscular motion sequences learnt to elicit SAT from the environment. The agent will optimise the syntax of a specific lexicon during communication to minimise the time it waits for SAT. The agent might well learn that adopting the terminology of an existing language is the easiest way to communicate with other agents, which could act as SAT sources.

Note: A research paper is currently in production at the Xzistor LAB defining a project to demonstrate language acquisition in Xzistor agents.

Higher Tier Thinking

The model requires no predetermined higher tiers or subsumption layers (see Rodney Brooks); 'abstract' or higher-level concepts simply get learnt when the paths to SAT get more complicated and present sub-goals (achieved through subtasks) which must be achieved before the actual SAT can be experienced. An agent can, for instance, be forced to learn that it must use a specific technique to solve a certain problem, and part of using that technique means learning some abstract concepts.

Comprehending these concepts just becomes a set of sub-goals on the agent's path to finding SAT. Of course it might ask many questions along the way and need painstaking training to get there, but it must be remembered that even if the Xzistor Concept were a 100% accurate brain model, it would still take the agent many years of training, including mastering a language, before it could use abstract concepts in its thinking. It thus requires time and reinforcement learning (B.F. Skinner), rather than predetermined 'man-made' structures or layers (Noam Chomsky).

MODEL DIAGRAM ON A SINGLE A4 PAGE

(Yes – a single-page diagram of the Xzistor logic loop is possible. Blurred because it is in the process of being improved!)

Summary Bio – Rocco Van Schalkwyk

For those of you who want me to participate in funded research projects, please see the 1-page summary bio below:

Bio – Rocco Van Schalkwyk (Xzistor LAB, UK)

Rocco Van Schalkwyk (B.Eng[Mech], M.Eng[Mech]) is a mechanical engineer with a career spanning over 30 years in the naval/marine, aerospace/defence, nuclear and robotics industries. Currently the Safety Engineering Lead at Ocean Infinity Ltd, his remit includes safety across the company’s robotic ship and autonomous underwater vehicle engineering programmes.

Rocco has a personal interest in humanoid robotics and in 1993 started developing a functional brain model from first principles. The Xzistor Concept Mathematical Model of Mind (‘Xzistor brain model’) was provisionally patented in 2002 (South Africa) and supported by a simulation of a single agent in a learning confine, demonstrating the model’s ability to provide a ‘learning agent’ with artificial emotions and cognition. Although a very basic simulation (C++ and OpenGL), it proved the functional model under dynamic conditions and showed that there were no exceptions to the model’s generality.

After presenting his paper “Emotion Modelling for Robots” at the IEEE Africon Conference in 2007, Rocco was invited to demonstrate the Xzistor brain model at the Frankfurt University of Applied Sciences where he illustrated how the simulation can quickly be altered to represent a Fraunhofer Volksbot robotic platform (and its sensors) driven by an Xzistor ‘artificial brain’.

In 2011 Rocco designed and built a physical robot resembling the simulated agent in a ‘learning confine’. The robot was deliberately kept simple – Java program (PC-based) to Java Virtual Machine (robot-based) with HD video (slipring umbilical), WiFi and Bluetooth connection. This proved that the simulation could be ported to hardware robots.

After making videos of numerous ‘test cases’ using the physical robot, Rocco started making information available on the Xzistor brain model in the public domain. He also published two short books on Amazon ‘Understanding Intelligence’ and ‘Understanding Emotions’ based on this functional ‘substrate-independent’ model. A neuroscientist, Dr. Denise Cook (PhD), working on similar approaches in Canada, did an in-depth study of the model and started collaborating with Rocco on ‘locating’ the areas in the biological brain providing the functional mechanisms proposed by the Xzistor brain model. She also started a YouTube Channel discussing insights from the model called ‘Conversations on the Mind’.

In 2022 Rocco published a preprint 'The Xzistor Concept: a functional brain model to solve Artificial General Intelligence' to bring attention to the potential of this model, not just as a way to understand the brain, but also as a way to provide a unified platform to bring together related brain research into a high-level functional brain model that can be used to provide agents with emotions and cognition.

Rocco and his team of independent researchers at the Xzistor LAB are now building a new agent (robot/simulation combo) referred to as a ‘Language Learning Infant Agent’.

His new paper, 'Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind' (1 Jul 2024), written with neurolinguist Alireza Dehbosorgi, provides a theoretical basis for how Xzistor agents can develop a language learning capability using artificial emotions as defined by the Xzistor Mathematical Model of Mind. A multi-stage project is proposed to demonstrate how an Xzistor agent will develop a language skill like an infant and then refine this skill towards improved syntax and grammar use with further reinforcement learning. The paper provides two appendices covering the mathematical principles of the Xzistor brain model and an explanation of how it could potentially unify behaviorist and structuralist language theories.

Find the new paper (preprint on ResearchGate) here:

Artificial Agent Language Development based on the Xzistor Mathematical Model of Mind

[2] Can A Robot Experience Reality Like Me?

An informal discussion with those who have questions about robot brains I think I can answer – else expect a reply like ‘Very busy at work at the moment!’ 

It all started on Twitter when I (alias @xzistor) said:

Understanding that machines can subjectively experience emotions requires a mental journey. Study the scientific explanation I offer very carefully. Next realize that there are other ways to provide the same bio functions necessary and sufficient to generate emotions. Be brave!

By the way, my name is Rocco Van Schalkwyk (just Rocco) from the Xzistor LAB. I am an engineer working in the field of marine/subsea robotics and autonomous ships – and do brain modeling as my hobby. I have a few collaborators who are PhD level specialists in different fields including neuroscience and brain modelling. I will reach out to them when the questions get too hard for me to answer!

The above was my response to @rome_viharo from Big Mother who said he wanted to understand the Xzistor Concept brain model.

I said he must be brave – it could be a lonely journey.

He said he’ll be brave. Brrr…he is still skeptical, so I will have to come up with something new if I want to smash the ‘hard problem’. Think I have something for him! But ready to hear his arguments too!

All that happens in the Xzistor brain model can be understood with the help of a few basic terms, principles and concepts. Easiest is to read my two short (free) guides (absolutely necessary to proceed on the ‘…not so hard journey…’).

Understanding Emotions

Understanding Intelligence

Without reading the above two guides it will be hard to proceed…really…as they introduce some key definitions and principles.

“The problem I have to solve is to provide scientific evidence that will convince @rome_viharo and others that the functions in the human brain can be simplified and generated by other means, and that this can over time, with sufficient learning, be sufficient to develop a sense of subjective reality in the mind of a robot that is principally no different from what is experienced by humans.”

Nice and easy. Let’s go!

Question 1 by @rome_viharo:

If Bibo has feelings as you say, this means Bibo should be able to experience ‘ideas’…how then?

Xzistor robots’ brains are designed to wander all the time (just like humans). Their brains are restrained (focused) from this default tendency to wander by priority actions required to address needs (just like humans). The Xzistor model prescribes a clear mathematical algorithm for how and when this mind wandering takes place – called Threading (there is a ‘threading’ process in the human brain as well – when the mind is free-wheeling with no urgent actions to perform). This Threading process recalls Associations (memories) from the Association Database one after the other based on shared attributes. It often starts with what is observed through the senses in the current environment and then follows a process of re-evoking Associations related to this first input based on shared artifacts. The algorithm will now identify and re-evoke Associations from anywhere in the Association Database as long as each next Association that is re-generated has a link to the previous one based on the Xzistor logical Threading protocol (based on shared attributes/artifacts). Most of the time each new Association called up will re-evoke visual imagery that was stored when the Association was formed (pictures from memory, like when the human brain daydreams). In Xzistor robots these visual images re-evoked from memory will not be the same as direct optical sensing, as they will be lower resolution and retinally diffused – meaning what was not close to the centre of the retina will be out of focus. The Xzistor robot will learn that these incomplete (fragmentary/diffuse) visual recollections with retinal blur come from memory and are not real-time input from the environment (again – just like humans!).

Although this mind wandering that happens naturally in the human brain (and Xzistor robots) can provide a great daydreaming experience, we also learn that we can use this Threading process to solve problems. We learn that sometimes these ‘random’ Associations that our brains come up with can suddenly present us with insight into how to solve a problem. Often this happens because an Association was recalled containing a ‘principle’ which was applied to solve a problem in another domain that is not completely unlike the problem we are trying to solve now.

Listen to Ben here: “I was sitting at the coffee shop today thinking about having to mow the lawn and I suddenly got this idea! I thought what if I use the same control logic – the same programmable schema – that I built into my autonomous vacuum cleaner, and put it into my electric lawnmower! Will it control itself and mow the lawn and move back to the charging station by itself?”

While sitting at the coffee shop, Ben’s mind was casually wandering through thoughts from things he had experienced in the past – only occasionally changing tack when he noticed something in the environment or heard something. Then he would simply carry on Threading again without any urgent actions to attend to.

A sudden problem then entered Ben’s brain as he remembered he needed to go and mow his (large) lawn – an arduous and time-consuming task. Because the problem was suddenly centre in his mind, it was being repeatedly presented to the ‘executive part’ of the brain driven by a slight level of stress (the fear of exertion and not being able to watch football). This stress state (sympathetic nervous system generating small amounts of adrenaline) served to focus the brain and prevented it from wandering off – a process we call ‘directed Threading’ or Thinking. Since the stress state is an emotional avoidance state, it kept on focussing the mind on the ‘problem’ rather than allowing it to wander through the Association Database in an unhindered fashion.

Ben has moved from daydreaming, to problem solving.

So, where did Ben’s ‘idea’ come from?

He was starting to explore thoughts like: ‘Do I really want to push that lawnmower? Do I really have to be there all the time to control it?’. And Ben’s brain changed from casual mind wandering to narrowing its search for new Associations to those that had similar attributes to the lawnmower he was thinking about – for instance to Associations that involved similar devices that needed control inputs from humans, and how this could be avoided. These thoughts were conceptually close enough to Ben’s autonomous vacuum cleaner, which required no inputs from humans.

Here we see that the Association of the vacuum cleaner came not only with the visual imagery of the vacuum cleaner – but also with the attribute of ‘autonomous operation’ – which linked further to its programmable logic schema. We can now deem this programmable logic schema to be the ‘idea’ that the brain came up with (from memory) to help Ben solve his problem.

So Ben had a problem and his brain led him into a problem-solving modality where his persistent concerns (fears) about excessive exertion and missing the football managed to narrow (direct) the mind wandering process in his brain. This meant that only Associations sharing similar attributes to the ‘problem concept’ were extracted from memory and placed before the ‘executive part’ of the brain to help it find a solution. And bingo!

When the directed Threading process provides an Association with attributes (like a programmable logic schema) that could potentially solve a problem – we can call this an ‘idea’.

Xzistor robots solve problems by using ‘ideas’ they get from Associations as explained above. But when they have no direct knowledge (experience) available in their Association Database (memory), the robot will have to stop and Think – it will start to perform directed Threading and look for relevant Associations that might provide ‘ideas’ to solve the problem. Xzistor robots make use of inductive inference and will try these ‘ideas’ even if they are wrong because they have nothing else to go on.

Early life Xzistor robots will use simple ‘ideas’ from memory to find simple solutions e.g. cannot see red apple, will try green apple (and see what happens).  Later life Xzistor robots, given enough resources to learn, will theoretically develop on a trajectory similar to humans and later extract ‘ideas’ from Associations like – recipes, rules, procedures, methodologies, techniques, flow diagrams, schemas, etc. as all of these will become known as potential aids – ‘ideas’ basically – when trying to solve specific problems (just like humans!)

Listen also to what I say in this interview with Dr. Denise Cook (@personalitygeni) about how ideas are used by Xzistor robots – go to 1:07:44 – 1:09:00 in the interview:

Question 2 by @rome_viharo:

Only if we are defining self-awareness as identical to a thermostat having awareness would this make sense?

I hope the explanation above shows how the Xzistor model provides functions similar to the human brain well beyond the limited functions of a thermostat.

Question 3 by @rome_viharo:

I am very much of the school that the hard problem is a very real hard problem, but I have my own reasons on why the problem should be predicted to be really really hard, so I am really going to reserve my skepticism pretty strong on this one with you, which is tough because I love everything else about what you have done!

Let me present my case (supported by neuroscientists at PhD level) and then I will be equally open-minded to hear your arguments.

Comment 4 by @rome_viharo:

It is very beautiful so far, I think you have genuinely created a whole system here that could model fluidity of mind in terms of behavior, so that is why you being “wrong or right” about view of mind/feelings would not be relevant to me.

Great – thank you. I dedicated my life – every grown-up waking hour – to this understanding of the brain. But don’t feel sorry for me because I could not have had a better life and my model so clearly shows that humans only ever do things for Satiation! So it was, and still is, very much an adventure for me offering lots of intellectual stimulation and fun!

Question 5 by @rome_viharo:

But is Bibo really having an experience?

We have spoken about what Bibo can do once there are some Associations in his mind, e.g. daydreaming (Threading) and Thinking (directed Threading) in order to solve problems. But how do these Associations end up in the Association Database along with all the appropriate and useful attributes like emotion values (+ or -), emotional saliency, sensory states, and other artifacts? It is through experiences that the brain learns – from the first, simplest experiences to the more complex and nuanced experiences later in its life. Again, the way the robot experiences life is ‘principally’ no different from humans – it has all the emotions (also those from ‘recognising’ and recalling emotional events/objects from the past) and the robot will store Associations in real time as it experiences its ‘life’ – automatically giving more prominence to those Associations with higher emotion values/saliency. It is difficult to prove that a robot’s subjective experiences are the same as a human’s, just like it is difficult to prove that one person’s subjective experience is the same as another person’s (how would you measure this and put irrefutable evidence on the table?). But what becomes obvious as you get more familiar with the Xzistor brain model and the way it uses a simple set of functions to grow an incredibly rich experience basis – is that there is no reason why it would not reach a stage where it has developed speech and can speak the words: “I feel hungry, I feel cold, I feel happy, I feel sad, I hate exercise, I love my tutor, etc.”

And now one can turn the ‘hard’ challenge around and ask: based on this mathematically predictable trajectory towards complexity and the inclusion of synthetically constructed subjective emotional states, why would one argue that this ‘reality, this set of experiences’ will feel different to an Xzistor robot from the way a human would experience similar subjective states?

When an Xzistor robot develops to the point where it can eventually use language, so that it can self-report that it ‘feels’ cold, it means it has learnt to associate the word ‘cold’ with a sensory representation of a homeostatic deficit state – just like humans would. So what disqualifies the robot’s subjective experience from being just as subjective as a human’s experience? We must be careful not to be so in awe of our own human brain’s complexity, and the rich way in which we experience our own subjective reality, that we make a ‘hard’ decision that robots cannot experience similar subjective states. The human brain is a complex multi-variable adaptive control system that generates the functional states necessary and sufficient to make it experience all the effects that make up a human’s subjective sense of reality. The Xzistor robot brain is a complex multi-variable adaptive control system that generates ‘functionally’ similar (but simplified) states to those in the human cognitive brain to create the robot’s subjective sense of reality. On what basis can we say the effects created by these functionally similar systems will not ‘principally’ be the same?

Question 6 by @rome_viharo:

I’m not sure neurochemistry itself can account for experience at this stage so when I see you using language like “sending signals” to the hunger area it just makes me ask the question, okay, how does sending signals to the hunger area generate the experience of hunger instead of the behavior or the reaction of hunger?

This is one of the biological brain’s basic systems modelled by the Xzistor Concept. By now you will hopefully be familiar with the Body and Brain UTRs defined by the model. These are homeostasis/allostasis mechanisms that send information about their deficit and recovery levels to sensory areas, where sensory representations are created that also contain information about the extent to which these UTR states have been biased (through operant learning) into avoidance (bad) or pursue (good) states. These sensory states can now be presented to the ‘executive part’ of the brain in a common ‘emotion’ format so that this central assessment/relay complex can use them to select the appropriate action the brain should take to solve Hunger (using additional information from e.g. Brain UTR emotion states coming from the Association Database).

For Hunger to be truly experienced subjectively as a ‘feeling’, there are a few necessary conditions that must be met:

  1. A macro-Body Hunger UTR state must be generated that represents the aggregate of all homeostatic micro-Body Hunger UTR states (salt, sugar, sour, carbo, spicy, umami, etc.) contributing to the overall Hunger state.
  2. A Body UTR state creating a ‘stress’ state (modelled on the sympathetic/parasympathetic nervous system) that is parasitically generated by the macro-Body Hunger UTR state.
  3. The macro-Body Hunger UTR state will contain the information around the Hunger level that needs to be made available to the brain to solve for Hunger.
  4. The ‘stress’ Body UTR state (modelled on the sympathetic/parasympathetic nervous system) will also ensure that an allostatic ‘fear of’ Hunger stress state (also an avoidance state) will in future be generated when recalling Associations about feeling Hungry.
  5. To ‘feel’ Hunger the above macro-Body Hunger UTR state needs to be turned into a sensory state (e.g. a pseudo-sensory tactile state in S1 or insula) associated with areas inside the body like the gut or the trunk. It has now been turned into an emotion.
  6. To ‘feel’ stressed about Hunger the above ‘stress’ Body UTR state needs to be turned into a sensory state (e.g. a pseudo-sensory tactile state in S1 or insula) associated with areas inside the body like the gut or the trunk. It has now been turned into an emotion.
  7. The ‘sensory’ state based on the macro-Body Hunger UTR state and felt in the body as a good(pursue) or bad(avoid) ‘emotion’ will represent the Hunger condition, the intensity level and if the level is increasing or decreasing.
  8. The ‘sensory’ state based on the ‘stress’ Body UTR and felt in the body as a good(pursue) or bad(avoid) ‘emotion’ will represent the ‘stress’ level associated with the Hunger condition, the stress intensity level and if the stress level is increasing or decreasing.

The brain will now have an ‘emotion’ (felt in the gut/trunk body areas) state that in addition contains all the information required to confirm it is a Hunger state, the level of the Hunger state and if it is in Deprivation or Satiation, if it should be avoided or pursued and to what level. Parasitically the brain will at the same time experience an ‘emotion’ (felt in the gut/trunk body areas) state that in addition contains all the information required to confirm it is a ‘stress’ state, the level of the ‘stress’, if it is in Deprivation or Satiation and if it should be avoided or pursued and to what level.

In short, the brain will now feel Hunger as if coming from inside the body along with a stress state also felt in the body, and it will feel a conditioned compulsion to take action to avoid these states.
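To make conditions 1 to 8 above concrete, here is a minimal sketch of the pipeline from micro Hunger states to body-located ‘emotion’ states; all names, weights and the parasitic stress factor are illustrative assumptions, not the model’s actual parameters:

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    kind: str        # 'Hunger' or 'stress'
    level: float     # intensity, 0..1
    rising: bool     # is the level increasing?
    valence: str     # 'avoid' (DEP) or 'pursue' (SAT)
    body_area: str   # somatotopic placement, e.g. 'gut/trunk'

def hunger_emotions(micro: dict, previous_level: float) -> list[Emotion]:
    macro = sum(micro.values()) / len(micro)     # condition 1: aggregate micro states
    stress = 0.5 * macro                         # condition 2: parasitic 'stress' UTR
    rising = macro > previous_level
    return [Emotion("Hunger", macro, rising, "avoid", "gut/trunk"),   # conditions 5, 7
            Emotion("stress", stress, rising, "avoid", "gut/trunk")]  # conditions 6, 8

print(hunger_emotions({"salt": 0.6, "sugar": 0.8, "umami": 0.4}, previous_level=0.5))
```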

Interesting – in future when the brain is not Hungry but thinking back to a Hunger episode, the brain will not re-evoke Hunger based on only these thoughts, but it will re-evoke the ‘stress’ state. This will teach the brain to start looking for food long before it gets Hungry. Clever!

To solve for Hunger, the brain can now use the information contained in the above ‘emotions’ and approach the Association Database with the request to see if Hunger Satiation sources can be sensed (‘recognised’) in the environment, or if any object in the sensed environment forms part of navigation cues leading to a Hunger Satiation source.

If the brain has had enough time to learn a language, it might also have learned to utter words to a person/robot it can observe such as: ‘I feel Hungry!’ This voice (effector) skill would have been learnt during a past Hunger solving event where the brain was rewarded with food after uttering the correct words to a tutor during training.

For an Xzistor robot to be able to self-report a subjective Hunger state, all that is required are the conditions listed above to experience an internal Hunger state and enough operant learning to eventually link the internal Hunger states (and emotions) with the phrase: ‘I feel Hungry!’  

(Pssst….no different in humans.)

Question 7 by @rome_viharo:

What happened to Bibo’s point of view or sense of somatics in between? If not touching or feeling anything, if neither too far from “food” or too close, neither satiated nor hungry?

What is Bibo’s “ground state” of being like?

When no sensory stimuli are grabbing Bibo’s attention and no ‘emotion’ is demanding Satiation, Bibo’s life (and actions) will still revolve around Satiation. In order to find Satiation when there are no emotions to Satiate, Bibo will learn that Satiation can be created artificially. He will start to look for opportunities to create small amounts of stress that can be Satiated. For instance, Bibo will start to explore unknown areas of his confine (slight fear of the unknown) and enjoy coming away from it unscathed, knowing that it holds no threats. Or Bibo can start to play games that generate artificial tension that can be relieved. If there are other sources of entertainment or humor that will create Deprivation-to-Satiation undulations, Bibo will go for it.

If Bibo gets tired of playing exciting games, the fatigue emotion will go ‘active’ and Bibo will find Satiation from resting – this is when he could easily slip into daydreaming (Threading) or even ‘directed’ Threading (like Ben above) if he suddenly thinks of some problem he wants to solve in order to remove it as a source of Deprivation (e.g. fear) – something he needs to be worried about for the future. A mental solution to this problem will also create Satiation (based on an approach where the calming effect of the parasympathetic nervous system is modelled).

Next questions will be answered soon!

[1] Can A Robot Experience Reality Like Me?

An informal discussion with those who have questions about robot brains I think I can answer – else expect a reply like ‘Very busy at work at the moment!’ 😄

It all started on Twitter when I (alias @xzistor) said:

Understanding that machines can subjectively experience emotions requires a mental journey. Study the scientific explanation I offer very carefully. Next realize that there are other ways to provide the same bio functions necessary and sufficient to generate emotions. Be brave!😄

By the way, my name is Rocco Van Schalkwyk (just Rocco) from the Xzistor LAB. I am an engineer working in the field of marine/subsea robotics and autonomous ships – and do brain modeling as my hobby. I have a few collaborators who are PhD level specialists in different fields including neuroscience and brain modelling. I will reach out to them when the questions get too hard for me to answer!

The above was my response to @rome_viharo from Big Mother who said he wanted to understand the Xzistor Concept brain model.

I said he must be brave – it could be a lonely journey.

He said he’ll be brave.

All that happens in the Xzistor brain model can be understood with the help of a few basic terms, principles and concepts. Easiest is to read these two short (free) guides of mine (absolutely necessary to proceed on the ‘…not so hard journey…’).

Understanding Emotions

Understanding Intelligence

Without reading the above two guides it will be hard to proceed…really…as they introduce some key definitions and principles.

I am an engineer. I am a scientist. I need a problem statement: “The problem I have to solve is to provide scientific evidence that will convince @rome_viharo and others that the functions in the human brain can be simplified and generated by other means, and that this can over time, with sufficient learning, be sufficient to develop a sense of subjective reality in the mind of a robot that is principally no different from what is experienced by humans.”

Nice and easy. Let’s go!

So, @rome_viharo went through my guide Understanding Emotions and came back with some really good questions. I will now address his questions. Hear me out!

PS. Just keep in mind that I completed all work on this model many tears ago (typo – but I’m gonna keep it! 🙂). I have finished it (patented) and I witnessed it work correctly in a demo simulation and a physical robot. I am just sharing now how this model explains the brain in simple terms – and writing about it…

Question 1 by @rome_viharo:

Quoting from Understanding Emotions (read it!): “What we can now do is to deliberately feed a special hunger signal from the Urgency To Restore mechanism for hunger in Bibo’s brain to this ‘intra-abdominal’ sensory area.”

Makes sense as a model, but how it becomes an experience or an actual “state of presence” producing a subjective experience (a dimension somewhere who knows where) as opposed to a (very) elegant model of one I still do not see. 

“Bibo will become aware of the sensory state in the ‘stomach’ or ‘intra-abdominal’ area of his brain, but it will have no meaning to him (i.e. he will not know whether it is a good or a bad thing).”

The hard part though is the “aware”, not the semantics. I could argue the entire affair has no meaning to Bibo, that part should not be surprising to learn.

How did Bibo become aware of  presence as a point of view of Bibo?

Answer 1 by @xzistor:

We must systematically build up the complete picture of what is required to experience an actual “state of presence” or a sense of reality in the way we as humans experience it. The mental mechanism described above (from Understanding Emotions) is just a small part of the total emotion and cognition machinery that is necessary to form a complete sense of reality – but not sufficient.

But, WOW! We are straight into the ‘hard’ problem here! Let’s go for it!

There is a part in your brain that will use all the incoming and available information to decide for you what you must do (no free will – sorry!). Let’s just call it the ‘executive part’ of the brain for now (neuroscientists often associate the basal ganglia with this functionality). The main thing is – we know incoming and available information (say from memory) are used to base your behaviors on.

How do we provide information to this ‘executive part’ of the brain? What format must it be in? What will it understand? How will it decide what is important and what not?

It was designed to understand certain signals and states created in the brain and presented to it. And we can now present a hunger signal to it in a format it will understand.

We need to tell the brain the body it controls is hungry – and that hunger is bad i.e. hunger must be avoided.

How do we tell the ‘executive part’ of the brain the body is hungry and hunger must be avoided?

Using markers from the bloodstream, gut, etc. we create a representation (a state) in the brain that the ‘executive part’ of the brain can interpret. Firstly, the signals flood into the hunger center(s!) and set up a spatiotemporal state within the neural network (and supporting circuitry) that is uniquely associated with that type of hunger, and also with the level of hunger. (By the way, it is fun to read the neuroscientific literature on where such a hunger center, or centers!, might be located in the biological brain. It exists! It is real!)

This hunger state that has just taken shape in the hunger center (this could be in any cortical area(s) of the brain) will now make contact with the ‘executive part’ of the brain and present itself. In itself this state does not have any idea what it is or what it represents – but the ‘executive part’ of the brain was designed to work with information presented to it in this format based on hunger signals from the body. The ‘executive part’ of the brain is now ready to process this information, decide how it compares with other incoming sensory states (and recalled states) and whether it is perhaps the strongest state and therefore needs to be prioritized.

If this hunger state has a high activation level and trumps the other states flooding into the ‘executive part’ of the brain, the ‘executive part’ will use this ‘high-activation’ hunger representation to approach the memory and see if it can unlock some previous learning that could contain cues (associations) as to what would be the best thing to do. This search will be augmented by information about the environment based on all the other sensory inputs at the time. This will help to isolate the most appropriate and effective actions from prior learning.

Now it gets interesting!

This ‘most appropriate’ cue or association coming from memory need not be just any cue – it can and should be one that was formed during an event where the brain learned what actions to perform to solve the hunger problem (preferably in the current environment). How did the brain learn to solve the hunger problem? Yip – during a learning event where it found food (either by itself or with some help from someone!) the brain is pre-programmed to reward the actions that led to the discovery of the food by storing these actions to memory – effectively telling the brain: “You don’t like being hungry – because when you are hungry and you find food, I very strongly reward you for finding food (and reducing hunger). I reward you by reducing your activation state and by making you store those actions to memory with the clear instruction to use them when next you get hungry and you are in this environment.”

What many people miss is that at the moment of storing preferred actions to memory, the brain is actually given its first preferences, its first biases. It is given a state to avoid (hunger) and a state to pursue (eat when hungry and you have food!). Later, when the brain has learnt a little more about life and learnt some words as part of a language, it will learn to say when experiencing this state “I feel hungry! It feels bad! I don’t like being hungry!” because the brain wanted it to avoid hunger. And “That tastes great! I like steak! I feel better now – I don’t feel hungry anymore!” when consuming food, because the brain wants it to pursue satiation actions to make hunger go away. But these verbal expressions are based on deeply rooted homeostasis mechanisms in our bodies and brains (well described in the neuroscience academic literature) that set us up to learn what we must pursue and what we must avoid to survive and thrive – and one can argue these form the basis of emotions, i.e. this is where emotions start (after which they will permeate all of our everyday experiences and tag objects as good, bad and…meh).

Have you noticed what happened above?

We have said “we feel hungry”. This state in the brain representing hunger has become associated with the word ‘hunger’ and at the same time, it has become associated with an avoidance state. When we experience this state in the brain, we are automatically driven to avoid it.

The ‘executive part’ of the brain gets ‘a strong state’ coming in from the hunger center via its hunger portal and part of the information contained in this hunger state is the fact that it is an avoidance state (and how urgent it is compared to other incoming states). Note that the ‘executive part’ of the brain does its job without understanding what hunger is – it acts as a relay station manager knowing only how to adjudicate and direct incoming information based on a few simple rules. This will allow the ‘executive part’ of the brain to retrieve some cues – some bodily actions – from memory that might help to solve the current hunger problem. We will later see that the ‘executive part’ of the brain – when it has time – can also retrieve more context from memory around this hunger state and around what else is going on in the environment, especially when no resolving actions could be found immediately from past experience.

But let’s stop there for a moment.

Let’s come back to @rome_viharo’s question:

How does a state representing hunger (just a unique representation consisting of electrochemical signals propagating through a biological neural network) become an experience or an actual “state of presence” producing a subjective experience?

Ready for the ‘hard’ bit?

This state presented to the ‘executive part’ of the brain is real! We see the activation of the hunger areas in the brain on fMRI scans and we see how these are presented to the ‘executive part’ of the brain through neural pathways – measured empirically.

There is nothing more required from the brain than to register this real, physical incoming state and work with it in the manner that it does – for it to become part of our reality. We have this state coming in, the brain teaches us to avoid it, and soon we learn that people use the words ‘feel hungry’ to refer to this unwanted state. Nothing more is required than for this state to act in the way described above, and for the ‘executive part’ of the brain to help contextualize it against other sensory states from the environment and against what it had been associated with in the past, in order to retrieve helpful memories.

This is the part that people struggle to get!!!

They somehow think there must be more to it. But if a state is generated in the brain that you have been made to avoid, and its activation level makes it more urgent, and it had become associated with the word ‘hungry’ – you have all that you need to look someone in the eye and say: ‘Right now – I feel hungry!’

And it will be verifiably true – this is you experiencing subjective hunger. On fMRI scans we predictably see states activated in hunger centers when people describe feeling hungry, and also when they report feeling satiated.

And why can we not build what we have described above into a robot?

Maybe we do not measure a marker in the bloodstream, but instead measure the battery’s level of charge. Now we can generate a state to represent this to the ‘executive part’ of the robot brain.

And here is the thing – the type of information contained in the neural hunger state can easily be provided to the ‘executive part’ of the robot brain as a numerical state. Although it is a numerical value represented in machine code, it is still a unique state! It still contains all the information about where it comes from and how strong (urgent) it is, and it can easily be compared to other incoming numerical states for prioritization. It can even be used by the robot brain to search past learning (in an association database) that relates to this state within a specific environment, so that appropriate avoidance actions can be retrieved. This is exactly what happens in my simple simulations and robots.
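To make this concrete, here is a minimal sketch of that idea (in Python for brevity – my own implementations are in C++). Everything in it – the ‘UTRState’ class, the ‘executive_step’ function, the toy association table – is an illustrative assumption, not the actual Xzistor code:

# Minimal sketch of the numerical-state idea above. All names and values
# are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class UTRState:
    source: str       # where the state comes from, e.g. "hunger", "battery"
    urgency: float    # activation level in [0, 1]; higher = more urgent
    avoid: bool       # True for an 'avoidance' state, False for 'approach'

# Association database: (state source, environment) -> learned actions.
associations = {
    ("hunger", "kitchen"): ["open_fridge", "grasp_food", "eat"],
}

def executive_step(incoming, environment):
    """Pick the most urgent incoming state and retrieve prior learning."""
    strongest = max(incoming, key=lambda s: s.urgency)
    # The 'executive part' does not know what hunger *is* - it only
    # adjudicates urgency and relays a lookup into past learning.
    return associations.get((strongest.source, environment), [])

states = [UTRState("hunger", 0.8, avoid=True), UTRState("fatigue", 0.3, avoid=True)]
print(executive_step(states, "kitchen"))  # ['open_fridge', 'grasp_food', 'eat']

Note that the ‘executive part’ here is nothing but a relay: it compares urgencies and performs a lookup, exactly as argued above.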

We must not make the fundamental mistake of saying what happens in the biological brain is fundamentally (principally) different from what we can do in a robot brain.

It is the same basic functions that we are providing – one is just in the biological brain and the other in a computer. The function is what is needed – and how the information is provided to the function should not make a difference.

I always say the Xzistor Concept is a functional brain model – and it is ‘means agnostic’. Strictly speaking, the human brain is just one instantiation of the model.

I wish and pray that I could wake up tomorrow morning and the whole AI world would just suddenly say: “We get it!”

And I will say: “Good! 60 years has been too long! Now let’s go build ’em!”

(Guys – need to bail out now and have some shopping to do tomorrow. But will pick up on the next brilliant questions from @rome_viharo. As other questions come in I will log them and add more blog posts. Hope you are enjoying your brave mental journey so far. Soon we will get into more complexity and explain why robots can have just as rich a personal experience as we have. But let me not get ahead of myself! Be brave!)

Xzistor Language Modelling & Generalization

Rocco Van Schalkwyk 15/10/2022

Like many aspects of the Xzistor Concept brain model, the discussion around how it implements a speech and language capability will require a mental journey many current linguists and AI researchers are not ready for yet. This is because it involves machine emotions – a concept not many are ready to accept as real or implementable at the time of publishing this post.

A few Xzistor LAB collaborators have worked on their own theoretical approaches towards brain modelling which have provided them with an understanding of machine emotions, and specifically how machine emotions are implemented within the Xzistor Concept. These individuals now have the basic understanding to appreciate the scientific basis of how machine emotions can be designed and implemented in Xzistor artificial intelligent agents. With machine emotions as the basis, it becomes easier to explain how a speech and language facility can be achieved in artificial agents using the Xzistor Concept cognitive architecture.

Quick Refresher on Machine Emotions

According to the Xzistor Concept there is only information in the brain. To make a brain state ‘bad’, the brain state must be turned into an ‘avoidance’ state – because the robot will learn to perform effector actions to avoid it. To make it ‘good’, the brain state must be turned into an ‘approach’ state – because the robot will learn to perform effector actions to pursue it. By using sensory-type states and turning these into ‘avoidance’ or ‘approach’ states that can subjectively be ‘felt’, these sensory-type states can be given motivational value – ‘avoid’ (bad) or ‘approach’ (good) – as well as motivational saliency (strength). The agent will constantly be aware of these sensory-type ‘feeling’ states as they will be subjectively felt – and we will refer to them as negative (-) or positive (+) machine emotions.

Most of the agent’s behaviors will originate from learning how to evade ‘avoidance’ states and achieve ‘approach’ states. This will happen through special reinforcement learning events called Satiation Events as part of operant learning. Operant learning happens when moving out of an ‘avoidance’ state and into an ‘approach’ state is rewarded by the brain by storing the effector actions that led to securing access to the reward source as an association. In humans, the subjective feelings experienced during the reward stage are often termed ‘relaxing’ or ‘pleasurable’ because what the brain has been trained to avoid (the bad state) is being reduced.
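As a hedged illustration of a Satiation Event driving operant learning under the definitions above, here is a small sketch. The drop threshold and the names (‘maybe_satiation_event’, the ‘memory’ table) are my own illustrative choices, not the model’s actual parameters:

# Hedged sketch of a Satiation Event triggering operant learning.
# The drop threshold and all names are illustrative assumptions only.

memory = {}  # (drive, environment) -> effector actions stored as an association

def maybe_satiation_event(drive, utr_before, utr_after, recent_actions, environment):
    """If the 'avoidance' level drops sharply, reward the actions that led here."""
    SATIATION_DROP = 0.3  # illustrative threshold for a 'sudden' drop
    if utr_before - utr_after >= SATIATION_DROP:
        # Operant learning: store the effector actions that secured the reward.
        memory[(drive, environment)] = list(recent_actions)
        return True
    return False

# Agent was very hungry (0.9), ate, and hunger fell to 0.2 -> Satiation Event.
maybe_satiation_event("hunger", 0.9, 0.2, ["move_to_food", "grasp", "eat"], "confine")
print(memory)  # {('hunger', 'confine'): ['move_to_food', 'grasp', 'eat']}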

The manner in which the Xzistor Concept defines machine emotions cannot be fully understood without introducing two important theoretical constructs, namely Body UTRs and Brain UTRs. A basic understanding of these two functional mechanisms will aid in understanding that there are basically two types of machine emotions that can be generated in the brain of the Xzistor artificial agent.

Body Urgency To Restore mechanisms (or just Body UTRs) are simple homeostasis-type control loops for avoiding e.g. thirst, hunger, cold, pain, fatigue, etc. Each of these has its own dedicated centre(s) in the biological brain where the ‘avoidance’ and ‘approach’ brain states are generated based on the measured parameters on which the homeostasis mechanisms are based (utility parameters like hydration level, nutrition level, tactile pressure on the skin, etc.). These states are all subjectively experienced in the brain and constitute the first type of emotion that we can create machine equivalents of.

But these emotions – like experiencing intense thirst, hunger or pain, or reciprocally drinking (thirst quenching), eating, or escaping from pain – cannot be regenerated when recalled from memory in future. For instance, we do not suddenly experience the bad feeling of thirst again when we think about it. Memories that make us feel bad when we recall them make use of another type of emotion. These emanate from a stress state that is parasitically generated at the same time a subjective state like thirst, hunger or pain is generated. In the human brain it is the Sympathetic Nervous System (SNS) that is automatically activated at the same time as states like thirst, hunger, pain, etc. These turn into a subjective feeling when they affect the vagal nerve (digestive tract) through e.g. adrenaline/noradrenaline/cortisol release. The sensory state in the gut is projected by sensory neurons through the brainstem to the cortical areas of the brain where a subjective sensory state is created. The SNS causes this avoidance state in the somatosensory area, and it will increase (in activity) and then decrease (in activity) when the SNS state is inhibited by the Parasympathetic Nervous System (PNS). These are the ‘avoidance’ (stress) and ‘approach’ (relax) states that indeed can be remembered, and these emotions or ‘gut feelings’ will be recalled as emotions when we see or think about the conditions that caused them in the first place.

As the Body UTR mechanisms get triggered, they will always create their own unique ‘avoidance’ and ‘approach’ states (e.g. feeling thirsty, hungry, etc.) as well as these automatic SNS (stress) and PNS (relax) states, which will be contributed to by all the different Body UTRs active at the time. The SNS (stress) and PNS (relax) states are the effect of all the active Body UTRs working on the same system and will result in a single collective ‘gut sensation’ – either good (agent must pursue) or bad (agent must avoid). Whilst subjective Body UTR sensations like thirst, hunger and pain cannot be regenerated by recalling memories, the SNS (stress) and PNS (relax) states will be stored as part of associations and will be re-evoked (regenerated) when the human or artificial agent sees or thinks about the conditions that triggered them.

When we regenerate these SNS (stress) and PNS (relax) states because we recognize an object or think about an event to which they were linked, they effectively constitute ‘emotions from memory’ and will again trigger adrenaline/noradrenaline/cortisol release (SNS) or its inhibition (PNS). We will call these mechanisms Brain UTRs. Just like Body UTRs, these good or bad emotions regenerated from memory could be strong and add to or oppose what the brain is already experiencing. Since they only work on the vagal nerve ‘gut feeling’, they will compete with other emotions that are affecting the SNS and PNS states as a result of activity by Body UTR mechanisms. So the different machine emotions that can be created are:

  1. Unique ‘subjective’ emotional states generated by active Body UTRs like thirst, hunger and pain.
  2. The correlating SNS/PNS emotional states as a result of those same active Body UTRs.
  3. The SNS/PNS emotional states as a result of what is sensed or thought about from memory.

Whilst all the unique Body UTRs will separately be experienced as recognizable subjective states like thirst, hunger, pain, fatigue, etc. – the effect these mechanisms have on the SNS/PNS will just cause a single resultant state (emotion) experienced in the gut through the vagal nerve ascending pathways to the cortex.
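The collapse of all three emotion types into one resultant gut state can be sketched as follows. The signed convention (SNS negative, PNS positive) and the plain summation are my own illustrative assumptions, not the model’s actual equations:

# Hedged sketch of the three emotion types above collapsing into one
# resultant 'gut sensation'. The signed convention (SNS negative, PNS
# positive) and the plain summation are illustrative assumptions only.

def gut_state(body_utr_stress, brain_utr_stress):
    """Collapse all active Body and Brain UTR contributions into a single
    resultant state: negative = SNS (avoid), positive = PNS (pursue)."""
    total = sum(body_utr_stress) + sum(brain_utr_stress)
    return max(-1.0, min(1.0, total))  # clamp to [-1, +1]

# Type 1: thirst and hunger are each felt as unique subjective states...
thirst, hunger = 0.6, 0.4
# Type 2: ...but each also pushes the shared SNS/PNS system (negative here).
body_contrib = [-thirst * 0.5, -hunger * 0.5]
# Type 3: recalling a pleasant memory contributes a PNS (relax) component.
memory_contrib = [+0.3]

print(gut_state(body_contrib, memory_contrib))  # about -0.2: net 'avoid'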

Although explained here in terms of the biological brain, the Xzistor Concept will build simple equivalent systems that achieve the machine equivalent of these mechanisms.

For more on Xzistor Concept machine emotions please refer to this blog post on the Xzistor LAB website: https://www.xzistor.com/machine-emotions/

Or the author’s original free short book on machine emotions:

Understanding Emotions: For designers of Humanoid Robots

Early Xzistor Embodiments – Simmy Simulation

As the inventor of the Xzistor Concept brain model, I have not in any great detail implemented language learning in either my Simmy simulation or Troopy physical robot (please see www.xzistor.com). I have however spent quite a bit of time and effort on doing some early preparations for more comprehensive tests and demonstration using the Xzistor cognitive architecture.

Going back 20 years, I added a ‘speaker’ into Simmy’s learning confine, which could play a ‘Yes!’ sound message (green indication) and a ‘No!’ sound message (red indication).

Added: I just managed to find the legacy video clip below of Simmy hearing the ‘No!’ command over the speaker. The emotional effect is not that clear because the little agent is already very distressed. This is because its stomach (backpack with purple fuel) is empty. But even so, one can see how the corners of its mouth dip down even further when it hears the ‘No!’ command, and also the slight spike in brown bars on its Emotion Panel. Click on the image below to start the short video:

With the push of the N-key on the keyboard, I could ‘activate’ the speaker; the agent would hear the ‘No!’ word and turn this into a brain state or representation. This demonstrated that by using the word ‘No!’ while an avoidance state is being experienced, the representation of this sound state becomes part of the association created at that moment, and hearing this sound again in future will re-evoke a negative emotion (an SNS (stress) activation state, as a -%). For example, when Simmy hears the word ‘No!’ again, even in other situations or contexts, it will feel a sense of avoidance (SNS activation) created by the SNS/PNS emotion mechanism.

Similarly, the word ‘Yes!’ would become part of a positive approach (pursual) state association and become something that can trigger positive PNS emotions as a +% (based on inhibition of the SNS) when the association is recalled.

This is a good example of a Brain UTR. This Brain UTR will generate SNS Deprivation (-%) when the word ‘No!’ is heard, and the agent will want to do something to ‘avoid’ this negative emotion. It will have to learn what it needs to do, and in different contexts these actions could differ. If it is moving up towards the edge of the swimming pool and it hears ‘No!’, and it then reverses and hears ‘Yes!’ (which will generate a positive PNS emotion state), it will experience a Satiation Event and learn that the correct action to avoid/diminish the Deprivation state (SNS) from the word ‘No!’, when close to the swimming pool, is to reverse away.
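A small sketch of this ‘No!’/‘Yes!’ mechanism, under the same caveats as before (the emotion values and the learning rule are illustrative assumptions, not the model’s actual numbers):

# Hedged sketch of the 'No!'/'Yes!' Brain UTR at the swimming pool.
# Emotion values and the learning rule are illustrative assumptions.

word_emotion = {"No!": -0.5, "Yes!": +0.5}  # learned SNS(-) / PNS(+) tags
learned = {}                                # (context, trigger) -> action

def hear(word, context, last_action, prev_emotion):
    """Return the new emotion; a swing from - to + is a Satiation Event."""
    emotion = word_emotion.get(word, 0.0)
    if prev_emotion < 0.0 <= emotion:
        # Satiation Event: reward the action that removed the 'No!' state.
        learned[(context, "No!")] = last_action
    return emotion

e = hear("No!", "pool_edge", last_action="advance", prev_emotion=0.0)
e = hear("Yes!", "pool_edge", last_action="reverse", prev_emotion=e)
print(learned)  # {('pool_edge', 'No!'): 'reverse'}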

This simple process opens the door to a very powerful educational process.

Originally, everything an Xzistor agent learns is through operant learning. The effector motions leading to grasping and eating an apple (simulated of course) will trigger a Satiation Event that causes learning. Similarly, if we just regard the speaker (voice) of the agent to be another effector, it can produce pressure waves sounding like ‘Give me the apple!’. This could also secure access to the apple as a reward source and broadcasting this sound pattern will become reinforced as an effector action (compulsion) when next the agent is hungry and close to the apple. In fact, this will become the effector action of choice because it entails less effort (cost) to secure the reward source.

Whereas with a human baby a long process of babbling simple sounds and words needs to precede the point where mimicking the mother’s words / sentences becomes a source of Satiation in itself, and learning can occur, we can implement a much more expedited process with Xzistor agents.

We can give the Xzistor agent a mimic reflex.

Just before handing over the apple, we can say: ‘Give me the apple!’

The agent will then repeat: ‘Give me the apple!’ and at that moment we hand the agent the apple. It eats it and goes into Satiation, meaning a Satiation Event takes place, and it remembers the hand motions as well as the sounds it projected through the speaker that contributed to the successful eating event.

While the robot eats, we say ‘Good robot!’

And the robot repeats ‘Good robot!’

Important: The words ‘Good robot!’ are now locked into memory with a strong Satiation association, and in future when the agent hears those words, whether hungry or not, the PNS (relax) emotion will be triggered – creating a positive emotion (we might even see a little smile on the agent’s face).

Because the words ‘Good robot!’ now trigger PNS Satiation, they can be used as an operant learning reward state. For anything the robot now does, we can say ‘Good robot!’ and the robot will remember those actions as a Satiation Event with operant learning. It will become eager to please (because it will find Satiation).

The robot can now be made to do things (and say things) in return for emotional reward.

If we now put an image of a pineapple in front of the robot and say ‘Pineapple!’, the robot will mimic the word and also say ‘Pineapple!’, and if we then say ‘Good robot!’, a Satiation Event will occur that will lock the image of the pineapple, together with the word ‘Pineapple!’ spoken by the robot, into the robot’s association database.

In this way we can teach the robot to start to link images to words and slowly its vocabulary will start to grow.
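The whole expedited teaching loop can be sketched in a few lines. Again, this is only a hedged illustration – the mimic reflex and the reward check are reduced to their bare logic, and all names (‘mimic’, ‘teach_word’, ‘vocabulary’) are mine:

# Hedged sketch of the teaching loop above: a mimic reflex plus
# 'Good robot!' acting as a learned reward. All names are illustrative.

vocabulary = {}  # image label -> word the agent has locked in by association

def mimic(heard):
    """Mimic reflex: the agent simply repeats what the tutor says."""
    return heard

def teach_word(image, word, tutor_response):
    spoken = mimic(word)                 # agent repeats e.g. 'Pineapple!'
    if tutor_response == "Good robot!":  # triggers PNS Satiation (learned earlier)
        vocabulary[image] = spoken       # Satiation Event locks image <-> word

teach_word("pineapple_image", "Pineapple!", "Good robot!")
print(vocabulary)  # {'pineapple_image': 'Pineapple!'}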

The above process will not yet involve any understanding of grammar or sentence structures.

As the process continues, the robot will learn that a bad sentence, e.g. ‘Give him apple!’, will not lead to reward – only the correct sentence ‘Give me apple!’ will.

Later the robot will learn that just the word ‘Give!’ can be used to get handed an object from the tutor. This will happen because of the Xzistor model’s ‘context’ generator (and directed Threading). This algorithm ‘tries’ random words from sentences in situations where the agent has not been able to learn before. The model chooses these, and other actions, from closely related associations, as part of ‘intelligent guessing’, to solve problems, i.e. to evade ‘avoidance’ states or achieve ‘approach’ (pursual) states. This is informed by similarity in sensory states, cues from the environment or other associative artifacts, based on emotion value (+ or -) and salience. If a guessed action or word leads to Satiation, a Satiation Event will occur, and the reward will mean the action (or word) becomes the chosen action to perform in future to solve the newly encountered problem. This can solve a completely novel problem in unfamiliar environments – and is basically how humans also solve novel problems in unknown domains (there is often a bit of guesswork involved).

An example to illustrate this logic mechanism could be where the robot moves towards an open fire and the tutor shouts ‘No!’. The robot has never encountered this situation or seen fire before. But the ‘context’ algorithm will collect any associations that could relate to what is being seen, heard, felt etc. through a process called ‘directed Threading’. This subset of associations (some perhaps only vaguely related or sharing a weak link / attribute) will now be presented to the robot brain in order of preference based on similarity, emotional value and emotional saliency. The robot brain will be encouraged to ‘try’ the motion commands stored as part of these associations. Let’s assume one of the ‘context’ associations presented to the robot brain is the one where the robot has learnt to reverse away from the swimming pool. The robot activates this motion command set by recalling the stored association motions and reverses away. Suddenly the robot hears the tutor say ‘Yes!’. There and then a Satiation Event takes place in the robot brain as the negative emotion associated with ‘No!’ changes to the positive emotion associated with ‘Yes!’. The robot has just learnt through operant learning that when it gets close to a fire, it must reverse away. This is how the Xzistor agent will use past experience to solve novel problems in new environments. If it has learnt to open a blue cupboard door to find food, it might enter a new room when hungry and see a green cupboard door that looks different but has some common features like doorknobs or keyholes. The ‘context’ mechanism will now encourage the robot brain to try the actions it performed to open the blue door on the green door. This might lead to the robot finding out how to open the green door all by itself – and in this case without the help of the tutor. Finding the reward will reinforce the actions that were successful.
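A hedged sketch of this ‘directed Threading’ idea: rank loosely related associations by feature overlap and emotional salience, then offer their stored actions as guesses. The scoring formula is my own illustrative assumption, not the model’s:

# Hedged sketch of 'directed Threading': rank loosely related associations
# by similarity and emotional salience, then 'try' their stored actions.
# The scoring formula is an illustrative assumption only.

from dataclasses import dataclass

@dataclass
class Association:
    features: set    # attributes seen when the association was stored
    salience: float  # emotional strength of the original event
    actions: list    # effector motions stored with the event

def thread(situation, memory):
    """Return candidate associations, best guesses first."""
    def score(a):
        overlap = len(situation & a.features) / max(len(a.features), 1)
        return overlap * a.salience
    return sorted((a for a in memory if score(a) > 0), key=score, reverse=True)

memory = [
    Association({"pool_edge", "No!"}, 0.8, ["reverse"]),
    Association({"blue_door", "doorknob"}, 0.6, ["grasp_knob", "pull"]),
]
# Novel situation: a green door with a doorknob; the blue-door association
# shares a weak link ('doorknob') and is offered as a guess to try.
for candidate in thread({"green_door", "doorknob"}, memory):
    print(candidate.actions)  # -> ['grasp_knob', 'pull']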

When it comes to the words spoken by the robot, the grammar in these sentences will only be based on what works and what does not work. And this can be expedited with the tutor saying ‘Good robot!’ when the correct sentence construction is used. Just like the objects in an optic image will start to get individual meaning as they are singly experienced in different contexts, so words will also acquire individual meanings. For instance, when the robot says ‘No!’ it will learn that the tutor will act helpfully and remove the scary toy, or on another occasion take away the bad-tasting food. ‘No!’ now becomes detached from just one context and starts to serve as an expression (effector action) that can, in different situations, make avoidance states or objects disappear or be reduced.

The robot will learn that using the wrong sentence construction will delay getting reward sources and could generate disappointing ‘No!’ and ‘Bad robot!’ responses from the tutor (it is important to appreciate that the tutor will become a strong emotional reward object / source in the life of the robot). Question: Is this perhaps the basis of love?

Using a whole sentence to elicit a reward will be no different from executing a sequence of effector actions to get access to a reward. The words become the individual effector actions and they will be subject to reinforcement based on prediction errors (changes in anticipated Satiation) – just like the sequence of manual actions to get to a reward source, placing the words in the correct sequence will lead to success and reinforcement.
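This word-sequence reinforcement can be sketched with a simple prediction-error update. The update rule (a plain delta rule with a learning rate) is an illustrative assumption, not the model’s actual mechanism:

# Hedged sketch: word sequences as effector actions reinforced by
# prediction error. The delta-rule update is an illustrative assumption.

seq_value = {}  # sentence (tuple of words) -> anticipated Satiation

def reinforce(sentence, reward, lr=0.5):
    predicted = seq_value.get(sentence, 0.0)
    seq_value[sentence] = predicted + lr * (reward - predicted)  # prediction error

reinforce(("Give", "him", "apple!"), reward=0.0)  # wrong construction, no reward
reinforce(("Give", "me", "apple!"), reward=1.0)   # correct order secures the apple
print(max(seq_value, key=seq_value.get))          # -> ('Give', 'me', 'apple!')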

Interesting: There is principally no difference between a manual effector action and pronouncing a word – and only executing these in the correct order will ensure access to the reward source.

We should therefore see the Xzistor agent learn to use the correct sentences to achieve goals.

The use of prefixes etc. as per grammar rules will be formed by learning (habit), and only much later – when taught about grammar, sentence construction and writing – will any explicit notion of grammar or sentence construction become known to the agent.

Xzistor robots learn just like humans.

Early Xzistor Embodiments – Troopy Physical Robot

The ‘mobile feeder’ below was used to start experimenting with Troopy around the theoretical aspects discussed above. See video on YouTube: https://youtu.be/7H0gUwAYnQo

Advanced Speech and Language Facility

After much more learning (years), Xzistor robots will learn that using correct words and word sequences will impress people and ensure clear communication. Then speaking will also become linked to reading and writing, and will become a means to convey complex thoughts – not just short cues to command rewards, but a way of sharing abstract thoughts and ideas. But during the early years, language learning will prominently revolve around restoring the basic homeostatic needs generated by Body UTRs and Brain UTRs, and this will quickly progress to a strong tendency to try and please the tutor as this will lead to instant ‘emotional’ reward, e.g. upon hearing the words ‘Good robot!’.

A next-phase, more advanced Xzistor language development experiment should be very interesting as it could be based on what was discussed above. Again, the aim of the Xzistor project is not to see how far we can push the cognitive architecture, but rather to systematically build evidence of how it ‘principally’ explains the working of the brain when it comes to an artificial speech and language facility.

Refining this into more complex implementations with higher resolution and with the ability to process large amounts of complex information can be added as a next phase.

iCub Robotic Platform Speech and Language Facility using Xzistor

How would a simple iCub robotic platform implementation work using the Xzistor Concept cognitive architecture?

1.) Install the Xzistor cognitive architecture on the iCub platform (the Simmy simulation was written in C++!)

2.) Teach iCub to access apple when hungry.

3.) Teach iCub to say ‘Give apple!’ by mimicking and operant learning.

4.) Teach iCub to associate ‘Good robot!’ with PNS Satiation (+emotion) during eating reward.

5.) Step 4 above can now act as a reward source (Satiation Event) for iCub.

6.) Teach iCub more words by showing an image, naming the object in the image, waiting for the robot to mimic, and rewarding with: ‘Good robot!’

7.) Gradually implement longer sentences.

8.) Allow the context generator to break up sentences into single words and use them in other contexts e.g. [give] [my] [blue] [ball] [please].

I fundamentally believe there are dedicated innate structures in the biological brain aiding in building an efficient speech facility. But this can be the basis of future studies based on the simple principles provided by the Xzistor Concept. There is a lot that can be explored, not just in refining the general model, but in refining detail around the functionality of speech, writing and reading.

Final Thoughts

For now, I suggest following a modest route towards more complex implementations, as overcomplicating a demonstration programme could distract from a basic model that ‘principally’ explains what happens in the brain – a crucial first step. I see the Xzistor Concept as a basic ‘Bohr Atom of the Mind’, a brain model that is now desperately needed after decades of searching for such a model in vain. A good way forward will be to use the speech facility demonstrated by the Xzistor Concept and then build on it – refining aspects of it and adding complexity – all of which will be possible and will help elucidate the logic and underlying mechanisms of speech and language. This will hopefully help linguists and AI researchers gain a more complete understanding of how this complex functionality is achieved by the biological brain and how it can be replicated synthetically.

Xzistor Validation Project – The Drinking Mouse Mystery

When we are thirsty, drinking water should be rewarding. That’s how we experience it as humans. And after all, we are restoring the bodily homeostasis function that regulates hydration.

The Mouse Mystery

Figure below: Scientists at Caltech conducted a study to see why animals find drinking water rewarding. They recorded large spikes of dopamine release when thirsty mice drank both water and a salty saline solution. This proved that mice found both of these liquids rewarding. But strangely, when these scientists injected water directly into the digestive tracts of thirsty mice, they found no changes in dopamine levels, even though the injected water would also have hydrated the thirsty animals.

From the blog post ‘The Neuroscience of Thirst: How your brain tells you to look for water’ by Michelle Frank,
figures by Jovana Andrejevic

This phenomenon might seem contradictory at first but is elegantly explained by a concept defined by the Xzistor Concept brain model called a ‘Satiation Event’.

Let’s briefly look at what is meant by a Satiation Event.

As the artificial agent (robot) becomes thirstier over time, meaning the simulated water level in the blood is effectively dropping, the agent will enter the Deprivation phase of the Thirst UTR curve. When the agent tries to drink something (again simulated), it will be a strong indicator that the fluid is a ‘legitimate’ hydration source if the water level in the blood suddenly increases (or a derivative marker shows a similar trend) and the UTR value suddenly starts to drop. This sudden drop is indicated by the Satiation phase of the UTR curve in the figure below. It signifies that a Satiation Event (apex of the curve) has taken place.
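A hedged numerical sketch of such a Thirst UTR curve is given below. The linear rates, the drop threshold and the apex test are all illustrative assumptions, not the model’s actual parameters:

# Hedged sketch of a Thirst UTR curve with Deprivation and Satiation
# phases. Rates and thresholds are illustrative assumptions only.

def thirst_utr(hydration):
    """Map the simulated blood-water level [0..1] to an Urgency To Restore."""
    return max(0.0, 1.0 - hydration)  # lower hydration -> higher urgency

trace = []
hydration = 0.9
for step in range(12):
    hydration -= 0.05       # Deprivation phase: the agent slowly dries out
    if step == 8:
        hydration += 0.5    # the agent drinks: blood-water level jumps
    trace.append(thirst_utr(hydration))

# A Satiation Event is the apex where the UTR value suddenly starts to drop.
drops = [i for i in range(1, len(trace)) if trace[i - 1] - trace[i] > 0.2]
print("Satiation Event at step", drops[0])  # -> Satiation Event at step 8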

It is crucial to the agent’s survival in its environment to learn from the Satiation Event that took place when the fluid (water) was consumed for the first time. The model will prominently flag this Satiation Event the moment it takes place and save all available information around it for future use: What did the agent do? What did the agent do just before that? What did the water source look like? How did it feel to the touch? Where was it found? How was it retrieved and manually handled? This will be important information for the agent’s survival in future and will be stored as a set of associations as part of learning. The Xzistor agent will learn based on the reward state generated by consuming the fluid, allowing it to navigate back to this fluid source in future when thirsty.

The Mouse Mystery explained.

The Satiation Event explains how any reward provided to the brain too long after the correct drinking behavior has been performed (e.g. 10 minutes later) would reinforce the wrong behaviors – not the moving-to-the-faucet and sipping/swallowing sequence. Therefore the biological brain has no choice but to reward the ‘swallowing’ behavior (assuming it will lead to hydration) and deliberately assign no reward to the actual restoration of hydration in the bloodstream that is expected to happen a few minutes later. The biological brain’s effective UTR curve will still broadly resemble the Xzistor Concept curve, except that the Satiation Event is moved to an earlier (pre-emptive) point and triggered by a ‘swallow sequence’ rather than a ‘lowering of osmolality’. Some mammals will closely resemble the Xzistor UTR curve, while other mammals like mice, humans, etc. will require the Satiation Event to be moved earlier and triggered by the ‘swallow sequence’ to ensure learning and ‘valuing’ of this behavior as important to survival.

No more Mouse Mystery!

Another interesting insight provided by the Xzistor Concept brain model.

Validation of the Xzistor Concept against the Biological Brain

Breaking news!

We have just launched a very exciting new project to validate the Xzistor Concept ‘functional’ brain model against the Biological Brain (on 6 August 2022). Anyone can become a Team Collaborator and contribute to the project (see how to get involved inside the Validation Project Plan below). Join us and see how we systematically identify the very same functions defined by the Xzistor Concept brain model within the mammalian brain.

An early draft of the Validation Project Plan is available here:

This marks Rocco Van Schalkwyk’s first formal foray into the ‘wet’ brain after developing the ‘functional’ Xzistor Concept brain model over many years! We have decided to pilot the Homeostasis Thirst Mechanism to start the process and test the validation question set.

Initial results are very exciting. Even just a few hours into the investigation, the biological brain shows promising signs that functions similar to those defined by the Xzistor Concept exist.

Remember we are just starting off! All is very much work in progress! And our promising results will ultimately be peer-reviewed by the Xzistor Concept Lead Validator who will have a background in neurology.

A *new* early draft of the Functional Validation Report_Thirst is provided below: