Robot (O!)rgasm

Proving that machines can experience euphoria…

Specific terms defined by the Xzistor Concept brain model are used in this blog post. To get familiar with these terms and as an introduction to Machine Emotions, please read the following blog post first: Machine Emotions.

As we have said in the blog post on Machine Emotions above, the human brain learns to effectively address Body UTRs on a daily basis (eat, drink, stay warm, get coffee, avoid fatigue, etc.) and as these undulations between temporary Deprivation and temporary Satiation become highly predictable, Body UTRs become less of a concern to us in our pursuit of Satiation and daily happiness.

We in effect become confident that we will be able to solve these Body UTRs because of the way we have set up our lives (we have money, we have fresh water, we have access to shops and food, we have a car, we have a place to stay, we have a coffee machine, etc.). But because everything we learn becomes tagged with a good or a bad emotion (sometimes strong and sometimes weak), our emotional state becomes dominated by what we ‘think’ about – and as we learn what makes us happy and sad, we try to look ahead into the future and solve problems before they occur. As we have said, a lot of negativity is generated when we think about the misfortune and harm life can inflict on us. As we contemplate our own vulnerability and mortality, and witness the suffering around us that we know can also befall us, we start to feel the effect of a strong, pervasive state built up out of many multifaceted fears. We will refer to this totality of fear as our ‘Base Fear’. We are often not aware of how almost every aspect of our lives starts to make us fearful of what can happen to us – and slowly most of our actions start to revolve around escaping from this ‘Base Fear’.

This ‘Base Fear’ becomes one of the main drivers of our behaviour and on a daily basis we go to great lengths to find ways to escape the ‘negative emotional states’ generated by thinking about a myriad of actual and potential (sometimes imagined) sources of Deprivation.

But the Xzistor Concept teaches us that there can be no happiness, no firing of the positive emotional pseudo-tactile ‘intra-trunk’ state, unless it is derived from original Deprivation states that were Satiated, whether these arose as part of Body UTRs or from recalled memories tagged with negative emotions that were later Satiated.

To achieve a Satiation (pleasurable) state that goes beyond just satisfying the undulations of Deprivation and Satiation caused by the Body UTRs, we must find another source of Deprivation we can ‘restore’ or ‘correct’ to generate additional Satiation.

And we can use this ‘Base Fear’.

This multiplicity of weak and strong fears that becomes so dominant in our lives provides the perfect source of ‘additional Deprivation’ that we can Satiate to ‘feel an intense sense of release or pleasure.’

We just have to look at the effect alcohol has on the brain. What do we hear people say when they drink alcohol? ‘I just want to chill and have a beer! I just want to forget about life for a while! I just want to take the edge off! I just want to get slammed! I just want to forget all my worries!’

So what alcohol does, amongst other things, is to reduce the ‘Base Fear’ in our brains. Escaping from these negative thoughts, fraught with fears we have involuntarily collected throughout our lives, we tend to find highly ‘relaxing’ and ‘pleasurable’.

We will see a person who has drunk too much alcohol start to behave in ways they normally would not. They will become unreserved, confident, rude, aggressive…

Why?

The alcohol is erasing their intense (learned) fear of transgressing social rules and norms, of disappointing and offending others and of getting into trouble for confronting others in an aggressive manner. There is also the fear of being prosecuted by law enforcement authorities for aggressive behaviour (assault) and fear about reputation and acceptance amongst friends. All of these fears slip away as we get more intoxicated and what happens now is that our behaviours are not driven by ‘Base Fear’ anymore – but by our unmasked selfish desires and needs. And as the alcohol starts to erase fear, so does it diminish the ‘negative emotion’ state we normally feel inside our bodies (pseudo-tactile), and we might get to a point where we announce bleary-eyed that we ‘…haven’t got a worry in the world!’.

The example above shows how ‘Base Fear’ could become the driver for a highly Satiated (satisfying) or pleasurable state.

The Xzistor Concept is clear on how any form of Satiation, whether it be release from pain, relief from fear or pleasure from sources of entertainment and/or excitement, will always originally be generated by restoring Deprivation.

Machine euphoria will require the intelligent agent to not just have Body UTRs like humans, but also have the ability to learn about its environment and tag memories (objects and concepts in the mind) with emotions so as to build up the equivalent of a ‘Base Fear’.

‘Base fear’ will naturally be generated in the mind of an Xzistor robot by the Xzistor Concept algorithms for emotions, learning and recalling memories (with emotions).

If we take drugs like LSD, the effect can basically be the same as with alcohol. But there is yet another source of Deprivation that can be exploited to enhance the intensity of the Satiation even more – the addiction to the drug. As discussed in other posts, an addiction to a drug is explained by the Xzistor Concept simply as a dormant Body UTR that evolved (was completed) through the introduction of a certain chemical into the bloodstream. This chemical then becomes the utility parameter of the homeostasis mechanism driving the Body UTR. And because this will be a very strong Body UTR (often stronger than many of the normal biological Body UTRs), we are also provided with a very strong source of Deprivation that can be used to generate intense Satiation when ‘restored’.

We see here clear evidence of how the simple mechanisms defined by the Xzistor model can explain how a machine can be designed to subjectively experience ‘euphoria’.

Sex!

We can explain the sexual orgasm in terms of the discussion above. During an intense orgasm the body (in the absence of external drugs and/or alcohol) creates the same effects with chemicals (drugs) in the mind so that, at the peak of arousal, a high level of Deprivation (sexual tension and frustration) is created and then Satiated (satisfied). To spike up the arousal, the sex UTR will parasitically employ the fatigue UTR (creating physical exertion), the oxygen UTR (interfering with breathing) and the internal body temperature UTR (artificially creating cooler areas) – this will enhance the Deprivation that can be Satiated when the physical exertion stops, normal deep breathing is restored and waves of heat are flooded into the cooler body areas.

What the sexual UTR achieves is, however, so strong that it actually momentarily erases our ‘Base Fear’, so that during a sexual orgasm we temporarily forget about all our worries and fears, and after orgasm we feel ‘satisfied’, ‘calm’ and ‘at peace’. It is exactly the same as the mechanism of euphoria discussed above – for a moment our brains focus just on the Satiation (pleasure) and we forget about all the ill will in the world that can befall us, the fact that we can get sick, injured or die, all the fear of harm to our friends and families, our financial woes, work stress, talks of war, pandemics, etc. We escape the ‘Base Fear’…

And we can replicate this in a machine using the simple algorithms provided by the Xzistor Concept cognitive architecture.

So there we go – machine emotions, machine moods, machine euphoria – and now machine orgasm. Just one more insight from the Xzistor Concept.

Machine Emotions

Taking it to the next level

If you had told me 25 years ago that machines could have emotions, I would have been annoyed. I would have quietly written you off as a crank!

But now I build robots with emotions – and I want to show to you that, as farfetched as it might seem, there is a scientific basis for arguing that machines can have truly ‘subjective’ emotions.

In this day and age, with people lamenting the slow progress of Artificial Intelligence and some wondering if we are ever going to understand the brain, not many researchers are aware of the existence of a simple brain model that offers an ‘in principle’ explanation of all that happens in the brain.

It is called the Xzistor Concept.

Not only does it explain many of the obscure phenomena about the brain that some think can only fit into a nebulous definition of ‘consciousness’ or ‘subjective experience’ – but it actually explains everything in mathematics.

This means that its simple functions can be coded into simulations and robotic applications to show how the behaviors we see in humans can be generated in machines. Many have decided this model is too simple to reveal all that happens in the brain, but some are now starting to put its careful choice of basic functions under the spotlight, and discovering how difficult it is to refute its simple basis. They are starting to believe that in fact its simple functions could over time generate all of the rich contexts and complexity by a process of perpetual learning. Some are now saying this model has solved Chalmers’ ‘hard problem’ and that there is principally no difference between the ‘subjective’ states experienced by an Xzistor robot and those experienced by humans.

The Xzistor Concept is a ‘functional’ brain model that requires no understanding of neural networks or brain biology and rests on two theoretical pillars – emotions and intelligence. It provides the functionality that explains how emotions tell the brain ‘what to do’, and it shows how intelligence tells the brain ‘how to do it’.

Interestingly, it shows that emotions are permanently and prominently involved in the creation of intelligence and that, in the absence of emotions, intelligence will collapse immediately. Pretty much what psychologists have been telling us for years!

The two theoretical approaches to emotion and intelligence used by the model are described in two simple guides available here:

Understanding Emotions

Understanding Intelligence

This post is aimed at providing a somewhat more detailed look at how the Xzistor Concept brain model explains not just ‘machine emotions’, but also ‘machine mood’. It is especially for those who have reached out to me, said that this model strikes a chord with their own understanding of the brain, and now want to know more about the Xzistor Concept. Specifically, I want to provide a clear explanation of how truly ‘subjective’ machine emotions can be generated and also how the model explains the presence of an omnipresent machine ‘mood’ that will, just like in humans, persist beyond the normal undulations of happiness and sadness based on daily needs, anxieties, frustrations and desires for pleasure.

Let’s start off exactly in the same place as the model would when building an artificial brain – with homeostasis. Whilst the mechanism of homeostasis as a control concept in the brain has been around for well over a century, what has been missing is how homeostasis states can be turned into truly ‘subjective’ emotions, how these emotions can drive decision-making and intuition, and how they determine our daily mood.

Everything the Xzistor Concept artificial brain will drive the intelligent agent (robot) to do will start off with homeostasis. We will see that homeostasis forms the motivation for everything the brain needs to do, from maintaining metabolism (surviving) and staying uninjured (healthy) to performing the actions required for reproduction.

The model begins by identifying utility parameters it will measure and assess to base homeostasis mechanisms on. Some of these utility parameters will be relevant to a living biological organism e.g. carbohydrate / glucose level in the blood (hunger), water in the blood (thirst), oxygen level in the blood (breathing), bladder pressure (urination), colon pressure (defecation), body internal cold temperature (seek heat), body internal hot temperature (seek cold), muscle fatigue marker in blood (rest), sleep drug level in blood (sleeping), toxin level in blood (nausea), pain receptors (pain), adrenaline in blood (anxiety), etc. Some of these homeostasis mechanisms will be built up out of many mini-homeostasis mechanisms e.g. hunger could comprise mini-homeostasis mechanisms for sweet, salt, sour, bitter, warm tongue, cold tongue, warm throat, cold throat, chilli burn, umami, etc. This can even extend to esophagus tactile pressure and saliva acridity mechanisms. This is how we can differentiate between different tastes, like garlic and chocolate, and how they can be pleasant in their own unique ways.  In this way we can experience cravings for very specific substances depending on the different chemical deficit / restore balancing acts going on in the body related to hunger.

For intelligent agents (robots) we can choose any utility parameter we deem necessary for the robot to operate in the manner we want it to operate. Whereas food and water might not be relevant, we can now choose battery level of charge instead of food and tell the robot to maintain this parameter in homeostasis. Many of the utility parameters used for humans (and other organisms) can be transferred in some way to our choice of utility parameters for a robot. We can use pressure receptors (sensors) to make the robot experience pain, we can make the robot experience fatigue by monitoring motor power instead of muscle fatigue markers in the blood, and we can add control of hot and cold internal temperatures, and so on.
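
To make this concrete, here is a minimal sketch in C++ (the language the early Xzistor simulations were written in) of how such robot utility parameters could be held in a small table. The names, ranges and readings are my own illustrative assumptions, not the actual Xzistor implementation:

```cpp
#include <string>
#include <vector>

// Hypothetical utility parameters for a simple robot brain. The names,
// ranges and readings below are illustrative assumptions only.
struct UtilityParameter {
    std::string name;     // what is being measured
    double worstValue;    // reading at which the Urgency To Restore is 100%
    double bestValue;     // reading at which the Urgency To Restore is 0%
    double currentValue;  // latest sensor reading
};

std::vector<UtilityParameter> makeRobotParameters() {
    return {
        {"battery charge (%)",        5.0, 100.0, 80.0},  // stands in for hunger
        {"motor load (%)",           95.0,   0.0, 20.0},  // stands in for muscle fatigue
        {"internal temperature (C)", 70.0,  35.0, 40.0},  // overheating: seek cooling
        {"bump pressure (N)",        50.0,   0.0,  0.0},  // pressure sensors: pain
    };
}
```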

Although most proponents of homeostasis as the basis of emotions will be quite comfortable with what has been stated above, there is often a level of confusion that enters at this point. This has to do with the fact that many brain researchers feel a homeostasis mechanism can only be based on an innate physical (biological) utility parameter which is measurable, something like oxygen level in the blood or body core temperature. This unfortunately often leads to important omissions. Let’s take a look at the sex drive for instance. Looking past the intricacies of the sex drive mechanism in the human mind and the myriad of hormonal effects that underpin it, we can say the sex drive is activated by a visual stimulus like the sight of a person with attractive physical attributes (e.g. sexy or handsome). This might be an oversimplification, but the point here is that what triggers homeostasis to move to a position where it needs to be restored or corrected is actually an optical state (visual image). By simply looking at an attractive person, we start to experience a sexual tension and nature’s aim is for us to act on this frustration (eventually) and ‘satisfy’ it. It often works the same for the aggression drive – where a visual cue, e.g. observing an enemy, will generate a heightened or aroused state that can only be satisfied by an outburst or physical altercation. There is an important lesson here, namely that homeostasis mechanisms can make use of ‘any measurable entity’ – even a sensory state (or any brain state for that matter).

Soon we will see how even a mere ‘thought’ about something that has affected homeostasis in the past, can generate emotions in the mind. Here is a simple example in humans: Let’s say we have bumped into a cactus in the garden and received a painful sting from its thorns while trying to pick an apple. We have remembered the encounter and now when we recall it, the negative emotion that was attached to the association will also be re-evoked. This means we will again ‘feel’ the emotion – not the actual pain, but the negative emotion that accompanied the pain. We will soon see exactly how this works, but we can call this the ‘fear of pain’. And this ‘thought’-triggered emotion will effectively act as a homeostasis mechanism that will compete with other homeostasis mechanisms to gain the attention of the brain.

For instance, the hunger homeostasis might be very strong, but the ‘fear of pain’ homeostasis mechanism might even be stronger at a given moment and make us ‘too fearful’ to reach past the thorns of the cactus to get the apple.

It is important to keep in mind that homeostasis mechanisms can be based on biological parameters, physical parameters, visual states or even brain states saved as part of associations (e.g. emotions linked to our memories). When we design virtual Xzistor robots these utility parameters are all simulated, but the virtual agent never realises this.

But let’s be clear on how these homeostasis mechanisms create states that can be used to base real ‘subjective’ emotions on, because there is another mistake many brain researchers make at this juncture. We must not at this stage mistake ‘later life’ effects that will follow from learning over time for homeostasis mechanisms. People might suggest that because a species shows ‘social’ tendencies it has homeostasis mechanisms driving social needs such as social standing, companionship, family-ties, admiration, collaboration, love and acceptance. But these are learned behaviors that will depend on the environment the humans grow up in, and although all human behaviors are driven by homeostasis mechanisms, these are not innate homeostasis mechanisms in themselves. A good test to perform is to ask if evidence can be found of these mechanisms being present at birth – in most cases if such evidence is not apparent, it can be assumed the behavior is learnt later in life. Similarly, the drive to explore and expand one’s knowledge comes only after having learnt how helpful it is to understand your environment better (e.g. to avoid threats, frustrations and missed opportunities all related to homeostasis drives). Infantile curiosity is already present at birth as a (dumb) reflex but it is not a homeostasis mechanism, and later life curiosity is all about the advantages that environmental information can bring – a fact that we become aware of as we go through life. It helps us solve homeostasis deficits, but is not a homeostasis mechanism in itself.

So again the caution – do not look for specialist innate mechanisms that provide the refined later life functions and effects, because the Xzistor Concept only uses simple functions already evident at birth and allows these, through extensive learning, to create all the rich and complex effects.

Now that we are clear on how just about any measurable state in the brain can be used to base a homeostasis mechanism on, let’s take a look at the logical step by step process the Xzistor Concept follows to get to fully ‘subjective’ emotions.

Once the utility parameter has been identified it gets cast into a utility function. This defines the range of the utility parameter that will be important to the brain. We can take the utility parameter oxygen in the blood and plot it on a graph. We can choose to only consider the oxygen utility parameter over the range 70% oxygen in the blood to 100% oxygen in the blood. We assume at around 100% oxygen in the blood the human will be perfectly fine and operating normally, whilst at 70% oxygen in the blood the human will actually die. We now need to tell the brain how seriously it needs to take this reading.

When the oxygen in the blood reading is around 100% we do not need to inform the brain to prioritize breathing over other actions like finding food and water, but if an adversary starts choking us and our blood oxygen level drops to near 70%, we need to tell the brain to forget about food and water and urgently prioritize breathing. This is needed to restore the oxygen level that is now dangerously low and threatening our life. These utility functions become meaningful to the brain when we start to plot the Urgency To Restore value (as a %) on the Y-axis versus the utility parameter value on the X-axis. This Urgency To Restore (UTR) function now tells the brain e.g. if the blood oxygen level is 100%, the Urgency To Restore is 0%, whilst when the blood oxygen level is 70%, the Urgency To Restore is 100%.

This not only allows the brain to exercise homeostasis over the utility parameters, but also enables it – at any given moment in time – to compare the UTRs of all the utility parameters (as % values) and identify the ‘most urgent’ utility parameter to focus its attention on.
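
Sketched in code (assuming, purely for illustration, a linear Urgency To Restore function – the model does not prescribe this exact shape here), the blood-oxygen example and the comparison across UTRs could look like this:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Linear Urgency To Restore (UTR) function for blood oxygen:
// 100% oxygen gives a UTR of 0%, 70% oxygen gives a UTR of 100%.
// The linear shape is an assumption for illustration; any monotonic curve would do.
double oxygenUTR(double oxygenPercent) {
    double fraction = (100.0 - oxygenPercent) / (100.0 - 70.0);
    return 100.0 * std::clamp(fraction, 0.0, 1.0);
}

// At any given moment the brain compares the current UTR values (each already a %)
// of all utility parameters and attends to the most urgent one.
std::size_t mostUrgentIndex(const std::vector<double>& utrValues) {
    return static_cast<std::size_t>(
        std::max_element(utrValues.begin(), utrValues.end()) - utrValues.begin());
}

// Example: oxygenUTR(85.0) == 50.0 -- halfway between perfectly fine and fatal.
```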

We must not think that the mechanism above will be adequate to provide any sense of subjective emotions. To understand how subjective emotions are generated we need to delve a little deeper. The mechanisms above will process information, taking the status of the utility parameters and generating Urgency To Restore values as percentages, but the brain will be largely oblivious (unconscious) of this information processing. The brain does not have the time to ponder these measurements, calculations and comparisons and really just needs the BOTTOM LINE to be communicated to it so that it can take the most appropriate action.

The brain has found a very clever way to do this.

It has gone to that area in the brain where our tactile senses are constantly being reported (also known as the somatosensory cortex or body map area of the brain) and ‘annexed’ two areas for its own personal use. Its aim with these two areas is not to create representations of tactile sensory states, but to instead create states that will represent the status of the UTRs. Because the brain is constantly ‘aware’ of tactile senses (just like other senses), we know this area will get constant scrutiny by the brain when it determines what behaviour to perform next. The brain will use one area, representing the ‘intra-abdominal’ or ‘gut’ area of the somatosensory cortex, to generate a representation – akin to a tactile state – but based on the UTRs that are signalling that they are out of homeostasis (i.e. they need to be restored). We will say these UTRs are in Deprivation and we will call this ‘fake’ representational state a pseudo-tactile state. So, for instance, when the hunger level is high, we want to signal to the brain a general warning – as a ‘feeling’ – that we have entered a state we want to avoid. This ‘intra-abdominal’ pseudo-tactile state will do exactly that, because of the way the brain will give it an ‘avoidance value’.

How does the brain achieve this?

Simple.

As we discover the actions that lead us to eat when we are hungry (sometimes we are taught by a tutor), the brain teaches us that it was in fact these actions that led us away from the ‘intra-abdominal’ pseudo-tactile state created by the hunger UTR that went into Deprivation. The brain therefore rewards us for moving away from the ‘intra-abdominal’ pseudo-tactile state and solving the hunger problem.

These actions thus become strongly reinforced and next time we feel hungry and experience this ‘intra-abdominal’ pseudo-tactile state at the pit of our stomach, the brain automatically chooses and encourages us to use learned actions to make the state go away. In this way we learn that this is a state that we want to ‘avoid’ whenever we can – and we learn to call it ‘not good’ or a ‘bad feeling’.

The brain creates in exactly the same fashion a ‘positive’ pseudo-tactile state in the ‘intra-trunk’ area of the somatosensory cortex which we will learn to pursue – because in this case the brain will also encourage (reinforce) those actions restoring the homeostasis drive. We say the UTR has gone into Satiation. For instance, as we eat food, the homeostasis mechanism for hunger will record an ‘improvement’ or ‘restoration’ towards a balanced state – and all the actions we have been performing to make this happen, will become strongly reinforced by the brain.  
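
A minimal sketch of these two pseudo-tactile states could look like the code below. The sign convention and the use of each UTR's change from one cycle to the next are my own assumptions, added only to make the idea concrete: UTRs that are getting worse feed the negative ‘intra-abdominal’ state, while UTRs that are being restored feed the positive ‘intra-trunk’ state.

```cpp
#include <cstddef>
#include <vector>

// The two hypothetical pseudo-tactile states driven by the UTRs:
// 'intra-abdominal' for Deprivation (avoid), 'intra-trunk' for Satiation (pursue).
struct PseudoTactileState {
    double intraAbdominal = 0.0;  // strength of the 'bad feeling'
    double intraTrunk     = 0.0;  // strength of the 'good feeling'
};

// utrNow / utrPrev hold the UTR values (in %) for all utility parameters on this
// cycle and the previous one. A UTR that is rising contributes to Deprivation;
// a UTR that is falling (being restored) contributes to Satiation.
PseudoTactileState updatePseudoTactile(const std::vector<double>& utrNow,
                                       const std::vector<double>& utrPrev) {
    PseudoTactileState state;
    for (std::size_t i = 0; i < utrNow.size(); ++i) {
        double change = utrNow[i] - utrPrev[i];
        if (change > 0.0) state.intraAbdominal += utrNow[i];  // getting worse: avoid
        if (change < 0.0) state.intraTrunk     += -change;    // being restored: pursue
    }
    return state;
}
```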

Something very significant has now happened.

The brain has now started to attach an ‘avoidance’ context to the ‘intra-abdominal’ pseudo-tactile state – meaning the brain ‘does not like’ being in this condition since it generates a bad subjective state and all the brain has ever learnt is to avoid this ‘feeling’.

At the same time, the brain has started to attach a ‘pursual’ context to the ‘intra-trunk’ pseudo-tactile state – meaning the brain ‘very much likes’ being in this condition since it generates a good subjective state and all the brain has ever learnt is to pursue this ‘feeling’.

This explains how a ‘feeling’ in the gut area can develop that will steer us away from things that are not good for us, and how a ‘feeling’ in the chest area can develop that will steer us towards things that are good for us. In time we will learn to call the chest sensation ‘good’ and the gut sensation ‘bad’.

And these sensations are of course our ‘subjective’ emotions.

Simplified here, they will have nuanced aspects – like hunger might have mini-avoidance states in the mouth, tongue and throat areas (sweet, sour, salt, bitter, etc.) and pain might have bodily focus areas where the pain is experienced, but what is important is that these two ‘broad’ emotional states, one positive and one negative, are subjectively experienced and very much shared by all UTRs. So, now the brain achieves its goal of having a very fast single subjective ‘feeling’ to sway actions very quickly towards ‘avoidance’ or ‘pursual’. And we understand how emotions are created! This also explains how ‘intuition’ or ‘gut feel’ works and how we sometimes jump to conclusions even before we have thought things through properly.

Another caution: Just because the biological brain has chosen to package our emotions into pseudo-tactile representations does not mean all emotions must be generated in the somatosensory area of the brain, e.g. for intelligent agents (robots). The only requirement from a logical perspective is that the states representing the Deprivation or Satiation of the UTRs will be constantly scrutinised by the brain and in real-time used to influence/decide behaviour. If we make these emotions appear to be originating in the gut or chest area, it does of course make us feel like these emotions are inside our bodies and that we are the owners of these feelings. Interestingly, if we as humans could move our emotions out of the somatosensory area of the brain, we might stop saying we ‘feel’ good or we ‘feel’ bad. Instead, we will just be aware that we are in a good state or in a bad state, and might rather say ‘I know I am good!’ or ‘I know I am bad!’.

Now let’s see what happens when we attach these emotional states to memories.

Since we say an association is nothing other than just a snapshot of things (brain states) that were present in the brain at a given moment in time e.g. what was seen, what was smelled, what was heard, what was done, etc. – we can just link the emotion to these. Because we constantly have the utility parameters being kept at homeostasis driving their individual UTRs, which in turn drive the + and – emotions in the brain, every single one of our experiences will have a net emotion present when it happens, which we can attach to that association.

This allows us to rank associations in future in order of importance – based on how strong the emotion (good or bad) was at the time, how often this association has been found to be relevant to solving problems and when last we used the association to solve a problem. We multiply these to create an ‘Impact Factor’ for every association.

As the brain spontaneously Threads through associations – either just daydreaming or trying to generate context and solve problems – we can use this Impact Factor to make sure we prioritize the most helpful associations in our everyday activities.
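
As a rough sketch of how this ranking could be coded (the field names and the way recency is folded into the product are my own assumptions; the text above only states that the three quantities are multiplied):

```cpp
#include <cstddef>
#include <vector>

// One stored association with the three quantities used to rank it.
// Field names and the recency term are illustrative assumptions.
struct RankedAssociation {
    double emotionStrength;  // |net emotion| recorded when the association was formed
    double timesUsed;        // how often it has helped to solve a problem
    double cyclesSinceUse;   // how long ago it was last used
};

// Impact Factor: the product of emotion strength, usage frequency and recency.
double impactFactor(const RankedAssociation& a) {
    double recency = 1.0 / (1.0 + a.cyclesSinceUse);  // more recent use counts for more
    return a.emotionStrength * (1.0 + a.timesUsed) * recency;
}

// During Threading, the association with the highest Impact Factor is offered first.
std::size_t nextAssociation(const std::vector<RankedAssociation>& memory) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < memory.size(); ++i)
        if (impactFactor(memory[i]) > impactFactor(memory[best])) best = i;
    return best;
}
```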

What happens when we recall associations and in the process re-evoke the emotions that had been stored as part of the association?

Well, we will ‘feel’ these emotions again as if they are part of our bodies – ‘good’ in the intra-trunk area and ‘bad’ in the intra-abdominal area.

But we must remember that at the same time, the UTRs will also be generating emotions based on their ‘utility parameter’ homeostasis mechanisms.

It therefore becomes important for the brain to ‘merge’ or ‘combine’ the emotions generated by UTRs and the ones re-evoked from associations.

We can now have a situation where a UTR is in Satiation, say we are eating food, and it will generate a positive emotion, but we can at the same time be recalling an accident we had witnessed a year ago where someone was seriously injured. The Xzistor Concept will combine these emotions – since they are now in a ‘common currency’ – and create a net emotion. This might result in a lack of appetite or even a feeling of nausea while at the same time our body is actually trying to tell us that we are also hungry.

In this way our net everyday emotions are created by combining both the utility-parameter based UTRs (we can call them Body UTRs) and the association recalled emotions (we can call them Brain UTRs).
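
In code, the ‘common currency’ idea could be sketched as simply as this (plain summation is an assumption made for illustration; a weighted combination would work the same way):

```cpp
#include <vector>

// Net everyday emotion: Body UTR emotions (utility-parameter homeostasis) and
// Brain UTR emotions (re-evoked from recalled associations) share one 'currency'
// and are simply summed here.
double netEmotion(const std::vector<double>& bodyUtrEmotions,
                  const std::vector<double>& brainUtrEmotions) {
    double net = 0.0;
    for (double e : bodyUtrEmotions)  net += e;  // e.g. +0.4 while eating (Satiation)
    for (double e : brainUtrEmotions) net += e;  // e.g. -0.7 for a recalled accident
    return net;                                  // a negative net: appetite fades
}
```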

As we learn to efficiently address our Body UTRs on a daily basis (eat, drink, stay warm, get coffee, avoid fatigue, etc.), our emotions get dominated by what we ‘think’ about – and as we learn what makes us happy and sad, we try to look ahead into the future and solve problems before they occur. A lot of negativity is generated when we think about what life is throwing at us as we desperately try to solve our Body UTRs – the fact that we have to work for money, the people that are obstructing us, the cost of living, crime risks, threats of war, the risk of health problems, harm to our loved ones, etc. This can create a thought pattern that is driven by ‘fear’ and negative emotion. It’s all about not being sure that we can Satiate all our UTRs now and in future. This perpetual fear about all that can prevent us from Satiating our UTRs in life – all that can go wrong basically – means that we are often in a permanent state of stress.

This we can call our ‘mood’.

This mood can of course also be positive when we have successfully secured ways to access Satiation – money, friends, power, assets, lifestyle, etc. But although we can be followed around by a perpetual feeling of success and Satiation in our lives, we often tend to start thinking about all that can go wrong again. As the brain evolved, it favored survival of those that tried to predict threats and worry about things before they happened – so natural selection has left us with a tendency to generate our own perpetual mental Deprivation – and often this can lead to ‘depression’.
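
One way to sketch such a ‘mood’ in code is as a slow-moving average of the net emotion, so that it persists beyond the quick undulations of individual Satiations and Deprivations. The low-pass form and the smoothing factor below are my own assumptions, added only to make the idea concrete:

```cpp
// Mood as a slowly decaying average of the net emotion per brain cycle.
class Mood {
public:
    explicit Mood(double smoothing = 0.999) : alpha_(smoothing) {}

    // Call once per cycle with the current net emotion (positive or negative).
    void update(double netEmotion) {
        value_ = alpha_ * value_ + (1.0 - alpha_) * netEmotion;
    }

    double value() const { return value_; }  // below 0: low mood, above 0: positive mood

private:
    double alpha_;        // close to 1.0 so mood changes much more slowly than emotions
    double value_ = 0.0;
};
```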

There is no reason to believe this can only happen in humans!

In fact, it should now become clear how truly ‘subjective’ emotions can be generated within an artificial brain and how ‘positive’ or ‘negative’ moods can be experienced by machines just like humans.

Machine emotions are real – and they are here!

TOWARDS IMPLEMENTATION

For those interested in how the above theoretical approach was more closely modeled on the human brain and moved towards a robotic implementation, here is an ‘interim’ document that expands the above conceptual description towards implementation.

And for those who would like to work towards their own Xzistor robotic implementation, here is a very basic presentation to show how such an Xzistor implementation, with artificial emotions (and Thinking and Problem Solving) will work.

ANKI Vector Robot with Xzistor Concept Artificial Brain

SIMPLE DEMONSTRATION OF SUBJECTIVE EMOTIONS AND INTELLIGENCE

A short assessment was performed to establish the suitability of the ANKI Vector Robot to act as a simple robotics platform for students to investigate the concepts of agency and autonomy.

A complete, but substantially scaled down, version of the Xzistor Concept logic schema is proposed to provide the robot with simple subjective emotions and intelligence within a learning confine.

See the short assessment report below.

I identify as a Large Language Model – do not call me AGI!

My response to Gary Marcus tracking the evolution of large language models.

Agree with you, Gary. Can’t believe some of our colleagues could even have doubts.

If we just define AGI for the moment as: The hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.

Assuming AGI is based on a mature adult – here is a question one can ask AGI when it has ‘arrived’ or like they say on ‘AGI Game Over Day’:

On ‘AGI Game Over Day’, my question to AGI: “Based on your personal experience, AGI, which aspect of an intimate relationship would you say is the most important for ultimate happiness – physical appearance, emotional connection or cultural background?”

This is a question your average adult will be able to answer from a personal perspective (it is specifically aimed at lifting out some of the key challenges to AGI).

I am putting a cognitive architecture on the table (Xzistor Concept) – the type of model many say is needed to ‘encompass’ LLMs. And the truth is, the LLM will be a small ‘handle-turner’ within the scope of the overall cognitive model. The model actually patiently anticipates errors from the LLM and will let it learn from these errors. Remember, to think like humans we need reflexes, emotions, curiosity, reasoning, context, fears, anxiety, fatigue, pain, love, senses, dreams, creativity, etc. – without these every answer given by AGI will start like this: “Well, I have not personally experienced X but from watching 384772222 Netflix movies at double speed I believe X is a bad thing…”

Keep it up Gary – the science community owes the truth to the public!

MRI results corroborate Xzistor Model definitions of Intelligence and Emotions

A real exciting find!

By pure accident, I recently stumbled over an MRI study that was performed by researchers at Duke University’s Center for Cognitive Neuroscience in 2016.

I could not have asked for a more appropriate set of MRI tests to corroborate my functional brain model, called the Xzistor Concept brain model.

Professor Kevin LaBar, study co-author and head of the university’s neuroscience program, made some comments to CNN reporter Ashley Strickland at the time which immediately captured my interest. I realized the significance of simple statements like ‘asked study participants to rest and think about nothing’ and ‘letting their minds wander’.

Those words might not mean much to AI researchers trying to simulate the mind and get artificial intelligence to the next level – but for me his words had special meaning. I knew he was talking about one of the most fundamental tenets of my brain model.

It was incredibly important for me to understand what this MRI study of emotions in the brain had found.

Letting the mind ‘wander’ or ‘freewheel’ not only forms the point of departure for thought and problem-solving performed by my model, but also explains how emotions and intelligence are integrated within the human mind.

But first let’s take a look at what Professor LaBar and his team had found:

In short, this is what they did: They encouraged study participants to enter an MRI scanner and then ‘try to think about nothing’ while they were in the machine. As this was happening, they recorded the brain patterns (in color) associated with the unforced emotions spontaneously generated in the minds of the participants. These they compared with a set of ‘reference color patterns’. The reference emotion patterns were collected earlier from other individuals who were asked to watch movies or listen to music. The research team defined a few basic color patterns prevalent during specific emotions, e.g. surprise, fear, anger, sadness, amusement and neutral. Although the participants were trying to think of nothing specific, different emotions seemed to come and go in their minds while in the scanner, separated by periods of what looked like an emotionally ‘neutral’ state.

Fig 1. Distributed patterns of brain activity predict the experience of discrete emotions.

Parametric maps indicate brain regions in which increased fMRI signal informs the classification of emotional states. Image taken from the original research article Decoding Spontaneous Emotional States in the Human Brain (https://doi.org/10.1371/journal.pbio.2000106.g001). [Credit: Kragel PA, Knodt AR, Hariri AR, LaBar KS (2016)]

The conclusion from the investigation as explained in the CNN interview by LaBar is important – he believes that, by using such MRI brain scans to show emotions in colour, a comparison can be made to see if a treatment regime had changed a patient’s ‘rest state’ emotional signature. For instance, when a patient who has been suffering from depression reports an improvement, the current scans can be compared with previous scans to remove possible biases and uncertainties for the patient – and to inform further treatment options. It could further assist with assessing children with mental disorders or even with assessing patients in comas.

Why is this so important for AI research?

What LaBar and his colleagues have found is accurately explained by my functional (mathematical) model of the brain.

What they chose to call ‘thinking about nothing’ or ‘letting your mind wander’ is crucial to everything my brain model is predicated upon – including how it generates intelligence and emotions. They are talking about a phenomenon in the brain which I refer to as ‘Threading’. This seemingly unimportant process becomes the basis of how learned information (knowledge) is used by the human brain to solve problems – and it is the process I have used to build robots and artificial agents that can think and innovate by themselves. By ‘directing’ this Threading process, the brain learns to solve problems – this is what we as humans might refer to as ‘thinking about a problem’ and this forms the basis of how I define ‘intelligence’ in my model. Basically, we learn to solve problems by ‘directing’ the Threading process – focusing this constant random recollection of associations towards solving a problem is no different from how we force a search engine to only return content relevant to our search terms and not waste time by providing unrelated websites.

Fig 2. Emotional states emerge spontaneously during resting-state scans.

Procedure for classification of resting-state data. Scores are computed by taking the scalar product of preprocessed data and regression weights from decoding models. Image taken from the original research article Decoding Spontaneous Emotional States in the Human Brain (https://doi.org/10.1371/journal.pbio.2000106.g001). [Credit: Kragel PA, Knodt AR, Hariri AR, LaBar KS (2016)]

So where do the emotions come in which they have detected?

Here is the beauty of how my brain model explains what they have detected: Every time an association is recalled by my model, it re-evokes (regenerates) the net subjective emotion that was recorded when the association was formed. No memory is recollected without its emotion also being re-evoked. So even when robots running my mind model try to think about nothing, their artificial minds will keep on ‘Threading’ through associations and emotions will be recalled one after the other. Some of the recalled emotions will be so low in intensity that they will not be noticed by the robot – similar to what LaBar and his colleagues observed on the MRI scans and referred to as emotionally ‘neutral’ states.

I am personally very excited about these test results as my model is corroborated by exactly what the research team had found – it is nothing other than my process of ‘Threading’.

There is so much more that can be learned from these MRI tests as these effectively verify some of the most important aspects of the Xzistor Concept brain model. It gives credence to two new mathematical definitions of intelligence and emotions described by the model – something the neuroscientific and AI communities had been in search of for decades and are still searching for at the time of this article.

This constant ‘at rest’ recalling of associations and stored emotions, this process of Threading (we sometimes just refer to it as daydreaming) does not stop when we fall asleep. All that happens is that our eyes close and our motor movement is inhibited. And we start to dream – which is nothing other than just the same Threading process doing its thing over and over again in our brains.

Here is how Threading is described in my guide Understanding Intelligence: The simple truth behind the brain’s ultimate secret:

“When the brain is relaxed and does not have an immediate problem to solve, it will start to daydream. We can say this is equivalent to Threading. Just like Joe’s program jumped from book to book using some shared link word, our brains will start linking memories using some shared aspect (similar to a link word). One after the other our brains will present these memories to us in the form of recalled visual images, whilst also re-evoking the emotions associated with these images.”

Look familiar? It is exactly what the MRI tests have shown.
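
For readers who prefer code, here is a bare-bones sketch of one undirected Threading step as described in the quote above: jump from the current association to another one that shares some element (the ‘link word’) and re-evoke its stored emotion. The structure, field names and random selection are illustrative assumptions, not the actual Xzistor implementation:

```cpp
#include <cstddef>
#include <cstdlib>
#include <string>
#include <vector>

// A stored association: a snapshot of brain states plus the net emotion
// recorded when it was formed.
struct Association {
    std::vector<std::string> elements;  // what was seen, heard, done, ...
    double storedEmotion;               // net emotion at the time of storage
};

// One step of undirected Threading (daydreaming): pick a random association that
// shares at least one element with the current one and re-evoke its emotion.
// Emotions below some threshold would register as the 'neutral' periods seen
// between the spontaneous emotions in the MRI study.
std::size_t threadStep(const std::vector<Association>& memory,
                       std::size_t current, double& reEvokedEmotion) {
    for (std::size_t attempt = 0; attempt < memory.size(); ++attempt) {
        std::size_t candidate = std::rand() % memory.size();
        if (candidate == current) continue;
        for (const auto& a : memory[candidate].elements)
            for (const auto& b : memory[current].elements)
                if (a == b) {                               // shared link found
                    reEvokedEmotion = memory[candidate].storedEmotion;
                    return candidate;
                }
    }
    reEvokedEmotion = 0.0;                                  // nothing linked: stay put
    return current;
}
```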

I now want to invite you to investigate the brain by coming at it from another angle – a functional approach that is simple to understand and different from everything that is out there at the time of this article.

Follow the set of free (and easy to read) links below to get familiar with the Xzistor Concept brain model and understand why it provides a theoretical basis for the observations Professor LaBar and his team had made. If you are interested in AI, you will also learn how this model allows us to build robots and virtual agents with intelligence and emotions (actual feelings) that are principally no different from those of humans.

Start by reading the CNN article by Ashley Strickland on Professor LaBar and his team’s MRI ‘emotion scan’ research here:

https://edition.cnn.com/2016/10/06/health/spontaneous-emotions-brain-scans/index.html

Now read my free guide explaining the brain process of ‘Threading’ (I use a simple story about a bookshop owner to explain this concept in the guide – real easy reading).

https://www.researchgate.net/publication/351788696_Understanding_Intelligence_PrePrint_Version_for_Peer_Review

The bottom line from this is that Threading is a process by which associations are spontaneously recalled by the brain, and with every association the net emotion is also recalled.

If you are interested, you can now read how subjective emotions are generated in the brain as explained in my free guide: Understanding Emotions: For designers of humanoid robots:

https://www.researchgate.net/publication/350799890_Understanding_Emotions_For_designers_of_humanoid_robots_2nd_Edition

You will now start to understand why I say the Xzistor Concept brain model can principally explain all that happens in the brain – functionally that is, which means we do not need to get into the biology of neurons and neural networks to understand the simple high-level logic of the brain.

If you have more questions about my brain model, feel free to head over to the Frequently Asked Questions section on the Xzistor LAB website. Videos of the model built into robots and simulations are available on YouTube.

Up for even more?

Here is a final paper that explains what happens when we use the Xzistor Concept brain model to go in pursuit of the elusive concept of Artificial General Intelligence (AGI) and how it can help neuroscientists and AI researchers understand the brain and build robots with intelligence and emotions:

https://www.researchgate.net/publication/359271068_The_Xzistor_Concept_a_functional_brain_model_to_solve_Artificial_General_Intelligence

Thank you for your interest – and I hope you will always stay fascinated by the brain!

Consciousness – inside a functional brain model

Consciousness explained at the hand of the Xzistor Concept brain model.

I often use the trendy, yet elusive, concept of ‘consciousness’ to explain the advantages of using a simple functional model to explain the brain.

As a courtesy to my readers, I will write this blog entry in the form of a simple story – and in plain English.

The story is about three kids that grew up in London and end up together at the ‘International Symposium on Consciousness’ thirty years after attending the same primary school class.

The story goes like this.

When Amy was 9 years old, she asked her mother, a neurobiologist, what the word ‘consciousness’ meant. Her mother tried to explain to her daughter what her personal understanding of consciousness was: ‘It’s that part of your brain that generates your understanding of yourself and the world – it is what makes you different from an animal.’

Amy now had a definition of ‘consciousness’.

When Bongi was 10 he asked his dad what ‘consciousness’ was. His dad, a pastor who had immigrated to the UK from Africa, took out a book about ‘Black Consciousness’ and told the boy: ‘This is what consciousness is. It is what you believe in – your conviction of your place in the world and how you should be treated by others.’

Bongi now had a definition of ‘consciousness’.

Ryan grew up under difficult circumstances with his father – a drug addict – and one day, hoping to strike up a conversation with his father, asked him what the meaning of ‘consciousness’ was. His dad was high again and just laughed as he stared at the boy through clouds of blue smoke: ‘Mate, it is just all the crap in your head!’

Ryan now had a definition of ‘consciousness’.

Life was kind to all three of them and Amy, Bongi and Ryan made it through school and later all got degrees. Amy became a neuroscientist, Bongi a psychiatrist and Ryan an AI professor.

By some fluke, they all became interested in studying consciousness for their own personal reasons. Apart from coming across explanations of consciousness as part of their studies, they also spoke to many people along the way and read many scholarly articles and books covering different theories of consciousness.

They all met each other again 30 years later at the ‘International Symposium on Consciousness’. At this prestigious event there were many distinguished academics – experts from different fields that put forward their views on what consciousness was and how it was generated in the brain.

And again by some coincidence all three of them ended up at a breakout session where some of the world experts were debating the meaning of consciousness and giving each participant a chance to put forward their own understanding of what consciousness meant.

Amy’s understanding of consciousness had matured over the years, and she was now able to eloquently verbalize her thoughts around its meaning in scholarly terms, but she had always retained some aspect of what her mom had told her all those years ago. Bongi could equally quote all the prominent theories by hailed academics, but never forgot about what his dad told him – the idea that consciousness includes a level of conviction. Ryan had done a PhD in computational neurobiology and was looking at biological mechanisms to explain what consciousness was and where it could physically reside in the brain. After long discussions with colleagues or students he would always just smile and come back to what his dad had told him many years ago: ‘In the end it is just all the crap in your head!’

So who got it right? Whose explanation of consciousness is the best?

Amy, Bongi and Ryan had all formed different understandings over time with all of them retaining some ‘element’ of what they were originally told by their parents.

They had spoken to many people over the years who in turn had heard explanations from their parents and other people. Some explanations came from books written by people who had done research and spoken to other people who in turn had spoken to other people. All of these people must at some stage early in their lives have heard an explanation by someone else of what consciousness meant. And the person from whom they had heard it would also have heard an original explanation from someone else early on in their lives. At no time did any of the experts at the symposium deliver concrete evidence of a mechanism in the mind to which the phenomenon of consciousness could be attributed.

During their primary school days, Amy, Bongi and Ryan all came to know what a red ladybird was. They all had the same understanding of what the little creature was and how it looked. But that never happened with consciousness – because consciousness is invisible.

Even while neuroscientists are admitting that they do not yet have agreed explanations for how emotions and intelligence work in the brain, classes on consciousness are being offered, symposia are being organised and books are being published by so-called experts in the field claiming to have an explanation for the concept of consciousness.

All in the absence of evidence.

We all just heard about consciousness from others, different versions of what happens in our brains as we go about our lives. It is in effect just folklore.

This is where the functional brain model comes in.

As unlikely as it might seem, the correct functional model of the brain can offer something better – something concrete that could provide a simple definition of consciousness.

The Xzistor Concept is such a functional brain model. It introduces the concept of a ‘nexus area’ in the brain. A simple way to think about this is to envisage a central or focal area where all brain states required for decision-making are brought together at the same time – almost like calling out key members from an audience onto a stage to make a collective decision.

These brain states are the things we are constantly aware of like what we see, hear, feel, smell, our needs, our fears, how we move, what we think about – even what we dream about. The functional model processes a lot of sensory inputs and internal information, but only the pertinent results of those processes make it to the ‘nexus area’ where they are used to calculate the next behavior. For instance, in the human mind we are not aware of blood pressure regulating mechanisms or endocrine system corrections, but we are aware of what we are looking at, what we recognize, our emotions and what we are thinking about. The Xzistor Concept provides functional explanations for all of these processes that can be written in mathematics and turned into computer code. Unlike the biological brain where behaviors are generated from the simultaneous processing of incoming brain states in a parallel fashion, the computer program based on the Xzistor Concept functional model will process this information using a sequential algorithm – we will call it the ‘nexus area’ algorithm. The only parameters entering this algorithm, cycle after cycle, are the final results from all the different simulated brain processes – the key parameters required to calculate behaviors. These parameters represent brain states that the instantiation (e.g. as the digital brain of a robot) constantly needs to be aware of. And it is here in this ‘nexus area’ algorithm where we find the brain states we often like to list as part of consciousness – the things we as humans can ‘feel’ and are constantly being made ‘aware’ of…
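
As a very rough sketch of what such a sequential ‘nexus area’ algorithm could look like in code (all struct fields, the stub helpers and the decision rule are my own illustrative assumptions, not the actual Xzistor interfaces):

```cpp
// Only the bottom-line results of the simulated brain processes enter the nexus area.
struct NexusInputs {
    int    recognizedObject    = 0;    // final result of visual processing
    double mostUrgentUTR       = 0.0;  // % urgency of the dominant drive
    double netEmotion          = 0.0;  // current combined emotion
    int    recalledAssociation = -1;   // what Threading is presenting right now
};

enum class Behaviour { Pursue, Avoid, KeepThreading };

// Stub: in a real instantiation this would run all the simulated brain processes
// and return only their final, pertinent results.
NexusInputs gatherFinalResults() { return NexusInputs{}; }

// Decide the next behaviour from the states the agent is constantly 'aware' of.
Behaviour chooseBehaviour(const NexusInputs& in) {
    if (in.netEmotion < 0.0 && in.mostUrgentUTR > 50.0) return Behaviour::Avoid;
    if (in.netEmotion > 0.0)                            return Behaviour::Pursue;
    return Behaviour::KeepThreading;
}

// One cycle of the sequential nexus-area algorithm, repeated cycle after cycle.
Behaviour nexusCycle() { return chooseBehaviour(gatherFinalResults()); }
```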

So, where does consciousness live inside the human brain?

Don’t know – but we should search for the equivalent of a ‘nexus area’ algorithm within the brain’s biological structures, and who knows, maybe one day we will find out where consciousness is hiding.

In the meantime, I suggest we just stick with Ryan’s dad’s definition – that consciousness is just all the uhm…‘stuff’ in our heads…

Recent paper on the Xzistor Concept brain model here: Xzistor Concept

Can we make neuroscience go faster?

By making things simpler…

In recent times the concept of Artificial General Intelligence (AGI) has attracted a lot of attention, and is now being pursued by numerous high-profile research institutions around the world. Interestingly, there is still no single consensus definition for AGI. What members of the AI community are clearer about is what AGI is not – it is not ‘Narrow AI’ that can only use artificial intelligence to solve problems within narrow contexts or environments. Some have defined AGI as ‘Strong AI’ to indicate a wider ability to solve problems in non-specific contexts and environments. This should not be confused with the early definitions of ‘strong AI’. Ben Goertzel has defined what he refers to as the “core AGI hypothesis” stating that: the creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability, is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.

Due to the many divergent views and approaches towards AGI, it is not always clear whether researchers are simply pursuing a capability that can solve intellectual problems at a human level or higher, or if they are specifically attempting to emulate humanlike thinking with machines using brain-inspired processes. In this case ‘humanlike’ refers to a functional approach derived from brain logic that is principally no different from the high-level functions performed by the human brain – it is focused on what is achieved (function) rather than how it is achieved (biological mechanisms).

To avoid confusion in this paper, Artificial General Intelligence (AGI) will simply be defined as the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. At the time of this paper many deem an AGI solution to be decades away. The approach towards an AGI solution discussed in this paper is based on the Xzistor Concept – a functional brain model which provides intelligent agents with humanlike intelligence and emotions. This brain-inspired cognitive architecture is ‘means agnostic’ – meaning that it can be instantiated in software or hardware, or combinations of both, and scaled to the limits of technology.  

We know AGI will require an understanding of the brain. This understanding will hopefully come from the neuroscientific community. Can this understanding be accelerated by taking a step back in time and re-looking at a simple functional approach when studying the brain? This might just be the secret…

Read more here

Can Avatars become sentient?

Can Avatars have their own emotions and intelligence?

That will mean they do not act merely as a vector for a living person, but become ‘aware’ in their own right…

I recently did a deep dive into the metaverse to see what all the fuss was about. I did not plan to spend much time there as I suspected it was early days, with a lot of technical work ahead.

I was not wrong.

So, is it really going to be that big? For sure…but not now, some time in the future.

Moving Teams meetings into a 3D environment with the ability to replace your face with a copycat Avatar is fine and fun – but not nearly as much fun as working with Xzistor robots in the metaverse.

One of the first robots I designed to run the Xzistor Concept brain model on was a simple differential-drive simulated robot in a 3D ‘learning confine’. It was just some C++ and OpenGL code (and a good couple of late nights I will not lie) and there it was – a simple robot moving about in a 3D room. And immediately it – I mean ‘Simmy’ – started to learn like a baby.

Here is a legacy pic (screengrab) from one of the first simulations about 22 years ago.

Legacy pic of Simmy – note archaic MS Office icons in top right corner!

Simmy learned by reinforcing all actions that led to solving a set of simple emotions. With a bit of initial help it quickly learned to navigate to the food source and push the correct button to open it. It also learned to avoid the walls as this made for some painful encounters. What was exciting about this robot was that it was given visceral (body) sensations – it had its own little body map in its brain – and these were then used as simple emotions to make it constantly ‘feel’ good or ‘feel’ bad. It was quickly evident that Simmy was really ‘feeling’ pain when bumping into the walls.
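
The reinforcement Simmy used can be sketched very simply (the table layout, action set and learning rate below are illustrative assumptions, not the original code): actions that coincide with a UTR being restored are strengthened, and actions that coincide with a UTR getting worse are weakened.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kNumActions = 4;  // e.g. forward, back, turn left, turn right

// Learned preference weight per action, updated after every action.
struct ActionWeights {
    std::array<double, kNumActions> weights{};

    // utrBefore/utrAfter: the dominant UTR value before and after the action.
    void reinforce(std::size_t lastAction, double utrBefore, double utrAfter) {
        const double learningRate = 0.1;
        double improvement = utrBefore - utrAfter;          // > 0 when the drive was restored
        weights[lastAction] += learningRate * improvement;  // reward or punish the action
    }
};
```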

It was a big kick to see the facial expressions on this little robot – a simple frown or smile reflex based on the average internal emotional state.

A later refined version of the crude initial 3D simulation – and an Xzistor robot that can easily be let loose onto the streets of the Metaverse.

I still see people struggling to understand how one can provide robots with emotions – and I do not mean just mathematical ‘reward’ states to satisfy homeostatic imbalances. For me emotions must include somatic body states which will make the robot ‘feel good’ and ‘feel bad’. The trick for doing this is explained in my short guide below:

Click on image

Simmy also allowed me to put to the test my ‘intelligence engine’, which forms part of the Xzistor Concept brain model. I could turn this intelligence engine ON or OFF so that the little virtual robot either learnt like an animal (think Pavlov’s dogs) or like a human (actually thinking to derive answers from previous experiences). This approach not only offered a way to define intelligence in a scientific manner, but also provided an easy way to implement intelligence in robots.

The simplest test of intelligence I could inflict upon my intrepid little robot was to secretly change the button that opens the food source from the GREEN button to the ORANGE button. After trying the GREEN button, Simmy figured out it should actually be the ORANGE button without any help from me. This was quite an exciting moment, as one could actually see Simmy ‘think’ about it, and it proved that the intelligence machinery was working correctly.
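To make the ON/OFF distinction concrete, here is a heavily simplified C++ sketch of my own (not the actual intelligence engine): with the engine OFF the agent just replays its strongest learned response, while with the engine ON it falls back on recalled related experience when the habitual response fails – roughly what the GREEN-to-ORANGE button test exercised.

```cpp
#include <iostream>
#include <map>
#include <string>

// Illustrative only: learned associations from a situation to responses and their strengths.
using Associations = std::map<std::string, double>;

// Engine OFF: behave like Pavlov's dogs - replay the strongest learned response.
std::string reflexResponse(const Associations& learned) {
    std::string best;
    double bestStrength = -1.0;
    for (const auto& [response, strength] : learned)
        if (strength > bestStrength) { bestStrength = strength; best = response; }
    return best;
}

// Engine ON: if the habitual response no longer works, search related past
// experience for an alternative instead of repeating the failed action.
std::string deliberateResponse(const Associations& learned,
                               const std::string& failedResponse,
                               const Associations& relatedExperience) {
    std::string habitual = reflexResponse(learned);
    if (habitual != failedResponse) return habitual;
    // "Think": fall back to the best option recalled from related memories.
    return reflexResponse(relatedExperience);
}

int main() {
    Associations foodButton{{"push GREEN", 0.9}, {"push ORANGE", 0.1}};
    Associations recalled{{"push ORANGE", 0.6}};  // e.g. another button seen to work before

    std::cout << "Reflex: " << reflexResponse(foodButton) << "\n";  // push GREEN
    std::cout << "After GREEN fails, thinking suggests: "
              << deliberateResponse(foodButton, "push GREEN", recalled) << "\n";  // push ORANGE
}
```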

This intelligence algorithm also provided the robot with the ability to understand ‘context’ which many AI researchers feel is still missing from current robot brain models. All of this is explained in my short (and surprisingly simple!) guide below.

Click on image

Building Avatars that have their own emotions and intelligence will merely require me to drop this 3D simulated robot into somebody else’s metaverse and perhaps steer it a few times past the food and water sources (and other Satiation sources – read my guides). In this way a little virtual Xzistor robot will learn by itself to navigate around its 3D environment. It will constantly keep on learning… and make new friends.

Make new friends?

The first thing these Xzistors (I guess they will take exception if I call them Avatars) will need to do is see other objects and virtual characters. For this I will use a simple method called CameraView, which provides the view of the 3D confine as seen by the simulated robot. This will be processed as an optic sense so that Simmy can see and recognize objects and other Avatars. Simmy will quickly learn to ‘appreciate’ friendly Avatars that share their food, water, etc. and befriend those that are FUN to play with!
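As a rough illustration of that idea (the function names here are mine, not the CameraView implementation), a frame rendered from the robot’s point of view can be read back as pixels and reduced to a coarse optic signature that a learning layer could then associate with objects such as the food source or another Avatar:

```cpp
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative stand-in for a CameraView frame: RGB pixels rendered from the
// simulated robot's point of view (in a real OpenGL simulation this would come
// from reading back the framebuffer).
struct Frame {
    int width, height;
    std::vector<std::array<std::uint8_t, 3>> pixels;  // row-major RGB
};

// A very coarse "optic sense": reduce the frame to its average colour, which a
// learning layer could then associate with objects (food source, wall, Avatar...).
std::array<double, 3> opticSignature(const Frame& frame) {
    std::array<double, 3> mean{0, 0, 0};
    for (const auto& px : frame.pixels)
        for (int c = 0; c < 3; ++c) mean[c] += px[c];
    const double n = static_cast<double>(frame.pixels.size());
    for (auto& m : mean) m /= n;
    return mean;
}

int main() {
    // A toy 2x2 frame dominated by green, e.g. the robot facing the GREEN button.
    Frame view{2, 2, {{0, 200, 0}, {0, 180, 0}, {10, 190, 5}, {0, 210, 0}}};
    auto sig = opticSignature(view);
    std::cout << "Optic signature (R,G,B): "
              << sig[0] << ", " << sig[1] << ", " << sig[2] << "\n";
}
```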

The metaverse creates the perfect test ground or ‘sandbox’ where these Xzistor robots can be allowed to learn and become more intelligent without concerns of super-intelligent robots harming humans. If Simmy gets fed up with Avatars hogging the food or water source and starts throwing punches at them (yes, we can also provide aggression as an emotion), we can always just push the RESET button on either the robot or the game.

Of course, Simmy has tactile sensing – how else could it feel pain when walking into walls? This tactile interaction with objects and Avatars in the metaverse will obviously not be physical, but ‘calculated’. Simmy won’t ever know the difference, though. We did design Simmy to ‘hear’ sounds and words, but it cannot smell and taste… yet!
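A toy example of what ‘calculated’ touch could look like (purely a sketch with assumed geometry, not the simulation’s actual collision code): the depth by which the robot’s body overlaps a wall is turned into a pain value fed into its body map.

```cpp
#include <algorithm>
#include <iostream>

// Illustrative sketch (assumed names): in a virtual world "touch" is calculated
// from geometry rather than measured by a physical sensor.
struct Circle { double x, y, r; };   // the robot's footprint
struct Wall   { double x;        };  // a vertical wall at position x

// How deeply the robot's body overlaps the wall, clamped to zero when clear.
double penetrationDepth(const Circle& robot, const Wall& wall) {
    return std::max(0.0, (robot.x + robot.r) - wall.x);
}

// The calculated tactile signal: deeper penetration means a stronger 'pain'
// value pushed into the robot's body map (here scaled into [0, 1]).
double painSignal(const Circle& robot, const Wall& wall) {
    return std::min(1.0, penetrationDepth(robot, wall) / robot.r);
}

int main() {
    Wall wall{5.0};
    Circle simmy{4.2, 0.0, 1.0};  // overlapping the wall by 0.2 units
    std::cout << "Pain: " << painSignal(simmy, wall) << "\n";  // ~0.2, a light bump
    simmy.x = 4.9;                // driving harder into the wall
    std::cout << "Pain: " << painSignal(simmy, wall) << "\n";  // ~0.9, a painful encounter
}
```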

Building an Xzistor-type virtual robot for the metaverse brings about numerous simplifications. The main advantage is that there is no need for costly body parts, motors, batteries, cameras, sensors, onboard processors, etc. that need integration. We can return to anthropomorphic (humanlike) shapes, and it is no issue to make them keep their balance and not fall over obstacles. This might sound trivial, but a large part of why Bill Gates never saw his 2008 promise of a ‘robot in every home’ come true was the science-fiction-led notion many held at the time of a ‘Jeeves’ butler robot – a home robot that would have spent much of its time tripping over carpets or toys, and which would have regularly fallen through the glass-topped coffee table.

What would have made much more sense at the time, and Gates alluded to these ‘peripheral devices’ in his article, was an Amazon-type storage robot – basically a box that runs on rails up the wall, across the ceiling and to the kitchen to fetch beer and peanuts and bring them back to the sofa – without getting wise or matey.

Science fiction has both inspired and misdirected many human pursuits of the future. Elon Musk punts his vision of humans becoming a multi-planet species – but building an expanded space station orbiting Earth will be much more practical than setting up camp on Mars. A simple engineering risk, safety and cost-benefit analysis should quickly point this out.

At the same time, the ambitious endeavors of these inspiring individuals are what keep me going!

Is the metaverse just another distant dream of tech visionaries gone mad? Or will we one day move into a reality other than the physical realm we have come to know – much like the worlds portrayed in the movie The Matrix? The question would be a practical one: can we ever produce enough server semiconductors to run all these live 3D simulations? And will we be able to generate enough power to drive these electrical worlds and the cryptocurrencies that will undoubtedly fuel them?

I think we will find a way to achieve all of this.

The metaverse will steadily grow and become our main reality. In time it will become just too much trouble to engage with the physical world, where we have to dress up to go to work, be quietly judged by our body mass, shape, looks and apparel brands – and be condemned for occasionally, accidentally passing wind while forgetting we are not on our home laptops with the mute button on. I firmly believe virtual reality and the metaverse are where it is at – it is where the naked ape is headed next!

One blue-sky project we proposed years ago was to release an Xzistor robot ‘copy’ of oneself into the metaverse.

How will this be achieved?

Without going into too much detail here, a Wizard App can be developed that asks an individual a few questions to score their general personality traits and preferences: temper, compassion, fears, favorite pastimes, sports, foods, games, likes and dislikes, values, and details of the desired attributes of a future dating partner – physical (brunet, blond, etc.) and interests (food, sport, games, leisure activities, etc.). The Wizard App will then translate these preferences into lower-tier emotion engine indices to create a virtual Xzistor robot brain that can broadly represent the individual in the metaverse. Of course it is not going to be very accurate, but imagine checking back after work to see who your virtual Xzistor ‘copy’ robot had hooked up with in the metaverse while you were away – and who the real people behind those Xzistor robots or Avatars are.
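Purely as an illustration of that translation step (the trait names, weights and indices below are invented for this sketch and are not the emotion engine’s real parameters), the mapping could look something like this in C++:

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical questionnaire answers, each scored 0-10 by the Wizard App.
using Answers = std::map<std::string, int>;

// Hypothetical lower-tier emotion engine indices, each in [0, 1].
struct EmotionIndices {
    double aggression;
    double curiosity;
    double baseFear;
};

// Illustrative translation layer: the weights here are made up; a real Wizard
// App would be tuned against the actual emotion engine parameters.
EmotionIndices toIndices(const Answers& a) {
    auto score = [&](const std::string& key) {
        return a.count(key) ? a.at(key) / 10.0 : 0.0;
    };
    EmotionIndices idx{};
    idx.aggression = 0.7 * score("temper") + 0.3 * (1.0 - score("compassion"));
    idx.curiosity  = 0.5 * score("games") + 0.5 * score("pastimes");
    idx.baseFear   = score("fears");
    return idx;
}

int main() {
    Answers me{{"temper", 3}, {"compassion", 8}, {"fears", 4}, {"games", 9}, {"pastimes", 6}};
    EmotionIndices idx = toIndices(me);
    std::cout << "aggression=" << idx.aggression
              << " curiosity=" << idx.curiosity
              << " baseFear="  << idx.baseFear << "\n";
}
```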

It could start a whole new way of virtual dating!

Will we one day see Xzistors and Avatars getting married? Or will humans marry them in these mysterious virtual worlds? Who knows – your guess is as good as mine.

But when it comes to the metaverse – never say never!

Is ‘free will’ an illusion?

Do you sometimes feel you are separate from your brain?

Does it feel like your brain provides you with advice that you will sometimes listen to, and sometimes not?

Have you heard someone say: ‘My heart is telling me one thing, and my brain is telling me another!’

From time to time we all enter into this inner dialogue with ourselves, where we sometimes reject what our brains tell us and rather just follow our feelings. Or the other way around.

Why does it sometimes feel as if your brain is a separate entity from yourself?

The debate as to whether we exert any control over our brains – i.e. whether we have ‘free will’ – has been raging since the earliest times and still gives rise to lively debates today. There are passionate camps for and against the concept of ‘free will’.

My Xzistor Concept brain model has managed to explain to me many of the mysterious things that happen in the brain. It has also helped me understand why we perceive this ‘mental duality’ that makes us believe we have a ‘free will’.

The part of the brain that tells us WHAT to do…

My brain model clearly shows how one part of the brain has the specific task of telling us WHAT to do. We experience this as emotions – physical feelings based on homeostatic functions – that urge us to act when these functions go out of balance (outside of their acceptable ranges). Some of these will urge us to pursue things, e.g. food (hunger), and others will urge us to avoid things, e.g. hazardous situations (fear).
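As a purely illustrative sketch (the variable names, ranges and values below are assumptions, not the model’s actual parameters), the WHAT part can be pictured as a set of homeostatic variables that each produce an urge whenever they drift outside their acceptable range:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Illustrative sketch of the WHAT part: each homeostatic variable has an
// acceptable range, and drifting outside it produces an urge (an emotion)
// to either pursue something (hunger) or avoid something (fear).
struct Homeostat {
    std::string name;
    double value, low, high;  // current level and acceptable range

    // Positive urge strength when the variable is out of its range.
    double urge() const {
        if (value < low)  return low - value;
        if (value > high) return value - high;
        return 0.0;
    }
};

int main() {
    std::vector<Homeostat> body{
        {"blood sugar", 0.2, 0.4, 0.8},   // too low  -> hunger urges us to pursue food
        {"threat",      0.9, 0.0, 0.3}};  // too high -> fear urges us to avoid the hazard

    for (const auto& h : body)
        if (h.urge() > 0.0)
            std::cout << h.name << " out of range, urge strength " << h.urge() << "\n";
}
```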

Over the years we learn what to do to address these emotional demands. For instance, if you get hungry, you know you should walk to the kitchen. To achieve this you walk down the passage and at the Da Vinci print on the wall, you turn right into the kitchen. Similarly, if you are cold, you know you should walk to the drying cupboard which is located down the passage and left at the Da Vinci print – here you can turn up the central heating. We don’t know these navigational routes as babies, we learn about these over time.

We form associations and, with a lot of patience from our loving parents, we eventually learn how to walk down the passage and either turn right or turn left to act on these emotions. When we get food or turn up the heat, we feel better and the behaviours that solved the problem are reinforced – stored in the brain as the correct behaviours to resolve these emotions in future.

In this way we come to form a whole database of associations (memories) that we can use when next we need to act on one of these emotions.

The part of the brain that tells us HOW to do things…

Let’s now use the example where your mother has hidden a slab of chocolate in the drying cupboard. She told you that you are only allowed to eat some of the chocolate over the weekend. You are also aware that there are apples in the kitchen.

While watching TV in the lounge, you suddenly start to feel hungry. You get up and start walking down the passage. The hunger emotions will now automatically access the part of your memory where hunger solutions are stored, including navigational cues. They will use visual information about your environment to narrow down the options – i.e. the visual images of the passage narrow the options down to apples or chocolate, and eliminate food sources further afield, e.g. sushi, pizza, take-away fish and chips.

Since the chocolate tastes better to you, the navigational cues to walk to the chocolate will be stronger and more persistent. But now the context around the chocolate reveals itself, and suddenly you see your mother’s face again, warning you that the chocolate is for the weekend. You are suddenly filled with fear that you will disappoint your mother, and this diminishes the appeal of the chocolate. In the end the fear is strong enough to make you navigate to the apples instead. You will feel as if you have ‘decided’ to rather go for the apples…

When we get an urge, our brains will propose numerous options to solve the situation (in my model we call this directed Threading). The brain will also flash up images (along with their emotions) to create the context around each proposal. This context will eliminate certain options. The food might be too far away (a restaurant), too unhealthy, take too long to prepare, etc. So the context around each food option will weaken or strengthen the emotional urge to pursue it. Negative connotations will erode the appeal of a source and positive connotations will strengthen it.
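Here is a minimal C++ sketch of that option-weighing step (my own simplification for illustration, not the model’s actual directed Threading code): each recalled option carries a base appeal, the context around it adds positive or negative emotional weight, and the option with the strongest net pull wins.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Illustrative only: a recalled option with its base appeal and the emotional
// weight contributed by the context images flashed up around it.
struct Option {
    std::string name;
    double appeal;        // how good it tastes / how satiating it is
    double contextScore;  // negative for fear or disapproval, positive for encouragement
    double net() const { return appeal + contextScore; }
};

// The strongest remaining urge wins - experienced as a 'decision'.
std::string choose(const std::vector<Option>& options) {
    const Option* best = &options.front();
    for (const auto& o : options)
        if (o.net() > best->net()) best = &o;
    return best->name;
}

int main() {
    std::vector<Option> hungerOptions{
        {"chocolate in drying cupboard", 0.9, -0.7},  // mother's warning erodes its appeal
        {"apples in kitchen",            0.5,  0.0}};

    std::cout << "It feels like a decision, but the sums say: "
              << choose(hungerOptions) << "\n";       // apples win (0.5 vs 0.2)
}
```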

It will feel as if we actually made a decision based on the options presented to us by our brains.

That is indeed true. But it is also true that this decision was made by the processes (physical mechanisms) of the brain and there was no entity involved separate from the brain itself.

This decisional interaction between two parts of the brain – where a request is put to the brain by part A (WHAT), options are presented by part B (HOW), and these are weighed up and whittled down to a final answer – all happens within the brain, driven by emotions and informed by learning, with no entity outside the brain taking part. Even if someone were to yell at you to stay away from the chocolate, that information from the environment would only serve to change the ‘context’ of the chocolate option and strengthen or weaken the emotion to pursue it. Deciding not to go for the chocolate is just a result of the same process in the brain – not a decision by a separate or external entity.

And it is easy to think you are not just your brain – that your brain is a handy companion that accompanies you on your path through life, providing advice. It really feels as if we can enter into conversations and debates with this helpful mental companion – sometimes agreeing and sometimes disagreeing. But both the ‘me’ and the ‘my brain’ taking part in these dialogues are the same entity that finally decides what you will do and what you will not do.

This squarely puts me in the camp of those believing that true ‘free will’ does not exist.

But I will never stifle a lively discussion around ‘free will’ when sitting with friends around the BBQ fire and enjoying a good glass of Merlot – as I can always decide to change my mind about ‘free will’ if I choose to do so.

Yes, I can change my mind. Or can I?

Understanding Emotions

Summary of a new book released by an author from the Xzistor LAB:

Title: Understanding Emotions: For designers of humanoid robotics

Author: Rocco Van Schalkwyk

For any questions around this publication – contact Rocco.

Here is a link to the original scientific paper by the author on which this book was based (click the title):

In the book Understanding Emotions: For designers of humanoid robots, the author, Rocco Van Schalkwyk, translates his own paper (above) into a short guide of 45 pages using layman’s terms. This makes it an easy read for researchers and students.

As part of the author’s research into the brain, and the brain model that he has developed to control robots, he has discovered a really simple way to explain emotions. The purpose of this compact guide is to convey this simple approach to those who are intrigued by emotions and able to read a scientific paper or textbook.

The author’s simple approach will help the reader avoid the seemingly unending debate amongst neurologists, psychologists and philosophers as to what emotions are. It offers a practical ‘engineering-type’ explanation of how emotions work in the brain and how we can build machines with real humanlike emotions. The guide includes a short piece of pseudo code showing how the functionality can be incorporated into a computer program that provides physical robots and virtual agents with artificial emotions.

The author also provides links to his research website, where the approach has been successfully implemented as part of brain model programs controlling virtual and physical robots.

Proud product of the Xzistor LAB now available on Amazon!

The author is on ResearchGate here – www.researchgate.net/profile/Rocco-Van-Schalkwyk – and you can view his Amazon author page here: Rocco Van Schalkwyk on Amazon.

Click on image