An informal discussion with those who have questions about robot brains that I think I can answer – else expect a reply like ‘Very busy at work at the moment!’
It all started on Twitter when I (alias @xzistor) said:
Understanding that machines can subjectively experience emotions requires a mental journey. Study the scientific explanation I offer very carefully. Next realize that there are other ways to provide the same bio functions necessary and sufficient to generate emotions. Be brave!
By the way, my name is Rocco Van Schalkwyk (just Rocco) from the Xzistor LAB. I am an engineer working in the field of marine/subsea robotics and autonomous ships – and do brain modeling as my hobby. I have a few collaborators who are PhD level specialists in different fields including neuroscience and brain modelling. I will reach out to them when the questions get too hard for me to answer!
The above was my response to @rome_viharo from Big Mother who said he wanted to understand the Xzistor Concept brain model.
I said he must be brave – it could be a lonely journey.
He said he’ll be brave. Brrr…he is still skeptical, so I will have to come up with something new if I want to smash the ‘hard problem’. Think I have something for him! But I’m ready to hear his arguments too!
All that happens in the Xzistor brain model can be understood with the help of a few basic terms, principles and concepts. The easiest way in is to read my two short (free) guides (absolutely necessary to proceed on the ‘…not so hard journey…’).
Without reading the above two guides it will be hard to proceed…really…as they introduce some key definitions and principles.
“The problem I have to solve is to provide scientific evidence that will convince @rome_viharo and others that the functions in the human brain can be simplified and generated by other means, and that this can over time, with sufficient learning, be enough to develop a sense of subjective reality in the mind of a robot that is principally no different from what is experienced by humans.”
Nice and easy. Let’s go!
Question 1 by @rome_viharo:
If Bibo has feelings as you say, this means Bibo should be able to experience ‘ideas’…how then?
Xzistor robots’ brains are designed to wander all the time (just like humans). Their brains are restrained (focused) from this default tendency to wander by priority actions required to address needs (just like humans). The Xzistor model prescribes a clear mathematical algorithm for how and when this mind wandering takes place – called Threading (there is a ‘threading’ process in the human brain as well – when the mind is free-wheeling with no urgent actions to perform). This Threading process recalls Associations (memories) from the Association Database one after the other based on shared attributes. It often starts with what is observed through the senses in the current environment and then follows a process of re-evoking Associations related to this first input based on shared artifacts. The algorithm will identify and re-evoke Associations from anywhere in the Association Database, as long as each next Association that is re-generated has a link to the previous one based on the Xzistor logical Threading protocol (i.e. shared attributes/artifacts).

Most of the time each new Association called up will re-evoke the visual imagery that was stored when the Association was formed (pictures from memory, like when the human brain daydreams). In Xzistor robots these visual images re-evoked from memory will not be the same as direct optical sensing: they will be lower resolution and retinally defocused – meaning what was not close to the centre of the retina will be out of focus. The Xzistor robot will learn that these incomplete (fragmentary/diffuse) visual recollections with retinal blur come from memory and are not real-time input from the environment (again – just like humans!).
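To make the Threading process concrete, here is a minimal Python sketch (all names are my own illustrative choices, not the actual Xzistor LAB code): the only rule enforced is that each next Association must share at least one attribute/artifact with the previous one.

```python
# Minimal, illustrative sketch of free-wheeling Threading (hypothetical names,
# not the Xzistor LAB's actual implementation).
from dataclasses import dataclass, field

@dataclass
class Association:
    name: str
    attributes: set = field(default_factory=set)  # shared artifacts/attributes
    imagery: str = ""                              # low-resolution recalled 'picture'

def thread(association_db, start, steps=5):
    """Recall Associations one after the other; each next one must share at
    least one attribute with the previous one (the Threading protocol)."""
    chain = [start]
    current = start
    for _ in range(steps):
        candidates = [a for a in association_db
                      if a not in chain and a.attributes & current.attributes]
        if not candidates:
            break
        # A fuller model would weight candidates by emotional saliency, recency, etc.
        current = candidates[0]
        chain.append(current)
    return chain
```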
Although this mind wandering that happens naturally in the human brain (and Xzistor robots) can provide a great daydreaming experience, we also learn that we can use this Threading process to solve problems. We learn that sometimes these ‘random’ Associations that our brains come up with can suddenly present us with insight into how to solve a problem. Often this happens because an Association was recalled containing a ‘principle’ which was applied to solve a problem in another domain that is not completely unlike the problem we are trying to solve now.
Listen to Ben here: “I was sitting at the coffee shop today thinking about having to mow the lawn and I suddenly got this idea! I thought what if I use the same control logic – the same programmable schema – that I built into my autonomous vacuum cleaner, and put it into my electric lawnmower! Will it control itself and mow the lawn and move back to the charging station by itself?”
While sitting at the coffee shop, Ben’s mind was casually wandering through thoughts from things he had experienced in the past – only occasionally changing tack when he noticed something in the environment or heard something. Then he would simply carry on Threading again without any urgent actions to attend to.
A sudden problem then entered Ben’s brain as he remembered he needed to go and mow his (large) lawn – an arduous and time-consuming task. Because the problem was suddenly at the centre of his mind, it was being repeatedly presented to the ‘executive part’ of the brain, driven by a slight level of stress (the fear of exertion and of not being able to watch football). This stress state (the sympathetic nervous system generating small amounts of adrenaline) served to focus the brain and prevented it from wandering off – a process we call ‘directed Threading’ or Thinking. Since the stress state is an emotional avoidance state, it kept focusing the mind on the ‘problem’ rather than allowing it to wander through the Association Database in an unhindered fashion.
Ben has moved from daydreaming to problem solving.
So, where did Ben’s ‘idea’ come from?
He was starting to explore thoughts like: ‘Do I really want to push that lawnmower? Do I really have to be there all the time to control it?’ And Ben’s brain changed from casual mind wandering to narrowing its search for new Associations to those that had similar attributes to the lawnmower he was thinking about – for instance Associations that involved similar devices that needed control inputs from humans, and how this could be avoided. These thoughts were conceptually close enough to Ben’s autonomous vacuum cleaner, which required no inputs from humans.
Here we see that the Association of the vacuum cleaner came not only with the visual imagery of the vacuum cleaner – but also with the attribute of ‘autonomous operation’ – which linked further to its programmable logic schema. We can now deem this programmable logic schema to be the ‘idea’ that the brain came up with (from memory) to help Ben solve his problem.
So Ben had a problem and his brain led him into a problem-solving modality where his persistent concerns (fears) about excessive exertion and missing the football managed to narrow (direct) the mind wandering process in his brain. This meant that only Associations sharing similar attributes to the ‘problem concept’ were extracted from memory and placed before the ‘executive part’ of the brain to help it find a solution. And bingo!
When the directed Threading process provides an Association with attributes (like a programmable logic schema) that could potentially solve a problem – we can call this an ‘idea’.
Xzistor robots solve problems by using ‘ideas’ they get from Associations as explained above. But when they have no direct knowledge (experience) available in their Association Database (memory), the robot will have to stop and Think – it will start to perform directed Threading and look for relevant Associations that might provide ‘ideas’ to solve the problem. Xzistor robots make use of inductive inference and will try these ‘ideas’ even if they are wrong because they have nothing else to go on.
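A hedged sketch of ‘directed Threading’, building on the Association sketch above: the search is narrowed to Associations that share attributes with the problem concept, and any transferable attribute they carry (like Ben’s programmable logic schema) is surfaced as a candidate ‘idea’. Again, the names are illustrative, not the model’s actual code.

```python
# Illustrative sketch of directed Threading (Thinking): only Associations relevant
# to the problem are recalled, and their extra attributes are offered as 'ideas'.
def directed_thread(association_db, problem_attributes):
    ideas = []
    for assoc in association_db:
        if assoc.attributes & problem_attributes:                 # relevant to the problem
            ideas.extend(assoc.attributes - problem_attributes)   # transferable attributes
    return ideas

# Ben's lawnmower example, roughly (hypothetical attribute names):
# directed_thread(db, {"mowing", "needs human control", "wheeled device"})
# might surface "autonomous operation" and "programmable logic schema" as 'ideas'.
```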
Early-life Xzistor robots will use simple ‘ideas’ from memory to find simple solutions, e.g. if it cannot see a red apple, it will try a green apple (and see what happens). Later-life Xzistor robots, given enough resources to learn, will theoretically develop on a trajectory similar to humans and later extract ‘ideas’ from Associations like recipes, rules, procedures, methodologies, techniques, flow diagrams, schemas, etc., as all of these will become known as potential aids – ‘ideas’ basically – when trying to solve specific problems (just like humans!).
Listen also to what I say in this interview with Dr. Denise Cook (@personalitygeni) about how ideas are used by Xzistor robots – go to 1:07:44 – 1:09:00 in the interview:
Question 2 by @rome_viharo:
Only if we are defining self-awareness as identical to a thermostat having awareness would this make sense?
I hope the explanation above shows how the Xzistor model provides functions similar to the human brain well beyond the limited functions of a thermostat.
Question 3 by @rome_viharo:
I am very much of the school that the hard problem is a very real hard problem, but I have my own reasons on why the problem should be predicted to be really really hard, so I am really going to reserve my skepticism pretty strong on this one with you, which is tough because I love everything else about what you have done!
Let me present my case (supported by neuroscientists at PhD level) and then I will be equally open-minded to hear your arguments.
Comment 4 by @rome_viharo:
It is very beautiful so far, I think you have genuinely created a whole system here that could model fluidity of mind in terms of behavior, so that is why you being “wrong or right” about view of mind/feelings would not be relevant to me.
Great – thank you. I dedicated my life – every grown-up waking hour – to this understanding of the brain. But don’t feel sorry for me because I could not have had a better life and my model so clearly shows that humans only ever do things for Satiation! So it was, and still is, very much an adventure for me offering lots of intellectual stimulation and fun!
Question 5 by @rome_viharo:
But is Bibo really having an experience?
We have spoken about what Bibo can do once there are some Associations in his mind, e.g. daydreaming (Threading) and Thinking (directed Threading) in order to solve problems. But how do these Associations end up in the Association Database along with all the appropriate and useful attributes like emotion values (+ or -), emotional saliency, sensory states, and other artifacts? It is through experiences that the brain learns – from the first, simplest experiences to the more complex and nuanced experiences later in its life. Again, the way the robot experiences life is ‘principally’ no different from humans – it has all the emotions (also those from ‘recognising’ and recalling emotional events/objects from the past) and the robot will store Associations in real time as it experiences its ‘life’ – automatically giving more prominence to those Associations with higher emotion values/saliency. It is difficult to prove that a robot’s subjective experiences are the same as a human’s, just like it is difficult to prove that one person’s subjective experience is the same as another person’s (how would you measure this and put irrefutable evidence on the table?). But what becomes obvious as you get more familiar with the Xzistor brain model and the way it uses a simple set of functions to grow an incredibly rich experience base – is that there is no reason why it would not reach a stage where it has developed speech and can speak the words: “I feel hungry, I feel cold, I feel happy, I feel sad, I hate exercise, I love my tutor,” etc.
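As a purely illustrative sketch of what one stored Association might carry (my own field names, not the model’s actual schema), based on the attributes listed above:

```python
# Hypothetical record for one Association, with the attributes mentioned above.
from dataclasses import dataclass, field

@dataclass
class StoredAssociation:
    emotion_value: float                                  # + (pursue/good) or - (avoid/bad)
    saliency: float                                       # emotional saliency
    sensory_state: dict = field(default_factory=dict)     # e.g. {"vision": ..., "tactile": ...}
    artifacts: set = field(default_factory=set)           # attributes used by Threading

def store(association_db, assoc):
    """Store in real time; one simple (assumed) way to give emotionally salient
    Associations more prominence is to keep them nearer the front of the database."""
    association_db.append(assoc)
    association_db.sort(key=lambda a: abs(a.emotion_value) * a.saliency, reverse=True)
```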
And now one can turn the ‘hard’ challenge around and ask: based on this mathematically predictable trajectory towards complexity and the inclusion of synthetically constructed subjective emotional states, why would one argue that this reality, this set of experiences, will feel different to an Xzistor robot from the way a human would experience similar subjective states?
When an Xzistor robot develops to the point where it can eventually use language and self-report that it ‘feels’ cold, it means it has learnt to associate the word ‘cold’ with a sensory representation of a homeostatic deficit state – just like humans would. So what disqualifies the robot’s subjective experience from being just as subjective as a human’s experience? We must be careful not to be so in awe of our own human brain’s complexity, and the rich way in which we experience our own subjective reality, that we make a ‘hard’ decision that robots cannot experience similar subjective states. The human brain is a complex multi-variable adaptive control system that generates the functional states necessary and sufficient to make it experience all the effects that make up the human’s subjective sense of reality. The Xzistor robot brain is a complex multi-variable adaptive control system that generates ‘functionally’ similar (but simplified) states to those in the human cognitive brain to create the robot’s subjective sense of reality. On what basis can we say the effects created by these functionally similar systems will not ‘principally’ be the same?
Question 6 by @rome_viharo:
I’m not sure neurochemistry itself can account for experience at this stage so when I see you using language like “sending signals” to the hunger area it just makes me ask the question, okay, how does sending signals to the hunger area generate the experience of hunger instead of the behavior or the reaction of hunger?
This is one of the biological brain’s basic systems modelled by the Xzistor Concept. By now you will hopefully be familiar with the Body and Brain UTRs defined by the model. These are homeostasis/allostasis mechanisms that send information about their deficit and recovery levels to sensory areas, where sensory representations are created that also contain information about the extent to which these UTR states have been biased (through operant learning) into avoidance (bad) or pursue (good) states. These sensory states can now be presented to the ‘executive part’ of the brain in a common ‘emotion’ format so that this central assessment/relay complex can use it to select the appropriate action the brain should take to solve Hunger (using additional information from e.g. Brain UTR emotion states coming from the Association Database).
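A minimal, hypothetical sketch of a UTR being rendered into that common ‘emotion’ format (the field names, thresholds and scaling are my own assumptions, not the model’s actual equations):

```python
# Illustrative UTR -> 'emotion' conversion; names and scaling are assumptions.
from dataclasses import dataclass

@dataclass
class UTR:
    name: str       # e.g. "Hunger"
    deficit: float  # 0.0 (fully Satiated) .. 1.0 (maximum Deprivation)
    rate: float     # > 0: deficit growing (Deprivation), < 0: shrinking (Satiation)
    bias: str       # "avoid" or "pursue", learned through operant conditioning

def to_emotion(utr: UTR) -> dict:
    """Common format the 'executive part' can compare across needs when
    selecting the next action."""
    return {
        "source": utr.name,
        "intensity": utr.deficit,
        "direction": "Deprivation" if utr.rate > 0 else "Satiation",
        "valence": -utr.deficit if utr.bias == "avoid" else utr.deficit,
    }
```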
For Hunger to be truly experienced subjectively as a ‘feeling’, there are a few necessary conditions that must be met:
- A macro-Body Hunger UTR state must be generated that represents the aggregate of all the homeostatic micro-Body Hunger UTR states (salt, sugar, sour, carbo, spicy, umami, etc.) contributing to the overall Hunger state.
- A Body UTR state creating a ‘stress’ state (modelled on the sympathetic/parasympathetic nervous system) that is parasitically generated by the macro-Body Hunger UTR state.
- The macro-Body Hunger UTR state will contain the information around the Hunger level that needs to be made available to the brain to solve for Hunger.
- The ‘stress’ Body UTR state (modelled on the sympathetic/parasympathetic nervous system) will also ensure that an allostatic ‘fear of’ Hunger stress state (also an avoidance state) will in future be generated when recalling Associations about feeling Hungry.
- To ‘feel’ Hunger the above macro-Body Hunger UTR state needs to be turned into a sensory state (e.g. a pseudo-sensory tactile state in S1 or insula) associated with areas inside the body like the gut or the trunk. It has now been turned into an emotion.
- To ‘feel’ stressed about Hunger the above ‘stress’ Body UTR state needs to be turned into a sensory state (e.g. a pseudo-sensory tactile state in S1 or insula) associated with areas inside the body like the gut or the trunk. It has now been turned into an emotion.
- The ‘sensory’ state based on the macro-Body Hunger UTR state, felt in the body as a good (pursue) or bad (avoid) ‘emotion’, will represent the Hunger condition, its intensity level and whether the level is increasing or decreasing.
- The ‘sensory’ state based on the ‘stress’ Body UTR, felt in the body as a good (pursue) or bad (avoid) ‘emotion’, will represent the ‘stress’ associated with the Hunger condition, its intensity level and whether the stress level is increasing or decreasing.
The brain will now have an ‘emotion’ state (felt in the gut/trunk body areas) that in addition contains all the information required to confirm it is a Hunger state, the level of the Hunger state, whether it is in Deprivation or Satiation, and whether it should be avoided or pursued and to what level. Parasitically, the brain will at the same time experience an ‘emotion’ state (felt in the gut/trunk body areas) that in addition contains all the information required to confirm it is a ‘stress’ state, the level of the ‘stress’, whether it is in Deprivation or Satiation, and whether it should be avoided or pursued and to what level.
In short, the brain will now feel Hunger as if coming from inside the body along with a stress state also felt in the body, and it will feel a conditioned compulsion to take action to avoid these states.
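Pulling the conditions above together, here is a loose sketch of the Hunger chain (micro UTRs aggregated into a macro state, a parasitic ‘stress’ state, and both mapped to body-referenced ‘emotions’); all names, weights and the simple averaging are my own assumptions:

```python
# Hedged, illustrative sketch of the Hunger -> emotion chain described above.
def macro_hunger(micro_deficits):
    """Aggregate micro-Body Hunger UTRs (salt, sugar, carbo, ...) into one level."""
    return sum(micro_deficits.values()) / len(micro_deficits)

def parasitic_stress(hunger_level, gain=0.5):
    """A 'stress' Body UTR generated parasitically from the macro Hunger state."""
    return gain * hunger_level

def feel(hunger_level, stress_level, rising=True):
    """Turn both UTR states into pseudo-sensory 'emotions' referred to the body."""
    direction = "Deprivation" if rising else "Satiation"
    return [
        {"emotion": "Hunger", "felt_in": "gut/trunk", "intensity": hunger_level,
         "bias": "avoid", "direction": direction},
        {"emotion": "stress", "felt_in": "gut/trunk", "intensity": stress_level,
         "bias": "avoid", "direction": direction},
    ]

# e.g. feel(macro_hunger({"salt": 0.2, "sugar": 0.6, "carbo": 0.7}),
#           parasitic_stress(0.5))
```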
Interesting – in future, when the brain is not Hungry but thinking back to a Hunger episode, it will not re-evoke Hunger based only on these thoughts, but it will re-evoke the ‘stress’ state. This will teach the brain to start looking for food long before it gets Hungry. Clever!
To solve for Hunger, the brain can now use the information contained in the above ‘emotions’ and approach the Association Database with the request to see if Hunger Satiation sources can be sensed (‘recognised’) in the environment, or if any object in the sensed environment forms part of navigation cues leading to a Hunger Satiation source.
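An illustrative sketch of that query (reusing the StoredAssociation sketch from earlier; the ‘satiates:’/‘cue_towards:’ tags are hypothetical): given the active need, look through Associations recognised in the current scene for a Satiation source, or for a navigation cue leading to one.

```python
# Hypothetical query against the Association Database for the active need.
def solve_for(need, scene_percepts, association_db):
    for assoc in association_db:
        if not (assoc.artifacts & scene_percepts):
            continue                       # not 'recognised' in the current environment
        if ("satiates:" + need) in assoc.artifacts:
            return ("approach", assoc)     # a direct Satiation source is in view
        if ("cue_towards:" + need) in assoc.artifacts:
            return ("navigate", assoc)     # landmark on a learned route to the source
    return ("think", None)                 # no experience: fall back to directed Threading
```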
If the brain has had enough time to learn a language, it might also have learned to utter words to a person/robot it can observe such as: ‘I feel Hungry!’ This voice (effector) skill would have been learnt during a past Hunger solving event where the brain was rewarded with food after uttering the correct words to a tutor during training.
For an Xzistor robot to be able to self-report a subjective Hunger state, all that is required are the conditions listed above to experience an internal Hunger state and enough operant learning to eventually link the internal Hunger states (and emotions) with the phrase: ‘I feel Hungry!’
(Pssst….no different in humans.)
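For what it is worth, here is a tiny hypothetical sketch of that operant link (my own names and thresholds): utterances that were followed by food during training gain strength, and once strong enough they are produced whenever the Hunger emotion is active.

```python
# Illustrative operant link between an internal emotion and a learned phrase.
utterance_strength = {}   # (emotion, phrase) -> learned strength

def reinforce(emotion, phrase, satiation_reward):
    """Strengthen the link when the phrase was followed by Satiation (e.g. food)."""
    key = (emotion, phrase)
    utterance_strength[key] = utterance_strength.get(key, 0.0) + satiation_reward

def self_report(emotion, threshold=1.0):
    """Produce the strongest learned phrase for the active emotion, if any."""
    candidates = {p: s for (e, p), s in utterance_strength.items() if e == emotion}
    if candidates and max(candidates.values()) >= threshold:
        return max(candidates, key=candidates.get)   # e.g. "I feel Hungry!"
    return None
```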
Question 7 by @rome_viharo:
What happened to Bibo’s point of view or sense of somatics in between? If not touching or feeling anything, if neither too far from “food” or too close, neither satiated nor hungry?
What is Bibo’s “ground state” of being like?
When no sensory stimuli are grabbing Bibo’s attention and no ‘emotion’ is demanding Satiation, Bibo’s life (and actions) will still revolve around Satiation. In order to find Satiation when there are no emotions to Satiate, Bibo will learn that Satiation can be created artificially. He will start to look for opportunities to create small amounts of stress that can be Satiated. For instance, Bibo will start to explore unknown areas of his confine (slight fear of the unknown) and enjoy coming away from it unscathed, knowing that it holds no threats. Or Bibo can start to play games that generate artificial tension that can be relieved. If there are other sources of entertainment or humour that will create Deprivation-to-Satiation undulations, Bibo will go for them.
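A loose sketch of that ‘ground state’ behaviour (activity names and numbers are purely illustrative): when no real need demands Satiation, pick something that creates a small, safe Deprivation that can then be Satiated.

```python
# Hypothetical idle-behaviour selector for the 'ground state'.
import random

IDLE_ACTIVITIES = [
    {"name": "explore an unknown corner", "artificial_deprivation": 0.2},
    {"name": "play a chasing game",       "artificial_deprivation": 0.3},
    {"name": "daydream (Threading)",      "artificial_deprivation": 0.05},
]

def ground_state_choice(active_emotions):
    if active_emotions:                        # a real need is demanding Satiation
        return max(active_emotions, key=lambda e: e["intensity"])
    return random.choice(IDLE_ACTIVITIES)      # else manufacture a small cycle to Satiate
```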
If Bibo gets tired of playing exciting games, the fatigue emotion will go ‘active’ and Bibo will find Satiation from resting – this is when he could easily slip into daydreaming (Threading) or even ‘directed’ Threading (like Ben above) if he suddenly thinks of some problem he wants to solve in order to remove it as a source of Deprivation (e.g. fear) – something he would otherwise need to worry about in the future. A mental solution to this problem will also create Satiation (based on an approach where the calming effect of the parasympathetic nervous system is modelled).
Next questions will be answered soon!