An informal discussion with those who have questions about robot brains I think I can answer – else expect a reply like ‘Very busy at work at the moment!’ 😄
It all started on Twitter when I (alias @xzistor) said:
Understanding that machines can subjectively experience emotions requires a mental journey. Study the scientific explanation I offer very carefully. Next realize that there are other ways to provide the same bio functions necessary and sufficient to generate emotions. Be brave!😄
By the way, my name is Rocco Van Schalkwyk (just Rocco) from the Xzistor LAB. I am an engineer working in the field of marine/subsea robotics and autonomous ships – and do brain modeling as my hobby. I have a few collaborators who are PhD level specialists in different fields including neuroscience and brain modelling. I will reach out to them when the questions get too hard for me to answer!
The above was my response to @rome_viharo from Big Mother who said he wanted to understand the Xzistor Concept brain model.
I said he must be brave – it could be a lonely journey.
He said he’ll be brave.
All that happens in the Xzistor brain model can be understood with the help of a few basic terms, principles and concepts. The easiest way in is to read these two short (free) guides of mine (absolutely necessary to proceed on the ‘…not so hard journey…’).
Without reading the above two guides it will be hard to proceed…really…as they introduce some key definitions and principles.
I am an engineer. I am a scientist. I need a problem statement: “The problem I have to solve is to provide scientific evidence that will convince @rome_viharo and others that the functions in the human brain can be simplified and generated by other means, and that this can over time, with sufficient learning, be sufficient to develop a sense of subjective reality in the mind of a robot that is principally no different from what is experienced by humans.”
Nice and easy. Let’s go!
So, @rome_viharo went through my guide Understanding Emotions and came back with some really good questions. I will now address his questions. Hear me out!
PS. Just keep in mind that I completed all work on this model many tears ago (typo – but I’m gonna keep it! 😄). I have finished it (patented it) and witnessed it work correctly in a demo simulation and a physical robot. I am just sharing now how this model explains the brain in simple terms – and writing about it…
Question 1 by @rome_viharo :
Quoting from Understanding Emotions (read it!): “What we can now do is to deliberately feed a special hunger signal from the Urgency To Restore mechanism for hunger in Bibo’s brain to this ‘intra-abdominal’ sensory area.”
Makes sense as a model, but how it becomes an experience or an actual “state of presence” producing a subjective experience (a dimension somewhere who knows where) as opposed to a (very) elegant model of one I still do not see.
“Bibo will become aware of the sensory state in the ‘stomach’ or ‘intra-abdominal’ area of his brain, but it will have no meaning to him (i.e. he will not know whether it is a good or a bad thing). “
The hard part though is the “aware”, not the semantics. I could argue the entire affair has no meaning to Bibo, that part should not be surprising to learn.
How did Bibo become aware of presence as a point of view of Bibo?
Answer 1 by @xzistor:
We must systematically build up the complete picture of what is required to experience an actual “state of presence” or a sense of reality in the way we as humans experience it. The mental mechanism described above (from Understanding Emotions) is just a small part of the total emotion and cognition machinery that is necessary to form a complete sense of reality – but not sufficient.
But, WOW! We are straight into the ‘hard’ problem here! Let’s go for it!
There is a part in your brain that will use all the incoming and available information to decide for you what you must do (no free will – sorry!). Let’s just call it the ‘executive part’ of the brain for now (neuroscientists often associate the basal ganglia with this functionality). The main thing is – we know that incoming and available information (say, from memory) is used as the basis for your behaviors.
How do we provide information to this ‘executive part’ of the brain? What format must it be in? What will it understand? How will it decide what is important and what not?
It was designed to understand certain signals and states created in the brain and presented to it. And we can now present a hunger signal to it in a format it will understand.
We need to tell the brain the body it controls is hungry – and that hunger is bad i.e. hunger must be avoided.
How do we tell the ‘executive part’ of the brain the body is hungry and hunger must be avoided?
Using markers from the bloodstream, gut, etc. we create a representation (a state) in the brain that the ‘executive part’ of the brain can interpret. Firstly, the signals flood into the hunger center(s!) and set up a spatiotemporal state within the neural network (and supporting circuitry) that is uniquely associated with that type of hunger, and also with the level of hunger. (By the way, it is fun to read the neuroscientific stuff on where such a hunger center, or centers!, might be located in the biological brain. It exists! It is real!)
This hunger state that has just taken shape in the hunger center (this could be in any cortical area(s) of the brain) will now make contact with the ‘executive part’ of the brain and present itself. In itself this state does not have any idea what it is or what it represents – but the ‘executive part’ of the brain was designed to work with information presented to it in this format based on hunger signals from the body. The ‘executive part’ of the brain is now ready to process this information, decide how it compares with other incoming sensory states (and recalled states) and whether it is perhaps the strongest state and therefore needs to be prioritized.
If this hunger state has a high activation level and trumps the other states flooding into the ‘executive part’ of the brain, the ‘executive part’ will use this ‘high-activation’ hunger representation to approach the memory and see if it can unlock some previous learning that could contain cues (associations) as to what would be the best thing to do. This search will be augmented by information about the environment based on all the other sensory inputs at the time. This will help to isolate the most appropriate and effective actions from prior learning.
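The prioritization step above can be sketched as a toy loop. This is only an illustrative sketch, not the Xzistor implementation – all names (`BrainState`, `select_priority_state`) are my own invented placeholders:

```python
# Toy sketch: the 'executive part' picks the strongest incoming state.
# All names here are illustrative, not taken from the Xzistor model.

from dataclasses import dataclass

@dataclass
class BrainState:
    name: str          # e.g. "hunger"
    activation: float  # urgency level, 0.0 to 1.0
    avoid: bool        # avoidance state (bad) vs pursuit state (good)

def select_priority_state(incoming):
    """Return the state with the highest activation level -
    the one the 'executive part' will act on first."""
    return max(incoming, key=lambda s: s.activation)

states = [
    BrainState("hunger", 0.8, avoid=True),
    BrainState("curiosity", 0.3, avoid=False),
]
print(select_priority_state(states).name)  # hunger
```

The point of the sketch is only that prioritization needs nothing mystical: a comparison of activation levels is enough for the ‘executive part’ to decide which state trumps the others.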
Now it gets interesting!
This ‘most appropriate’ cue or association coming from memory need not be just any cue – it can and should be one that was formed during an event where the brain learned what actions to perform to solve the hunger problem (preferably in the current environment). How did the brain learn to solve the hunger problem? Yip – during a learning event where it found food (either by itself or with some help from someone!) the brain is pre-programmed to reward the actions that led to the discovery of the food by storing these actions to memory – effectively telling the brain: “You don’t like being hungry – because when you are hungry and you find food, I very strongly reward you for finding food (and reducing hunger). I reward you by reducing your activation state and by making you store those actions to memory with the clear instruction to use them when next you get hungry and you are in this environment.”
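The learning event described above can be sketched in a few lines. Again, this is a hypothetical simplification under my own invented names (`reward_event`, a plain dictionary as ‘memory’), not the model’s actual machinery:

```python
# Sketch of the learning event: when actions reduce the hunger drive,
# the brain stores them, keyed by (drive, environment), for reuse.
# All names are illustrative placeholders.

memory = {}  # (drive_name, environment) -> list of rewarded actions

def reward_event(drive_name, environment, actions, before, after):
    """If the actions reduced the drive level (e.g. hunger dropped),
    store them as the preferred response for this environment."""
    if after < before:  # the drive went down - reward these actions
        memory[(drive_name, environment)] = actions

reward_event("hunger", "kitchen", ["walk_to_fridge", "eat"],
             before=0.8, after=0.1)
print(memory[("hunger", "kitchen")])  # ['walk_to_fridge', 'eat']
```

Note how the stored entry is exactly the brain’s ‘first bias’ described in the text: a state to avoid (hunger) paired with actions to pursue when that state appears in that environment.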
What many people miss is that at the moment of storing preferred actions to memory in the brain – the brain is actually given its first preferences, its first biases. It is given a state to avoid (hunger) and a state to pursue (eat when hungry and you have food!). Later, when the brain has learnt a little more about life and learnt some words as part of a language, it will learn to say when experiencing this state “I feel hungry! It feels bad! I don’t like being hungry!” because the brain wanted it to avoid hunger. And “That tastes great! I like steak! I feel better now – I don’t feel hungry anymore!” when consuming food, because the brain wants it to pursue satiation actions to make hunger go away. But these verbal expressions are based on deeply-rooted homeostasis mechanisms that we have in our bodies and brains (well described in the neuroscience academic literature) that set us up to learn what we must pursue and what we must avoid to survive and thrive – and one can argue these form the basis of emotions i.e. this is where emotions start (after which they will permeate all of our everyday experiences and tag objects as good, bad and…meh).
Have you noticed what happened above?
We have said “we feel hungry”. This state in the brain representing hunger has become associated with the word ‘hunger’ and at the same time, it has become associated with an avoidance state. When we experience this state in the brain, we are automatically driven to avoid it.
The ‘executive part’ of the brain gets ‘a strong state’ coming in from the hunger center via its hunger portal and part of the information contained in this hunger state is the fact that it is an avoidance state (and how urgent it is compared to other incoming states). Note that the ‘executive part’ of the brain does its job without understanding what hunger is – it acts as a relay station manager knowing only how to adjudicate and direct incoming information based on a few simple rules. This will allow the ‘executive part’ of the brain to retrieve some cues – some bodily actions – from memory that might help to solve the current hunger problem. We will later see that the ‘executive part’ of the brain – when it has time – can also retrieve more context from memory around this hunger state and around what else is going on in the environment, especially when no resolving actions could be found immediately from past experience.
But let’s stop there for a moment.
Let’s come back to @rome_viharo’s question:
How does a state representing hunger (just a unique representation consisting of electrochemical signals propagating through a biological neural network) become an experience or an actual “state of presence” producing a subjective experience?
Ready for the ‘hard’ bit?
This state presented to the ‘executive part’ of the brain is real! We see the activation of the hunger areas in the brain on fMRI scans and we see how these are presented to the ‘executive part’ of the brain through neural pathways – measured empirically.
There is nothing more required from the brain than to register this real, physical incoming state and work with it in the manner that it does – for it to be able to become part of our reality. We have this state coming in and the brain teaches us to avoid it and soon we learn people use the words ‘feel hungry’ to refer to this unwanted state. And nothing more is required than for this state to act in the way described above and for the ‘executive part’ of the brain to help contextualize it to other sensory states from the environment and what it had been associated with in the past to retrieve helpful memories.
This is the part that people struggle to get!!!
They somehow think there must be more to it. But if a state is generated in the brain that you have been made to avoid, and its activation level makes it more urgent, and it had become associated with the word ‘hungry’ – you have all that you need to look someone in the eye and say: ‘Right now – I feel hungry!’
And it will be verifiably true – this is you experiencing subjective hunger. We predictably see states activated in hunger centers on fMRI scans when people describe feeling hungry, and also when they report feeling satiated.
And why can we not build what we have described above into a robot?
Maybe we do not measure a marker in the bloodstream, but we measure the battery’s level of charge. Now we can generate a state to represent this to the ‘executive part’ of the robot brain.
And here is the thing – the type of information contained in the neural hunger state can easily be provided to the ‘executive part’ of the robot brain as a numerical state. Although it is in the form of a numerical value represented in machine code – it is still a unique state! It still contains all the information about where it comes from and how strong (urgent) it is, and it can easily be compared to other incoming numerical states for prioritization. It can even be used by the robot brain to search for past learning (in an association database) that relates to this state within a specific environment, and in this way appropriate avoidance actions can be retrieved. This is exactly what happens in my simple simulations and robots.
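The robot analogue can be sketched as follows – a minimal illustration of the idea, assuming a battery gauge and a simple association store (the function and dictionary names are my own placeholders, not from the Xzistor code):

```python
# Sketch of the robot analogue: battery charge replaces blood markers.
# A numerical 'hunger' state is generated, checked for urgency, and used
# to look up past learning in an association store. Illustrative names only.

def battery_hunger_state(charge):
    """Map battery charge (0.0 to 1.0) to a hunger urgency value:
    the emptier the battery, the stronger the 'hunger' state."""
    return max(0.0, 1.0 - charge)

associations = {  # (drive, environment) -> learned avoidance actions
    ("hunger", "lab"): ["drive_to_dock", "recharge"],
}

charge = 0.15
urgency = battery_hunger_state(charge)  # ~0.85, a strong avoidance state
if urgency > 0.5:
    actions = associations.get(("hunger", "lab"), [])
    print(actions)  # ['drive_to_dock', 'recharge']
```

The numerical value plays exactly the role the spatiotemporal neural state plays in the biological brain: it identifies the drive, carries its urgency, and keys the retrieval of previously rewarded actions.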
We must not make the fundamental mistake of assuming that what happens in the biological brain is principally different from what we can do in a robot brain.
It is the same basic functions that we are providing – one is just in the biological brain and the other in a computer. The function is what is needed – and how the information is provided to the function should not make a difference.
I always say the Xzistor Concept is a functional brain model – and it is ‘means agnostic’. Strictly speaking, the human brain is just one instantiation of the model.
I wish and pray I can wake up tomorrow morning and the whole AI world just suddenly say: “We get it!”
And I will say: “Good! 60 years has been too long! Now let’s go build ’em!”
(Guys – need to bail out now and have some shopping to do tomorrow. But will pick up on the next brilliant questions from @rome_viharo. As other questions come in I will log them and add more blog posts. Hope you are enjoying your brave mental journey so far. Soon we will get into more complexity and explain why robots can have just as rich a personal experience as we have. But let me not get ahead of myself! Be brave!)