My response to Gary Marcus on tracking the evolution of large language models.
Agree with you, Gary. Can’t believe some of our colleagues could even have doubts.
Let us, for the moment, define AGI as: the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.
Assuming AGI is modelled on a mature adult, here is a question one could put to AGI once it has ‘arrived’, or, as some say, on ‘AGI Game Over Day’:
My question to AGI on that day: “Based on your personal experience, AGI, which aspect of an intimate relationship would you say is the most important for ultimate happiness: physical appearance, emotional connection or cultural background?”
This is a question the average adult can answer from a personal perspective; it is specifically aimed at highlighting some of the key challenges facing AGI.
I am putting a cognitive architecture on the table (the Xzistor Concept), the type of model many say is needed to ‘encompass’ LLMs. The truth is that the LLM will be a small ‘handle-turner’ within the scope of the overall cognitive model. The model patiently anticipates errors from the LLM and lets it learn from those errors. Remember: to think like humans we need reflexes, emotions, curiosity, reasoning, context, fears, anxiety, fatigue, pain, love, senses, dreams, creativity, etc. Without these, every answer given by AGI will start like this: “Well, I have not personally experienced X, but from watching 384772222 Netflix movies at double speed I believe X is a bad thing…”
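To make the general idea concrete, here is a minimal, purely illustrative Python sketch of an outer cognitive loop that treats the LLM as one small component, anticipates its errors and feeds corrections back as learning signals. Nothing here is actual Xzistor Concept code; all names (`cognitive_loop`, `critic`, `InternalState`, `llm_answer`) are hypothetical placeholders.

```python
# Hypothetical sketch: an outer cognitive loop wrapping an LLM "handle-turner".
# The loop anticipates typical LLM errors and feeds corrections back so the
# LLM component can learn from them. Not actual Xzistor Concept code.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InternalState:
    """Toy stand-in for non-linguistic faculties: emotions, fatigue, curiosity."""
    curiosity: float = 0.5
    fatigue: float = 0.0
    lessons: list = field(default_factory=list)  # corrections the LLM has absorbed


def llm_answer(question: str, lessons: list) -> str:
    """Stub for the embedded LLM; a real system would call an actual model."""
    if lessons:
        return f"Answer informed by {len(lessons)} prior correction(s)."
    return "Well, I have not personally experienced this, but statistically..."


def critic(answer: str) -> Optional[str]:
    """Cognitive-layer check that anticipates a typical LLM failure mode.
    Returns a correction, or None if the answer is acceptable."""
    if "not personally experienced" in answer:
        return "Ground the answer in the agent's own emotional/sensory state."
    return None


def cognitive_loop(question: str, state: InternalState, max_rounds: int = 3) -> str:
    """Outer loop: query the LLM, let the critic catch errors, feed them back."""
    answer = llm_answer(question, state.lessons)
    for _ in range(max_rounds):
        correction = critic(answer)
        if correction is None:
            break
        state.lessons.append(correction)                # LLM "learns" from its error
        state.curiosity = min(1.0, state.curiosity + 0.1)
        answer = llm_answer(question, state.lessons)
    return answer


if __name__ == "__main__":
    state = InternalState()
    print(cognitive_loop("Which aspect of an intimate relationship matters most?", state))
```

The design choice this sketch illustrates is simply that the reasoning, error anticipation and learning signals live in the surrounding cognitive model, not in the LLM itself.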
Keep it up, Gary – the scientific community owes the public the truth!