AI: Hallucinations
Something I've pondered for some time (especially given the recent advances in AI) is the hallucinations these systems experience. It shouldn't surprise anyone to hear that current AIs are notorious for fabricating answers to satisfy questions, or for generating images with surprising oddities, leading to some very questionable outcomes (looking at you, US lawyers; check your sources). The construct around this did get me thinking, though.
From the days of AI image generators not understanding human fingers, to recent (at the time of writing) generators getting very confused by acrobatics / ballet, the concept of visual hallucination within an AI is (to me) similar to that of human dreams. Many times I've woken and realised that things in a dream weren't quite right (as have many others): rooms that don't align / look straight out of the movie Inception, people who don't look quite right, or gardens that follow no form of common design / seem to defy physics.
We take dreaming for granted: a time when our brains get downtime and our subconscious is left to its own devices. We also know dreams can have an impact on our day; even though they're completely fabricated in most instances, they can still affect us emotionally hours later. From a science perspective, the purpose of dreams still isn't fully understood, and many theories exist as to what purpose they actually serve.
Reading recently about OpenAI's five levels of AI, a roadmap towards Artificial General Intelligence (AGI), I found myself questioning where hallucinations fit in, and whether at a core level these hallucinations could actually be a construct for true artificial consciousness (surpassing AGI).
Taking humans as an example, we are more than the sum of our parts / knowledge / experiences. Even those with hyperthymesia, a neurological condition that allows them to recall each day of their lives with perfect clarity (as if it were only a few moments ago), are still more than the sum of those recalled memories / experiences. Our free will gives us the ability to be creative, to grow, to put chilli sauce on a Pop Tart (ok, that one might just be me).
Is it unrealistic to believe that an artificial consciousness would require a level of subconsciousness (including dreams similar to hallucinations), and that this would give it its own source of randomness / inspiration, a kind of 'free will'? To some this could be considered anthropomorphism, a desire to give an otherwise inanimate object human characteristics, but how do we define consciousness? How would a program that self-evolves based solely on its input be considered self-aware when its entire path from start to finish would be deterministic / repeatable?
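To make that last point concrete, here's a minimal sketch (a toy stand-in for an LLM's sampling step, not any real model's code) of why the apparent 'randomness' in today's systems doesn't escape determinism: the sampling that makes generated output feel spontaneous is driven by a pseudo-random generator, so fixing the seed makes the entire run repeatable.

```python
import random

# Toy stand-in for a model's sampling step: given pseudo-probabilities
# over next tokens, pick one. The "creativity" comes entirely from rng.
def sample_next_token(rng: random.Random, token_probs: dict[str, float]) -> str:
    tokens = list(token_probs)
    weights = [token_probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(seed: int, steps: int = 5) -> list[str]:
    rng = random.Random(seed)  # fixed seed => fully reproducible "randomness"
    # Hypothetical vocabulary and probabilities, purely for illustration.
    vocab = {"dream": 0.4, "room": 0.3, "garden": 0.2, "finger": 0.1}
    return [sample_next_token(rng, vocab) for _ in range(steps)]

print(generate(seed=42))
print(generate(seed=42))  # same seed, same input: identical output every time
print(generate(seed=7))   # different seed: different, but still deterministic, path
```

Whether feeding in entropy from outside (hardware noise, the environment) would count as genuine spontaneity, or just more input, is exactly the question above.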
The future will definitely be interesting...