Intelligence and mental health challenges often go hand in hand. This is my theory at least, although we all know there are many other reasons behind mental health issues.
When we talk about the potential for AI and robotics to become conscious and rational, we are suggesting they may one day have minds of their own.
If this is the case then, like in humans, do we not open up the opportunity for AI and robots to also suffer mental illnesses? If mental illness can emerge in artificial intelligence at any level, it may mean that, along with the capability for consciousness, they can also become dysfunctional.
If this happened, even in a laboratory-type environment, we might actually learn more about some of the mechanisms and events that lead to mental illness, and thereby aid prevention and/or cure.
For me it's a question of: 'Do we give AI and robots the ability to be conscious, or do they end up mimicking consciousness - and in either case, will AIs have mental health?'
This also got me asking myself: by building a consciousness, does this mean that AI will know the difference between pleasure and pain? Sadness and happiness?
I had a conversation the other day with someone who said they wouldn't want a grumpy robot. Fair point, but, as we know with human emotions, those who always seem the happiest are often hiding far more emotional challenges that at some point have to come out.
There is a further point to consider: if robots were to develop signs of mental illness, could it be that they had accidentally been programmed to have mental disorders and challenges? If a robot had free will, did it develop symptoms against its original programming? And if it had developed mental illness against its original programming, could this represent a human-like consciousness developing a human-like mental illness?
My next point would be: how do we treat AI with a mental illness, and who should be the one treating them? If we have given them a human-like consciousness, do they then have the right to be treated?
Would the way we treat humans for mental illness work? Whether that be drugs to manage a condition or even surgery, what is the AI equivalent? A recode? But if that robot has a conscious mind, then surely reprogramming and changing the makeup of that conscious mind without consent would mean we are infringing on its rights?
Obviously this could be a big topic, and one for those at Oxford and Harvard alike to figure out from a philosophical view. The mere thought of mental illness in AI and robots might be a ridiculous notion that never comes to pass.
However, even with my limited view of the world, I am not convinced. If one thing is clear, it is that technological progress isn't going to stop, and we need to expect the unexpected at an ever more accelerated rate. At some point we will have to consider whether the behaviour of AI and robots will become an issue. In doing so, we may also open up new ways of understanding the human mind and how it is affected by mental illness, through the very harnessing of an artificially intelligent mind. I won't be around in 100 years, but man, am I intrigued.