This article argues that the dangers of AI can be avoided if we create an AI conscience that is aligned with our values, needs, and wishes.
We live in an ever more complex world with exponentially rising knowledge and content. Humanity struggles to comprehend and incorporate this knowledge beneficially. AI is starting to be used to process humanity’s knowledge, but we need to substantially extend its usage in the future to keep up with global calamities and individual issues. Yet creating an immensely powerful AI raises justifiable concerns and trust issues, mainly because humanity is not yet capable of creating a safe training dataset, which can lead to dangerous AI decisions.
This article argues that the dangers of AI can be avoided if we use a well-known evaluation system: a conscience developed for AI. We can align an AI conscience with our values, needs, and wishes, thus creating a safe AI and building the much-needed trust in this technology.
This is part one of two introducing the idea and concept of an AI conscience. This article focuses on the question “Why we need AI Conscience”, with the upcoming part two discussing “How to create AI Conscience”.
The world is growing ever more complex. Throughout human history, we have accumulated a lot of complex knowledge. And with more and more people joining to generate new knowledge, art, and content in general, the complexity keeps increasing. If we look at some numbers in research, around 14,800 papers are submitted to arXiv each month, and around 100 AI papers are published per day. And these numbers are growing exponentially. Additionally, with the rise of social media and the interconnectedness of humanity, the amount of opinion and entertainment content is also massively increasing. There is no end to this trend in sight, either in science or in society. With artificial intelligence (AI) opening up new possibilities for generating knowledge and content, and with ambitious projects bringing the internet to every corner of our planet and thereby taking billions of new people online, there will be a downright explosion of new knowledge and content in the near future.
The knowledge created is valuable information that we can and want to use for the benefit of humanity and our businesses, but we are struggling to keep up with the increasing amount of new knowledge and content. For example, I am personally very interested in AI, technology overall, and the environment, but how am I supposed to read through the sheer number of new papers and articles on the internet every day? Even with an entire team dedicated to sifting through new content, this task is already exceedingly difficult. And while humanity is starting to use AI quite inventively to find relevant content and even summarize entire books, this will not be enough. Yes, AI-supported search and summarization is a trend we need and will naturally focus on in the future. But to make all our knowledge, and thus our understanding of the world, comprehensible to humans, this limited kind of AI usage will not suffice if we want to really benefit from humanity's knowledge.
What we need is the ability to comprehend and combine knowledge, as well as to draw implications from it and take actions according to it. Calamities like climate change and worldwide pollution, and even social issues that have existed for millennia, like inequality and poverty, call for much better solutions than we have now. An advanced, automated use of humanity’s knowledge could bring us much closer to that. Optimally, we would have all available relevant knowledge incorporated in all our products and decisions, and we would be able to see the implications of our designs and actions proactively instead of depending on the current reactive way. This way we could solve current urgent issues and avert coming ones before they even start.
But how can we incorporate this abundance of complex knowledge and use it wisely? Well, we cannot – yet. But if we look closely at the possibilities of AI, it might just be able to do exactly that. Is it not the strength of AI that it can recognize patterns even in the vastest amounts of information and optimize output results? If we could create an AI that could not only put into context but also apply all the knowledge available to humanity, our potential to find solutions to even the hardest challenges would increase manifold. Such an AI would have vast knowledge, and we could use it to make crucial decisions and create important technology to shape our world, to shape us.
But this idea also raises many concerns. Not only does this sound far more powerful than many would be comfortable with right now, but such an AI might also contain errors and fail to actually understand our needs and wishes. It might also have its own hidden agenda, or simply implement things in ways we do not agree with. In the short term, loss of jobs and restriction of opinions might happen; in the long term, biological life forms might be deemed suboptimal and be eliminated. And although I do not subscribe to the world-ending scenarios made by Hollywood, I certainly do not think that they are impossible, or even improbable, in some form. Right now, our training datasets for AI are full of errors and biases, and they contain so many hidden patterns we humans cannot hope to perceive that we actually do not know what an AI will make of them. Not to mention the hostile intentions of us humans, which an AI would inherit. For example, if we think of the internet as a possible dataset to teach an advanced AI (AGI, ASI), we should think again. It is full of violence, porn, and general stupidity. Training an AI on this would itself be worthy of a Hollywood movie.
But how can we hope to ever teach AI in a way that allows us to trust the decisions it makes and the actions it takes? How can we trust it even though we probably will not be able to fully understand it?
There is a very rational fear of AI and its possibilities, but we still should not try to stop its development. We cannot. The industry demands it, people find it convenient, and to solve our world's calamities we need it. But we should also not trust it blindly; instead, we should work together to create an AI that we can truly trust and that acts in our best interest – an AI with a conscience.
We need AI to make use of humanity’s knowledge, and we increasingly need AI to help us manage our world, but on the other hand we justifiably fear its power and its incompetence. Therefore, we need to find a way to create an AI that we can trust. An AI that we are sure will act with benevolence, even if we do not understand each of its thinking processes. We need an AI with a conscience.
Right now, humanity lacks the ability to generate the right data to teach an AI safely and sustainably toward these goals. This stems from how humanity is structured. Humanity, today and historically, comprises many different cultures, beliefs, habits, and opinions, as well as different stages of knowledge and technology. And this diversity is good. But humans often fail to understand our world, humanity, the people they meet, and, most importantly, themselves. Because of that, humans developed bad traits like nationalism, racism, religious hostility, and other destructive habits. Additionally, evolution drives us to propagate our DNA and favor our social group while antagonizing everyone else. And while this is a very outdated position, most of humanity still falls for these primal thinking patterns. Teaching an AI these values would be disastrous. AI is vastly different from us, and it could very well, consciously or unconsciously, antagonize all of humanity, all of us, no matter color, religion, or status.
So how do we create an AI that is aligned with our values, needs, and wishes if we cannot safely use standard human behavior and thinking patterns? The answer is: we need an AI with a universally applicable conscience.
If we think of an AI as basically a mind, then the question is which part of our mind is tasked with evaluating whether we act in accordance with what we have learned is right. Well, the thing we tend to use to separate right from wrong is our conscience. Conscience is a complex and maybe even a little strange concept, as it is a mixture of knowledge and logic but also, and maybe more importantly, a mixture of emotions and empathy. Conscience is learned from the values and actions of the surrounding community and from the experiences a person has. If we talk about conscience, we assume that someone is considering not only their own interests or emotions but those of someone else, too. We attribute many positive character traits to people with a strong conscience, for example honesty, kindness, prudence, wisdom, decency, patience, sincerity, fairness, benevolence, equality, and mercy. Although not all of these must apply to a person, most of us would agree that a person with a conscience will have at least some of these traits.
But all these characteristics have a flip side, too. For example, the notion of treating people fairly is not unambiguous, as the elimination of unfairness might leave some people with less of something than they had before fairness was implemented. And while this is just one of many examples, it illustrates why something seemingly simple like fairness can be a lot more complicated and have a much higher impact than it might seem at first sight. So conscience is not as easily objectifiable as one might think.
But let us assume we could create an AI with a positive conscience. What could an AI conscience achieve in our personal and professional lives, as well as in society?
An AI with a conscience could enable the value-aligned decisions we all hope for and thereby foster the trust in AI that humanity urgently needs to shape a beneficial future for us all. An AI with a conscience would, whenever possible, consider the involved parties and their values, needs, and wishes, not just its own or those of its owner. Decisions made by an AI with a conscience would be much more holistic: solutions tailored to the local and global situation that favor a win-win outcome for everybody. This means an AI with a conscience can build the urgently needed trust in the technology by minimizing the probability of individually focused, egoistic decisions, as happen so often today, and instead increase each individual’s, customer’s, and company’s trust in AI decisions. And with this trust, the necessary scaling of AI into all parts of our lives described above becomes possible. It would be a justified trust, because this kind of AI would not just be algorithmically and computationally superior to humans in narrow tasks, and able to manage and evaluate the complexity of knowledge far better than humans ever could, but could also really make the best of humanity’s moral and ethical concepts.
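To make the idea of a conscience as an evaluation layer a little more concrete, here is a minimal toy sketch, entirely my own illustration and not the author's design: each candidate decision is scored against the expected outcome for all involved parties, and any decision that leaves a party worse off is heavily penalized, so that win-win solutions are preferred over individually optimal ones. The function names, the parties, and the utility numbers are all hypothetical.

```python
# Toy sketch of a "conscience" evaluation layer (hypothetical illustration).
# Each candidate action maps involved parties to an expected utility.

def conscience_score(action_utilities):
    """Score an action by total benefit across all parties, with a heavy
    penalty whenever any single party is left worse off. The factor 10 is
    an arbitrary stand-in for weighting fairness over raw gain."""
    total = sum(action_utilities.values())
    worst = min(action_utilities.values())
    penalty = 10 * abs(worst) if worst < 0 else 0
    return total - penalty

def choose_action(candidates):
    """Pick the candidate action with the highest conscience score."""
    return max(candidates, key=lambda name: conscience_score(candidates[name]))

# Hypothetical example: a self-serving decision versus a balanced one.
candidates = {
    "maximize_owner_profit": {"owner": 9, "customer": -3, "society": -2},
    "balanced_solution":     {"owner": 5, "customer": 2, "society": 1},
}
print(choose_action(candidates))  # prints "balanced_solution"
```

Even though the self-serving option has the highest utility for its owner, the penalty for harming other parties makes the balanced, win-win option come out on top, which is exactly the behavior the paragraph above describes. A real AI conscience would of course be vastly more complex than a weighted sum; this sketch only shows the shape of the evaluation step.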
But how to define, design and implement such an AI conscience? This will be the subject of discussion in my next article.
Copyright © 2024 INVENAUT | All rights reserved.