Artificial intelligence (AI) comprises software with at least one of the following capabilities: perception (audio, visual, textual, or tactile), decision-making, prediction, automatic knowledge extraction or pattern recognition from data, interactive communication, or logical reasoning.
This technology continues to drive innovation and industrial efficiency, and is increasingly present in the lives of individuals and companies. However, in addition to being an exciting development in building the future, it undoubtedly generates concern, because if its progress goes unchecked it could one day lead to serious problems. That is not to suggest that AI could evolve into destructive robots, as some people imagine, but rather that we must consider how it can be correctly applied to solve our most urgent, necessary and decisive problems, for the equitable benefit of a sustainable future.
A recent study published in Nature Communications shows how AI can facilitate or inhibit the achievement of the 17 goals and 169 targets established by the United Nations (UN) in the 2030 Agenda for Sustainable Development, highlighting the importance of considering its interconnection with social, environmental and economic systems.
Currently, the most intelligent algorithms are mostly used to optimize advertisements, financial transactions, and autonomous cars and weapons, deciding where a vehicle should go or which target to eliminate. Most of these projects are born from competition, without safeguards for an equitable global transformation built in from the start. CEOs and scientists have the power to decide where to guide this new intelligence, so a mindset that also reflects ethical principles would obtain the best results for human beings. The efficiency of AI requires moral validation: the technology cannot determine what is good or bad by itself, so its possible effects must be ethically evaluated so as not to endanger humanity.
The ethics of technology is dedicated to fostering debates today about tomorrow, so that before any technological product is launched, its impact on society is evaluated. Computer engineering is showing greater interest in, and awareness of, the legal, social and ethical issues raised when a new algorithm emerges. More rigorous testing and the implementation of regulations and failsafes have become an indispensable part of any invention. Control and responsibility are more necessary than ever, given recent data infrastructures that have proven able to analyze, calculate and memorize everything we do.
These regulations and failsafes are not contrary to innovation or creative capacity; rather, technological desirability, with the benefits and conveniences it provides, must also be aligned with social desirability. Only then can technology have a positive meaning, because its technical specifications then include improvements relating to fundamental values such as social cohesion, trust, equality, and protection of the environment, and it meets and solves global expectations and problems.
Friend or Enemy?
An attempt is being made to advance human-machine interaction and use it to create artificial intelligence that works not only in a limited context (automation), with rules and pre-programmed steps that simulate ethical human behavior in certain foreseeable situations through repetitive, monotonous tasks, but that can also consider multiple contexts: a "smarter" way to complete tasks. The necessary tools are provided so that, after some initial "training", the system can make the best decision by itself depending on the situation it finds itself in. That is, it develops broader perspectives that are not conditioned on a single set of predetermined data, but can consider deeper and more complex variables.
Current machine learning is based on training and feeding the system with a large amount of data (including unstructured data generated by everyone in their digital lives), which enables it to detect patterns statistically. The system learns from all of this data, coming up with several possible responses and reinforcing its decision-making power with each conclusion drawn, while positioning itself as an influential factor on a variety of topics (education, health, employment, finance and many more). This branch of development is investigated in parallel with logic, where algorithms are based on "symbols" that aim to conceptualize human knowledge. Together, they are intended to give the intelligent system the ability to learn from its successes and failures and, as a result, improve its responses over time.
For example: a machine learning system fed with images of animals is trained to differentiate the types of animals that exist. It can identify a photo of a cat because it recognizes patterns it has already seen in many photos, but it cannot explain what a cat is. It is at this point that the logic side of AI begins to fulfill its function.
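The cat example above can be sketched as pattern matching over feature vectors. Everything in this sketch is invented for illustration (the features, numbers, and labels are hypothetical; real systems learn far richer patterns from millions of images), but the key limitation is the same: the model picks the closest learned pattern without any concept of what a "cat" actually is.

```python
# A minimal sketch of pattern-based classification, assuming toy
# hand-crafted feature vectors (ear_pointiness, snout_length, size).

def centroid(vectors):
    """Average each feature across the training examples for one label."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical training data: each example is a feature vector.
training = {
    "cat": [[0.9, 0.3, 0.2], [0.8, 0.35, 0.25]],
    "dog": [[0.4, 0.7, 0.5], [0.5, 0.75, 0.55]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

def classify(features):
    # Pick the label whose learned pattern is closest. The model can
    # answer "cat", but it cannot explain what a cat is.
    return min(centroids, key=lambda label: distance(centroids[label], features))

print(classify([0.85, 0.3, 0.2]))  # → cat
```

The sketch shows only the statistical half of the story; the symbolic-logic branch mentioned above is what would attempt to attach meaning to the label.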
The advantage is that it is a very powerful method. It evolves quickly to solve problems thanks to its relentless automatic incorporation of data. Several systems of this type have already exceeded human performance, detecting similarities, differences and repetitions in large volumes of data. What was previously mere obedience now implies the ability to find results and conclusions without being explicitly programmed to do so. However, this loss of control is alarming. The opacity of such a system, its nature as a black box, is a cause for concern, because we do not know what it is really thinking and understanding, whatever its level of development and refinement.
Help without Harming
Artificial intelligence can fail, in ways we may not have foreseen and for which we may not be prepared. Naturally, it takes humans time to establish the best way to teach an AI that it has made a mistake, and to find a way to fix it.
Creating with self-awareness is fundamental; the responsibility of moral judgment cannot be delegated to machines. They must be prepared to fully understand each particular piece of information, and what we think and feel, so that they grow in a way that complements our own experiences.
AI decisions cannot be trained on data that repeats mistakes that have already been made. Greater care is needed so that human hostility does not become part of the technology being built. Biases in these systems could end up codifying all kinds of prejudices and outdated stereotypes of their creators (classism, sexism, and racism, among others), running the risk of perpetuating them into the future.
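As a toy illustration of how a system trained on biased history reproduces it, the fabricated records below skew past decisions against one group; a model that simply learns "what was decided before" codifies exactly that skew. All names and numbers here are invented for the sketch.

```python
# A toy sketch of bias reproduction, assuming fabricated historical
# records. The "model" just learns the most common past outcome per
# group, so historical prejudice becomes a codified rule.
from collections import Counter

# Hypothetical past decisions, already skewed against group "B".
history = (
    [("A", "hired")] * 8 + [("A", "rejected")] * 2
    + [("B", "hired")] * 2 + [("B", "rejected")] * 8
)

def learn_majority(records):
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    # For each group, predict whatever outcome was most common before.
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = learn_majority(history)
print(model)  # → {'A': 'hired', 'B': 'rejected'}
```

The skew in the data becomes the rule of the model, which is exactly the mechanism the paragraph above warns about: nothing hostile was programmed, yet the hostility of the past is now automated.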
Whatever the situation, human decision-making is essential and artificial intelligence can be a great complement. A society that revolved around the subjective decisions of machines that we still do not fully understand would not be safe at all.
Good practices are needed: algorithmic accountability through suspicion and investigation, auditing, meaningful transparency, and some kind of objective regulation, so that these systems do not end up loaded with messy human values that invade algorithms in confusing and imprecise ways.
Software is increasingly learned rather than designed. Systems are often trained on data generated by our own actions, in which our prejudices can be reflected or, worse still, amplified. The future continues to be built on what we create now, with a general awareness of what it is today and what it can become. Objectives and strategies must be defined to achieve the best result from a perspective that includes everyone.
About the Author
Fabrizio Amelotti is a Full-Stack Developer with ten years of experience in the IT and software development world. Fabrizio is a technical leader able to take a simple idea from conception through implementation and beyond.