
AI and Social Responsibility: Morality and Ethics in the Public Sectors

By Avraham David Sherwood
The Editor in Chief

Abstract

The use of artificial intelligence algorithms is growing as they become increasingly available in the public sector.

European countries such as France and the United Kingdom, for example, use such algorithms for more accurate heart-transplant matching, fairer tax calculations, stricter allocation of social benefits, and better school-student matching. These are just a few examples of administrative processes already taking place across Europe thanks to various AI-based systems. The issue is not new; it has been with us for a decade.

The use of such algorithms in public policy is continually being renewed and refined, enabling automation, personalization, and interaction, especially for decision makers in the public sector. Alongside this accelerating innovation, however, come new moral and ethical challenges for society, especially in the areas of transparency and public accountability.

For this reason, great attention must be paid to a new regulatory framework for addressing the challenges posed by artificial intelligence algorithms.

In this article, I review and explain how different forces in the industry can be combined with maximum efficiency to meet the important mission of an open and secure data and information policy, one that works in harmony with government and the public, with optimal accountability and transparency.

The emergence of artificial intelligence has occupied many researchers over the last two decades, but as with any new technological discovery, there is a contradiction between those who fear progress and call for close scrutiny, and those who are passionate about it and claim that everything will only improve and there is nothing to fear.

Opponents of artificial intelligence argue that this technology will give machines capabilities that let them outperform people, decide for them, take their jobs, and breach their privacy, creating a kind of supreme control over their lives because of how deeply it will be embedded in every aspect of daily life.

Proponents of artificial intelligence argue that with this technology, machines will be able to perform diverse processes autonomously and exploit high computational capabilities as a powerful tool for processing and interpreting large amounts of data in the best possible way. Machines can take over most technical operations, leaving people more time for work that requires creative thinking, productivity, and management, and for solutions that can reduce crime and help alleviate serious illness.

These are two population groups with two opposing conceptions of technology.

Opponents view AI management in the public sector negatively, highlighting critical issues that could adversely affect both resource efficiency and civil rights.

Supporters view the use of AI positively and believe that implementing such technologies can significantly improve not only public activity and the public sector but also the quality of life of all citizens, and that budgets should therefore be prioritized to fully fund research and development in this field.

In recent years there has been a constant debate between the scientific community and civil society about the impact that implementing AI systems has on our lives.

The moral and ethical challenge of artificial-intelligence-based systems is to strike a balance between these two groups, opponents and supporters, combining creativity and innovation with respect for the universally recognized core values and social rules of every community in every country, given that the impacts of this technology will continue to evolve.

The different uses of AI-based systems whose algorithms analyze data and assist decision makers in the public sector (social, legal, health, and so on) require a thorough discussion of morality and ethics, at a much broader public level than government alone.

The cost of the various algorithms used for data analysis is significant and spans their entire life cycle: their integration into the various systems, their maintenance, the control and validation of their results, and the training of users, who must be informed, obligated, and accountable. Proposals to reduce tax rates for organizations that use AI technologies, especially in the public sector, may therefore be misleading, because the proper development of AI-based tools and systems requires many resources, including the costs of addressing the moral and ethical aspects of artificial intelligence.

A constructive approach is to focus on the functional development of this technology, which requires the efficient use of economic and professional resources, aligned with moral and ethical development and with careful attention to the data the technology processes and the conclusions it draws. Otherwise, data analysis will at best merely recover its own development costs without providing solutions to people, which was the initial goal in the first place. The worst outcome is to distort the data or introduce biases, which can produce critical errors in algorithmic results and prevent decision makers from taking responsibility.

It is of the utmost importance to make informed use of the benefits of this technology. Serious investment is therefore needed from the public sector, which must commit to significantly improving the quality and efficiency of services in order to provide secure systems capable of reducing inequality. To understand this in depth, I will analyze the aspects that represent the key elements of the existing public and scientific discussion:

Data Quality and Neutrality: Machine learning systems need data prepared by humans (supervised learning) or at least selected by humans (unsupervised learning). This process can also introduce errors, even unintentionally, by professionals, and those errors are then duplicated across all common applications. For example, biased datasets propagate the same errors, as happened with crime-prevention algorithms, where the data were compromised by a series of errors that exaggerated ethnic differences between population groups. Another example can be seen in unbalanced datasets, which misestimate the weight of certain variables when reconstructing the relationships used for prediction, forcing some events to be explained incorrectly.
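One common way to make such dataset bias visible is to compare decision rates across population groups. The following is a minimal sketch of that idea in Python; the toy decision log, group labels, and the demographic-parity metric chosen here are illustrative assumptions, not details from this article.

```python
# Minimal bias check: compare positive-decision rates across groups.
# The data below is a fabricated toy example for illustration only.

def demographic_parity_gap(records):
    """Given (group, decision) pairs with decision in {0, 1}, return
    (gap, rates): the largest difference in positive-decision rates
    between any two groups, and the per-group rates."""
    totals, positives = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy decision log: (group, approved?).
log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
       ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

gap, rates = demographic_parity_gap(log)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A large gap does not by itself prove discrimination, but it flags where the dataset or model deserves the kind of scrutiny discussed above.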

Responsibility: The examples above highlight the powerful impact that artificial-intelligence-based systems have on public-sector decision making, whether they act as technical assistants to humans or as fully autonomous entities. AI produces a variety of effects on people’s way of life, and there is therefore a need to establish legal responsibility. However, it is not always possible to clearly identify the person responsible, because responsibility can be attributed to the system owners, the producer, the AI process itself, and in some cases even the end user. Whoever designs an AI system may be responsible for flaws in the system or application design, but not for behavior caused by faulty datasets. The question therefore arises: do decision makers in the public sector bear responsibility for decisions made on the basis of algorithms that process data affected by these biases? And if so, what kind of responsibility can such a decision maker have? If a robot hurts someone, who is directly responsible, and who has the duty to compensate the victim? Can a public-sector decision maker transfer responsibility to an AI system that does not respond like a human? Is there a moral and ethical professional duty to improve the efficiency of AI systems, and how can their consistency be monitored over time? These questions touch on only a small part of the issues in this area and highlight the need to establish core principles for the moral and ethical use of AI technologies in public-sector contexts.

Transparency and Openness: The public sector’s responsibility begins when it decides to provide services (or make decisions) that concern the general public through artificial-intelligence-based systems. That responsibility entails meeting criteria of transparency and openness, where transparency refers to a basic condition: avoiding discrimination and solving the information-asymmetry problem, which guarantees citizens the right to understand the decisions of elected officials. In addition, reference benchmarking policies must be strictly adhered to in order to avoid larger, unmanageable effects. Just as a manager may err and act non-transparently, pursuing personal interests rather than the public good, so a non-transparent algorithm can commit the same wrongs on a much broader scale, producing not only injustice but also social discrimination.
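One practical building block for the transparency described above is an audit trail: every automated decision is recorded with its inputs, the model version, and a tamper-evident digest, so that citizens and auditors can later reconstruct how a decision was reached. The sketch below assumes a hypothetical `decide` function with a placeholder scoring rule; all field names and thresholds are invented for illustration.

```python
# Minimal audit-trail sketch for automated public-sector decisions.
# The scoring rule and field names are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []

def decide(applicant, model_version="v1.0"):
    # Placeholder rule standing in for a real model.
    score = applicant["income"] / max(applicant["household_size"], 1)
    approved = score >= 10_000
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": applicant,
        "score": score,
        "decision": "approved" if approved else "rejected",
    }
    # Digest over the canonical JSON form lets auditors detect tampering.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True, default=str).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)
    return approved

decide({"income": 42_000, "household_size": 3})
print(AUDIT_LOG[-1]["decision"])  # approved (42000 / 3 >= 10000)
```

Logging the model version alongside the inputs matters: it is what allows a contested decision to be replayed against the exact system that produced it.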

Protecting the Private Domain: Another vital need, closely linked to the previous section, is protecting people’s data. The public sector must design AI-based services capable of ensuring efficiency and prompt response, but protecting citizens’ sensitive data is equally important. This requirement is directly related to the legal field, and there are ethical clauses governing the use an authority may make of data that came to its attention in situations different from those in which it was collected. It is fair to ask: is it right, morally and ethically, to use data collected for one purpose to draw conclusions in prediction systems that serve completely different tasks? Can existing public information be used to predict new information?
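When secondary analysis of citizen data is permitted at all, one common safeguard is pseudonymization: direct identifiers are replaced with a salted hash and fields not needed for the analysis are dropped. The record fields, the salt handling, and the `pseudonymize` helper below are illustrative assumptions, a sketch rather than a production design.

```python
# Minimal pseudonymization sketch: replace the direct identifier with a
# salted hash and keep only the fields the analysis needs.
# Field names and salt handling are illustrative assumptions.

import hashlib

SALT = b"replace-with-a-secret-salt"  # must be stored apart from the data

def pseudonymize(record):
    """Return a reduced record with the national ID replaced by a
    salted SHA-256 token and identifying fields dropped."""
    token = hashlib.sha256(SALT + record["national_id"].encode()).hexdigest()
    return {
        "token": token,              # stable, so records can still be linked
        "age_band": record["age_band"],
        "region": record["region"],
    }

record = {"national_id": "123456789", "name": "Jane Doe",
          "age_band": "30-39", "region": "North"}
safe = pseudonymize(record)
print("name" in safe, "national_id" in safe)  # False False
```

The salted token is deterministic, so records about the same person can still be joined for analysis, while the salt kept outside the dataset prevents trivially reversing the tokens by hashing guessed IDs.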

To address these kinds of challenges, a number of general principles must be adopted: an anthropocentric approach, which holds that artificial intelligence should always serve people rather than the other way around; principles of procedural fairness (non-arbitrariness), formal fairness (equal treatment of equal individuals or groups), and substantive fairness (effective removal of economic and social obstacles); and the satisfaction of certain basic universal needs, including respect for liberty and the rights of the individual and the community. All of these principles reflect people’s need for AI-based systems designed to benefit them and their services in every context, and not the other way around.

Source: https://www.researchgate.net/publication/336981280_AI_and_Social_Responsibility_Morality_and_Ethics_in_the_Public_Sectors
