Philosophical sciences

MACHINE ETHICS
Dedyulina M.A.
Machine ethics is an area of applied ethics that has developed rapidly in the past decade. Increasingly autonomous robots have expanded the focus of machine ethics from the ethical development and use of technology by people to the development of ethical rules for the machines themselves. Among ethicists and engineers in the field of robotics there is a debate about whether ethical robots are possible or merely desirable.

Robots, as intelligent agents, are among the most promising new technologies of the future. The smarter they become, the more useful and effective they are. However, historical experience shows that highly intelligent agents without ethical qualities can easily turn out to be unjust and destructive.

The ethics of robotic agents is the subject of two main areas of computer ethics. The first is engineering ethics, which places responsibility on the design engineers, expecting them to retain full control over their artifacts regardless of complexity or autonomy. The second is machine ethics, which argues that intellectual artifacts that must behave autonomously should themselves be designed to act in accordance with ethical standards.

Imagine for a moment that it will soon be possible to program a computer so that it can generate an answer to any particular moral dilemma we might face. Now imagine the process that could lead to the writing of such a program, and what the content of that program would be.

Philosophically, the ethical concept of P. Danielson strikes us as interesting and promising. In his book 'Artificial Morality: Virtuous Robots for Virtual Games' [1], the philosopher models the design of artificial morality on the moral theory developed by David Gauthier.

In his theory of morals by agreement, the Canadian philosopher Gauthier argues that among the rational principles for choice are some that constrain a social actor pursuing his own interests, and it is precisely these constraining principles that he identifies as moral principles [2, p. 3]. In other words, the theory introduces a kind of 'golden rule' into deliberation: taking the moral standpoint shifts the essence of interaction from mutual defection to mutual cooperation.
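
To see this shift concretely, consider the standard Prisoner's Dilemma payoffs (the numbers below are the conventional textbook values, used here only as an illustration, not drawn from Gauthier's text):

                 Cooperate    Defect
    Cooperate    (3, 3)       (0, 5)
    Defect       (5, 0)       (1, 1)

For each agent taken in isolation, defection dominates: 5 > 3 if the other cooperates, and 1 > 0 if the other defects. Straightforwardly rational agents therefore end at mutual defection, (1, 1), even though mutual cooperation, (3, 3), leaves both better off. Gauthier's moral constraint is precisely what allows rational agents to reach the cooperative outcome.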

Danielson devises many possible strategies, inspired by Gauthier's original idea of 'constrained maximization'. Constrained maximization implies (1) that rational agents choose over strategies, or dispositions, rather than over individual actions, and (2) that under such a model of deliberation the agent must be a 'conditional cooperator', that is, one who cooperates when it recognizes the other agent as a conditional cooperator. This choice leads to the Pareto-optimal outcome in the Prisoner's Dilemma, and to defection when an 'immoral agent', a defector, is encountered. Danielson's innovation is that he was able to simulate a number of different types of agents with an enhanced capacity for recognition: agents that identify one another's strategies and cooperate or defect according to their own strategy and that of their opponent.
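
The recognition mechanism can be illustrated with a small simulation. The following Python sketch is our own illustration, not Danielson's code: the agent names, the whitelist-style recognition, and the payoff values are all assumptions made for the example.

from itertools import combinations_with_replacement

# Conventional Prisoner's Dilemma payoffs (row player, column player).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation: Pareto-optimal
    ("C", "D"): (0, 5),  # cooperator exploited by defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def cooperator(opponent):
    # Unconditional cooperator: cooperates with everyone.
    return "C"

def defector(opponent):
    # Unconditional defector (the 'immoral agent'): always defects.
    return "D"

def conditional_cooperator(opponent):
    # Cooperates only with agents it recognizes as cooperative;
    # inspecting the opponent's strategy directly is a crude stand-in
    # for the strategy recognition Danielson models.
    return "C" if opponent in (cooperator, conditional_cooperator) else "D"

def play(a, b):
    # Each agent chooses a move given who its opponent is.
    return PAYOFF[(a(b), b(a))]

agents = [cooperator, defector, conditional_cooperator]
for a, b in combinations_with_replacement(agents, 2):
    print(f"{a.__name__} vs {b.__name__}: {play(a, b)}")

Run this way, two conditional cooperators reach the Pareto-optimal (3, 3), while a conditional cooperator facing a defector falls back to mutual defection, (1, 1), which is exactly the behaviour the constrained-maximization argument predicts.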

In his study, the philosopher showed that the basic components of morality are themselves artificial; by using a computer in this field, we are merely extending the artificial character of ethics.

Artificial moral agents can be considered as:
1) individual entities (complex, specialized, autonomous);
2) open and even freely acting executive systems (with specific, flexible, and heuristic decision mechanisms and procedures);
3) freely behaving cultural creatures, attached to the cultural values of human activity, whether of natural or artificial substance;
4) systems open to formation, not only to training;
5) objects with a 'lifegraphy', not only a 'stategraphy';
6) agents endowed with diverse or even multiple forms of intelligence, such as moral intelligence;
7) agents possessing not only automatisms but also convictions (cognitive and emotional complexes);
8) agents capable of thinking;
9) members of some real (physical or virtual) community [3].

Thus, machine ethics must take into account the need to ensure freedom of choice in applying moral standards to particular domains of action and situations. Humans must accept the risk associated with granting freedom to machines, and must also accept that as machines gain degrees of freedom, a machine's behavior depends not only on people but also on its own decisions and even on the decisions of other machines.



References:
1. Danielson P. Artificial Morality: Virtuous Robots for Virtual Games. London: Routledge, 1992. 255 p.

2. Gauthier D. Morals by Agreement. Oxford: Clarendon Press, 1986. 297 p.



Bibliographic reference

Dedyulina M.A. MACHINE ETHICS // International Journal of Applied and Fundamental Research. – 2018. – № 6.
URL: www.science-sd.com/478-25465 (accessed 29.03.2024).