Perceiving others' minds is a crucial component of social life. People do not, however, always ascribe minds to other people, and sometimes ascribe minds to non-people. This article reviews when mind perception occurs, when it does not, and why mind perception is important. Causes of mind perception stem from both the perceiver and the perceived, and include the need for social connection and a similarity to oneself. Mind perception also has profound consequences for both the perceiver and the perceived. Ascribing mind confers (...)
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the _conformity assessments_ that providers of high-risk AI systems are expected to conduct, and (...)

With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative sensitivity and compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of (...)

The idea of cooperation has recently been used with regard to human–animal relations to justify the application of an associative theory of justice to animals. In this paper, I discuss some of these proposals and seek to provide a reformulation of the idea of cooperation suitable to human–animal relations. The standard idea of cooperation, indeed, presupposes mental capacities that probably cannot be found in animals. I try to disentangle the idea of cooperation from other cognate notions and distinguish it from (...)

This study uses the theory of dyadic morality to analyze the construction of cyberbullying as a contested social issue in U.S. newspaper opinion pieces. The theory of dyadic morality posits that when...

The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents.
However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of what it means to be an (...)

This contribution addresses questions concerning the cultural foundations of service robotics. The discussion and answering of these questions are still neglected in the discourse on service robotics. It first examines how independent service robotics can be from cultural presuppositions. Cultural dispositions affect the intended adaptivity and autonomy of these systems, and concretely their sensors and actuators as well. Service robotics must be conceived as a culturally embedded technology. Only in physical and symbolic proximity to concrete human beings can it become an adaptive and (...)

Purpose: The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be (...)

Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology.
In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)

Under what circumstances, if ever, ought we to grant that Artificial Intelligences are persons? The question of whether AI could have the high degree of moral status that is attributed to human persons has received little attention. What little work there is employs Western conceptions of personhood, while non-Western approaches are neglected. In this article, I discuss African conceptions of personhood and their implications for the possibility of AI persons. I focus on an African account of personhood that is prima (...)

The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, and computational morality. Most references to the challenge elucidate one facet or (...)

This article defends three interconnected premises that together call for a new way of dealing with moral responsibility in developing and using technological artifacts. The first premise is that humans increasingly make use of dissociated technological delegation. Second, because technologies do not simply carry out our actions, but rather mediate them, the initial aims are altered and outcomes are often different from those intended. Third, since the outcomes are often unforeseen and unintended, we can no longer simply apply the traditional (modernist) models (...)
In What Things Do, Verbeek (What Things Do: Philosophical Reflections on Technology, Agency and Design. Penn State University Press, University Park, 2005a) develops a vocabulary for understanding the social role of technological artifacts in our culture and in our daily lives. He understands this role in terms of the technological mediation of human behavior and perception. To explain mediation, he levels out the modernist separation of subjects and objects by decreasing the autonomy of humans and increasing the activity (...)

As the automation of routine decisions is coupled with ever more intricate and complex information architectures operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remains opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)

The information ethics (IE) of Floridi and Sanders is evaluated here in the light of an alternative in virtue ethics that is antifoundationalist, particularist, and relativist, in contrast to Floridi's foundationalist, impartialist, and universalist commitments. Drawing from disparate traditional sources such as Aristotle, Nietzsche, and Emerson, as well as contemporary advocates of virtue ethics such as Nussbaum, Foot, and Williams, the essay shows that the central contentions of IE, including especially the principle of ontological equality, must either express commitments grounded in (...)

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false.
In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)

A discussion concerning whether to conceive of Artificial Intelligence systems as responsible moral entities, also known as "artificial moral agents", has been going on for some time. In this regard, we argue that the notion of "moral agency" is to be attributed only to humans, based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)

In the past decades, computers have become more and more involved in society through the rise of ubiquitous systems, increasing the number of interactions between humans and IT systems. At the same time, the technology itself is becoming more complex, enabling devices to act in ways that previously only humans could, based on developments in the fields of both robotics and artificial intelligence. This results in a situation in which many autonomous, intelligent and context-aware systems are involved in decisions (...)

Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and values that should be adhered to in the design and deployment of artificial intelligence. These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. (...)
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, that such machines are better moral reasoners than humans, and (...)

Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those of the societies they affect. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation yet be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)

The paper offers a solution to the problem of specifying computational systems that behave in accordance with a given set of ethical principles. The proposed solution is based on the concepts of ethical requirements and ethical protocols. A new conceptual tool, called the Control Closure of an operation, is defined and used to translate ethical principles into ethical requirements and protocols.
The concept of Generalised Informational Privacy (GIP) is used as a paradigmatic example of an ethical principle. GIP is defined (...)

In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)

That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both (...)

In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)

The cross-disciplinary framework of Material Engagement Theory (MET) has emerged as a novel research program that flexibly spans archeology, anthropology, philosophy, and cognitive science.
True to its slogan to 'take material culture seriously', "MET wants to change our understanding of what minds are and what they are made of by changing what we know about what things are and what they do for the mind" (Malafouris 2013, 141). By tracing out more clearly the conceptual contours of 'material engagement,' and firming (...)

Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011, 39–51), I argue that the "short answer" to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)

This essay critically analyzes Luciano Floridi's ontological theory of informational privacy. Organized into two main parts, Part I examines some key foundational components of Floridi's privacy theory and considers some of the ways in which his framework purports to be superior to alternative theories of informational privacy. Part II poses two specific challenges for Floridi's theory of informational privacy, arguing that an adequate privacy theory should be able to: (i) differentiate informational privacy from other kinds of privacy, including psychological (...)

The present essay includes an overview of key milestones in the development of computer ethics (CE) as a field of applied ethics. It also describes the ongoing debate about the proper scope of CE, as a subfield of both applied ethics and computer science. Following a brief description of the cluster of ethical issues that CE scholars and practitioners have generally considered to be the standard or "mainstream" issues comprising the field thus far, the essay speculates about the future direction of (...)
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users' trust in AI are emerging on a global scale. (...)
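The market estimate quoted above implies a very steep growth rate. As a minimal sketch of that arithmetic, taking only the two figures given in the abstract (US$1 billion in 2016, US$34.8 billion by 2025) and assuming smooth year-over-year compounding:

```python
# Implied compound annual growth rate (CAGR) behind the cited estimate:
# a market of US$1 billion in 2016 reaching US$34.8 billion by 2025.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Constant yearly growth rate that takes start_value to end_value."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

rate = cagr(1.0, 34.8, 2025 - 2016)
print(f"Implied CAGR over 9 years: {rate:.1%}")  # roughly 48% per year
```

The figures and the compounding assumption are taken at face value from the abstract; the calculation merely makes explicit how aggressive the cited projection is.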
In this article I propose an ethical analysis of information warfare, the warfare waged in the cyber domain. The goal is twofold: filling the theoretical vacuum surrounding this phenomenon and providing the conceptual grounding for the definition of new ethical regulations for information warfare. I argue that Just War Theory is a necessary but not sufficient instrument for considering the ethical implications of information warfare and that a suitable ethical analysis of this kind of warfare is developed when Just War (...)

This paper provides a new analysis of e-trust, trust occurring in digital contexts among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)

Defence agencies across the globe identify artificial intelligence as a key technology for maintaining an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control (...)

In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by states and international organisations, such as the ICRC and NATO.
The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This is detrimental both to fostering an understanding of AWS and to facilitating agreement (...)

Moral agency status is often ascribed to individuals or entities which act intentionally within a society or environment. In the past, moral agency has primarily been attributed to human beings and some higher-order animals. However, with the fast-paced advancements made in artificial intelligence, we are now quickly approaching the point where we need to ask an important question: should we grant moral agency status to AI? To answer this question, we need to determine the moral agency status of these (...)

Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of its combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This (...)

The current research aims to answer the following question: "Who will be held responsible for harm involving an artificial intelligence system?" Drawing upon the literature on moral judgments, we assert that when people perceive an AI system's action as causing harm to others, they will assign blame to different entity groups involved in an AI's life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)

Artificial Life has two goals. One attempts to describe fundamental qualities of living systems through agent-based computer models.
The second studies whether we can artificially create living things in computational media, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is "dry" ALife, the study of living systems "in silico" through the use of computer simulations, and the other is "wet" (...)

Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is beset with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)

There has been much debate over whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)

The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)
An important question one can ask of ethical theories is whether and how they aim to raise claims to universality. This refers both to the subject area that they intend to describe or govern and to the question whether they claim to be binding for all (moral) agents. This paper discusses the question of the universality of Luciano Floridi's information ethics (IE). This is done by introducing the theory and discussing its conceptual foundations and applications. The emphasis will be placed on (...)

Following the success of Sony Corporation's "AIBO", robot cats and dogs are multiplying rapidly. "Robot pets" employing sophisticated artificial intelligence and animatronic technologies are now being marketed as toys and companions by a number of large consumer electronics corporations. It is often suggested in popular writing about these devices that they could play a worthwhile role in serving the needs of an increasingly aging and socially isolated population. Robot companions, shaped like familiar household pets, could comfort and entertain lonely (...)

A form of metaphysical humanism in the field of philosophy of technology can be defined as the claim that, besides technologies' physical aspects, purely human attributes are sufficient to conceptualize technologies. Metaphysical nonhumanism, on the other hand, would be the claim that the meanings of the operative words in any acceptable conception of technologies refer to states of affairs or events which are in one way or another shaped by technologies. In this paper, I focus on the conception of (...)