  • Moral Difference Between Humans and Robots: Paternalism and Human-Relative Reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
  • Human, Technology and Architecture: The Change of AI-Robot Technology and the Industry of Architectural Service.변순용 - 2017 - Environmental Philosophy 24:77-93.
  • Causes and Consequences of Mind Perception.Adam Waytz, Kurt Gray, Nicholas Epley & Daniel M. Wegner - 2010 - Trends in Cognitive Sciences 14 (8):383-388.

    Perceiving others’ minds is a crucial component of social life. People do not, however, always ascribe minds to other people, and sometimes ascribe minds to non-people. This article reviews when mind perception occurs, when it does not, and why mind perception is important. Causes of mind perception stem both from the perceiver and perceived, and include the need for social connection and a similarity to oneself. Mind perception also has profound consequences for both the perceiver and perceived. Ascribing mind confers (...)

  • Conformity Assessments and Post-Market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation.Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268.

    The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the _conformity assessments_ that providers of high-risk AI systems are expected to conduct, and (...)

  • From Pluralistic Normative Principles to Autonomous-Agent Rules.Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 1:1-33.

    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of (...)

  • Cooperation with Animals? What Is and What Is Not.Federico Zuolo - 2020 - Journal of Agricultural and Environmental Ethics 33 (2):315-335.

    The idea of cooperation has been recently used with regard to human–animal relations to justify the application of an associative theory of justice to animals. In this paper, I discuss some of these proposals and seek to provide a reformulation of the idea of cooperation suitable to human–animal relations. The standard idea of cooperation, indeed, presupposes mental capacities that probably cannot be found in animals. I try to disentangle the idea of cooperation from other cognate notions and distinguish it from (...)

  • “We All Know It’s Wrong, But…”: Moral Judgment of Cyberbullying in U.S. Newspaper Opinion Pieces.Rachel Young - 2022 - Journal of Media Ethics 37 (2):78-92.

    This study uses the theory of dyadic morality to analyze the construction of cyberbullying as a contested social issue in U.S. newspaper opinion pieces. The theory of dyadic morality posits that when...

  • A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights From the Science of Heroism in Humans.Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.

    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of what it means to be an (...)

  • Zur Kulturellen Disposition der Service-Robotik.Klaus Wiegerling - 2019 - Filozofija I Društvo 30 (3):343-365.

    This contribution deals with questions concerning the cultural foundations of service robotics. The discussion and resolution of these questions are still neglected in the discourse on service robotics. It first examines how independent service robotics can be from cultural presuppositions. Cultural dispositions affect the adaptivity and autonomy these systems are intended to achieve, including, concretely, their sensors and actuators. Service robotics must be conceived as a culturally embedded technology. Only in physical and symbolic proximity to concrete human beings can it become an adaptive and (...)

  • It Would Be Pretty Immoral to Choose a Random Algorithm.Helena Webb, Menisha Patel, Michael Rovatsos, Alan Davoust, Sofia Ceppi, Ansgar Koene, Liz Dowthwaite, Virginia Portillo, Marina Jirotka & Monica Cano - 2019 - Journal of Information, Communication and Ethics in Society 17 (2):210-228.

    Purpose The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be (...)

  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence.David Watson - 2019 - Minds and Machines 29 (3):417-440.

    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)

  • Artificial Intelligence and African Conceptions of Personhood.C. S. Wareham - 2020 - Ethics and Information Technology 23 (2):127-136.

    Under what circumstances, if ever, ought we to grant that Artificial Intelligences are persons? The question of whether AI could have the high degree of moral status that is attributed to human persons has received little attention. What little work there is employs western conceptions of personhood, while non-western approaches are neglected. In this article, I discuss African conceptions of personhood and their implications for the possibility of AI persons. I focus on an African account of personhood that is prima (...)

  • Implementing Moral Decision Making Faculties in Computers and Robots.Wendell Wallach - 2008 - AI and Society 22 (4):463-475.

    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)

  • Technological Delegation: Responsibility for the Unintended.Katinka Waelbers - 2009 - Science and Engineering Ethics 15 (1):51-68.

    This article defends three interconnected premises that together demand a new way of dealing with moral responsibility in developing and using technological artifacts. The first premise is that humans increasingly make use of dissociated technological delegation. Second, because technologies do not simply fulfill our actions, but rather mediate them, the initial aims alter and outcomes are often different from those intended. Third, since the outcomes are often unforeseen and unintended, we can no longer simply apply the traditional (modernist) models (...)

  • From Assigning to Designing Technological Agency.Katinka Waelbers - 2009 - Human Studies 32 (2):241-250.

    In What Things Do, Verbeek (What things do: philosophical reflections on technology, agency and design. Penn State University Press, University Park, 2005a) develops a vocabulary for understanding the social role of technological artifacts in our culture and in our daily lives. He understands this role in terms of the technological mediation of human behavior and perception. To explain mediation, he levels out the modernist separation of subjects and objects by decreasing the autonomy of humans and increasing the activity (...)

  • Commentary: Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems.Witold M. Wachowski - 2018 - Frontiers in Psychology 9.
  • Transparency and the Black Box Problem: Why We Do Not Trust AI.Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.

    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)

  • Why Information Ethics Must Begin with Virtue Ethics.Richard Volkman - 2010 - Metaphilosophy 41 (3):380-401.

    The information ethics (IE) of Floridi and Sanders is evaluated here in the light of an alternative in virtue ethics that is antifoundationalist, particularist, and relativist in contrast to Floridi's foundationalist, impartialist, and universalist commitments. Drawing from disparate traditional sources like Aristotle, Nietzsche, and Emerson, as well as contemporary advocates of virtue ethics like Nussbaum, Foot, and Williams, the essay shows that the central contentions of IE, including especially the principle of ontological equality, must either express commitments grounded in (...)

  • Moral Zombies: Why Algorithms Are Not Moral Agents.Carissa Véliz - forthcoming - AI and Society:1-11.

    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)

  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.

    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)

  • Refining the Ethics of Computer-Made Decisions: A Classification of Moral Mediation by Ubiquitous Machines.Marlies Van de Voort, Wolter Pieters & Luca Consoli - 2015 - Ethics and Information Technology 17 (1):41-56.

    In the past decades, computers have become more and more involved in society by the rise of ubiquitous systems, increasing the number of interactions between humans and IT systems. At the same time, the technology itself is getting more complex, enabling devices to act in a way that previously only humans could, based on developments in the fields of both robotics and artificial intelligence. This results in a situation in which many autonomous, intelligent and context-aware systems are involved in decisions (...)

  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.

    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and values that should be adhered to in the design and deployment of artificial intelligence. These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. (...)

  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.

    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)

  • The Role of Engineers in Harmonising Human Values for AI Systems Design.Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.

    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)

  • Ethical Protocols Design.Matteo Turilli - 2007 - Ethics and Information Technology 9 (1):49-62.

    The paper offers a solution to the problem of specifying computational systems that behave in accordance with a given set of ethical principles. The proposed solution is based on the concepts of ethical requirements and ethical protocols. A new conceptual tool, called the Control Closure of an operation, is defined and used to translate ethical principles into ethical requirements and protocols. The concept of Generalised Informational Privacy (GIP) is used as a paradigmatic example of an ethical principle. GIP is defined (...)

  • Ethics and Consciousness in Artificial Agents.Steve Torrance - 2008 - AI and Society 22 (4):495-521.

    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)

  • A Challenge for Machine Ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.

    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)

  • The Artificial View: Toward a Non-Anthropocentric Account of Moral Patiency.Fabio Tollon - 2021 - Ethics and Information Technology 23 (2):147-155.

    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)

  • What’s the Matter with Cognition? A ‘Vygotskian’ Perspective on Material Engagement Theory.Georg Theiner & Chris Drain - 2017 - Phenomenology and the Cognitive Sciences 16 (5):837-862.

    The cross-disciplinary framework of Material Engagement Theory (MET) has emerged as a novel research program that flexibly spans archeology, anthropology, philosophy, and cognitive science. True to its slogan to ‘take material culture seriously’, “MET wants to change our understanding of what minds are and what they are made of by changing what we know about what things are and what they do for the mind” (Malafouris 2013, 141). By tracing out more clearly the conceptual contours of ‘material engagement,’ and firming (...)

  • Levels of Trust in the Context of Machine Ethics.Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.

    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011: 39–51), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)

  • Floridi’s Ontological Theory of Informational Privacy: Some Implications and Challenges. [REVIEW]Herman T. Tavani - 2008 - Ethics and Information Technology 10 (2-3):155-166.

    This essay critically analyzes Luciano Floridi’s ontological theory of informational privacy. Organized into two main parts, Part I examines some key foundational components of Floridi’s privacy theory and it considers some of the ways in which his framework purports to be superior to alternative theories of informational privacy. Part II poses two specific challenges for Floridi’s theory of informational privacy, arguing that an adequate privacy theory should be able to: (i) differentiate informational privacy from other kinds of privacy, including psychological (...)

  • Can We Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right From Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9.Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  • Computer Ethics as a Field of Applied Ethics.Herman T. Tavani - 2012 - Journal of Information Ethics 21 (2):52-70.

    The present essay includes an overview of key milestones in the development of computer ethics as a field of applied ethics. It also describes the ongoing debate about the proper scope of CE, as a subfield both in applied ethics and computer science. Following a brief description of the cluster of ethical issues that CE scholars and practitioners have generally considered to be the standard or "mainstream" issues comprising the field thus far, the essay speculates about the future direction of (...)

  • Trusting Artificial Intelligence in Cybersecurity is a Double-Edged Sword.Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.

    Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. (...)

  • Just Information Warfare.Mariarosaria Taddeo - 2016 - Topoi 35 (1):213-224.

    In this article I propose an ethical analysis of information warfare, the warfare waged in the cyber domain. The goal is twofold: filling the theoretical vacuum surrounding this phenomenon and providing the conceptual grounding for the definition of new ethical regulations for information warfare. I argue that Just War Theory is a necessary but not sufficient instrument for considering the ethical implications of information warfare and that a suitable ethical analysis of this kind of warfare is developed when Just War (...)

  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of E-Trust.Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.

    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)

  • Ethical Principles for Artificial Intelligence in National Defence.Mariarosaria Taddeo, David McNeish, Alexander Blanchard & Elizabeth Edgar - 2021 - Philosophy and Technology 34 (4):1707-1729.

    Defence agencies across the globe identify artificial intelligence as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control (...)

  • A Comparative Analysis of the Definitions of Autonomous Weapons Systems.Mariarosaria Taddeo & Alexander Blanchard - 2022 - Science and Engineering Ethics 28 (5):1-22.

    In this report we focus on the definition of autonomous weapons systems. We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, like ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to address the ethical and legal problems of these weapons systems. This approach is detrimental both in terms of fostering an understanding of AWS and in facilitating agreement (...)

  • The Possibility of Deliberate Norm-Adherence in AI.Danielle Swanepoel - 2020 - Ethics and Information Technology 23 (2):157-163.

    Moral agency status is often given to those individuals or entities which act intentionally within a society or environment. In the past, moral agency has primarily been focused on human beings and some higher-order animals. However, with the fast-paced advancements made in artificial intelligence, we are now quickly approaching the point where we need to ask an important question: should we grant moral agency status to AI? To answer this question, we need to determine the moral agency status of these (...)

  • Robowarfare: Can Robots Be More Ethical Than Humans on the Battlefield? [REVIEW]John P. Sullins - 2010 - Ethics and Information Technology 12 (3):263-275.

    Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This (...)

  • Moral Judgments in the Age of Artificial Intelligence.Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.

    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)

  • Ethics and Artificial Life: From Modeling to Moral Agents. [REVIEW]John P. Sullins - 2005 - Ethics and Information Technology 7 (3):139-148.

    Artificial Life has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models. The second studies whether we can artificially create living things in computational media, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and the other is “wet” (...)

  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines.Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.

    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)

  • Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency.Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.

    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)

  • Organisational Responses to the Ethical Issues of Artificial Intelligence.Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.

    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)

  • Discourses on Information Ethics: The Claim to Universality. [REVIEW]Bernd Carsten Stahl - 2008 - Ethics and Information Technology 10 (2-3):97-108.

    An important question one can ask of ethical theories is whether and how they aim to raise claims to universality. This refers to the subject area that they intend to describe or govern and also to the question whether they claim to be binding for all (moral) agents. This paper discusses the question of universality of Luciano Floridi’s information ethics (IE). This is done by introducing the theory and discussing its conceptual foundations and applications. The emphasis will be placed on (...)

  • The Cambridge Handbook of Information and Computer Ethics, ed. Luciano Floridi, 327 pp., ISBN 978-0-521-88898-1. [REVIEW]Richard A. Spinello - 2013 - Business Ethics Quarterly 23 (1):154-161.
  • The March of the Robot Dogs.Robert Sparrow - 2002 - Ethics and Information Technology 4 (4):305-318.

    Following the success of Sony Corporation’s “AIBO”, robot cats and dogs are multiplying rapidly. “Robot pets” employing sophisticated artificial intelligence and animatronic technologies are now being marketed as toys and companions by a number of large consumer electronics corporations. It is often suggested in popular writing about these devices that they could play a worthwhile role in serving the needs of an increasingly aging and socially isolated population. Robot companions, shaped like familiar household pets, could comfort and entertain lonely (...)

  • Humanist and Nonhumanist Aspects of Technologies as Problem Solving Physical Instruments.Sadjad Soltanzadeh - 2015 - Philosophy and Technology 28 (1):139-156.

    A form of metaphysical humanism in the field of philosophy of technology can be defined as the claim that besides technologies’ physical aspects, purely human attributes are sufficient to conceptualize technologies. Metaphysical nonhumanism, on the other hand, would be the claim that the meanings of the operative words in any acceptable conception of technologies refer to the states of affairs or events which are in one way or another shaped by technologies. In this paper, I focus on the conception of (...)
