This post was originally published by Regina Nkemchor Adejo at Medium [AI]
Cogito is an Artificial Intelligence (AI) system that provides call center agents with real-time feedback and conversational guidance to enhance customer experience. Backed by behavioural science, Cogito is unique because it gives human call center agents live suggestions based on cues such as their empathy level and speaking pace. It is a human-AI interaction software that has proven successful for call centers in the healthcare and finance industries. In this paper, I analyze the Cogito software from an ethical perspective using the Markkula Framework, applying the theories of Consequentialism, Deontology, and Virtue Ethics.
According to Hassan et al. (2018), ethics is the philosophical discipline concerned with right and wrong, good and bad, do's and don'ts. Ethics involves analyzing conduct that can benefit or harm others (Moore, 1993). Ethics can also be defined as the moral principles governing or influencing conduct, or the branch of knowledge concerned with moral principles (Soanes & Stevenson, 2004). Ethics is essential because it helps us frame arguments about right and wrong using rational and logical reasoning, and it is woven into every aspect of society, governance, science, and technology.
This paper discusses a real-life case: Cogito sells AI software to stakeholders to determine the emotional content of voice interactions between their call center agents and customers. In this case, I focus on the ethical perspectives.
Innovative software developed with Artificial Intelligence (AI) technologies enables humans to explore how to make better sense of the world. However, while we make sense of the world in order to make better choices, we should also investigate the fundamental ethical aspects of AI. We should not mistake efficiency for morality; that something is more efficient does not mean it is morally better, even though efficiency is often a dramatic benefit to humanity. For example, humans can make weapons that are more efficient at killing people and destroying things, but that does not mean those weapons are good or will be used for good.
The Markkula framework for ethical decision-making helps identify critical ethical considerations related to the Cogito software. I group the framework into three theories: Consequentialism, Deontology, and Virtue Ethics. In the remainder of the paper, I give a brief background of the Cogito software and then discuss ethics and technology. After that, I discuss the ethical concerns found in the Cogito case, followed by the supporting theories and framework. I then discuss the benefits, drawbacks, recommendations, and future directions. Finally, I conclude.
Background of Study
To decide what is good, we first need to know what is (Hassan et al., 2018), and we do this by getting the facts of the case. Getting the facts helps us better understand the situation as it actually stands.
Cogito is an AI software designed to help call center agents communicate more clearly, improve overall performance, and empathize with frustrated customers. Cogito listens to the tone, pitch, word frequency, and hundreds of other factors in agents' phone conversations with customers. With its AI voice-analysis capabilities, the software analyzes elements of conversations in different situations and scenarios, for example, "if they start speaking fast, the caller and agent talk over each other, or a caller is silent for a long time."
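To make this concrete, below is a minimal sketch of the kind of conversational-signal detection described above, operating on turn-level timing and word counts. This is not Cogito's actual implementation; the thresholds, data structures, and alert labels are hypothetical illustrations only.

```python
# Minimal sketch of turn-level conversational-signal detection.
# NOT Cogito's implementation: thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str      # "agent" or "caller"
    start: float      # seconds from call start
    end: float
    word_count: int

def detect_signals(turns, fast_wpm=180.0, max_silence=8.0):
    """Scan a call timeline and collect (timestamp, alert) pairs."""
    alerts = []
    prev_end = 0.0
    for i, t in enumerate(turns):
        duration = max(t.end - t.start, 1e-6)
        wpm = t.word_count / duration * 60.0
        if wpm > fast_wpm:
            alerts.append((t.start, f"{t.speaker} speaking fast ({wpm:.0f} wpm)"))
        # Overlap: this turn starts before the previous turn ended.
        if i > 0 and t.start < turns[i - 1].end:
            alerts.append((t.start, "caller and agent talking over each other"))
        # Long silence between turns.
        gap = t.start - prev_end
        if gap > max_silence:
            alerts.append((prev_end, f"silence of {gap:.0f}s"))
        prev_end = max(prev_end, t.end)
    return alerts

if __name__ == "__main__":
    call = [
        Turn("agent", 0.0, 4.0, 22),   # ~330 wpm: too fast
        Turn("caller", 3.5, 6.0, 6),   # overlaps the agent's turn
        Turn("agent", 16.0, 18.0, 5),  # follows a 10s silence
    ]
    for ts, msg in detect_signals(call):
        print(f"[{ts:5.1f}s] {msg}")
```

In a live system such rules would presumably run on streaming speech features rather than a finished transcript, but the sketch captures the three scenarios the author quotes: fast speech, overtalk, and long silence.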
AI technologies like Cogito are deployed for explicit interactive uses, while others are employed behind the scenes in proactive services acting on behalf of users, such as automatically filtering content based on inferred relevance or importance (Amershi et al., 2019). The Cogito software follows the human-AI interaction design guidelines, which help categorize user interaction with the software into four phases: Initially, During Interaction, When Wrong, and Over Time.
According to a Time report written by Alejandro De La Garza in 2019, the Cogito software had, in the previous year, been deployed in more than three dozen call centers across the United States, with clients such as MetLife and Humana in the healthcare and insurance sectors.
According to Kevin Roose's 2019 New York Times report, "The goal of AI software like Cogito is to ensure workers are more effective by providing live behavioural guidance to improve the quality of every interaction." Roose noted that several MetLife employees he spoke to said they had no problem with the pop-up notifications during their calls, although some said they had struggled to figure out how to get the empathy notification to stop appearing.
Cogito software has earned praise from its stakeholders; for example, according to the same 2019 New York Times report, MetLife representatives noted that customer satisfaction had increased by 13% since their call centers first began using the software.
Cogito software notifies customers during their phone conversations with call center agents that calls are being monitored and recorded. However, there is no additional disclosure explaining the added layer of analysis of their voices, tone, or conversation patterns. Some of the concerns with AI software like Cogito are linked to data leaks, as such systems may be subject to new security vulnerabilities such as model poisoning attacks (Jagielski et al., 2018). Other concerns are privacy, lack of transparency, bias, and safety.
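As a toy illustration of the model-poisoning risk cited above (Jagielski et al., 2018), the sketch below shows how a handful of adversarially labeled training points can visibly shift a learned regression model. The data and numbers are synthetic and purely illustrative, not drawn from Cogito or the cited paper.

```python
# Toy poisoning illustration: a few corrupted labels shift a regression fit.
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y is approximately 2x + 1 with small noise.
x_clean = rng.uniform(0, 10, 100)
y_clean = 2.0 * x_clean + 1.0 + rng.normal(0, 0.5, 100)

# An attacker injects 5 points with deliberately corrupted labels.
x_poison = np.full(5, 9.0)
y_poison = np.full(5, -40.0)

slope_clean, _ = np.polyfit(x_clean, y_clean, 1)
slope_poisoned, _ = np.polyfit(
    np.concatenate([x_clean, x_poison]),
    np.concatenate([y_clean, y_poison]),
    1,
)
print(f"slope on clean data:   {slope_clean:.2f}")    # ~2.0
print(f"slope after poisoning: {slope_poisoned:.2f}")  # pulled noticeably lower
```

The real attacks studied by Jagielski et al. are optimized rather than naive like this, but the mechanism is the same: training data that the system ingests becomes an attack surface.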
Ethics and Technology
According to the Markkula Center for Applied Ethics report by Brian Green (2016), "Ethics refers to standards of behavior that inform how people ought to act in the various situations in which they find themselves — as friends, children, parents, business people and so on." Ethics is not the same as feelings, religion, cultural norms, science, or law. Technology ethics is the application of ethical thinking to the practical concerns of technology. Technology is changing every day, and since we choose the technologies we make and live by, it becomes necessary to investigate the ethical concerns around the technologies we use.
Technologies have different capabilities. AI technologies can be understood as conducive to human flourishing because they lead to a better understanding of, and deeper insights into, various phenomena. For instance, reducing commuting times and increasing the effectiveness of email spam filters are two everyday examples of where AI can make the lives of busy professionals easier (Faggella, 2020). Another example is the Cogito software's voice-analysis algorithms, which improve call agents' conversations by placing notification icons on the agents' screens to alert them about their conversation partners' moods and conversation patterns.
Artificial Intelligence (AI) technologies are becoming part of our daily activities as humans. The technical capabilities of AI cut across different sectors, such as technology, manufacturing, and legal systems, and they help humans explore and make better sense of the world. The European Union's AI High-Level Expert Group (2019) made this point when it stated that "AI in itself is not an end, but instead a promising means to increase human flourishing, thereby enhancing individual and societal well-being, common good and bringing progress and innovation." Intelligent software can prove very useful in positively propelling our society forward when implemented correctly (Clever et al., 2018). Yet it can be quite invasive when appropriate policies and control mechanisms do not regulate it.
In developing applications, there is always the possibility of unintentionally creating software that discriminates based on some characteristic or operates unfairly toward specific customers or communities (Clever et al., 2018). In the Cogito case, customers are informed that their call is being monitored and recorded, but no disclosure explains the layer of analysis of their voices, tone, or conversation patterns. Because of this, one cannot know the extent to which customers' privacy rights are infringed. Therefore, one of the ethical issues that should be addressed as part of deploying the Cogito software to stakeholders is the lack of privacy. Privacy is the right of data subjects to control when, how, and to what extent their personal information is accessed and used (Stoycheff et al., 2018). According to the General Data Protection Regulation (GDPR, 2018), "Data privacy means empowering users to make their own decisions on who can process their data and for what purpose."
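A minimal sketch of what consent gating in this spirit could look like in code is shown below. The consent registry, the purpose list, and the analyzer stub are all hypothetical names invented for illustration; nothing here comes from Cogito or the GDPR text itself. The point is simply that the analysis layer runs only for purposes a customer has explicitly opted into.

```python
# Hypothetical consent-gating sketch: voice analysis runs only when the
# customer has opted in to that specific purpose.
from enum import Enum

class Purpose(Enum):
    CALL_RECORDING = "call recording"
    VOICE_ANALYSIS = "voice and emotion analysis"
    AGENT_COACHING = "agent performance analysis"

class ConsentRegistry:
    def __init__(self):
        self._grants: dict[str, set[Purpose]] = {}

    def grant(self, customer_id: str, purpose: Purpose) -> None:
        self._grants.setdefault(customer_id, set()).add(purpose)

    def revoke(self, customer_id: str, purpose: Purpose) -> None:
        self._grants.get(customer_id, set()).discard(purpose)

    def allows(self, customer_id: str, purpose: Purpose) -> bool:
        return purpose in self._grants.get(customer_id, set())

def run_voice_analysis(audio: bytes) -> dict:
    # Stub standing in for the real (undisclosed) analysis layer.
    return {"mood": "neutral"}

def analyze_call(audio: bytes, customer_id: str, consents: ConsentRegistry):
    if not consents.allows(customer_id, Purpose.VOICE_ANALYSIS):
        # Without explicit opt-in, skip the analysis layer entirely.
        return None
    return run_voice_analysis(audio)

# Usage: a customer who consented only to recording is never analyzed.
registry = ConsentRegistry()
registry.grant("cust-42", Purpose.CALL_RECORDING)
assert analyze_call(b"...", "cust-42", registry) is None
```

Separating purposes in this way mirrors the GDPR idea quoted above: consent to recording does not imply consent to emotional analysis.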
Lack of transparency in the use of customers' data is another ethical concern found in the Cogito software. Customers feel threatened that their data could be used for purposes other than those intended. AI systems are, by definition, not transparent, or at least not transparent in the way that other Information and Communication Technology (ICT) systems could be, because the commercial confidentiality of algorithms and models limits transparency (USACM, 2017).
The lack of transparency about how customers' data are used in the Cogito system can lead to further ethical concerns such as bias, as AI poses a risk to human rights and machine learning has the ability to infringe the right to non-discrimination and equality (Access Now Policy Team, 2018). The ethical issue with bias arises from the use of the algorithms themselves, since they inevitably make biased decisions (Mittelstadt et al., 2016). In this situation, biased decisions might be made if the algorithms' design and functionality reflect the values of the software's designers.
Even a system that ensures safety can become the biggest threat if it is not used safely and ethically. Safety is also an ethical issue in the Cogito system, mainly because the system interacts directly with the physical world. For instance, customers having phone conversations with call center agents are, unknown to them, also interacting with the voice-analysis algorithms in the Cogito software. From their conversation, the voice-analysis algorithm places notification icons on the agents' screens to alert them about the customers' moods and conversation patterns. Monitoring such massive amounts of data can create privacy risks that threaten users' safety and security.
The ethical use of data involves knowing how to use data in ways that protect privacy and maintain data confidentiality. Unfortunately, AI's ability to detect patterns in data is often automated, processed by devices using machine learning algorithms that are insensitive to these issues.
Supporting Theories and Framework
As humans, we must weigh moral issues ourselves, keeping careful track of both the facts and the ethical considerations.
An ethical framework provides valuable guidance on recognizing ethical issues, getting the facts, evaluating alternative actions, making and testing a decision, and acting and reflecting on the outcome.
I apply the Markkula Framework to analyze the Cogito software, grouping its considerations into three ethical decision-making moral theories: Consequentialism, Deontology, and Virtue Ethics. The framework aims to help identify critical ethical considerations in the Cogito software and the ethical courses of action.
The consequentialist ethical theory builds on the insight that determining good and bad requires considering the consequences of acts (Hassan et al., 2018). This type of reasoning is strongly linked with the utilitarian tradition and thinkers like Bentham (1983). Bentham's theory of utilitarianism focused on which actions were most likely to make people happy; if happiness is the experience of pleasure without pain, the most ethical actions cause the most happiness and the least possible pain. This theory encompasses the Utilitarian, Common Good, Rights, and Justice approaches.
According to the Markkula Center for Applied Ethics report by Brian Green (2016), utilitarianism is the form of moral reasoning that emphasizes the consequences of actions. The utilitarian perspective tries to maximize happiness and minimize suffering. The utilitarian approach, in this case, focuses on Cogito's immediate stakeholders (current, previous, and future customers), how they are affected, and which options will produce the most good with the least harm. While the utilitarian idea of balancing pros against cons is familiar to most as a system for judgment, there are fundamental problems concerning how the utility of AI technologies can be measured and realized (Stahl, 2004).
The customers and the stakeholders utilizing the Cogito software are the parties directly involved in the system's innovation, and all are affected by this project. Customer service centers utilize the software to get data from their customers through phone conversations with call center agents, and the Cogito organization earns income through the stakeholders who buy the software. Scientists and researchers are third-party stakeholders, as they gain and contribute insights from the project.
However, the greatest harm in this project is done to the customers, whose privacy rights are taken for granted. They are only told that their calls will be monitored and recorded, unaware that their data is being used to gauge their conversations and further used for performance analysis of call center agents. In other words, they are the ones whose data is collected and at risk of being misused. Privacy harms resulting from unauthorized access can include a breach of confidentiality and trust, or financial harm to individuals from identity theft or fraud. Unfortunately, the Cogito software has only increased the potential for privacy issues. In weighing the benefits and harms of this case, the decision involves a choice between improving call center agents' quality of interaction by using an undisclosed voice-analysis algorithm to collect customers' data during phone conversations, and threatening the customers' privacy, which affects their trust in the organization.
It is legal to collect data with transparency and consent. Even though undisclosed collection might lead to better efficiency, that efficiency is undermined if customers do not feel secure, are unaware of how their data is used, and do not trust those who deploy such technology. The option that will produce the most good with the least harm is for the Cogito organization to involve stakeholders by making them an active part of the project; such involvement can come from activities like workshops and demonstrations, which will help the organization be more transparent and earn its customers' trust.
The common good approach promotes the best outcome for all involved parties. Ideland and Malmberg (2015) argue that engaging in the common good makes you a desirable person.
Artificial intelligence helps us learn about systems that are too complex for humans to understand well. For example, in the Cogito case, the voice-analysis algorithms in the software use a cartoon cup as a helpful nudge for call center agents to sit up straight and speak like an engaged helper. The software also analyzes the conversation to let agents know when they are speaking too quickly or when the caller has been silent for a long time.
Cogito software is supposed to benefit everyone using it by making call center agents more effective at improving their phone conversations with customers while ensuring the customers' privacy and safety during the conversation. However, this is not the case: the common good is undermined by the threat to customers' privacy, resulting in a lack of trust in the organizations utilizing this type of technology.
Cogito software could better serve the common good by gaining the community's trust through protecting their privacy. The option that best serves not just a few customers but the whole community is to include the public in the decision-making, letting them decide how their data will be used and ensuring transparency and safety. In this way, everyone will be carried along and enjoy the intended benefits of the software.
The rights approach stipulates that the best ethical action is the one that protects the ethical rights of those affected by the action. It stresses the belief that all humans have a right to dignity and the moral right to freedom of choice. Here, I discuss how the Cogito software might impact individual rights and which option best respects the rights of all stakeholders. Developers of AI technologies have the right to collect data intended to improve humans' well-being and efficiency, as in the Cogito case, where data is collected to improve phone conversations between call center agents and their customers. The customers, in turn, have the right to consent to the collection and use of their private data. According to the General Data Protection Regulation (GDPR, 2018), individuals have the right not to be subject to a decision based solely on automated processing. Industry and researchers also have the right to research and innovate. The option that best respects the rights of all who have a stake is to ask for customers' consent before utilizing their data for conversational and emotional analysis.
Greek philosophers such as Aristotle contributed to the fairness approach, which holds that equals should be treated equally. The fairness and justice approach centers on giving people what is properly due to them, focusing on options that treat humans equally or proportionately. The Cogito software seems fair in that it is used across several industries, such as healthcare, insurance, financial services, travel, and hospitality; it does not single out one sector or anyone based on irrelevant criteria. The software is used to make call center agents more efficient during phone conversations with customers, and since data is needed to provide the proposed services, it is fair for software utilizing AI technology to collect data. However, not being entirely transparent about how customers' private data is used and processed during their phone conversations with call center agents is unfair. During phone conversations, call center agents are able to see customers' emotions through the data patterns collected by the voice-analysis algorithms in the Cogito software, but the customers are unaware of what is happening behind the scenes.
The option that treats customers equally from this perspective is to decide based on what will benefit the majority: ensuring transparency in the use of customers' data and allowing customers to decide whether their data should be recorded and used for further analysis.
After analyzing the case from the consequentialist point of view, the Cogito software should not continue unless customers are allowed to exercise their rights, given some measure of control over their personal data, and assured of their safety while interacting with the software.
Deontology focuses on the duties and obligations we have in each situation, considering what ethical obligations we have and what we should never do. Ethical conduct here consists in doing one's duty and doing the right thing by performing the correct action.
In deontology, the agent's duty is deduced from reason (Hassan et al., 2018). In this case, only when the customers are fully aware of the extent to which their recorded calls are used to gauge their conversations with call center agents, and for what purpose, does the organization utilizing the software have the right to use their data in such a manner.
Since AI can make incredibly complex moral decisions, humans must be able to identify the logic used in a given decision transparently in order to determine the morality of the action in question accurately (Lin et al., 2012). Stakeholders who utilize the Cogito software have a duty to protect customers' privacy, and the developers of software with AI capabilities like Cogito have a duty to be transparent to stakeholders about the logic used in the software's development in order to gain the trust of their customers.
However, this duty already seems questionable when customers are only told that their phone conversations are monitored and recorded, with no additional disclosure explaining the layer of analysis of their voices, tone, or conversation patterns. This can lead to a breach of trust and an infringement of customers' privacy rights, as this data may be used to generate further types of personal data, such as personal emotional data, exacerbating the situation (Tao et al., 2005; Flick, 2016). The developers of the Cogito software should update the software already implemented in different organizations to address these privacy issues rather than focus on acquiring more stakeholders. The need to preserve privacy and protect personal data is a critical consideration in implementing technologies with AI capabilities while following data protection law.
From the duty perspective, then, stakeholders should not utilize the Cogito software, because the developers and owners of the software have not genuinely identified their duty and acted accordingly.
Virtue ethics is strongly associated with the classical Greek philosophers (Aristotle, 1934). It is a long-standing ethical tradition that argues that ethical actions should be consistent with ideal human virtues. Aristotle, for example, argued that ethics should be concerned with the whole of a person's life, not with the individual discrete actions a person may perform in any given situation. Hence, virtue ethics seeks to answer what is good by focusing on the character of the agent in question.
In this case, the most prominent moral concern is the stakeholders' lack of transparency in collecting private data to improve the quality of call center agents' phone conversations with customers without the customers' consent, which does not speak well of them. The moral values that potentially conflict with each other are efficiency and effectiveness versus transparency, honesty, and openness.
The owners of the Cogito software are interested in using the software for financial gain by providing services that utilize AI technology to improve the quality of interactions of their stakeholders' call center agents. However, they overlook the importance of being open and transparent (Mingay, 2008) about how they use customers' data to give call center agents a heads-up about customers' emotions and how they should act and respond on calls. The lack of transparency in the Cogito software leads to a lack of trust, with customers left to their own speculations about how call center agents are using the technology against them. The stakeholders utilizing the Cogito software, and its owners, should prove that the software is not using voice-analysis algorithms against customers during their phone interactions with call center agents. The only way they might prove such a negative is to be credibly committed to openness and transparency about the software's logic, to understanding how the algorithms work, and to disclosing the layer of analysis applied to customers' conversations. Hence, one can say that the moral wisdom behind the development of the Cogito software is inadequate and insufficient.
According to Kevin Roose, the New York Times reporter, in 2019, the goal of software programs like Cogito is to make their stakeholders' call center agents more effective and improve the quality of their interactions. However, the rights of the customers who engage in these phone conversations are being ignored and violated.
Looking at this case from a different angle: when customers know their voices and tones are being monitored for further analysis, they might put on an act and change their tone during conversations with call center agents, giving organizations a false analysis of customer satisfaction.
After analyzing the virtues, stakeholders should not utilize the Cogito software because it is not transparent and lacks the trust of customers.
Benefits of the Cogito Software Ethics Case Study
In analyzing the Cogito software case, one cannot ignore its benefits, as it takes hard work to build this type of system in the first place.
Software like Cogito that utilizes AI offers several technical capabilities that can have immediate ethical benefits. The International Risk Governance Center (2018) names AI's analytical prowess, that is, the ability to analyze sources and quantities of data that humans simply cannot process. In this case, the main benefit of the Cogito software is its AI voice-analysis algorithms, which use data patterns to improve phone conversations at contact centers by enabling call center agents to enhance the quality and efficiency of their phone interactions with customers. One can argue, though, that the customers involved are not even aware of how their data is used to achieve such effectiveness.
Also, the data collected by the Cogito software during call conversations could provide new opportunities for companies to enhance their performance (Herschel and Miori, 2017); used alongside machine learning algorithms and data mining, the data could help them understand and build relationships with existing customers.
Drawbacks of the Cogito Software Ethics Case Study
One of the main drawbacks of the Cogito software is the non-disclosure to customers of the capabilities of the voice-analysis algorithm, which is responsible for analyzing customers' data, tone, and emotions during their phone conversations with call center agents. Although the customers interacting with the call center agents know that their calls are recorded and monitored, they are unaware of the layer of analysis applied to their conversations. The implication of this is an infringement of the customers' privacy and a loss of trust in the stakeholders utilizing the software.
Another crucial drawback is that the Cogito software heavily influences call center agents' behavioural patterns and could make them lose track of their identity in the long run. The software makes call center agents correct themselves eight hours a day through its ability to serve as an adjunct manager, always watching agents, and at the end of every call it draws up a statistics dashboard that their supervisors can review.
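The sketch below illustrates the kind of roll-up such a dashboard might perform, aggregating per-call alert counts into a per-agent summary a supervisor could review. The field names and metrics are illustrative assumptions, not Cogito's actual schema.

```python
# Hypothetical per-agent dashboard roll-up from per-call alert records.
from collections import defaultdict
from statistics import mean

calls = [  # example records emitted at the end of each call
    {"agent": "a01", "empathy_cues": 3, "speaking_fast": 1, "duration_min": 7.5},
    {"agent": "a01", "empathy_cues": 0, "speaking_fast": 4, "duration_min": 12.0},
    {"agent": "a02", "empathy_cues": 1, "speaking_fast": 0, "duration_min": 5.2},
]

by_agent = defaultdict(list)
for call in calls:
    by_agent[call["agent"]].append(call)

for agent, records in sorted(by_agent.items()):
    print(
        f"agent {agent}: {len(records)} calls, "
        f"avg empathy cues {mean(r['empathy_cues'] for r in records):.1f}, "
        f"avg fast-speech alerts {mean(r['speaking_fast'] for r in records):.1f}"
    )
```

Even this tiny example shows why agents can feel permanently watched: every call leaves a quantified trace that feeds a supervisor-facing summary.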
Recommendation and Future Direction
Since AI technologies can make incredibly complex moral decisions, humans must be able to identify the logic used in a given decision in a transparent way to determine the morality of the action in question accurately. Hence, to work ethically with the Cogito software, customers whose data is being used should know how their data is used or sold (Herschel and Miori, 2017) and should be able to control the flow of their private information.
Through the process of transitioning from an uncultivated to a morally habituated state, moral virtues like care, courage, humility, magnanimity, and others can be fostered and acquired (Vallor, 2016; Harris, 2008; Kohen et al., 2019; Gambelin, 2020; Sison et al., 2017; Neubert and Montañez, 2020; Neubert, 2017). Developers of AI technologies like the Cogito software should cultivate a sense of care for others' needs and the will to address them. They should have a solid connection to empathy, which is a precondition for taking the perspective of others and understanding their feelings and experiences. Caring for customers can help motivate developers to avoid building AI solutions that cause direct or indirect harm, and to ensure safety, security, and privacy-preserving techniques. Furthermore, care can encourage AI practitioners to design AI applications that foster sustainability, solidarity, social cohesion, the common good, and peace.
As an essential condition for a harmonious innovation process, the scientific community should ensure more transparent working methods, strengthen self-regulation, and improve relations between science and society (Stemerding et al., 2015). Software like Cogito that utilizes voice-analysis algorithms also produces increased institutional awareness; hence, future research should pay attention to ensuring that individual rights and privacy are protected.
AI itself is not an end, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, and bringing progress and innovation. However, AI technologies can be too invasive when appropriate policies and control mechanisms do not regulate them. The Cogito software uses voice-analysis algorithms to improve call center agents' efficiency during phone conversations with customers. Cogito faces ethical issues, the main one being the lack of privacy of the customers whose data is collected to analyze their emotions during phone conversations with call center agents.
I have analyzed the Cogito software from an ethical perspective. To work ethically with stakeholders, customers should have a transparent view of how their data is used and should control the flow of their private data. Future researchers should look critically into ensuring that individual rights and privacy are protected in software like Cogito that utilizes AI technologies to provide services to the public.
References
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY. https://doi.org/10.1145/3290605.3300233
Bentham, J. (1983). Deontology; together with A table of the springs of action; and the Article on utilitarianism.
Clever, S., Crago, T., Polka, A., Al-Jaroodi, J., & Mohamed, N. (2018). Ethical analysis of smart city applications. Urban Science, 2(4), 96.
Faggella, D. (2020). Everyday examples of artificial intelligence and machine learning. Emerj, Boston, MA. https://emerj.com/ai-sector-overviews/everyday-examples-of-ai/. Accessed 2021-08-16.
Flick, C. (2016). Informed consent and the Facebook emotional manipulation study. Research Ethics, 12. https://doi.org/10.1177/1747016115599568
Gambelin, O. (2020). Brave: What it means to be an AI ethicist. AI and Ethics, 1–5.
GDPR (2018). Data privacy. https://gdpr.eu/data-privacy/
Harris, C. E. (2008). The good engineer: Giving virtue its due in engineering ethics. Science and Engineering Ethics, 14(2), 153–164.
Hassan, N. R., Mingers, J., & Stahl, B. (2018). Philosophy and information systems: Where are we and where should we go? European Journal of Information Systems, 27(3), 263–277. https://doi.org/10.1080/0960085X.2018.1470776
Herschel, R., & Miori, V. M. (2017). Ethics & big data. Technology in Society, 49, 31–36.
High-Level Expert Group on Artificial Intelligence (2019). Ethics guidelines for trustworthy AI. European Commission, Brussels. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. Accessed 2021-08-16.
Ideland, M., & Malmberg, C. (2015). Governing 'eco-certified children' through pastoral power: Critical perspectives on education for sustainable development. Environmental Education Research, 21(2), 173–182.
International Risk Governance Center (IRGC) (2018) The governance of decision-making algorithms. EPFL International Risk Governance Center, Lausanne. https://infoscience.epfl.ch/record/261264/files/IRGC%20%282018%29%20The%20Governance%20of%20Decision-Making%20Algorithms-Workshop%20report.pdf.
Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., & Li, B. (2018). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. 2018 IEEE Symposium on Security and Privacy, 19–35. https://doi.org/10.1109/SP.2018.00057
Roose, K. (2019). New York Times report. https://www.nytimes.com/2019/06/23/technology/artificial-intelligence-ai-workplace.html
Kohen, A., Langdon, M., & Riches, B. R. (2019). The making of a hero: Cultivating empathy, altruism, and heroic imagination. Journal of Humanistic Psychology, 59(4), 617–633.
Lin, P., Abney, K., & Bekey, G. A. (2012). Robot ethics: The ethical and social implications of robotics. MIT Press.
Lin, T. C., Wu, S., Hsu, J. S. C., & Chou, Y. C. (2012). The integration of value-based adoption and expectation-confirmation models: An example of IPTV continuance intention. Decision Support Systems, 54(1), 63–75. https://doi.org/10.1016/J.DSS.2012.04.004
Markkula Center for Applied Ethics, Brian Green Report (2016), https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/social-robots-ai-and-ethics/
Mingay, S. (2008). IT’s Role in a low carbon economy. In Keynote Address, Greening the Enterprise 2.0 Conf.
Mittelstadt, B. D., Allo, P., Taddeo, M., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21.
Moore, G. (1993). Principia ethica. Cambridge University Press.
Neubert, M. J. (2017). Teaching and training virtues: Behavioral measurement and pedagogical approaches. In A. J. G. Sison, G. R. Beabout, & I. Ferrero (Eds.), Handbook of Virtue Ethics in Business and Management (pp. 647–655). Dordrecht: Springer Netherlands.
Neubert, M. J., & Montañez, G. D. (2020). Virtue as a framework for the design and use of artificial intelligence. Business Horizons, 63(2), 195–204.
Sison, A. J. G., Beabout, G. R., & Ferrero, I. (Eds.) (2017). Handbook of Virtue Ethics in Business and Management. Dordrecht: Springer Netherlands.
Soanes, C., & Stevenson, A. (2004). Concise Oxford English Dictionary. http://electronics.seiko.co.uk/media/downloadable/Oxford/Guide to COD 11.pdf
Stahl, B. C. (2004). Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. Minds and Machines, 14(1), 67–83. https://doi.org/10.1023/B:MIND.0000005136.61217.93
Stemerding, D., Rerimassie, V., van Est, R., Zhao, Y., Chaturvedi, S., Ladikas, M., & Brom, F. W. (2015). A comparative framework for studying global ethics in science and technology. In Science and technology, governance and ethics (pp. 99–110). Springer, Cham.
Stoycheff, E., Liu, J., Xu, K., & Wibowo, K. (2018). Privacy and the Panopticon: Online mass surveillance's deterrence and chilling effects. New Media & Society, 21(3), 602–619. https://doi.org/10.1177/1461444818801317
Tao, J., Tan, T., & Picard, R. W. (Eds.) (2005). Affective computing and intelligent interaction. Springer. Retrieved August 16, 2021, from https://link.springer.com/content/pdf/10.1007/11573548.pdf
De La Garza, A. (2019). Time report. https://time.com/5610094/cogito-ai-artificial-intelligence/
USACM (2017). Statement on algorithmic transparency and accountability. ACM US Public Policy Council, Washington DC. https://www.acm.org/binaries/content/assets/public-policz/2017_usacm_statement_algorithms.pdf. Accessed 2021-08-16.
Access Now Policy Team (2018). The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems. Access Now, Toronto. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf. Accessed 2021-08-16.
Vallor, Shannon (2016) Technology and the Virtues. A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press.