
Coeckelbergh, Mark (2020). AI Ethics. The MIT Press.

Highlights

Rui Alexandre Grácio [2024]

Highlights marked in the book itself, chapter by chapter (on the flagged pages):

Chap. 1 MIRROR, MIRROR, ON THE WALL
"AI is already happening today and it is pervasive, often invisibly embedded in our day-to-day tools.” p. 4
"AI ethics is about technological change and its impact on individual lives, but also about transformations in society and in the economy.” p. 9

Chap. 2 SUPERINTELLIGENCE, MONSTERS, AND THE AI APOCALYPSE
"In Mary Shelley’s Frankenstein—which has the telling subtitle The Modern Prometheus—the creation of intelligent life from lifeless matter becomes a modern scientific project.” p. 20
"In contrast to what many people think, religion and technology have always been connected in the history of Western culture.” p. 23
“While typically transhumanists have nothing to do with such religious cults and practices, clearly the idea of a technological singularity bears some resemblance to apocalyptic, eschatological, and doomsday narratives.” p. 27
Chap. 3 ALL ABOUT THE HUMAN
"There is a history of criticism and skepticism about the possibility of human-like AI.” p. 33
"We are meaning-making, conscious, embodied, and living beings whose nature, mind, and knowledge cannot be explained away by comparisons to machines.” p. 37
"Backed up by posthumanism, AI can free itself of the burden to imitate or rebuild the human and can explore different, nonhuman kinds of being, intelligence, creativity, and so on.” p. 44
Chap. 4 JUST MACHINES?
"Is an AI “just a machine”? Should we treat it differently than, say, a toaster or a washing machine?” p. 49
"Some argue that “mistreating” an AI is wrong not because any harm is done to the AI, but because our moral character is damaged if we do so.” p. 57
Chap. 5 THE TECHNOLOGY
"Who will have access to the technology and be able to reap its benefits? Who will be able to empower themselves by using AI? Who will be excluded from these rewards?” p. 77
"We should not forget the AI that already powers social media platforms, search engines, and other media and technologies that have become part of our everyday lives. AI is all over the place.” p. 79
Chap. 6 DON’T FORGET THE DATA (SCIENCE)
"We all produce data by means of our digital activities, for example when we use social media or when we buy products online.” p. 88
"Statistics used to be seen as a not very sexy field. Today, as part of data science and in the form of AI working with big data, it is hot. It is the new magic.” p. 95
Chap. 7 PRIVACY AND THE OTHER USUAL SUSPECTS
"AI may lead to new forms of manipulation, surveillance, and totalitarianism, not necessarily in the form of authoritarian politics but in a more hidden and highly effective way.” p. 101
"In a networked world, every electronic device or software can be hacked, invaded, and manipulated by people with malicious intentions.” p. 107
Chap. 8 A-RESPONSIBLE MACHINES AND UNEXPLAINABLE DECISIONS
"If AI is given more agency and takes over what humans used to do, how do we then attribute moral responsibility?” p. 110
Chap. 9 BIAS AND THE MEANING OF LIFE
"While problems of bias and discrimination have always been present in society, the worry is that AI may perpetuate these problems and enlarge their impact.” p. 126
"Should justice be blind and impartial, or does justice mean creating an advantage for those who are already disadvantaged?” p. 133
"Automation powered by AI is predicted to radically transform our economies and societies, raising questions about not only the future and meaning of work but also the future and meaning of human life.”
Chap. 10 POLICY PROPOSALS
"The widely shared intuition that there is an urgency and importance in dealing with the ethical and societal challenges raised by AI has led to an avalanche of initiatives and policy documents.” p. 149
"In spite of cultural differences, it turns out that AI ethics policies are remarkably similar.” p. 158
"Ideas such as ethics by design or value-sensitive design can help to create AI in a way that leads to more accountability, responsibility, and transparency.” p. 163
Chap. 11 CHALLENGES FOR POLICYMAKERS
"Responsible innovation is not only about embedding ethics in design, but also requires taking into account the opinions and interests of various stakeholders.” p. 169
"AI ethics is not necessarily about banning things; we also need a positive ethics: to develop a vision of the good life and the good society.” p. 175
Chap. 12 IT’S THE CLIMATE, STUPID! ON PRIORITIES, THE ANTHROPOCENE, AND ELON MUSK’S CAR IN SPACE
"A human-centric approach is at least nonobvious, if not problematic, in light of philosophical discussions about the environment and other living beings.” p. 185
"While people in one part of the world struggle to gain access to fresh water, people in another part of the world worry about their privacy on the internet.” p. 188
"“Why worry about AI if the urgent problem is climate change and the future of the planet is at stake?”” p. 190
GLOSSARY

Other highlights

"But the breakthroughs of artificial intelligence are not limited to games or the realm of science fiction. AI is already happening today and it is pervasive, often invisibly embedded in our day-to-day tools and as part of complex technological systems (Boddington 2017). Given the exponential growth of computer power, the availability of (big) data due to social media and the massive use of billions of smartphones, and fast mobile networks, AI, especially machine learning, has made significant progress. This has enabled algorithms to take over many of our activities, including planning, speech, face recognition, and decision making. AI has applications in many domains, including transport, marketing, health care, finance and insurance, security and the military, science, education, office work and personal assistance (e.g., Google Duplex), entertainment, the arts (e.g., music retrieval and composition), agriculture, and of course manufacturing.” p. 3

"AI ethics is about technological change and its impact on individual lives, but also about transformations in society and in the economy.” p. 7

"there is a long history of thinking about humans and machines or artificial creatures, in both Western and non-Western cultures.” p. 17

"More generally, when AI and related science and technology use mathematics to abstract more pure forms from the messy material world, this can be interpreted as a Platonic program realized by technological means. The AI algorithm turns out to be a Platonic machine that extracts form (a model) from the (data) world of appearances.
Transcendence can also mean surpassing the human condition.” p. 24

"More generally, our evaluation of AI seems to depend on what we think AI is and can become, and on how we think about the differences between humans and machines.” p. 31

"Dreyfus argued that the brain is not a computer and that the mind does not operate by means of symbolic manipulation. We have an unconscious background of commonsense knowledge based on experience and what Heidegger would call our “being-in-the-world,” and this knowledge is tacit and cannot be formalized.” p. 32

"computer programs don’t have intentionality, and genuine understanding cannot be generated by formal computation. As Boden (2016) puts it, the idea is that meaning comes from humans.” p. 36

"In the background of the discussion about AI are thus deep disagreements about the nature of the human, human intelligence, mind, understanding, consciousness, creativity, meaning, human knowledge, science, and so on. If it is a “battle” at all, it is one that is as much about the human as it is about AI.” p. 38

"We can then try to cross the modern divide between humans and nonhumans not via modern science or transhumanism, which in their way also see humans and machines not as fundamentally opposed, but via posthumanist thinking from the (post)humanities. This brings us to the third tension: between humanism and posthumanism. Against humanists, who are accused of having done violence toward nonhumans such as animals in the name of the supreme value of the human, posthumanists question the centrality of the human in modern ontologies and ethics. According to them, nonhumans matter too, and we should not be afraid of crossing borders between humans and nonhumans. This is an interesting direction to explore, since it takes us beyond the competition narrative about humans and machines.

Posthumanists such as Donna Haraway offer a vision in which living together with machines, and even merging with machines, is seen no longer as a threat or a nightmare, as in humanism, or as a transhumanist dream come true, but as a way in which ontological and political borders between humans and nonhumans can and should be crossed. “ p. 42

"Moreover, against the modern subject–object divide, postphenomenologists such as Peter-Paul Verbeek talk about the mutual constitution of humans and technology, subject and object. Instead of seeing technology as a threat, they emphasize that humans are technological (that is, we have always used technology; it is part of our existence rather than something external that threatens that existence) and that technology naturally mediates our engagement with the world. For AI, this view seems to imply that the humanist battle to defend the human against technology is misdirected. Instead, according to this approach, the human has always been technological and therefore we should rather ask how AI mediates humans’ relation to the world and try to actively shape these mediations while we still can: we can and should discuss ethics at the stage of AI development rather than complain afterward about the problems it causes.” p.45

"If an AI were to be more intelligent than is possible today, we can suppose that it could develop moral reasoning and that it could learn how humans make decisions about ethical problems. But would this suffice for full moral agency, that is, for human-like moral agency?” p. 50

"an algorithm or a combination of algorithms. An algorithm is a set and sequence of instructions, like a recipe, that tells the computer, smartphone, machine, robot, or whatever it is embedded in, what to do. It leads to a particular output based on the information available (input). It is applied to solve a problem. To understand AI ethics, we need to understand how AI algorithms work and what they do.” p. 70
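Coeckelbergh's recipe analogy can be made concrete with a minimal sketch: a fixed set and sequence of instructions that turns an input into an output. The function below is my own illustration, not an example from the book.

```python
def mean(values):
    """A recipe-style algorithm: a fixed sequence of steps mapping
    input (a list of numbers) to output (their average)."""
    total = 0
    for v in values:            # step 1: accumulate every input value
        total += v
    return total / len(values)  # step 2: divide the sum by the count
```

Every step is explicitly specified in advance by the programmer, which is exactly what distinguishes this classic kind of algorithm from the machine learning discussed in the next chapter.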

"any “AI ethics” thus needs to be connected to more general ethics of digital information and communication technologies, computer ethics, and so on.
Another sense in which there is no such thing as AI in itself is that the technology is always also social and human: AI is not only about technology but also about what humans do with it, how they use it, how they perceive and experience it, and how they embed it in wider sociotechnical environments.” p. 80

"Machine learning refers to software that can “learn.” The term is controversial: some say that what it does is not true learning because it does not have real cognition; only humans can learn. In any case, modern machine learning bears “little or no similarity to what might plausibly be going on in human heads” (Boden 2016, 46). Machine learning is based on statistics; it is a statistical process. It can be used for various tasks, but the underlying task is often pattern recognition. Algorithms can identify patterns or rules in data and use those patterns or rules to explain the data and make predictions for future data.” p. 83

"the machine learning algorithm finds rules or patterns that the programmer has not specified. Only the objective or task is given. The software can adapt its behavior to better match the requirements of the task.” p. 84
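The contrast the quote draws, that only the objective is given while the rule itself is found in the data, can be sketched with the simplest statistical learner: ordinary least-squares line fitting. The function names and data are my own illustration, not from the book.

```python
def fit_line(xs, ys):
    """Given only data and an objective (minimize squared error),
    find the pattern: slope a and intercept b of the best-fit line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def predict(model, x):
    """Use the learned pattern to make a prediction for future data."""
    a, b = model
    return a * x + b

# The rule y = 2x + 1 is never written into the program;
# it is recovered from the examples alone.
model = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

This mirrors the book's description of machine learning as a statistical process: the programmer specifies the task (fit a line), and the pattern used for explanation and prediction comes from the data.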

"Humans thus still play an important role at all stages and with regard to all these aspects, including framing the problem, data capture, preparation of the data (the data set the algorithm trains on and the data set it will be applied to), creating or selecting the learning algorithm, interpreting the results, and deciding what action to take (Kelleher and Tierney 2018).” p. 89

"An ethical use of AI requires that data are collected, processed, and shared in a way that respects the privacy of individuals and their right to know what happens to their data, to access their data, to object to the collection or processing of their data, and to know that their data are being collected and processed and (if applicable) that they are then subject to a decision made by an AI.” p. 98

"Some also worry that AI, by taking over cognitive tasks from humans, infantilizes its users by “rendering them less capable of thinking for themselves or deciding for themselves what to do” (Shanahan 2015, 170).“ p. 100

"The ethical problems discussed here can thus be seen as human vulnerabilities: technological vulnerabilities ultimately transform our existence as humans. To the extent that we become dependent on AI, AI is more than a tool we use; it becomes part of how we are, and how we are at risk, in the world.
Increased agency of AI, especially when it replaces human agency, also raises another ever more urgent ethical problem: responsibility.” p. 108

"the action must have its origin in the agent. This view also has a normative side: if you have agency and if you can decide, you should take responsibility for your actions. What we want to avoid, morally speaking, is someone who has agency and power but no responsibility. Aristotle also added another condition for moral responsibility: you are responsible if you know what you’re doing.” p. 111

"This is a problem for responsibility, since the humans who create or use the AI cannot explain a particular decision and hence fail to know what the AI is doing and cannot answer for its actions.” p. 117

"But can a machine “reason,” and in what sense can a technological system “use” or “represent” values at all? What kind of knowledge does it have? Does it have knowledge at all? Does it have understanding at all? And, as Boddington (2017) asks, can humans necessarily fully articulate their most fundamental values?” p. 122

"Whether or not AI can directly provide those reasons and explanations, humans should be able to answer the question: “Why?” The challenge for AI researchers is to ensure that if an AI is used for decision making at all, the technology is built in such a way that humans will be able as much as possible to answer that question.” p. 123

"Another problem that is both ethical and societal, and also specific to data science–based AI as opposed to other automation technologies, is the issue of bias. When an AI makes—or more precisely recommends—decisions, bias may arise: the decisions may be unjust or unfair to particular individuals or groups. Although bias may also arise with classic AI—say, an expert system using a decision tree or database that contains bias—the issue of bias is often connected to machine learning applications. And while problems of bias and discrimination have always been present in society, the worry is that AI may perpetuate these problems and enlarge their impact.” p. 125

"That being said, so far utopian ideas about leisure societies and other postindustrial paradises have not been realized. We have already had several waves of automation from the nineteenth century until now, but to what extent have the machines liberated and emancipated us?” p. 140

"However, the idea of developing “an AI ethics of the good life” and an AI ethics for the real world in general face a number of problems. The first is speed. (…) Second, given the diversity and plurality of views on this within societies and cultural differences between societies, questions about the good and meaningful life with technology may well be answered differently in different places and contexts, and in practice they will be subject to all kinds of political processes that may or may not end in consensus.” p. 143

"The idea of explainable AI or transparent AI is then that the actions and decisions made by AIs should be easily understood. As we’ve seen, this idea is difficult to implement in the case of machine learning that uses neural networks (Goebel et al. 2018).” p. 161

"our everyday ethics may not be a matter of fully articulate reasoning at all. Sometimes we respond to ethical problems without being able to fully justify our response (Boddington 2017). To borrow a term from Wittgenstein: our ethics is not only embodied but also embedded in a form of life. (…) We absolutely need methods, procedures, and operations. But these are not enough; ethics does not work like a machine, and neither do policy and responsible innovation.” pp. 172-173

"Whether or not stopping is always the best solution, the point is that we should at least have the space to ask the question and to decide. If this critical space is lacking, responsible innovation remains a fig leaf for doing business as usual.” p. 174

"Note also that policy and regulation are not only about banning things or making things more difficult; they can also be supportive, offering incentives, for example.
Furthermore, next to a negative ethics that sets limits, we also need to make explicit and elaborate a positive ethics: to develop a vision of the good life and the good society. While some of the ethical principles proposed above hint at such a vision, it remains a challenge to move the discussion in that direction. As previously argued, the ethical questions regarding AI are not just about technology; they are about human lives and human flourishing, about the future of society, and perhaps also about nonhumans, the environment, and the future of the planet (…) While in general liberal democracies are set up to leave such questions to individuals and are supposed to be “thin” about matters such as the good life (a political innovation that has stopped at least some kinds of wars and has contributed to stability and prosperity), in light of the ethical and political challenges we face it would be irresponsible to neglect the more substantive, “thick” ethical questions altogether. Policy, including AI policy, should also be about positive ethics.” pp. 174-175

"We need to ensure that, on the one hand, people with a humanities background become aware of the importance of thinking about new technologies such as AI and can acquire some knowledge of these technologies and what they do. On the other hand, scientists and engineers need to be made more sensitive to the ethical and societal aspects of technology development and use.” p. 178

"To assume that AI is neutral and to use it without understanding what one is doing contributes to such mindlessness and, ultimately, to the ethical corruption of the world.” p. 180

"Sometimes the concept of the Anthropocene is used to frame the problem. Coined by climate researcher Paul Crutzen and biologist Eugene Stoermer, this is the idea that we are living in a geological epoch in which humanity has dramatically increased its power over the Earth and its ecosystems, turning humans into a geological force.” p. 191

"Yet these scenarios would not only be authoritarian and violate human autonomy but would also centrally contribute to the problem of the Anthropocene itself: human hyper-agency, this time delegated by humans to machines, turns the entire planet into a resource and machine for humans.” p. 193

"We also face a risk of techno-solutionism in the sense that proposals for using AI to tackle environmental problems may assume that there can be a final solution to all problems, that technology alone can give the answer to our hardest questions, and that we can solve the problems entirely by use of human or artificial intelligence.” p. 200

"Such wisdom may well be informed by abstract cognitive processes and data analysis, but it is also based on embodied, relational, and situational experiences in the world, on dealing with other people, with materiality, and with our natural environment. Our success in tackling the big problems of our time will most likely depend on combinations of abstract intelligence—human and artificial—and concrete practical wisdom developed on the basis of concrete and situational human experience and practice—including our experience with technology. In whatever direction the further development of AI goes, the challenge to develop the latter kind of knowledge and learning is ours. Humans have to do it. AI is good at recognizing patterns, but wisdom cannot be delegated to machines.” pp. 201-202


Last updated on April 9, 2025