User:Becha/AI is ecocide
July 2023
- Vesna Manojlovic, July 2023
- BECHA@unciv.nl, becha.unciv.nl
AI is Ecocide
AI’s detrimental impact on reaching global climate targets is ecocide.(1) The damage is severe, wanton, environmental. If not halted, it will be widespread and long-term.
As part of capitalist technological systems, AI is soaking up fossil fuels, water, rare metals and land, while externalising pollution.(2)
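To give a concrete sense of scale, here is a minimal back-of-envelope sketch in Python of the energy, carbon and water footprint of a single large training run. Every number in it is an illustrative assumption (hypothetical GPU count, power draw, grid carbon intensity, cooling water use), not a figure from the cited sources; the point is only how quickly the totals multiply.

  # Back-of-envelope footprint of one large AI training run.
  # ALL values below are ILLUSTRATIVE ASSUMPTIONS, not measurements.

  GPU_COUNT = 10_000          # assumed number of accelerators
  GPU_POWER_KW = 0.7          # assumed average draw per accelerator, kW
  TRAINING_DAYS = 90          # assumed length of the training run
  PUE = 1.2                   # assumed data-centre power usage effectiveness
  GRID_KGCO2_PER_KWH = 0.4    # assumed grid carbon intensity, kg CO2e per kWh
  WATER_L_PER_KWH = 1.8       # assumed on-site cooling water use, litres per kWh

  energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_DAYS * 24 * PUE
  carbon_tonnes = energy_kwh * GRID_KGCO2_PER_KWH / 1000
  water_m3 = energy_kwh * WATER_L_PER_KWH / 1000

  print(f"Energy: {energy_kwh:,.0f} kWh")
  print(f"Carbon: {carbon_tonnes:,.0f} t CO2e")
  print(f"Water:  {water_m3:,.0f} m3")

With these placeholder inputs the sketch yields roughly 18 million kWh, thousands of tonnes of CO2e and tens of thousands of cubic metres of water for one training run, before any inference traffic is counted; see reference (2) for measured estimates.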
Socio-political harm caused by AI comes from the centralisation of power and wealth in the hands of a few corporations,(3) the use of amassed data for surveillance and disinformation, and the exploitation(4) of human labour for short-term profit.
Say NO to AI
Countering “AI-as-ecocide” requires intervention and healing on all levels: international legislation, collective organising, civil disobedience, community efforts, personal education.
Digitalisation must respect planetary boundaries.(5) We must refuse the production and operation of more digital devices, networking infrastructures and data centres. Energy and material consumption must be reduced. Extractivism and growth-oriented business models, coupled with inequality, must be replaced by economic policies that justly redistribute the benefits of technology (degrowth, eco-feminism, decolonisation, caring economies). Technical innovation must focus on repairing harm through the ethics of reciprocity and regeneration.
If we want justice and safety (6) within planetary limits, we can have no anthropocentric, hierarchical, individualistic AI.
Alternatives to Artificial “Intelligence”
Let us embrace wider consciousnesses. Empathic! Relational! Artistic! Emotional! Lateral! Through ecophilia and biophilia, let us extend love beyond our species. Let us acknowledge collective intelligence: swarms, reefs, hives; ecosystems like forests, rivers, deserts; creatures such as octopi, mycelia, squirrels (7).
I mourn our ongoing extinction, and I am hoping for (re)connection, reciprocity, thriving together in a distant future.
References
- 1. “Computing as Ecocide”, by Rob Comber and Elina Eriksson: https://limits.pubpub.org/pub/a8h46wqy/release/1
- 2. “The Climate Cost of the AI Revolution”, by Wim Vanderbauwhede: https://labs.ripe.net/author/wim-vanderbauwhede/the-climate-cost-of-the-ai-revolution/
- 3. “AI’s biggest risk is the corporations that control them”, by Meredith Whittaker: https://www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them
- 4. “Will AI become ‘capital’s willing executioners’”, by Ted Chiang: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
- 5. “Political Demands”, by “Bits & Bäume 2022” https://bits-und-baeume.org/assets/images/pdfs/Bits_und_Baeume_Political_Demands_english.pdf
- 6. “A just world on a safe planet: quantifying Earth System Boundaries”, by Johan Rockström, Joyeeta Gupta, et al.: https://www.pik-potsdam.de/en/news/latest-news/a-just-world-on-a-safe-planet-first-study-quantifying-earth-system-boundaries
- 7. “Environmental Impact of Internet: Urgency, De-Growth, Rebellion”, by Vesna Manojlovic https://labs.ripe.net/author/becha/environmental-impact-of-internet-urgency-de-growth-rebellion/
Links
More about AI
- My slides from TBD hackers camp: https://wiki.techinc.nl/File:TBD_2023_Not_a_Camp_ecocide_SMALL.pdf
More Links
- https://themarkup.org/news/2023/07/06/ai-is-hurting-the-climate-in-a-number-of-non-obvious-ways
- https://montrealethics.ai/algorithms-as-social-ecological-technological-systems-an-environmental-justice-lens-on-algorithmic-audits/
- YouTube playlist with 10 interviews developing an AI critique over the past two years, which I made for Training the Archive. The interviewees include Maya Indira Ganesh, Anna Ridler, Alexa Steinbrück, Mar Hicks, Gabriel Pereira, Elisa Giardina Papa, Matteo Pasquinelli, Nick Couldry and Ulisses Mejias, Adam Harvey, and Magda Tyzlik-Carver: https://www.youtube.com/playlist?list=PLVR6MM6mHmBRtvYlP_bF6qTP3YomEwGqE
- Sanela Jahić – Under the Calculative Gaze: https://aksioma.org/under-the-calculative-gaze-book
“AI completes the enclosure that the autonomists called the social factory, where capitalist relations of power extend to the smallest corner of social reproduction. But the very generalisability of these algorithmic exploitations creates the ground for a recomposition of resistance from forms of relationality that still escape the algorithmic gaze.” ― Dan McQuillan
- “Ethical AI Requires Institutional and Structural Change”, Timnit Gebru https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change
- “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science”, by Emily M. Bender & Batya Friedman: https://aclanthology.org/Q18-1041.pdf
- Shut it all down: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
- “Chatbot is not a human”, by Emily M. Bender: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
- “Change happens through collective action”, by Timnit Gebru: https://news.berkeley.edu/2022/05/31/berkeley-talks-transcript-timnit-gebru/
- “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, by Bender, Gebru, McMillan-Major and Shmitchell: https://dl.acm.org/doi/10.1145/3442188.3445922
- “Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: https://afutureworththinkingabout.com/?p=5442
- Philosopher Émile Torres and Dr. Timnit Gebru together coined the TESCREAL acronym, “which stands for a world view of Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism”: alarmist fears and doomerism rooted in a hierarchical, anthropocentric and zero-sum view of intelligence.
- Collective Action! https://podcasts.apple.com/gb/podcast/understanding-tescreal-with-dr-timnit-gebru-and/id1610914569?i=1000617036014
- Meredith Whittaker: https://www.theguardian.com/technology/2023/jun/11/signals-meredith-whittaker-these-are-the-people-who-could-actually-pause-ai-if-they-wanted-to
- https://time.com/6256529/bing-openai-chatgpt-danger-alignment/
- Joyeeta Gupta: https://www.un-ihe.org/people/staff/joyeeta-gupta and Wendy Broadgate: https://futureearth.org/contacts/dr-wendy-broadgate/