User:Becha/AI is ecocide
July 2023
- Vesna Manojlovic, July 2023
- BECHA@unciv.nl, https://becha.unciv.nl
Artificial Intelligence is Ecocide
AI’s detrimental impact on reaching global climate targets is ecocide. (1) The damage is severe, wanton, environmental. If not halted, it will be widespread and long-term.
As part of capitalist technological systems, AI is soaking up fossil fuels, water, rare metals and land, while externalising pollution. (2)
Socio-political harm caused by AI comes from the centralisation of power and wealth in the hands of a few corporations, (3) the use of amassed data for surveillance and disinformation, and the exploitation (4) of human labour for short-term profit.
Say NO to AI
Countering “AI-as-ecocide” requires intervention and healing on all levels: international legislation, collective organisation, civil disobedience, community efforts, personal education.
Digitalisation must respect planetary boundaries. (5) We must refuse the production and operation of more digital devices, networking infrastructures and data centres. Energy and material consumption must be reduced. Extractivism and growth-oriented business models, coupled with inequality, must be replaced by economic policies that justly redistribute the benefits of technology (DeGrowth, eco-feminism, decolonizing, caring economies). Technical innovation must focus on repairing harm through the ethics of reciprocity and regeneration.
If we want justice and safety (6) within planetary limits, we can have no anthropocentric, hierarchical, individualistic AI.
Alternatives to Artificial “Intelligence”
Let us embrace wider consciousnesses. Empathic! Relational! Artistic! Emotional! Lateral! Through ecophilia and biophilia, let us extend love beyond our species. Let us acknowledge collective intelligence: swarms, reefs, hives; ecosystems like forests, rivers, deserts; creatures such as octopi, mycelia, squirrels (7).
I mourn our ongoing extinction, and I am hoping for (re)connection, reciprocity, thriving together in a distant future.
References
- (1) “Computing as Ecocide”, by Rob Comber and Elina Eriksson: https://limits.pubpub.org/pub/a8h46wqy/release/1
- (2) “The Climate Cost of the AI Revolution”, by Wim Vanderbauwhede: https://labs.ripe.net/author/wim-vanderbauwhede/the-climate-cost-of-the-ai-revolution/
- (3) “AI’s biggest risk is the corporations that control them”, by Meredith Whittaker: https://www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them
- (4) “Will AI become ‘capital’s willing executioners’?”, by Ted Chiang: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
- (5) “Political Demands”, by “Bits & Bäume 2022”: https://bits-und-baeume.org/assets/images/pdfs/Bits_und_Baeume_Political_Demands_english.pdf
- (6) “A just world on a safe planet: quantifying Earth System Boundaries”, by Johan Rockström, Joyeeta Gupta, et al.: https://www.pik-potsdam.de/en/news/latest-news/a-just-world-on-a-safe-planet-first-study-quantifying-earth-system-boundaries
- (7) “Environmental Impact of Internet: Urgency, De-Growth, Rebellion”, by Vesna Manojlovic: https://labs.ripe.net/author/becha/environmental-impact-of-internet-urgency-de-growth-rebellion/
Links
- UnCiv mailing list archive: https://lists.puscii.nl/wws/arc/uncivilization/2023-10/msg00000.html
- check out the community discussion in the thread!
- My slides from TBD hackers camp: https://wiki.techinc.nl/File:TBD_2023_Not_a_Camp_ecocide_SMALL.pdf
AI Links
- YouTube
- playlist https://www.youtube.com/playlist?list=PLVR6MM6mHmBRtvYlP_bF6qTP3YomEwGqE with 10 interviews developing an AI critique over the past two years, which I made for Training the Archive. The interviewees include Maya Indira Ganesh, Anna Ridler, Alexa Steinbrück, Mar Hicks, Gabriel Pereira, Elisa Giardina Papa, Matteo Pasquinelli, Nick Couldry and Ulisses Mejias, Adam Harvey, and Magda Tyzlik-Carver.
- Timnit Gebru - Eugenics and the Promise of Utopia through AGI https://youtu.be/P7XT4TWLzJw
Articles
- August 2024: “What comes after the AI crash?” https://disconnect.blog/what-comes-after-the-ai-crash/
- Abolishing AI: https://cryptpad.fr/file/#/2/file/ts0tiWAKkphB5yhEkd3bEhDv/
- "de"generative AI: https://www.wheresyoured.at/are-we-watching-the-internet-die/
- Sanela Jahić – Under the Calculative Gaze: https://aksioma.org/under-the-calculative-gaze-book
“AI completes the enclosure that the autonomists called the social factory, where capitalist relations of power extend to the smallest corner of social reproduction. But the very generalisability of these algorithmic exploitations creates the ground for a recomposition of resistance from forms of relationality that still escape the algorithmic gaze.” ― Dan McQuillan
- “Ethical AI Requires Institutional and Structural Change”, Timnit Gebru https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change
- Shut it all down: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
- “Chatbot is not a human”, by Emily M. Bender: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
- “Change happens through collective action”, by Timnit Gebru:
- ‘Any Sufficiently Advanced Neglect is Indistinguishable from Malice' https://afutureworththinkingabout.com/?p=5442
- Philosopher Émile Torres and Dr. Gebru together coined the TESCREAL acronym, “which stands for a world view of Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.”
Alarmist fears / doomerism are rooted in a hierarchical, anthropocentric and zero-sum view of intelligence.
- Is Fear of AI Really Fear of Capitalism? Ezra Klein https://portside.org/2023-02-27/fear-ai-really-fear-capitalism
- "Kunstmatige intelligentie vreet stroom" (AI is devouring electricity) https://nos.nl/l/2477186
- “ChatGPT Is Consuming a Staggering Amount of Water”: https://futurism.com/the-byte/chatgpt-ai-water-consumption (one argument for the proposed talk/BoF about the challenges of AI)
- More on AI: Stochastic Parrots Day: A Retrospective With 'Stochastic Parrots' Authors - DAIR-Tube
https://peertube.dair-institute.org/video-playlists/230fd526-231f-49b5-b9de-e4b04fa6f946
- General Purpose AI Poses Serious Risks, Should Not Be Excluded From the EU's AI Act | Policy Brief - AI Now Institute https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
- The Acronym Behind Our Wildest AI Dreams and Nightmares: https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/
- Longtermism and Eugenics: A Primer, by Émile Torres : https://www.truthdig.com/articles/longtermism-and-eugenics-a-primer/
- The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn, by Dave Troy
https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/
- Anatomy of an AI System: https://anatomyof.ai
- AI Now Institute 2017 Report: https://ainowinstitute.org/publication/ai-now-2017-report-2
- #EDRi20, 10 Apr 2023, keynote by Meredith Whittaker, Signal Foundation, on the ramifications of the surveillance business model of AI: https://www.youtube.com/watch?v=yBQ_2AqI7w0
- Patriarchal perspective and AI: https://osf.io/preprints/socarxiv/jqxb6/ & https://www.researchgate.net/publication/370049262_Pygmalion_Displacement_When_Humanising_AI_Dehumanises_Women
- Upholding human rights in AI https://ecnl.org/news/council-europe-must-not-water-down-their-human-rights-standards-convention-ai
- “Meet the Future of AI: Countering Sophisticated & Advanced Disinformation” https://www.veraai.eu/posts/meet-the-future-of-ai-report
- A Degrowth Perspective on Artificial Intelligence - Analysing the Appropriateness of Machine Learning to a Degrowth Context https://www.research-collection.ethz.ch/handle/20.500.11850/622669
- Charlie Stross on SF & AI: https://www.antipope.org/charlie/blog-static/2023/11/dont-create-the-torment-nexus.html
- Meredith Whittaker: “Make no mistake: AI is owned by Big Tech” https://www.technologyreview.com/2023/12/05/1084393/make-no-mistake-ai-is-owned-by-big-tech/
- December 2023: “AI’s carbon footprint is bigger than you think” https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/
- AI is a disaster (for the climate) https://www.theguardian.com/commentisfree/2023/dec/23/ai-chat-gpt-environmental-impact-energy-carbon-intensive-technology
- 2017: Ted Chiang "When Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism." https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
- Defuturing Artificial Intelligence, Jonnie Penn: https://zagreb.degrowth.net/en/9_int_dg_conf/public/events/111
AI Papers
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender, Gebru, McMillan-Major, Shmitchell) https://dl.acm.org/doi/10.1145/3442188.3445922
- “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science”, by Emily M. Bender & Batya Friedman: https://aclanthology.org/Q18-1041.pdf
- The growing energy footprint of artificial intelligence, Alex de Vries: https://www.cell.com/joule/fulltext/S2542-4351(23)00365-3 // Published: October 10, 2023, DOI: https://doi.org/10.1016/j.joule.2023.09.004
- “Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?” by @milamiceli, @JulianPosada & Tianling Yang (@Weizenbaum_Institut) - a great commentary advocating to move “from bias research towards an investigation of power differentials that shape data.” https://arxiv.org/abs/2109.08131
- The AI footprint (OECD): Measuring the environmental impacts of artificial intelligence compute and applications (see the back-of-the-envelope sketch at the end of this list)
- "Artificial intelligence (AI) systems can use massive computational resources, raising sustainability concerns. This report aims to improve understanding of the environmental impacts of AI, and help measure and decrease AI’s negative effects while enabling it to accelerate action for the good of the planet. It distinguishes between the direct environmental impacts of developing, using and disposing of AI systems and related equipment, and the indirect costs and benefits of using AI applications. It recommends the establishment of measurement standards, expanding data collection, identifying AI-specific impacts, looking beyond operational energy use and emissions, and improving transparency and equity to help policy makers make AI part of the solution to sustainability challenges." Published on November 15, 2022
- https://www.oecd.org/publications/measuring-the-environmental-impacts-of-artificial-intelligence-compute-and-applications-7babf571-en.htm
- “Environmental media” in the cloud: The making of critical data center art https://journals.sagepub.com/doi/10.1177/14614448221149942
- Data Centers -> https://wiki.techinc.nl/Sustainability#Data_Centers
- "Ubuntu as an Ethical & Human Rights Framework for AI Governance” by Sabelo Mhlambi, July 8, 2020 https://carrcenter.hks.harvard.edu/files/cchr/files/ccdp_2020-009_sabelo_b.pdf
- Andrew A. Chien in "Communications of the ACM" , July 2023: "GenAI: Giga$$$, TeraWatt-Hours, and GigaTons of CO2" https://dl.acm.org/doi/10.1145/3606254
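To make the operational-emissions side of the measurement discussion above concrete (the OECD report, the de Vries paper, Chien's CACM piece), here is a minimal back-of-the-envelope sketch in Python. It only multiplies accelerator count, power draw, training time, data-centre PUE and grid carbon intensity; the function name and every number in the example are illustrative assumptions, not figures taken from any of the works listed here, and embodied hardware emissions, water use and e-waste are deliberately left out.

# Rough estimate of the *operational* carbon footprint of a training run.
# All parameters are assumptions, to be replaced with measured values.
def training_emissions_kg(gpu_count: int,
                          gpu_power_kw: float,        # assumed average draw per accelerator (kW)
                          training_hours: float,      # wall-clock duration of the run
                          pue: float = 1.2,           # assumed data-centre Power Usage Effectiveness
                          grid_kg_co2_per_kwh: float = 0.4) -> float:  # assumed grid carbon intensity
    """CO2-equivalent (kg) from electricity use only; embodied impacts excluded."""
    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 1,000 accelerators drawing 0.4 kW each for 30 days.
print(f"~{training_emissions_kg(1000, 0.4, 30 * 24) / 1000:.0f} tonnes CO2e (operational only)")

Even with these deliberately modest, made-up numbers the run lands in the hundreds of tonnes of CO2e, which is exactly why the OECD report urges looking beyond operational energy use towards the full life-cycle impacts.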
Women in AI
- Meredith Whittaker, President, Signal Foundation
- https://www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them
- Closing keynote at EDRi20 - 10 Apr 2023 - https://www.youtube.com/watch?v=yBQ_2AqI7w0
- AI Now Institute 2017 Report: https://ainowinstitute.org/publication/ai-now-2017-report-2
- https://www.theguardian.com/technology/2023/jun/11/signals-meredith-whittaker-these-are-the-people-who-could-actually-pause-ai-if-they-wanted-to
- Timnit Gebru
- https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender, Gebru, McMillan-Major, Shmitchell) https://dl.acm.org/doi/10.1145/3442188.3445922
- The Acronym Behind Our Wildest AI Dreams and Nightmares: https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/
- “Ethical AI Requires Institutional and Structural Change” https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change
- “I can't emphasize enough my disdain for the whole AGI scam. Used to centralize & consolidate power by the already rich & powerful while selling a techno-utopia they promise will come any day now, if we just let them continue to kill the environment, exploit labor & steal data” January 2024, Mastodon: https://dair-community.social/@timnitGebru/111797704917983259
- Emily Bender
- https://analyticsindiamag.com/linguist-emily-m-bender-has-a-word-or-two-about-ai/
- https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
- “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science”, by Emily M. Bender & Batya Friedman: https://aclanthology.org/Q18-1041.pdf
- “Chatbot is not a human”, by Emily M. Bender: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
- Joyeeta Gupta https://www.un-ihe.org/people/staff/joyeeta-gupta
- Wendy Broadgate https://futureearth.org/contacts/dr-wendy-broadgate/
- Patriarchal perspective and AI: https://osf.io/preprints/socarxiv/jqxb6/ & https://www.researchgate.net/publication/370049262_Pygmalion_Displacement_When_Humanising_AI_Dehumanises_Women
Videos
Events
- Event: AI & Climate Crisis, March 2024: https://www.eventbrite.co.uk/e/ai-the-climate-crisis-tickets-853683930617
- Boxi Wu - Oxford Internet Institute https://www.oii.ox.ac.uk/people/profiles/boxi-wu/
- Fieke Jansen - Critical Infrastructure Lab https://www.criticalinfralab.net/
- Sebastián Lehuedé - King's College London https://www.kcl.ac.uk/people/sebastian-lehuede
- Patrick Brodie - University College Dublin https://people.ucd.ie/patrick.brodie
- Many events!!! https://ai-uk.turing.ac.uk/fringe-events/
Amsterdam Vision
- November 2024 https://www.amsterdam.nl/innovatie/amsterdamse-visie-ai/
- English PDF https://assets.amsterdam.nl/publish/pages/1061246/amsterdam_visie_ai_wcag_engelse_versie.pdf
- My input (March 2024) File:amsterdam-AI-vision-english-image.png
MISC
Subject: my AI BoF comments & links
From: Vesna Manojlovic <BECHA@ripe.net>
Date: 23/05/2023, 18:29
To: Daniel <dfk@ripe.net>, Chris Buckridge <chrisb@ripe.net>, Mirjam Kühne <mir@zu-hause.nl>
IMHO, AI has these problems (with links!)
- centralisation & capture by the monopolies & Big Tech
Climate Impact
- wasteful use of energy & water for building & running the "AI"
- RIPE Labs article: https://labs.ripe.net/author/wim-vanderbauwhede/the-climate-cost-of-the-ai-revolution/
https://futurism.com/the-byte/chatgpt-ai-water-consumption
Ethics & Justice
- we have to consider the input of existing research by social scientists, artists & critical scholars on the ethics & justice of all digital technologies, including AI
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
https://www.dair-institute.org
More on AI: Stochastic Parrots Day: https://peertube.dair-institute.org/video-playlists/230fd526-231f-49b5-b9de-e4b04fa6f946
AI Now Institute 2017 Report: https://ainowinstitute.org/publication/ai-now-2017-report-2
- EDRi20 - 10 Apr 2023 - Closing keynote by Meredith Whittaker, President, Signal Foundation
https://www.youtube.com/watch?v=yBQ_2AqI7w0
Meredith on the ramifications of the surveillance business model of AI.
"First, AI is not a technical term of art, it is a marketing term that has been applied to a hodgepodge of data-centric techniques. Second, the sudden shift to AI in the early 2010s had everything to do with tech industry consolidation and the resources at the heart of the surveillance business model. We see this clearly when we recognize that what was new about AI in the early 2010s was not new innovations in machine learning indeed the methods that were applied to prove ai's Newfound utility date from the 1980s. What was new were the significant amounts of available data used to train AI models and the power of the computational infrastructure available to conduct this training and calibration, resources concentrated in the hands of a few private tech companies care of the surveillance business model. So if we look at it from this perspective we see that AI's primary role has been to expand what can be done with the massive amounts of surveillance data collected and stored by large tech firms.
- colonialism & exploitation:
https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change
https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/
- Patriarchal perspective and AI: https://osf.io/preprints/socarxiv/jqxb6/ & https://www.researchgate.net/publication/370049262_Pygmalion_Displacement_When_Humanising_AI_Dehumanises_Women
AI governance:
8 Apr 2023: “Nuclear Arms Control Verification and Lessons for AI Treaties”
https://arxiv.org/pdf/2304.04123.pdf
https://publicinterest.ai/homepage/
Automatic content generation:
https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart