User:Becha/AI is ecocide



[Image: "Computing is Ecocide" star graph]
[Image: "Computing is Ecocide" table]

July 2023


Artificial Intelligence is Ecocide


AI’s detrimental impact on reaching global climate targets is ecocide. (1) The damage is severe, wanton, and environmental. If not halted, it will be widespread and long-term.



As part of capitalist technological systems, AI is soaking up fossil fuels, water, rare metals, and land, while externalising pollution. (2)

Socio-political harm caused by AI comes from the centralisation of power and wealth in the hands of a few corporations, (3) the use of amassed data for surveillance and disinformation, and the exploitation (4) of human labour for short-term profit.



Say NO to AI

Countering “AI-as-ecocide” requires intervention and healing on all levels: international legislation, collective organisation, civil disobedience, community efforts, and personal education.



Digitalisation must respect planetary boundaries. (5) We must refuse the production and operation of more digital devices, networking infrastructures, and data centres. Energy and material consumption must be reduced. Extractivism and growth-oriented business models coupled with inequality must be replaced by economic policies that justly redistribute the benefits of technology (DeGrowth, eco-feminism, decolonising, caring economies). Technical innovation must focus on repairing harm through the ethics of reciprocity and regeneration.



If we want justice and safety (6) within planetary limits, we can have no anthropocentric, hierarchical, individualistic AI.



Alternatives to Artificial “Intelligence”



Let us embrace wider consciousnesses. Empathic! Relational! Artistic! Emotional! Lateral! Through ecophilia and biophilia, let us extend love beyond our species. Let us acknowledge collective intelligence: swarms, reefs, hives; ecosystems like forests, rivers, deserts; creatures such as octopi, mycelia, squirrels (7).


I mourn our ongoing extinction, and I am hoping for (re)connection, reciprocity, thriving together in a distant future.

References

  1. “Computing as Ecocide”, by Rob Comber and Elina Eriksson: https://limits.pubpub.org/pub/a8h46wqy/release/1
  2. “The Climate Cost of the AI Revolution”, by Wim Vanderbauwhede: https://labs.ripe.net/author/wim-vanderbauwhede/the-climate-cost-of-the-ai-revolution/
  3. “AI’s biggest risk is the corporations that control them”, by Meredith Whittaker: https://www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them
  4. “Will AI become ‘capital’s willing executioners’?”, by Ted Chiang: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
  5. “Political Demands”, by Bits & Bäume 2022: https://bits-und-baeume.org/assets/images/pdfs/Bits_und_Baeume_Political_Demands_english.pdf
  6. “A just world on a safe planet: quantifying Earth System Boundaries”, by Johan Rockström, Joyeeta Gupta, et al.: https://www.pik-potsdam.de/en/news/latest-news/a-just-world-on-a-safe-planet-first-study-quantifying-earth-system-boundaries
  7. “Environmental Impact of Internet: Urgency, De-Growth, Rebellion”, by Vesna Manojlovic: https://labs.ripe.net/author/becha/environmental-impact-of-internet-urgency-de-growth-rebellion/




Links



AI Links

  • YouTube
    • playlist https://www.youtube.com/playlist?list=PLVR6MM6mHmBRtvYlP_bF6qTP3YomEwGqE with 10 interviews developing an AI critique over the past two years, which I made for Training the Archive. The interviewees include Maya Indira Ganesh, Anna Ridler, Alexa Steinbrück, Mar Hicks, Gabriel Pereira, Elisa Giardina Papa, Matteo Pasquinelli, Nick Couldry and Ulisses Mejias, Adam Harvey, and Magda Tyzlik-Carver.
    • Timnit Gebru - Eugenics and the Promise of Utopia through AGI https://youtu.be/P7XT4TWLzJw


Articles

“AI completes the enclosure that the autonomists called the social factory,
where capitalist relations of power extend to the smallest corner of social
reproduction. But the very generalisability of these algorithmic
exploitations creates the ground for a recomposition of resistance from
forms of relationality that still escape the algorithmic gaze.” ― Dan McQuillan
  • Philosopher Émile Torres and Dr. Timnit Gebru, who together coined the TESCREAL acronym, “which stands for a world view of Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.”
    • Alarmist fears / doomerism, rooted in a hierarchical, anthropocentric, and zero-sum view of intelligence.
  • More on AI: Stochastic Parrots Day: A Retrospective With 'Stochastic Parrots' Authors - DAIR-Tube

https://peertube.dair-institute.org/video-playlists/230fd526-231f-49b5-b9de-e4b04fa6f946

  • The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn, by Dave Troy

https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/

  • https://anatomyof.ai

AI Papers

  • “Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?” by @milamiceli, @JulianPosada & Tianling Yang (@Weizenbaum_Institut) – a great commentary advocating a move “from bias research towards an investigation of power differentials that shape data.” https://arxiv.org/abs/2109.08131
  • The AI footprint (OECD): Measuring the environmental impacts of artificial intelligence compute and applications
    • "Artificial intelligence (AI) systems can use massive computational resources, raising sustainability concerns. This report aims to improve understanding of the environmental impacts of AI, and help measure and decrease AI’s negative effects while enabling it to accelerate action for the good of the planet. It distinguishes between the direct environmental impacts of developing, using and disposing of AI systems and related equipment, and the indirect costs and benefits of using AI applications. It recommends the establishment of measurement standards, expanding data collection, identifying AI-specific impacts, looking beyond operational energy use and emissions, and improving transparency and equity to help policy makers make AI part of the solution to sustainability challenges." Published on November 15, 2022
    • https://www.oecd.org/publications/measuring-the-environmental-impacts-of-artificial-intelligence-compute-and-applications-7babf571-en.htm
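
To make the report’s distinction concrete, here is a minimal back-of-the-envelope sketch (in Python) of the “operational energy use and emissions” slice only. Every number in it (GPU count, power draw, training time, PUE, WUE, grid carbon intensity) is a hypothetical placeholder chosen for illustration, not a figure from the OECD report.

 # Rough estimate of the operational footprint of one hypothetical AI training run.
 # All constants are illustrative assumptions, not measured or published values.
 NUM_GPUS = 1_000              # accelerators used for training (hypothetical)
 GPU_POWER_KW = 0.4            # average draw per accelerator, in kW (hypothetical)
 TRAINING_HOURS = 30 * 24      # one month of continuous training (hypothetical)
 PUE = 1.2                     # data-centre power usage effectiveness (hypothetical)
 WUE_L_PER_KWH = 1.8           # water usage effectiveness, litres per kWh of IT energy (hypothetical)
 GRID_KG_CO2_PER_KWH = 0.4     # grid carbon intensity, kg CO2e per kWh (hypothetical)

 # IT energy drawn by the accelerators, then total facility energy after cooling/overhead (PUE).
 it_energy_kwh = NUM_GPUS * GPU_POWER_KW * TRAINING_HOURS
 facility_energy_kwh = it_energy_kwh * PUE

 # Operational emissions and on-site cooling water.
 co2_tonnes = facility_energy_kwh * GRID_KG_CO2_PER_KWH / 1000
 water_litres = it_energy_kwh * WUE_L_PER_KWH

 print(f"Energy:    {facility_energy_kwh:,.0f} kWh")
 print(f"Emissions: {co2_tonnes:,.0f} t CO2e")
 print(f"Water:     {water_litres:,.0f} litres")

Even such crude arithmetic covers only the operational slice; the report’s point is that embodied impacts (manufacturing and disposal of equipment) and the indirect effects of AI applications also need to be measured.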

Women in AI



Videos

Events

Amsterdam Vision

[Image: Amsterdam AI vision (English)]


MISC

Subject: my AI BoF comments & links
From: Vesna Manojlovic <BECHA@ripe.net>
Date: 23/05/2023, 18:29
To: Daniel <dfk@ripe.net>, Chris Buckridge <chrisb@ripe.net>, Mirjam Kühne <mir@zu-hause.nl>

IMHO, AI has these problems (with links!)


- centralisation & capture by the monopolies & Big Tech


https://www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them


Climate Impact


- wasteful use of energy & water for building & running the "AI"

https://futurism.com/the-byte/chatgpt-ai-water-consumption


Ethics & Justice


- we have to consider the input of existing research by social scientists, artists & critics on the ethics & justice of all digital technologies, including AI


https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

https://www.dair-institute.org

More on AI: Stochastic Parrots Day: https://peertube.dair-institute.org/video-playlists/230fd526-231f-49b5-b9de-e4b04fa6f946

https://anatomyof.ai

AI Now Institute 2017 Report: https://ainowinstitute.org/publication/ai-now-2017-report-2


  1. EDRi20 - 10 Apr 2023 - Closing keynote by Meredith Whittaker, President, Signal Foundation

https://www.youtube.com/watch?v=yBQ_2AqI7w0

Meredith on the ramifications of the surveillance business model of AI.

"First, AI is not a technical term of art, it is a marketing term that has been applied to a hodgepodge of data-centric techniques. Second, the sudden shift to AI in the early 2010s had everything to do with tech industry consolidation and the resources at the heart of the surveillance business model. We see this clearly when we recognize that what was new about AI in the early 2010s was not new innovations in machine learning indeed the methods that were applied to prove ai's Newfound utility date from the 1980s. What was new were the significant amounts of available data used to train AI models and the power of the computational infrastructure available to conduct this training and calibration, resources concentrated in the hands of a few private tech companies care of the surveillance business model. So if we look at it from this perspective we see that AI's primary role has been to expand what can be done with the massive amounts of surveillance data collected and stored by large tech firms.

- colonialism & exploitation:

https://hai.stanford.edu/news/timnit-gebru-ethical-ai-requires-institutional-and-structural-change

https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/


- patriarchal perspective and AI: https://osf.io/preprints/socarxiv/jqxb6/ & https://www.researchgate.net/publication/370049262_Pygmalion_Displacement_When_Humanising_AI_Dehumanises_Women


AI governance:


8 Apr 2023: "Nuclear Arms Control Verification and Lessons for AI Treaties" https://arxiv.org/pdf/2304.04123.pdf

https://www.governance.ai

https://publicinterest.ai/homepage/


Automatic content generation:


https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart