Sustainable Synthetics & Inoculated Minds | STAKES
4/5 How will synthetic media affect the minds and brains of its consumers?
“New waves of general-purpose technologies and the continued diffusion of the Internet will enable expanding content creation. This could lead to the democratisation of opinions but could also lead to a mounting increase in biased content and mass disinformation. The expanding volume of information will lead to ambiguity, challenging the ability to distinguish fact and fiction. The weaponisation of public opinion through cognitive warfare involves creating narratives and fuelling emotions, perceptions and opinions; the human mind becomes the battlefield.”
- NATO Strategic Foresight Analysis, 2023
Evaluating Emerging Tactics
Previously, we established that synthetic matter's impact on a natural ecosystem depends on its composition and on the fitness, or resilience, of the ecosystem it enters. Synthetic media mirrors this analogy in its current and emerging forms: it can make neutral or positive contributions to the ecosystem, akin to biodegradables (Interplay, Neural Shifts, some AI-Enhanced); invisibly cause chronic issues over time, like POPs (persistent organic pollutants) (AI-Enhanced, Dupes, McContent); or threaten acute, systemic destruction, like Toxic Hazards (Nerve Agents). As part of our impact assessment, we also touched on the media ecosystem’s declining health and the impact this has had on industry adoption of generative AI to date.
At the heart of the ecosystem’s ill health lies a self-harming incentive structure and self-perpetuated demand for quantity, frequency, and intensity. This strains its resources and exhausts its creators. And yet, the media industry’s overwhelming instinct has been to point generative AI tools at chasing unsustainable incentives and demands rather than pursuing remedial innovations. (NB - Naturally, there are exceptions to the emerging norm, giving us reasons to be hopeful. We’ll cover what we can learn from them in our final post, Path.)
An evaluation of the tactics media creators have employed throughout their ongoing experimentation with generative AI reveals them to be continuations of familiar themes: the pursuit of cut-through and newness.
Even on this tactical track, responsible and beneficial applications of generative AI tools are achievable if risks are understood and actively mitigated. However, when these tools are used merely to boost quantity of production at ever higher frequency and intensity, they will deepen an existing fight over precarious resources: literally, given the vast energy cost of producing and distributing digital content and running LLMs, and figuratively, because the attention consumers have to spend on that content is finite.
A Note on Energy
Coinciding with synthetic media’s arrival, the digital media supply chain’s environmental impact is coming under increasing scrutiny. That scrutiny now extends to the companies developing the foundation models, platforms, and tools that will drive ever more voluminous output. Further up the generative AI supply chain, an even bigger resource issue looms beyond the footprint of that output itself.
Data centres in the US that prop up the cloud (generative AI’s backbone) already account for 2.8% of the country’s energy use (Harvard Business Review, 2023), and the cloud’s carbon footprint has been alleged to exceed the aviation industry’s (NB - research on this topic is recent and data differs, but consistencies are starting to emerge). While generative AI cannot be held solely accountable, ChatGPT “is already consuming the energy of 33,000 homes [and it is] estimated that a search driven by generative AI uses four to five times the energy of a conventional web search.” (Crawford, cited in Nature, 2024). By 2027, the AI sector could consume “the same as the annual energy demand of […] the Netherlands” (De Vries, cited in The Verge, 2024) and “may be accountable for […] water withdrawal in 2027 which is more than the total annual water withdrawal of 4 – 6 Denmark or half of the United Kingdom.” (Li et al., 2024)
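To make the “four to five times” figure above concrete, here is a back-of-envelope sketch. The per-query energy figure and the daily query volume are illustrative assumptions, not measured data; only the multiplier comes from the estimate quoted above.

```python
# Back-of-envelope sketch of the quoted "four to five times" estimate.
# ASSUMPTIONS (not from the article): a conventional web search costs
# roughly 0.3 Wh per query, and a hypothetical service handles 1M queries/day.
CONVENTIONAL_SEARCH_WH = 0.3   # assumed energy per conventional query, in Wh
AI_MULTIPLIER = 4.5            # midpoint of the quoted "four to five times"
QUERIES_PER_DAY = 1_000_000    # hypothetical service volume

def annual_energy_mwh(per_query_wh: float, queries_per_day: int) -> float:
    """Annual energy in MWh for a given per-query cost and daily volume."""
    return per_query_wh * queries_per_day * 365 / 1_000_000

conventional = annual_energy_mwh(CONVENTIONAL_SEARCH_WH, QUERIES_PER_DAY)
ai_driven = annual_energy_mwh(CONVENTIONAL_SEARCH_WH * AI_MULTIPLIER,
                              QUERIES_PER_DAY)
print(f"Conventional: {conventional:.1f} MWh/yr; "
      f"AI-driven: {ai_driven:.1f} MWh/yr")
```

Under these assumptions, a million daily queries shift from roughly 110 MWh to nearly 500 MWh per year; the point is not the absolute numbers but how quickly a per-query multiplier compounds at scale.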
Impact mitigation through sustainable model training methods and development approaches, together with the pursuit of alternative energy solutions, is urgent - yet it will only be prioritised if eyes remain firmly on the problem. It also requires regulatory pressure and ESG-committed businesses demanding better from their suppliers and technology partners.
Beyond Attention
As the benefits and risks associated with its current tactical applications have shown, generative AI will also profoundly alter the composition of media diets and patterns of consumption. Moreover, taking into account the influence media can have on minds and brains (as various scientific fields continue to reinforce, including psychology, media studies, and neuroscience), the impact of synthetic media will also extend to the psyches and cognition of its consumers.
Various use cases and new products validate the technology’s ability to produce enriching media experiences that can even support emotional wellbeing (‘medicinal media’). Yet when it is instructed to boost quantity, frequency and intensity, it also has the potential to perpetuate existing consumer harms linked to a dysfunctional media ecosystem and media diets high in malnutritious content.
These harms include:
Erosion of trust (in media institutions and facts) and shared reality (epistemic and moral)
Exploitation of attention (in terms of time spent with media and exposure to non-beneficial stimuli) and of disadvantage (low economic means and low media literacy)
Manipulation of emotion (through targeting mechanisms and deliberate contagions) and choice (influencing it and acting autonomously on behalf of the user)
Given that daily time spent on digital media averages around seven hours (global average; 16–64-year-old Internet users; We Are Social, 2023), the potential for repeated exposure to a range of these harms on an average day is high, potentially even voluminous.
Moreover, repeated harm exposure can affect individuals on a neurobiological level. Overstimulation can cause dopamine dysregulation (mood swings, irritability, addiction), microstress can trigger cortisol spikes (raised blood pressure and heart rate, anxiety), and loneliness can result in oxytocin deficiencies (social anxiety, inability to regulate emotions).
Below we’ll review existing media-specific psychological issues and explore how emerging signals related to synthetic media can compound them. (NB - in the interest of being balanced and pragmatic, a review of positive effects, drawing on theory and data from the field of positive media psychology, will follow in the final post.)
Ripples
The effects of poor, unbalanced media diets are not just confined to the inner lives of consumers. Building on previous research linking the health of democracy to that of society, the Brookings Institution’s working paper Brain Health-Directed Policymaking (2022) establishes the foundational role of brain health in this dynamic (spanning "cognitive, emotional, and motor dimensions” [WHO, 2022]).
The paper presents ongoing research into and debate around people’s reliance on digital tools and how these “may result in difficulties with attentional processing and control, [as well as how] ‘digitally impaired cognition’ could potentially undermine freedom of choice and agency” (Abbott et al., 2022). It also highlights that the media industry’s “process of ‘monetised attention’ has the potential to erode personal autonomy and self-efficacy further.” (Abbott et al., 2022). Together, they can “indirectly undermine three pillars of democracy: a well-informed population, resilience to foreign influence, and the capacity for effective public debates.” (Abbott et al., 2022).
Alongside many other studies (albeit not without contest), the paper also goes on to identify mis- and disinformation as negatively impacting brain health and mental wellbeing. Their ability to manipulate attitudes and behaviours has been linked to potential long-term cognitive impairment (Walter et al., 2019; Van Der Linden, 2023) - which, in turn, can make individuals more susceptible to being drawn to further mis- and disinformation (Piazza et al., 2017). This links to the theory that quality of mind determines the quality of media experience more generally - which has great significance when we consider the current state of global mental health.
Against a backdrop of ongoing culture wars, diminishing social cohesion, and faltering democracies, the authors of Meme Wars frame the real-world impacts of misinformation and disinformation in their thesis as “going from the wires to the weeds”: interactions online (in the wires) shape behaviour in the real world (the weeds) (Donovan et al., 2022). While the authors primarily focus on political memes and their role in politically motivated real-world events, their theory can also apply to smaller-scale media-related stressors affecting people’s relationships (e.g. spreading confusion, disconnection, division), lifestyle choices (e.g. filter face, fitspo, stay-at-home girlfriend), and consumption habits (e.g. excessive consumption, debt).
In its strategic foresight analysis (2023), NATO further expands on the threats from mis- and disinformation, linking the misuse of generative AI to an intent to “mobilize underlying societal tensions and create instability” in which “the human mind becomes the battlefield” (NATO, 2023). It also calls out a potential strategic shock involving “significant political or social unrest triggered by the uncontrolled spread of mis- / disinformation as a result of AI, big data and advanced large language models” (NATO, 2023).
Now what?
Akin to its ultra-processed counterpart in the food industry, synthetic media whose production is driven by extractive or destructive intentions will play a significant part in deepening the existing psychological harms linked to poor media diets. This will be compounded as access to high-quality, trustworthy sources moves further out of reach.
The gravity of synthetic media’s potential real-world ripple effects underlines the importance of designing responsible business practices around generative AI applications in the media industry. Interventions that lessen the harms of synthetic media on people’s minds and brains are also vital to consider as part of a holistic approach.
Next: A path towards regeneration & resilience