Tag: technology

  • The Case for Scope 3 Emissions

    The Case for Scope 3 Emissions

    The digital economy treats human attention as a raw material to be mined without limit. This extraction produces a specific, harmful byproduct: it breaks our focus, fractures shared reality, and wears down the patience required for moral reasoning. Unlike physical pollution, which damages the air or water, this cognitive waste damages our ability to think. While corporations are increasingly held accountable for their physical footprint through ESG criteria, we ignore the mental damage caused by the product itself. What we are witnessing is the intentional fragmentation and manipulation of the citizenry to the point where citizens can no longer contribute meaningfully to democratic life. In this paper, I argue that this damage is a hidden cost that must be proactively regulated through purpose-built metrics and cognitive impact protocols, specifically by expanding Scope 3 reporting to include “Cognitive Emissions.”

    To understand why this regulation is necessary, we must look at what is being lost. While the market views attention as a commodity to be sold, ethical philosophy defines it as the foundation of morality. French philosopher Simone Weil argued that “attention is the rarest and purest form of generosity” (Weil 1951, 105); it is the ability to pause one’s own ego to recognize the reality of another person. When digital platforms take over this “Attentional Commons” (Crawford 2015, 12), they do not merely distract us; they dismantle the mental tools, specifically the capacities for self-regulation and deliberation, required to govern ourselves (Crawford 2015, 23). Without the capacity for sustained, coherent thought, the self becomes fragmented, losing the sense of stability required to be a responsible citizen (Giddens 1991, 92).

    This fragmentation is not accidental. It is the result of a specific design philosophy that makes apps as tailored and frictionless as possible. Using Daniel Kahneman’s distinction, modern algorithms keep users trapped in “System 1” (fast, instinctive thinking) while bypassing “System 2” (slow, logical thinking) (Kahneman 2011, 20). System 2 governs executive control, a metabolically expensive function that depends on sustained cognitive effort to maintain its neural integrity. Neurobiologically, this follows the principle of use-dependent plasticity: neural pathways responsible for complex reasoning strengthen through challenge and degrade through disuse (Mattson 2008, 1). When an algorithm nudges users into passive consumption, it removes the friction required to sustain these pathways, leading to a functional degradation of critical thought.

    This process is also invasive. By predicting our desires, algorithms bypass our will, seizing our attention before we can decide where to place it. While Shoshana Zuboff calls this “surveillance capitalism” (Zuboff 2019, 8), the mechanism is closer to a slot machine. As Natasha Dow Schüll observes, these interfaces use design loops to induce a trance-like state that overrides conscious choice (Schüll 2012, 166). This is manipulation built into the interface itself. Neuroscience research on “Facebook addiction” confirms that these platforms activate the brain’s impulse systems while suppressing the prefrontal cortex, the part of the brain responsible for planning and control (Turel et al. 2014, 685). The result is a depletion of critical thought, creating a population that reacts rather than reflects.

    Regulating this harm means looking to the precedent of carbon reporting. The Greenhouse Gas Protocol’s “Scope 3” covers indirect emissions, including those generated by the use of sold products (WRI/WBCSD 2004, 25). I propose applying the same reasoning to the digital economy: a tech company must be responsible for the mental effects of using its products. These effects are destructive because they erode the basic cognitive foundations required for a functioning society (Habermas 1989, 27). If a platform’s design creates measurable addiction, radicalization, or a loss of attention span, these are “emissions” that impose a real cost on society.

    To develop sustainable and reliable policies, we require new auditing metrics. We must calculate the speed at which content triggers emotional responses and establish a fragmentation index to measure how often an app interrupts deep work. This is a necessary (albeit complicated) metric, given that regaining focus after an interruption takes significant time and energy (Mark, Gudith, and Klocke 2008, 108). Furthermore, we must assess the deficit of serendipity, determining whether an algorithm narrows a user’s worldview or introduces the necessary friction of new ideas.
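
    To make the fragmentation index concrete, the sketch below (in Python) shows one way an auditor might compute such a metric from an app’s event logs. The event schema, the notion of a “focus block,” and the definition of the index itself are illustrative assumptions, not part of any existing reporting standard.

    ```python
    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical event log for one user session. The schema and the metric
    # definition are illustrative assumptions, not an existing audit standard.
    @dataclass
    class SessionEvent:
        timestamp_s: float  # seconds since the session began
        kind: str           # "focus_start", "interruption", or "focus_end"

    def fragmentation_index(events: List[SessionEvent]) -> float:
        """App-initiated interruptions per hour of focused use (higher = more fragmented)."""
        focused_seconds = 0.0
        interruptions = 0
        focus_started_at: Optional[float] = None
        for event in sorted(events, key=lambda e: e.timestamp_s):
            if event.kind == "focus_start":
                focus_started_at = event.timestamp_s
            elif event.kind in ("interruption", "focus_end"):
                if focus_started_at is not None:
                    focused_seconds += event.timestamp_s - focus_started_at
                    focus_started_at = None
                if event.kind == "interruption":
                    interruptions += 1
        if focused_seconds == 0:
            return 0.0
        return interruptions / (focused_seconds / 3600.0)

    # Example: three focus blocks, two of them cut short by app-initiated interruptions.
    log = [
        SessionEvent(0, "focus_start"), SessionEvent(600, "interruption"),
        SessionEvent(700, "focus_start"), SessionEvent(1500, "interruption"),
        SessionEvent(1600, "focus_start"), SessionEvent(3400, "focus_end"),
    ]
    print(f"Fragmentation index: {fragmentation_index(log):.1f} interruptions per focused hour")
    ```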

    Once measured, these emissions can be mitigated through cognitive impact protocols. This includes mandating friction by design to force users to pause and think. For example, when Twitter introduced prompts asking users to read articles before retweeting, “blind” sharing dropped significantly (Twitter Inc. 2020). This suggests that simple friction can measurably reduce cognitive waste. Beyond individual features, firms must submit to independent audits to ensure their code promotes agency rather than addiction. Finally, just as carbon sinks absorb physical waste, digital firms should be mandated to fund zones (libraries, parks, and phone-free spaces) where our attention can recover.
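
    As a sketch of what friction by design might look like in practice, consider the toy reshare gate below, loosely modeled on the read-before-retweet prompt. The function names, the one-hour window, and the UI actions are hypothetical, not any platform’s actual implementation.

    ```python
    from datetime import datetime, timedelta
    from typing import Optional

    # Hypothetical "friction by design" gate. The one-hour window and the
    # returned UI actions are illustrative assumptions only.
    RECENT_READ_WINDOW = timedelta(hours=1)

    def reshare_action(article_opened_at: Optional[datetime], now: datetime) -> str:
        """Decide what the UI does when a user tries to reshare a link.

        If the user has not opened the linked article recently, interpose a prompt
        ("Want to read the article first?") instead of completing the share.
        """
        if article_opened_at is None or now - article_opened_at > RECENT_READ_WINDOW:
            return "show_read_first_prompt"
        return "allow_share"

    # Example: a user who never opened the link gets the prompt; a recent reader does not.
    now = datetime.now()
    print(reshare_action(None, now))                        # show_read_first_prompt
    print(reshare_action(now - timedelta(minutes=5), now))  # allow_share
    ```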

    It can be argued that introducing friction impedes innovation and destroys value. This view, however, fails to account for the long-term liability of cognitive degradation. A market that incentivizes the depletion of user agency creates a systemic risk to the public. By implementing “Scope 3 Cognitive Emissions,” we operationalize the cost of this damage, forcing platforms to account for the mental impact of their design choices. We are currently operating with a dangerous separation between our technological power and our institutional controls (Wilson 2012, 7). Closing this gap requires a shift, moving away from design that exploits immediate impulse. We must engineer digital environments that protect, rather than degrade, the cognitive autonomy required for a free society.

    References
    Crawford, Matthew B. 2015. The World Beyond Your Head: On Becoming an Individual in an
    Age of Distraction. New York: Farrar, Straus and Giroux.

    Giddens, Anthony. 1991. Modernity and Self-Identity: Self and Society in the Late Modern Age.
    Stanford: Stanford University Press.

    Habermas, Jürgen. 1989. The Structural Transformation of the Public Sphere: An Inquiry into a
    Category of Bourgeois Society. Cambridge: MIT Press.

    Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

    Mark, Gloria, Daniela Gudith, and Ulrich Klocke. 2008. “The Cost of Interrupted Work: More
    Speed and Stress.” Proceedings of the SIGCHI Conference on Human Factors in Computing
    Systems, 107–110.


    Mattson, Mark P. 2008. “Hormesis Defined.” Ageing Research Reviews 7 (1): 1–7.

    Pigou, Arthur C. 1920. The Economics of Welfare. London: Macmillan.

    Schüll, Natasha Dow. 2012. Addiction by Design: Machine Gambling in Las Vegas. Princeton:
    Princeton University Press.

    Sunstein, Cass R. 2017. #Republic: Divided Democracy in the Age of Social Media. Princeton:
    Princeton University Press.

    Turel, Ofir, Qinghua He, Gui Xue, Lin Xiao, and Antoine Bechara. 2014. “Examination of Neural
    Systems Sub-Serving Facebook ‘Addiction’.” Psychological Reports 115 (3): 675–695.

    Twitter Inc. 2020. “Read before you Retweet.” Twitter Blog, September 24.

    United Nations Global Compact. 2004. Who Cares Wins: Connecting Financial Markets to a
    Changing World. New York: United Nations.

    Weil, Simone. 1951. “Reflections on the Right Use of School Studies with a View to the Love of
    God.” In Waiting for God, translated by Emma Craufurd, 105–116. New York: Harper & Row.

    Wilson, Edward O. 2012. The Social Conquest of Earth. New York: Liveright.

    World Resources Institute and World Business Council for Sustainable Development
    (WRI/WBCSD). 2004. The Greenhouse Gas Protocol: A Corporate Accounting and Reporting
    Standard, Revised Edition. Washington, DC: WRI/WBCSD.

    Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at
    the New Frontier of Power. New York: PublicAffairs.

  • Stop Asking AI Stupid Questions.  

    Stop Asking AI Stupid Questions.  

    Let’s be brutally honest. Most companies are getting AI wrong.


    They’re spending millions on a new technology and getting… faster spreadsheets? More polite chatbots? Marginally better marketing plans?

    The initial hype is clearly dead, and for many leaders, the ROI is looking dangerously thin.

    Why?

    Because we’re asking it stupid questions.

    We are treating the most powerful perceptual tool ever invented like a slightly smarter, cheaper intern. We ask it to summarize reports, fetch data, and write boilerplate emails. We are focused on optimization.

    Not only is this incredibly inefficient, but it is a (borderline embarrassing) failure of imagination.

    Granted, this isn’t really our fault. Our entire mental model for business was forged in the Industrial Revolution. We think in terms of efficiency and inputs/outputs. We hire “hands” to do work, and we look for tools to make that work faster.

    So when AI showed up, we put it in the only box we had: the faster intern box.

    We ask it Industrial-Age questions: “How can you write my emails faster?” “How can you summarize this 50-page report?” “How can you optimize my marketing budget?”

    These aren’t necessarily bad questions. They’re just lazy. They are requests for efficiency.

    They are limited, and they are all, at their core, asking how to do the same things, just faster. As long as we are only optimizing, we are missing the potential revolution entirely.

    The real breakthrough, I believe, will come when we stop seeing AI as an extension of our hands and start seeing it as an extension of our perception.

    Your intern is an extension of your existing ability. They do the tasks you assign, just saving you time. A new sense, like infrared vision, grants you an entirely new ability (think X-Men). It lets you see heat, a layer of reality that was completely invisible to you before.

    This is the shift we’re missing.

    AI’s true power isn’t in doing tasks we already understand, but in perceiving the patterns, connections, and signals we are biologically incapable of seeing. Humans are brilliant, but we are also finite. We can’t track the interplay of a thousand variables in real time. We can’t read the unspoken sentiment in ten million data points.

    AI can.

    When you reframe AI as a sense, the questions you ask of it change completely. You stop asking about efficiency and you start asking about insight. You stop asking, “How can I do this faster?” and you start asking, “What can I now see that I couldn’t see yesterday?”, like a hidden anxiety running through your market.

    So what does this shift from intern to “sense” actually look like?

    It comes down to the questions we are asking. The quality of our questions is the limiting factor, not the technology (for the most part).

    Look at marketing. The old paradigm, the “intern question,” is focused on automating grunt work:

    The Intern Question: “AI, write 10 social media posts and five blog titles for our new product launch.”

    This is a request for efficiency. It gets you to the same destination a little faster.

    But the “Organ Question,” the kind you ask of a new sense organ, is a request for perception:

    The Organ Question: “AI, analyze the 5,000 most recent customer support tickets, forum posts, and negative reviews for our competitor. What is the single unspoken anxiety or unmet need that connects them?”

    The first answer gives you content. The second answer gives you your next million-dollar product.
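
    To be clear about what answering that second question might involve under the hood, here is a rough sketch that clusters raw support tickets by theme so a human (or a model) can name the shared anxiety. The tickets, the library choices, and the cluster count are stand-ins for illustration, not a production pipeline.

    ```python
    # Rough sketch only: surface recurring themes in support tickets with TF-IDF
    # and k-means, then read the top terms per cluster to name the shared pain point.
    # The tickets, the cluster count, and the interpretation step are all assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    tickets = [
        "The app keeps logging me out and I lose my draft",   # hypothetical data
        "Lost my work again after the session expired",
        "Session timeout wiped my unsaved changes",
        "Why does export take so long on large files?",
        "Export froze at 90% and I had to start over",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(tickets)

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(matrix)

    # The most characteristic terms per cluster are the raw material an analyst
    # (or a language model) would read to articulate the "unspoken anxiety".
    terms = vectorizer.get_feature_names_out()
    for cluster_id, center in enumerate(kmeans.cluster_centers_):
        top_terms = [terms[i] for i in center.argsort()[::-1][:4]]
        print(f"cluster {cluster_id}: {', '.join(top_terms)}")
    ```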

    Let’s look at strategy. The intern question is about summarization:

    The Intern Question: “AI, summarize the top five industry trend reports for the next quarter and give me the bullet points.”

    This is a request for compression. You’re asking AI to act as an assistant, saving you a few hours of reading.

    But the “Organ Question” is about synthesis and signal detection:

    The Organ Question: “AI, find the hidden correlations between global shipping logistics, regional political sentiment, and our own internal production data. What is the emergent, second-order risk to our supply chain in six months that no human analyst has spotted?”

    The first question helps you prepare for the meeting. The second question helps you prepare for the future.
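
    Here is an equally rough sketch of that kind of signal detection: scan for lead-lag relationships between two data streams and flag where they line up. The column names, the synthetic data, and the lag window are invented purely for the example.

    ```python
    # Rough sketch of cross-stream signal detection: check whether one series leads
    # another by correlating at different lags. Column names, the synthetic data,
    # and the lag window are illustrative assumptions only.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    weeks = pd.date_range("2025-01-06", periods=52, freq="W")

    # Synthetic signals: port congestion leads production shortfalls by about 4 weeks.
    congestion = pd.Series(rng.normal(size=52), index=weeks).rolling(3, min_periods=1).mean()
    shortfall = congestion.shift(4) + rng.normal(scale=0.5, size=52)

    frame = pd.DataFrame({"port_congestion": congestion, "production_shortfall": shortfall})

    # Scan lags of 0-8 weeks and report where the relationship is strongest.
    for lag in range(9):
        corr = frame["port_congestion"].shift(lag).corr(frame["production_shortfall"])
        print(f"lag {lag} weeks: correlation {corr:+.2f}")
    ```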

    To summarize all of this word vomit: I believe the AI revolution isn’t stalling; it’s waiting. Waiting for us to catch up.

    What we are seeing is the friction of trying to fit a 21st-century sense into a 20th-century business model.

    We are all still pointing this new tech at our old problems: our inboxes, our reports, our slide decks. We are asking it to help us optimize the past, when its real ability is to help us perceive the future.

    The most critical skill for leadership in this new era will not be prompt engineering. It will be question design. Stop asking “How can AI do this job faster?” and start asking, “What new work becomes possible because we can now see in this new way?”

    So, ask yourself and your team: Are you using AI to get better answers to your old questions? Or are you using it to find entirely new questions?

  • Digital Citizenship in the Age of Misinformation

    Digital Citizenship in the Age of Misinformation

    Education for the 21st Century

    In an era where the digital realm has become our second home, the concept of citizenship has undergone a change. We have become denizens of a vast digital ecosystem. This shift demands a reimagining of education that goes beyond traditional paradigms, equipping future generations with the critical thinking skills needed to navigate the labyrinth of online information.

    This moment demands a shift in how we operate as a society. We have access to the largest bank of information and recorded perspectives in history, yet most of us are not using it effectively. Imagine the possibilities of a world where students are capable of untangling the often overwhelming complexities of online information and using it as a tool.

    This is a necessary evolution of our education systems. The digital age has gifted us with unprecedented access to information, but that access is a double-edged sword. As MIT Sloan researchers discovered, digital literacy alone isn’t enough to stem the tide of misinformation [3]. We need to cultivate a new breed of digital citizens who not only consume information responsibly but also create and share it ethically.

    To achieve this, we must weave digital citizenship into the fabric of our education system. Picture history classes where students don’t just memorize dates, but dissect the anatomy of fake news, tracing its origins and understanding its viral spread. Picture science courses that not only teach the scientific method, but also explore how misinformation can distort public understanding of crucial issues like climate change or vaccination [1].

    I’ll take it a step further. What if we created immersive digital simulations where students could experience the real-world consequences of spreading misinformation? Imagine a virtual reality scenario where a student’s decision to share an unverified piece of news triggers a chain reaction, allowing them to witness firsthand the ripple effects of their digital actions [5]. This approach could potentially transform abstract concepts into tangible experiences, making the lessons of digital citizenship both memorable and impactful.

    Moreover, I believe we need to shift our focus from mere technical proficiency to ethical mastery. In a time when a single tweet can spark a global movement or tarnish a reputation irreparably, understanding the ethical implications of our online actions is paramount. We should be fostering empathy, teaching students to see beyond the screen and recognize the human impact of their digital footprint [2]. Users have to recognize that they are interacting with people at the other end, not ones and zeros.

    The challenge of online misinformation is, at its core, a human one. The solution lies not in algorithms or AI, but in nurturing discerning, ethical, and adaptable users.

    As we stand at this crossroads, we have an opportunity to redefine education for the 21st century. By cultivating critical thinking skills, ethical awareness, and adaptability, we can empower the next generation to become not just consumers of technology, but masters of the digital world. We need an education system that doesn’t just keep pace with technological advancements but anticipates and shapes them, one that empowers students to shape the world of tomorrow.

    The future of society depends on it.

    Works Cited:

    1. AACSB. “Assessing Critical Thinking in the Digital Era.” AACSB, 6 June 2023, www.aacsb.edu/insights/articles/2023/06/assessing-critical-thinking-in-the-digital-era.

    2. Learning.com Team. “Digital Citizenship in Education: What It Is & Why It Matters.” Learning.com, 6 Feb. 2024, www.learning.com/blog/digital-citizenship-in-education-what-it-is-why-it-matters/.

    3. MIT Sloan. “Study: Digital Literacy Doesn’t Stop the Spread of Misinformation.” MIT Sloan, 5 Jan. 2022, mitsloan.mit.edu/ideas-made-to-matter/study-digital-literacy-doesnt-stop-spread-misinformation.

    4. Columbia University. “Information Overload: Combating Misinformation with Critical Thinking.” CPET, 27 Apr. 2021, cpet.tc.columbia.edu/news-press/information-overload-combating-misinformation-with-critical-thinking.

    5. Colorado Christian University. “The Importance of Critical Thinking & Hands-on Learning in Information Technology.” CCU, www.ccu.edu/blogs/cags/category/business/the-importance-of-critical-thinking-hands-on-learning-in-information-technology/.

    6. University of Iowa. “Digital Literacy: Preparing Students for a Tech-Savvy Future.” University of Iowa Online Programs, 19 Aug. 2024, onlineprograms.education.uiowa.edu/blog/digital-literacy-preparing-students-for-a-tech-savvy-future.