Tag: technology

  • Stop Asking AI Stupid Questions.  

    Let’s be brutally honest. Most companies are getting AI wrong.

    They’re spending millions on a new technology and getting… faster spreadsheets? More polite chatbots? Marginally better marketing plans?

    The initial hype is clearly dead, and for many leaders, the ROI is looking dangerously thin.

    Why?

    Because we’re asking it stupid questions.

    We are treating the most powerful perceptual tool ever invented like a slightly-smarter, cheaper intern. We ask it to summarize reports, fetch data, and write boilerplate emails. We are focused on optimization.

    Not only is this incredibly inefficient, but it is a (borderline embarrassing) failure of imagination.

    Granted, this isn’t really our fault. Our entire mental model for business was forged in the Industrial Revolution. We think in terms of efficiency and inputs/outputs. We hire “hands” to do work, and we look for tools to make that work faster.

    So when AI showed up, we put it in the only box we had: the faster intern box.

    We ask it Industrial-Age questions: “How can you write my emails faster?” “How can you summarize this 50-page report?” “How can you optimize my marketing budget?”

    These aren’t necessarily bad questions. They’re just lazy. They are requests for efficiency.

    They are limited, and they are all, at their core, asking how to do the same things, just faster. As long as we are only optimizing, we are missing the potential revolution entirely.

    The real breakthrough, I believe, will come when we stop seeing AI as an extension of our hands and start seeing it as an extension of our perception.

    Your intern is an extension of your existing ability. They do the tasks you assign, just saving you time. A new sense, like infrared vision, grants you an entirely new ability (think X-Men). It lets you see heat, a layer of reality that was completely invisible to you before.

    This is the shift we’re missing.

    AI’s true power isn’t in doing tasks we already understand, but in perceiving the patterns, connections, and signals we are biologically incapable of seeing. Humans are brilliant, but we are also finite. We can’t track the interplay of a thousand variables in real-time. We can’t read the unspoken sentiment in ten million data points.

    AI can.

    When you reframe AI as a sense, the questions you ask of it change completely. You stop asking about efficiency and you start asking about insight. You stop asking, “How can I do this faster?” and you start asking, “What can I now see that I couldn’t see yesterday?” You might, for example, perceive a hidden market anxiety.

    So what does this shift from intern to “sense” actually look like?

    It comes down to the questions we are asking. The quality of our questions is the limiting factor, not the technology (for the most part).

    Look at marketing. The old paradigm, the “intern question,” is focused on automating grunt work:

    The Intern Question: “AI, write 10 social media posts and five blog titles for our new product launch.”

    This is a request for efficiency. It gets you to the same destination a little faster.

    But the “Organ Question,” the question you ask of a new sense organ, is a request for perception:

    The Organ Question: “AI, analyze the 5,000 most recent customer support tickets, forum posts, and negative reviews for our competitor. What is the single unspoken anxiety or unmet need that connects them?”

    The first answer gives you content. The second answer gives you your next million-dollar product.

    Let’s look at strategy. The intern question is about summarization:

    The Intern Question: “AI, summarize the top five industry trend reports for the next quarter and give me the bullet points.”

    This is a request for compression. You’re asking AI to act as an assistant, saving you a few hours of reading.

    But the “Organ Question” is about synthesis and signal detection:

    The Organ Question: “AI, find the hidden correlations between global shipping logistics, regional political sentiment, and our own internal production data. What is the emergent, second-order risk to our supply chain in six months that no human analyst has spotted?”

    The first question helps you prepare for the meeting. The second question helps you prepare for the future.

    To summarize all of this word vomit: I believe the AI revolution isn’t stalling. It’s waiting for us to catch up.

    What we are seeing is the friction of trying to fit a 21st-century sense into a 20th-century business model.

    We are all still pointing this new tech at our old problems: our inboxes, our reports, our slide decks. We are asking it to help us optimize the past, when its real ability is to help us perceive the future.

    The most critical skill for leadership in this new era will not be prompt engineering. It will be question design. Stop asking “How can AI do this job faster?” and start asking, “What new work becomes possible because we can now see in this new way?”

    So, ask yourself and your team: Are you using AI to get better answers to your old questions? Or are you using it to find entirely new questions?

  • Digital Citizenship in the Age of Misinformation

    Education for the 21st Century

    In an era where the digital realm has become our second home, the concept of citizenship has fundamentally changed. We have become denizens of a vast digital ecosystem. This shift demands a reimagining of education that goes beyond traditional paradigms, equipping future generations with the critical thinking skills needed to navigate the labyrinth of online information.

    This moment requires a shift in our systems as a society. We have access to the largest bank of information and recorded perspectives in history, yet most of us are not using it effectively. Imagine the possibilities of a world where students are capable of untangling the often overwhelming complexities of online information and using it as a tool.

    This is a necessary evolution of our education systems. The digital age has gifted us with unprecedented access to information, but it has also presented us with a double-edged sword. As MIT Sloan researchers discovered, digital literacy alone isn’t enough to stem the tide of misinformation [3]. We need to cultivate a new breed of digital citizens who not only consume information responsibly but also create and share it ethically.

    To achieve this, we must incorporate digital citizenship into the fabric of our education system: history classes where students don’t just robotically memorize dates, but dissect the anatomy of fake news, tracing its origins and understanding its viral spread; science courses that not only teach the scientific method, but also explore how misinformation can distort public understanding of crucial issues like climate change or vaccination [1].

    I’ll take it a step further. What if we created immersive digital simulations where students could experience the real-world consequences of spreading misinformation? Imagine a virtual reality scenario where a student’s decision to share an unverified piece of news triggers a chain reaction, allowing them to witness firsthand the ripple effects of their digital actions [5]. This approach could potentially transform abstract concepts into tangible experiences, making the lessons of digital citizenship both memorable and impactful.

    Moreover, I believe we need to shift our focus from mere technical proficiency to ethical mastery. In a time when a single tweet can spark a global movement or tarnish a reputation irreparably, understanding the ethical implications of our online actions is paramount. We should be fostering empathy, teaching students to see beyond the screen and recognize the human impact of their digital footprint [2]. Users have to recognize they are interacting with people at the other end – not 1s and 0s.

    The challenge of online misinformation is a human one. And the solution lies not in algorithms or AI, but in nurturing discerning, ethical, and adaptable users.

    As we stand at a crossroads, we have an opportunity to redefine education for the 21st century. By cultivating critical thinking skills, ethical awareness, and adaptability, we can empower the next generation to become not just consumers of technology, but masters of the digital world. We need an education system that doesn’t just keep pace with technological advancements but anticipates and shapes them, empowering students to shape the world of tomorrow.

    The future of society depends on it.

    Works Cited:

    [1] AACSB. “Assessing Critical Thinking in the Digital Era.” AACSB, 6 June 2023, www.aacsb.edu/insights/articles/2023/06/assessing-critical-thinking-in-the-digital-era.

    [2] Learning.com Team. “Digital Citizenship in Education: What It Is & Why It Matters.” Learning.com, 6 Feb. 2024, www.learning.com/blog/digital-citizenship-in-education-what-it-is-why-it-matters/.

    [3] MIT Sloan. “Study: Digital Literacy Doesn’t Stop the Spread of Misinformation.” MIT Sloan, 5 Jan. 2022, mitsloan.mit.edu/ideas-made-to-matter/study-digital-literacy-doesnt-stop-spread-misinformation.

    [4] Columbia University. “Information Overload: Combating Misinformation with Critical Thinking.” CPET, 27 Apr. 2021, cpet.tc.columbia.edu/news-press/information-overload-combating-misinformation-with-critical-thinking.

    [5] Colorado Christian University. “The Importance of Critical Thinking & Hands-on Learning in Information Technology.” CCU, www.ccu.edu/blogs/cags/category/business/the-importance-of-critical-thinking-hands-on-learning-in-information-technology/.

    [6] University of Iowa. “Digital Literacy: Preparing Students for a Tech-Savvy Future.” University of Iowa Online Programs, 19 Aug. 2024, onlineprograms.education.uiowa.edu/blog/digital-literacy-preparing-students-for-a-tech-savvy-future.