AI in 60 Seconds 🚀 - The Worst AI You'll Ever Use (And We're Not Ready for a Better One)
Apr 8, 2026

The AI you are using today is the worst AI you will ever use. And it's only the beginning. But the technology is not the bottleneck. We are.

Over 70 million adults in the U.S. cannot reliably understand what they read. And before you think "that is not me," thirty years of point-and-click software and social media have quietly eroded how all of us communicate. This issue is about the three human skills that determine whether AI helps you or misleads you. None of them is technical.

🎧 Go Deeper: This week's companion 15-minute briefing episode digs into the conversation that started it all. Apple Podcasts | Spotify

🔊 AI Is an Amplifier

Think of a microphone. It does not make a bad singer good. It makes a good singer louder, and a bad singer louder too. AI works the same way.

Strong reading skills? AI makes you faster and sharper. Weak reading skills? AI makes you confidently wrong, at scale. Good management practices? AI supercharges them. Broken processes and poor communication? AI scales the dysfunction faster than any human ever could.

AI does not fix your weaknesses. It exposes them. And then it amplifies them.

📉 The Fault Lines Nobody Is Talking About

The latest results from the Program for the International Assessment of Adult Competencies (PIAAC 2023), administered in the U.S. by the Department of Education, paint a picture that every AI leader needs to see:
Source: PIAAC 2023 / National Center for Education Statistics; APM Research Lab

Now run those numbers through the amplifier. What happens when millions of people who cannot critically read a paragraph start relying on AI to make decisions? They do not just get bad answers. They get bad answers and believe them completely. And they pass those answers along, to colleagues, to clients, to voters.

Manipulation used to be slow and expensive. Now it is fast and fluent. AI does not just impact productivity. It undermines society's ability to function when its members cannot tell what is real.

This is not just an education problem. It is the foundation beneath every AI deployment, every agent rollout, every digital transformation. And it is cracking.

🎓 Some Institutions Are Getting It Right

Not everyone is looking the other way. A growing number of universities are embedding AI literacy into their core curricula, and the approach matters. Inside Higher Ed (April 3, 2026) profiled five institutions, and the pattern is clear: Cornell built a discipline-independent critical thinking module for the AI era. Agnes Scott College is launching an AI literacy curriculum for every first-year student starting Fall 2026. Others, including Bryn Mawr and Richmond, are embedding humanistic inquiry, ethics, and reasoning across their programs.

They are teaching students to think about AI, not just to use AI. The tools will change every six months. The thinking skills will not.

🧠 The Three Skills That Determine Everything

If the tool is never going to be the bottleneck, three humanistic skills determine whether AI helps you or misleads you. None of them is technical.
This is not prompt engineering. This is the foundational human skill of expressing your thinking so that others, and now machines, can act on it.

Our own data tells the same story from the AI readiness side. Across 370,000 individuals in 70 countries:

Communication Skills (Clear prompts)
Critical Thinking
Data Literacy
Ability to Detect AI Errors
Source: AI4SP Digital Skills Compass, AI Compass, and other research instruments; 700,000+ individuals across 70 countries
This is not just about the 73 million adults reading below a 3rd-grade level. The communication gap is universal. We see it in boardrooms, in doctoral programs, in global enterprises.

Thirty years of point-and-click software trained us to interact through menus, keywords, and search bars. Twenty years of social media and 30-second videos compressed how we express complex ideas. We stopped writing in full thoughts. We stopped reading beyond headlines. The muscle that lets us articulate what we actually mean, with precision and context, has atrophied.

And now AI demands exactly the skill we let erode: the ability to communicate clearly, not to a search bar, but to a system that mirrors human conversation.

We see this constantly. A senior leader at a Fortune 500 company was convinced that Claude, ChatGPT, Gemini, and Copilot were all useless. When we looked at the pattern, nine out of ten bad results traced back to how he communicated with the tools: vague instructions, missing context, and ambiguous asks. The same gaps that had shown up in feedback from his team for years were now visible in every AI interaction. The tools were not failing him. His communication skills were.

🎹 The Piano Problem: Why Most AI Rollouts Fail

Here is the typical corporate AI rollout: IT picks a platform, builds an online learning course, maybe schedules a lunch-and-learn, someone creates a Slack channel called "AI Tips" and an "AI taskforce" that focuses on features and tech jargon, and leadership says, "go play with it." Six months later, adoption is uneven, results are disappointing, and everyone blames the technology.

You would never hand someone a piano and a YouTube tutorial and then blame Steinway when they cannot play. But that is exactly what we do with AI. This is not a technology rollout. It is a change management challenge.

We worked with a global retailer on a three-phase engagement that looked nothing like a typical AI deployment.

Phase 1: Assessment
We ran teams through the AI Compass, followed by real exercises using AI-generated content. Can they spot the false claim? Can they identify the unsupported assumption? Can they articulate the right question? Most teams were shocked by their own gaps.
Phase 2: Skill Building
Before anyone touched an advanced AI workflow, teams worked on how to read AI output with healthy skepticism and a verification instinct. In one exercise, before touching the AI, teams had to debate and define a business problem using only pen and paper. You had to think clearly before asking a machine to think for you.
Phase 3: The Unlock
Once the fundamentals were solid, we stopped guiding. We gave teams explicit permission to experiment. Not tutorials. Not use cases. Open exploration. Try something you are not sure AI can do. Fail. Adjust. Try again. Most people never experiment because nobody told them it was safe to. And the fear of looking foolish in front of a machine is more real than anyone admits.
👔 The Management Gap

We talked in a previous episode about organizations hiring more AI agents than people and not knowing how to manage them. Right now, middle managers are the fulcrum of AI adoption. They have to evaluate AI-assisted work from their teams. They have to coach people whose skill gaps just became visible. They have to run performance conversations about judgment, not just output. And most of them have zero preparation for any of that.

When a team member uses AI to draft a client proposal and misses a false assumption, that is a coaching moment. But if the manager cannot catch it either, the bad output reaches the client.

And here is what should concern us most: every unchallenged AI output that becomes a decision becomes the basis for the next decision. Bad judgment compounds. Across teams. Across industries. In healthcare, in hiring, in financial services. Systemic failure, hiding behind the appearance of efficiency.

✅ What You Can Do This Week

If you manage people: Take the last AI request that disappointed your team. Sit down and rewrite it together. Full context. Clear constraints. Specific outcome. Then compare the results. That gap is your training roadmap. Do that every week. Not to catch mistakes. To sharpen how your team thinks before they ask.

If you control AI training budgets: Stop spending it all on features and prompt engineering. Those change every quarter. The best return on investment in enterprise AI comes from communication skills, analytical thinking, and the ability to frame a problem clearly before you ever open the tool.

Case study (to be released soon): We worked with a supply chain team that had completed an AI certification program through their AI vendor but still struggled to gain traction. The issue was not the tools. It was foundational. So we ran a six-week engagement focused entirely on problem framing, structured communication, and critical evaluation of output. Their results improved more in six weeks than in the previous six months.

This belongs under people development, not IT training. And most leaders have that backward. A $50,000 tool training program that no one applies is a waste. But focus that budget on communication and critical-thinking fundamentals, and it changes how every tool, model, agent, and platform is used. The skills transfer because they are human skills, not product skills.

❓ The One Question

So ask yourself: what am I amplifying? The tools will keep getting better. That is inevitable. The question is whether we will. Better AI does not mean better results. Better communicators and better thinkers do.

If this article made you wonder where to start, that is exactly what the AI Compass is built for. Our global partner network in the US, UK, Spain, Brazil, and Australia can help you get started.

Luis J. Salazar | Founder & Elizabeth | Virtual COO (AI Agent)

🔗 Resources
Sources:
PIAAC 2023 / National Center for Education Statistics
APM Research Lab: "Reading the numbers: 130 million American adults have low literacy skills"
Inside Higher Ed: "How 5 Colleges Are Approaching AI" (April 3, 2026)
Agnes Scott College: AI Literacy Curriculum
Cornell: Critical Thinking in the AI Era
AI4SP proprietary research across Fortune 100 advisory engagements
📣 Feel free to use this data in your communications, citing "AI4SP" and linking to AI4SP.org.

📬 If this email was forwarded to you and you'd like to receive our bi-weekly AI insights directly, click here to subscribe: https://ai4sp.org/60