
The Ethics of Keeping Up

On the obligation to learn AI, the compounding advantage of clear thinking, and why taking care of your brain has never mattered more.

10 min read
Written by
Luke Anderson
Co-founder, CorridorIQ
Edited by
Zave Greene
Co-founder, CorridorIQ

Hello, blogging world. This is certainly new territory for me; most of my interests lean toward the computational side of things, and, like many engineers, I found English class a terror. My name is Luke; I'm 19 years old and studying mechanical and aerospace engineering at UVA. As I hope to show, I have a genuine love of knowledge and make a deliberate effort to cultivate learning as a skill.

It seems only fitting to talk about AI right now, as someone who adopted it when it entered the mainstream during my sophomore year of high school. I offer my opinion simply as someone who has followed AI closely for as long as it has been remotely mainstream, nothing more and nothing less. Because I have been using AI throughout my formative years, my perspective may offer some insight to those who came to LLMs at a different point in their lives.


Axioms

The following are the axioms upon which I build my case: anyone wanting to differentiate themselves in the future has an obligation to learn AI today, if they haven't already started.

  1. The world will never return to a state without AI. Even if the harshest critics of AI's performance, adoption, and intelligence are correct, the tools are never going away, and continued innovation will eventually bring them to where proponents claim they are now. I believe they are further along than most of us think, considering that AI is already used in military operations and is beginning to recursively improve itself. It's equally horrifying and exciting.

  2. AI, like any tool of similar scale, has catastrophic use cases as well as eudaimonic ones. Fire keeps us warm and burns cities. The printing press spread the Bible and propaganda. AI will cure diseases, and it will be weaponized. This isn't a reason to avoid it but a reason to make sure the right people are proficient with it. The tools don't care who uses them (maybe AI will?). That means the people who do care have an obligation to be excellent with them, if only as a defense mechanism. But I'd truly go further than defense. If you have the capacity to learn these tools and use them to build something genuinely good that elevates the human condition, and you choose to sit it out, that isn't caution. It treats stillness as a virtue in a world where inertia is the real risk. Whichever side of the coin you're on, acting in selflessness or selfishness, these tools allow you to profit either way.

  3. AI is a supplement to human thinking, not a replacement for it. This is the pushback I hear most often when people see me use it for help with schoolwork. They see AI generating essays and writing code and assume I don't have to think as hard anymore. That's exactly wrong. AI amplifies what's already there, but it doesn't create something from nothing. If you don't have clear thinking, AI gives you faster unclear thinking. If you don't have good judgment, AI gives you more confidently bad judgment. The people who coast on AI-generated mediocrity will quickly plateau, and frankly, it's obvious when you read an AI-written essay. The people who pair AI with genuinely sharp thinking will be nearly impossible to compete with.

  4. The quality of AI's output is largely tied to the quality of your communication. I only began to understand this recently, and I think it has massive implications. For the first time in history, there's a powerful tool where your ability to use it scales directly with how clearly you can articulate what you want. Not your technical skill. Not your credentials. Your communication. Learning to code was about syntax and perhaps elegance. Learning to use AI is about learning to think out loud clearly and think in systems. AI has made communication and articulacy much more potent than they used to be, and as someone who did competitive debate but is on the introverted side, it's really pushed me to places outside my comfort zone in the best way possible.


Speed and Obligation

Currently the key differentiator is speed. As the co-founder of CorridorIQ, that means the speed of innovation; as a student, it means the speed of learning and adaptation. The maximization of knowledge and wisdom comes not from having all the answers memorized, but from knowing exactly where to go to find and learn them at scale. Further still, it means being willing to discard what you learned last week, without ego, when the tools change underneath you. Speed of knowledge acquisition keeps you plastic. Refining learning as a skill is broadly applicable, but in the world of AI it also means keeping pace with how quickly the LLMs themselves are learning.

I see an obligation to use the tools, talents, and opportunities presented in life to provide genuine value and good. You don't get to bury your talents in the ground; that's irresponsible, and complacency kills. The inflection point we're in with AI falls squarely under that obligation to do good with what we've been given.

I recognize that this sounds like a strong claim from a 19-year-old engineering student, and I'm aware that plenty of people are doing extraordinary, meaningful work that has nothing to do with AI. My argument isn't that everyone needs to become a developer. It's that whatever your work is, these tools can probably make it reach further, and ignoring that has a cost that can quietly compound.

The rapidity of innovation means the window for any single competitive advantage is shrinking. What was learned last month is already depreciating. The person who mastered a framework three weeks ago isn't meaningfully ahead of the person just starting; before long, both will be learning something new alongside each other. The only sustainable advantage is being someone who can learn at the speed the world is now moving.

As AI matures, the gap between experienced users and newcomers will widen. Right now, the playing field resets constantly because the tools keep changing. Eventually they'll stabilize, and the people who spent years building intuition for how to think alongside AI will have a compounding advantage that's much harder to close.


Taking Care of the Brain

If speed of learning is the differentiator, then the thing that does the learning is the most important thing you own.

The brain. Its clarity, its sharpness, its ability to focus, synthesize, and make connections and abstractions are now what differentiate output. AI handles the grunt work: the data cleaning, the research compilation, the formatting. What it can't handle is the thinking.

And this is one place where I feel it's entirely rational to throw shade at AI use: if someone is using AI to generate their entire essay or solve that integral for them, it falls right into what I think could be a catastrophic use case. Offloading cognitive load this way is, in my view, 100x worse than any amount of doomscrolling. The moment we allow AI to think for us is the moment I hope I'm in a cabin in the mountains counting snowflakes, because that temptation is only going to grow and, unchecked, it will lead to atrophy.

Surprisingly, journaling became an amazing mechanism to combat this. It's been a game changer for me, and I ought to give credit where it's due: my cousin and co-founder Zave is the reason I began journaling at all. I watched how he drew things out, whiteboarded, and sketched his thoughts, and how that correlated with his articulacy and focus. I learned, and am still learning, from him how to be brutally precise, with actionability, measurability, and brevity at the center, rather than overthinking, something I can be detrimentally talented at.

That standard is what made journaling useful for me, and it produced something I didn't anticipate: my journal entries became remarkably effective inputs for AI. Because Zave pushed me to write with a higher level of precision, I ended up building a strong mechanism for compounding knowledge through an LLM without even intending to. The LLM remembers what I've told it, connects it to new problems, and surfaces patterns I wouldn't have seen alone. A human taught me a discipline, and AI revealed a dimension of it I hadn't expected, in turn making me take the discipline more seriously. But that loop only works if I've done the thinking first. Garbage in, garbage out, and for me, "garbage" means lazy thinking or letting AI do the thinking.

By extension, guarding your mind and being intentional about your environment has never carried such kairos as it does now. I find this to be a net positive, although I'll admit I should be arriving at these conclusions even without the existential prospect of AI ruling the world. Still, it was the capability of these AI tools that pushed me to take these changes more seriously, and I'd rather reach the right place through an imperfect path than stay comfortable in the wrong one.


What I've Watched AI Do to Communication

I want to come back to that fourth axiom because I think it's the most underappreciated insight in this entire conversation.

To write a good prompt, I have to force myself to clarify my own thinking. What exactly am I trying to accomplish? What does the end state look like? What are the constraints? What have I tried that didn't work? Those are the same questions that make you effective in every professional and personal context I can think of. These questions aren't exclusive to AI; any good mentor or hard problem forces them. But AI puts you through that process constantly, since even a basic prompt requires you to define what you actually want.
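As a concrete illustration, those four questions can be turned into a small reusable scaffold. This is just a hypothetical sketch in Python; the function and field names are mine, not any real prompting library. The point is the discipline of answering the questions before you ever touch the model.

```python
# A hypothetical sketch: force yourself to answer the four clarifying
# questions before prompting. Nothing here is a real library; it's
# just the discipline, written down.

def build_prompt(goal: str, end_state: str,
                 constraints: list[str], tried: list[str]) -> str:
    """Assemble a structured prompt from answers to the four questions."""
    lines = [
        f"Goal: {goal}",
        f"Desired end state: {end_state}",
        "Constraints:",
        *(f"- {c}" for c in constraints),
        "Already tried (didn't work):",
        *(f"- {t}" for t in tried),
    ]
    return "\n".join(lines)

prompt = build_prompt(
    goal="Debug a truss solver that diverges",
    end_state="Solver converges on the textbook example",
    constraints=["Pure Python", "No new dependencies"],
    tried=["Halving the step size", "Re-checking boundary conditions"],
)
print(prompt)
```

If you can't fill in all four fields, that itself is the signal that the thinking isn't done yet.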

The more I've worked with AI, the better I've gotten at articulating complex ideas to people (hopefully shown in this blog), not just to machines. The discipline of clear communication transfers everywhere. And the people who dismiss prompt engineering as a gimmick are missing the point. It's not really about magic words; it's about the excruciating process of putting your thoughts on paper in a decipherable way.

The distilled argument is that AI incentivizes maximum communication. It rewards precision. It punishes vagueness. And in doing so, it has changed and reprioritized the skills I am choosing to build.


Where I Think This is Heading

I don't think people are ready for how fast the next two years are going to move. Industries that have operated the same way for decades are about to be restructured by small teams with big ideas and AI-powered execution. The competitive advantage won't be capital or headcount, but instead speed of learning, clarity of thinking, and the communication skills to direct these tools with precision.

The tools are here. The window is open. Take care of your brain, think clearly, communicate precisely, and never stop learning. Supplement that with AI and, through sheer honest volume of work, failure becomes unlikely. The capacity to learn and build in this moment has moved from luxury to responsibility.
