In January of 2026, I was invited to speak at a small business event in my hometown of McAlester, OK, and I really wrestled with what to talk about. I love to experiment, and my first thought was, since this is a small business conference, to see if I could start a totally new business from scratch and document it as a case study for this talk. Those who know me are not surprised to hear me say that, obviously, I went all the way down that rabbit trail and have a full business plan and website wireframe drafted up for that “new business” that may or may not come to be.

But there was something else that crossed my mind as a potential topic, and as the weeks passed and the event got closer, to say it kept being brought to mind would be putting it lightly. It was more like hearing a tornado siren – something is coming. Something big, unstoppable, potentially devastating – and also something that I just can’t help but go out on the porch and watch.
So, as much as I would have loved to talk about experimental business ideas, I decided instead to research and talk about something that seemed more important and pressing to me, and that is the nature of human work in the age of The Machine; specifically, in the age of Artificial Intelligence.

First, a little background to explain why I’m sitting here musing about AI.
I grew up in McAlester, Oklahoma. I graduated high school and attended the University of Arkansas in Fayetteville, where I married my high-school sweetheart, started a family, remodeled our first house, and graduated with a degree in Middle East Studies (yes, in that order).
When I got to college, I was pretty sure what I wanted to do most was get out of college. By the time I graduated, I knew it without a doubt. I didn’t really know what I wanted to do afterward, but I stumbled upon an opportunity to help open a coffee shop and learn about coffee – from the roasting process to meticulous espresso calibration – from some really talented and just downright good people. This was also where I got to start using digital technology in my work – we needed a website for the shop, so I started tinkering. I still remember the first time I got so absorbed in trying to tweak some designs on our website that I looked up and realized it was 8pm and I had forgotten to eat all day.
I really loved the coffee world, but my wife was staying home with our new baby and I was trying to make ends meet earning $12 an hour at my full-time job and two other part-time gigs. I’ll never forget sitting down with a guy from our church who had a financial ministry and showing him our budget. He sort of paused, and said thoughtfully, “You know, when things are in the red, you can either spend less or make more money. You can’t spend any less.”
So, I found a job in a marketing & research company where I worked with an awesome team and learned a ton about communicating with clients and managing projects, and I got to do some data analysis. This was my first real experience with data – I got to use spreadsheets to tell a story. Usually that was a riveting story about which Wal-Mart locations had incorrectly rolled out a promotional end-cap design for a client’s new toothpaste flavor, but hey – that’s data.
In 2012, we moved back to McAlester – a move I never anticipated and was actively against during my entire youth. But having kids does something to you, and for the first time, I was interested in returning to where I grew up. Seeing it with fresh eyes, wondering what it could be. It didn’t take us long to get involved in the community and ask each other the question: what do we want this place to look like in 30 years?
I worked for a local church for several years as a “family minister” and was in charge of their small groups, which was not a hard sell for me. I loved, and still love, putting my energy towards things that foster community. I made some lasting and profoundly impactful relationships during my time in this role, and still found space for my “tech tinkering” to be useful – rebuilding the church website while I was employed there.
As I continued finding little web projects to work on, I thought to myself, “I like this. I wonder if someone would pay me to do it.” So I brushed up my resume, signed up for a trial month at flexjobs.com, and quickly found a promising listing with a small web development agency in New Jersey. I told the owner during my interview that “I haven’t done exactly what you’re looking for here, but I really think I could.” And she was like “alright, let’s give it a shot.” So my trial period was developing a website that I think I quoted like 20 hours on. It took me like 40, but I was thrilled – “you mean I’m getting paid to learn how to do this stuff?” After that first project, Brandi, the owner, asked me how many projects I’d like to take on, and I said “how many have you got?”

In August of 2015, I became a full-time web developer. Around the same time I started contracting with Brandi, I started building websites for McAlester clients as Happy Design Company. This allowed me to build up a local clientele slowly and fill the gaps with fairly consistent contract work. Over the years, the Happy Design workload grew to the point that I could focus solely on that, and since then, my time has been spent almost entirely on building and maintaining websites for McAlester area small businesses and organizations.
This work has been a great fit for me – I love problem-solving. I love having a flexible schedule. I’m kind of allergic to wearing “real clothes”. I’m kind of addicted to starting new things and coming up with new business ideas, and as a web developer working with small businesses, I get to scratch that itch by making decisions around the idea, “what would I do here if this were my business?” Best of all, I get the satisfaction of helping people who are making this place – my home, and my kids’ home – a better place to live, to work, to do business. I am lucky to work with some truly excellent people and to feel like we are partners in making McAlester the kind of place we want it to be.
Rural small business, right?
A few years ago, I started seeing some cracks in my business model. I noticed that the thing that I was “selling” – websites – was becoming more and more commodified. As recently as 10 years ago, DIY website builders were mostly a joke. But it started to become more reasonable for a tech-savvy business owner (or more realistically, a business owner with a tech-savvy niece or neighbor) to be able to visit Squarespace or WordPress or Shopify and create something that would serve a brand-new business just fine.
“Hmm,” I thought to myself. “What exactly is it that I offer my clients?” This is an important but dangerous question to ask yourself. I don’t recommend doing it (I do recommend it, but… be prepared for the answer). At this point, I was usually building someone’s second or third website – the one they wanted after building their own and learning what they wanted to be different – but the question still stood. “What is the value I offer?” As I looked around at the way tech seemed to be moving, I noticed the “Everything-as-a-Service” train was really picking up steam. (You’ve probably noticed this in your own life. You used to buy software in a box. You used to buy CDs. Now everything’s a subscription. Your music, your movies, your accounting software. That same wave was hitting my industry.) As I looked at my relationships with local clients, I realized that what I thought of as the “product” – the websites – was becoming less and less of the point. Websites were becoming more consumable than ever, and the same website you had built 3 or 4 years ago may not work for your business next year – not just because your business may have grown or changed, but because EVERYTHING in technology is changing faster than ever. I realized that the value I offered my clients was not primarily the website I built for them, but the fact that I was their “web guy” and could offer continued support as their website needs continued to change over the years.
So, I pivoted. I started thinking of Happy Design Company as a provider of “Websites-as-a-Service”, and framing my projects around ongoing retainers. What this did from a structural perspective was allow me to shift more of my fees into the ongoing partnership and away from the up-front development costs – a move I made intentionally to make custom websites more accessible to scrappy small businesses who may have decent cash flow but lack capital. (For a time, I even shifted a few projects all the way into monthly fees with $0 up front. If you don’t already know why I stopped doing that, I encourage you to offer your products or services for free and see what kind of clients you attract. You’ll understand pretty quickly.)
Over the last few years, this has become my operating model. At the core, we offer custom website design, development, and support. But it’s a small town and I’m terrible at saying no, so that core has occasionally extended into brand design, product design, social media marketing, advertising, photography, videography, audio engineering, podcast production, copywriting, consulting, and even screenprinting. However, all of these services are now packaged within the same philosophy – we’re your “team”. The deliverables matter less than the fact that it’s “us” working for “you”, and you can depend on us. People place a premium on the ability to trust that you can and will take care of them.
[SLIDE 5 – CORE SERVICES]
With this pivot, I was able to reframe my services in a way that wasn’t married to specific technological tools. Regardless of what our client projects demanded, the very core of what made our services valuable and future-proof was technical expertise (the ability to write code that solves problems), broad experience (knowing a little about a lot of things), and reliable availability (ready when you need us, with fast turnaround). No single technology could ever take our place.
[SLIDE 6 – CHATGPT]
November 30th, 2022. OpenAI releases ChatGPT to the public.
[SLIDE 7 – AI LOGOS]
Within months, every major tech company is in an arms race. Google rushes out Bard. Microsoft embeds AI into Office and Bing. Meta open-sources their models. Anthropic releases Claude. It’s not one product — it’s an entire industry pivoting simultaneously and pouring hundreds of billions of dollars into this new magic box.
In early 2023, ChatGPT can hold a conversation. It can write you a passable email. It can generate a poem about your dog. If you ask it to make you an image, you’ll get something that looks almost right — except the text is gibberish and human subjects might have anywhere from three to eight fingers. We laugh at it. It feels like a party trick. It also hallucinates constantly — it will cite legal cases that don’t exist, invent scientific papers, tell you something completely false with utter confidence.
In March of 2023, GPT-4 arrives and passes the bar exam — not perfectly, but well enough to be admitted to practice in most states. It passes the medical licensing exam. It scores in the top percentiles of the SAT. And, importantly, this jump from GPT-3.5 to GPT-4 happens in about four months. The previous version scored in the bottom tenth of bar exam takers. Four months later, it’s passing.
In 2024, these capabilities start to explode. It’s not just text anymore. Image generation goes from “amusing but obviously fake” to photorealistic. Video generation goes from jerky five-second clips that need heavy editing to coherent, physically plausible scenes — water flowing correctly, hands slicing a tomato, camera movements that look like a professional shot them. AI-generated music goes from novelty to something that sounds indistinguishable from what you might hear on the radio. And on the coding side — on benchmark tests of real-world software engineering problems, AI systems go from solving only 4% in 2023 to over 70% by the end of 2024.
Fast forward to today: video generation tools are producing clips that most viewers can’t distinguish from real footage. AI agents can now navigate a computer — clicking, typing, opening files, browsing the internet — completing multi-step tasks end to end. The models aren’t just answering questions anymore. They’re doing work. AI is reading chest X-rays and flagging potential tumors with accuracy that matches experienced radiologists. It’s reviewing contracts and flagging liability clauses faster than a junior associate could read the first page. It’s writing and debugging production code, doing customer support triage, drafting financial analyses — tasks that until very recently required years of training and credentialing. And every month, they get noticeably better.
The thing that makes this different from every previous technology isn’t just what it can do — it’s the speed of the progression. The telephone took 75 years to reach 100 million users. Television took 25. The internet took 7. The iPhone took about 5. ChatGPT did it in 2 months. And the tool you try today will be measurably worse than the version available six months from now. We don’t have good instincts for comprehending something that’s on this trajectory.
[SLIDE 8 – METR LOGARITHMIC]
METR is a research nonprofit that scientifically measures whether and when AI systems might threaten catastrophic harm to society. In March of 2025, they released a study showing that the length of tasks that AI agents can complete autonomously has been doubling roughly every 7 months for the past 6 years. In mid-2020, frontier models could reliably handle tasks that take a human about 9 seconds. By early 2023, about 4 minutes. By late 2024, about 40 minutes. By late 2025, the best models were completing tasks that take human experts several hours.
[SLIDE 9 – METR LINEAR]
If this trend continues for another 2–4 years, AI agents will be capable of independently completing week-long or even month-long projects.
Jack Dorsey’s firm Block (formerly Square) announced in February 2026 that it is cutting around 4,000 employees, or 40% of its workforce, to shift to “smaller, highly talented teams” using AI to automate work.
In January 2026, Amazon confirmed the elimination of roughly 16,000 roles (nearly 10% of its corporate workforce) to reduce bureaucracy and invest in AI.
Meta, the company behind Facebook and Instagram, is planning sweeping layoffs that could affect 20% or more of its roughly 79,000 employees, potentially impacting over 15,000 people.
[SLIDE 10 – NEW JOB STARTS 22-25]
You may think “this is clickbait headline stuff. It’s all happening on the coasts in big tech.” It’s not. Forty-one percent of employers worldwide say they plan to reduce their workforce because of AI within the next five years. But here’s the thing — most of them aren’t making dramatic announcements. They’re just not refilling positions. The job doesn’t get eliminated. It just quietly doesn’t get posted the next time someone leaves. Employment growth in fields like graphic design, office administration, marketing consulting, and customer service has already fallen below trend. And the workers who feel it first aren’t the senior people — it’s the youngest ones. Entry-level job postings are down 15% year over year, and workers aged 22 to 25 in the most AI-exposed occupations have experienced a 13% decline in employment since 2022.
[SLIDE 11 – VESPASIAN]
This phenomenon of technological unemployment isn’t new, and neither is our anxiety about it. Nearly two thousand years ago, the Roman emperor Vespasian was presented with a new machine that could move heavy stone columns cheaply. He paid the inventor for his trouble — and then refused to use the device, saying ‘You must allow my poor hauliers to earn their bread.’ We have been having this conversation for a very long time.
[SLIDE 12 – POWER LOOM]
In the early 1800s, the power loom arrived in England. Skilled hand-loom weavers — craftsmen who had spent years learning their trade — watched their wages fall from 40 shillings a week to nearly zero. By the 1840s, the trade was effectively dead.
In 1900, 41% of the American workforce was in agriculture. In 1917, Henry Ford introduced the first mass-produced tractor that an average farmer could actually afford.
[SLIDE 13 – FORDSON]
Ford said he wanted to ‘lift the burden of farming from flesh and blood and place it on steel and motors.’ He did. By 1960, the agricultural workforce had fallen to under 8%. Millions of families lost the work that had sustained their people for generations.
By 1950, telephone operator was one of the most common jobs for women in America — about 1 in 13 working women. That’s roughly the same share of the female workforce as schoolteachers today. At its peak, 342,000 people held that job.
[SLIDE 14 – AUTOMATED EXCHANGES]
But the automation had actually started decades earlier — AT&T began replacing operators with mechanical switching in the 1920s. For a while, demand for phone calls grew so fast that operators kept getting hired anyway. Eventually, the automation caught up. Today, there are fewer than 1,500 telephone operators in the entire country.
And before there were computers, there were computers — people whose job title was literally ‘computer.’ Hundreds of women, most of them college-educated, doing complex mathematical calculations by hand for NASA and its predecessor.
[SLIDE 15 – IBM 7090]
In 1958, the computing pools were formally disbanded. Electronic computers had arrived — faster, cheaper, tireless. The occupation, as it existed, was gone.
[SLIDE 16 – HUMAN COMPUTERS]
So who’s next? Accountants? Paralegals? Customer service reps? Real estate agents, graphic designers, insurance adjusters, loan officers?
Maybe web developers?
Let’s pause.
For someone who works pretty deeply with technology, I have a complicated relationship with it. On one hand, I think it’s made a lot of people’s lives easier and arguably better. On the other hand, I tend to see the technological “progress” of human history as something that does not necessarily have our best interests in mind – something that continues over millennia to whittle away at what makes us human. Technology does often make our lives easier, but only in hindsight can we see the price tag – for our health, for our relationships, for our land, for our species.
[SLIDE 17 – AGAINST THE MACHINE]
(Side note – if that idea jibes with you and you feel a sense of general dis-ease about AI and other tech, you might be interested in Paul Kingsnorth’s “Against The Machine: On the Unmaking of Humanity”).
I constantly feel this tension. In my work, I want to use technology for “good” – for instance, to help make my clients’ businesses more successful and more efficient. But I also want everyone to get off their damn phones and talk to a human being. Sometimes I look at a project and think, “how can I build this website in a way that it barely gets used at all?” Which, I think, isn’t a terrible approach – but my motives aren’t just efficient user experience, “get people in and out quickly”… I also really don’t want to contribute to the slop of the internet, the “content for content’s sake” rat race.
I mention all this to say – I’m coming at this as someone who both loves technology and hates technology. And, if I’m honest, I’m not sure how I feel about how tech is affecting and will affect the future of human work.
About three weeks ago, Anthropic, the company behind the LLM Claude, published an article on the “Labor market impacts of AI”. They referenced a database called O*NET, which thoroughly catalogs knowledge, skills, and abilities for 871 different occupations. They compared this against their own usage data (how people are using Claude) and something called “task-level exposure estimates” from a 2023 paper, which estimated whether it is theoretically possible for an LLM to complete a given task at least twice as fast.
This is an interesting study – in part because it really isn’t that alarming.
First, they introduce a new measure of AI displacement risk called observed exposure, a term that combines theoretical LLM capability and real-world usage data. Importantly, it weights automated work-related uses more heavily than what it calls augmentative. Here’s what this looks like – say you’re an executive assistant.
[SLIDE 18 – ANTHROPIC EXECUTIVE ASST]
An example of an automated work-related use would be attending a meeting and recording minutes. As of today, there are multiple very efficient methods to do this with almost no human touch, so it counts as “automated” and this study weights it more heavily in terms of what part of an executive assistant’s job is “exposed” to AI. An augmented use for this same role would be something like conducting research, compiling data, and preparing a presentation for review by a board of directors. Some of the “sub-tasks” in this work can be efficiently offloaded to AI, but other parts can’t – ChatGPT can help you with research, but making decisions about how to present that research may require some context and intuition that would be very difficult to pass off to an LLM.
[SLIDE 19 – ANTHROPIC OTHER JOBS]
Then there are the tasks that, according to the Anthropic Economic Index, don’t show up in their data. This may be because users don’t know how to use AI for these tasks yet, or because it legitimately wouldn’t be helpful for them.
[SLIDE 20 – EXPOSURE VS COVERAGE]
They compared exposure, or “theoretical coverage” here in blue, to the red, which is the coverage they observe in their model’s use. It shows that AI is far from reaching its theoretical capability. There’s a big gap between what AI could do in theory and what it’s actually being used for in the real world.
For example, take computer programming — the single most AI-exposed occupation in their data. Even there, AI is only covering about a third of the tasks that programmers actually do day to day. Two-thirds of the work hasn’t been touched. And that’s the most exposed field.
At the other end, 30% of all workers have essentially zero AI exposure — their tasks just don’t show up in the usage data at all. These are jobs like cooks, mechanics, bartenders, lifeguards – or as you see here, welders and security guards. Work that requires you to be physically present, to use your hands, to read a room, to make a judgment call that can’t be programmed.
And when you look at which tasks AI is actually doing well — there’s a clear pattern. The tasks it’s covering are the ones that are structured. Rule-based. Clearly defined. ‘Take this input, follow this process, produce this output.’ The tasks it’s not covering are the ones where the rules are fuzzy, the situation is unique, or the right answer depends on context that only a human in the room would have.
In other words, AI is really good at the kind of work where you already know what the right answer looks like. It struggles with the kind of work where you have to figure that out.
[SLIDE 21 – RANGE]
This leads me to another book recommendation – Range: Why Generalists Triumph in a Specialized World, by David Epstein. In Range, he discusses an idea borrowed from psychologist Robin Hogarth, called kind and wicked learning environments.
[SLIDE 22 – KIND VS WICKED]
Kind learning environments are situations governed by stable rules and repetitive patterns. The feedback is quick and accurate, and the next task looks like the last task. Think of chess: the pieces move according to fixed rules, within defined boundaries, and the consequence of every move is immediately clear. Or think of data entry: take this input, follow this process, produce this output. You can get good at these things through repetition, because the rules don’t change underneath you.
Wicked learning environments are the opposite. The rules, if they exist, may be unclear — or they may change. Patterns don’t just repeat. Feedback might be delayed, or inaccurate, or completely absent. All sorts of complicated human dynamics are involved. And the work next year may look nothing like the work last year.
Running a small business is a wicked learning environment. You’re making decisions with incomplete information. You’re reading people. You’re weighing tradeoffs that don’t have a correct answer. You’re managing a situation where the same approach that worked last month might fail next month, and you won’t know why for six months after that.
Here’s what matters for our conversation today: AI dominates kind environments. It is built for them. Clear rules, stable patterns, fast feedback — that’s exactly the territory where a machine that processes information millions of times faster than you will win every time. This is why it passes the bar exam. This is why it can write code. This is why it can read an X-ray. Those are kind problems — structured, defined, with knowable right answers.
But wicked environments are where AI struggles. And that’s probably where most of you live.
Now — let’s go back to the power loom for a second. What did a hand-loom weaver actually do? He followed a pattern. He repeated the same motions, the same sequences, day after day. It was skilled work, but it was kind. Stable rules, repetitive patterns, clear feedback. The machine could do it because the work was structured enough to be mechanized.
Farming. Not all of farming is kind — reading weather, managing soil, making decisions about when to plant — that’s wicked. But the physical labor of plowing, planting, and harvesting in rows is kind enough for a tractor to take over.
Telephone operators. “Number, please.” Route the call. Confirm the connection. Kind. The machine could do it because the task was defined.
Human computers? Alan Turing, the father of computer science, defined them as people “following fixed rules, with no authority to deviate.” That’s about as kind as work gets. The electronic computer could do it because the whole job was structured input, structured process, structured output.
Every single displacement we just talked about happened in kind territory. The machine took over the kind part.
The weavers who lost their livelihoods to the power loom — most of them never recovered. The families pushed off the farm by the tractor — many of them right here in Oklahoma — didn’t get retrained for the service economy. They just lost. The telephone operators who watched their occupation shrink from 342,000 to 1,500 — that transition took decades, and it wasn’t gentle.
What is true is that the work didn’t disappear from the economy. It transformed. New industries grew. New roles emerged. The textile industry eventually employed more people than hand-weaving ever had. The service sector absorbed the labor that left agriculture. Women found other work when the switchboards went dark. But those were often different people, in different places, doing different things. The individuals who were displaced paid the price for someone else’s progress.
Let’s look, for a second, at the human computers.
[SLIDE 23 – JOHNSON & VAUGHAN]
Two of those human computers were Katherine Johnson and Dorothy Vaughan — you may know their names from the book and movie Hidden Figures. They were Black women working in segregated facilities at Langley, doing the same math as their white counterparts in a separate building with separate bathrooms.
Katherine Johnson didn’t get sent home when the electronic computers arrived. She moved from calculating by hand to analyzing trajectories for the Mercury missions, for Apollo, for the Space Shuttle. When NASA used electronic computers for the first time to calculate John Glenn’s orbit, Glenn himself refused to fly until Johnson verified the numbers by hand. He trusted her over the machine. She co-authored 26 research papers, and she spent 33 years at NASA — her career grew as the machines took over the kind work she’d been doing.
Dorothy Vaughan saw the electronic computers coming before most of her colleagues did. She taught herself FORTRAN — one of the first programming languages — and then she taught her entire team. She didn’t wait to be reassigned. She decided what came next.
Not every computer made that transition. The ones who did were exceptional, and they had to fight for it — many of them against institutional barriers that had nothing to do with technology. But the pattern is clear: the execution, or “doing”, layer got swapped out. The understanding layer carried over. The people who understood the why — not just how — found a place in the next chapter.
Let’s return to today.
[SLIDE 24 – HAPPY + CHATGPT]
I mentioned earlier that I built my business on three things: technical expertise, broad experience, and reliable availability. Those are exactly the things this new technology excels at. Code is what AI does better than almost anything else. I’ve had to face that.
I started using ChatGPT for code about two years ago. It was a pretty clunky copy and paste type workflow. Pose a problem, get a suggestion, try it out. It was helpful — it would usually speed things up, sometimes suggest better solutions than I would have come up with on my own. But it wasn’t replacing what I did.
[SLIDE 25 – HAPPY + CLAUDE CODE]
Then, about two months ago, I started using Claude Code — Anthropic’s agentic coding tool. The world shifted.
I have completed projects in a day that would have taken me a month two years ago. And I’m not talking about rough drafts or prototypes. Not only is it faster, the quality of the code is better than what I produce on my own. This isn’t the AI-generated poster with the weird letters and the disturbingly predictable symmetry. This is production-quality work, 20 to 30 times faster.
And the way I work with it is completely different from the copy-and-paste era. I can point this AI agent at an entire codebase, give it the full scope of a project, and it will think and work and make changes across the whole system for ten or fifteen minutes before coming back and saying “here’s everything I did.”
Which means the hard part of my job is no longer writing code. The hard part is deciding what to build. When anything is possible — when the cost difference between this approach and that approach is basically zero — all of the weight shifts to the person making the decisions. What should we build? For whom? Why?
[SLIDE 26 – JEFF GOLDBLUM]
(Short beat about this phrase “when we can build anything, what should we build?”)
[SLIDE 27 – FROM DOING TO DECIDING]
Over the last month I’ve been rebuilding the website for Spaceship Earth, our coffee shop downtown. I’ve added text message notifications, a totally custom wholesale ordering and invoicing system, and a “weekly digest” that users can subscribe to and that our manager can easily update and post week to week. I haven’t been sitting down thinking about the technical architecture of the SMS system or how to structure the database behind the invoicing workflow. Instead, I’ve been thinking about the humans on both ends: how do I make weekly event notices easy to sign up for? How do I make the order fulfillment fit the way we actually roast and package and deliver at the shop? How can I make it so our manager can post an entire week of events in as few clicks as possible?
In some ways, this is ideal. It’s the work I actually enjoy. But I have to be honest — it has irreversibly changed what I do. Two years ago I would have told you I’m a developer. Today I’m more like a manager of a team of developers. I still need to understand code — and if I didn’t already know how to code, it would be dangerous for me to use these tools, because I’d be deploying things I couldn’t evaluate or fix. But day to day, my work has shifted from doing to deciding.
If you’ve used AI at all for work, I’m willing to bet you’ve experienced something like this yourself. The task of prompting — just figuring out how to ask the question — becomes the hard part. I’ve often found myself thinking so carefully about how to phrase what I want that I’ll figure out the answer before I even ask. The thinking and deciding is the work. Ironically, AI is making me do more of it.
This idea – that automation can push people upstairs, from execution into decision-making – is what happened to Katherine Johnson and Dorothy Vaughan. It’s what happens in many stories of technological displacement. And it doesn’t always happen at the “job” level, like going from human computer to programmer. Looking at the data from Anthropic that I mentioned earlier, it’s probably more common to see shifts within the tasks of a role itself – automation of some tasks, augmentation of others, and, hopefully, more room for the work that’s hardest to automate: judgment, relationships, and deciding what matters. The human stuff.
So, where do we go from here?
As we try to land this plane, I want you to think about the kind and wicked environments you navigate.
If you’re thinking of your own work – where are the operational “doing” tasks? Things that are structured and repetitive – entering data, scheduling, invoicing, routine processing. Not just single tasks, either – you may picture a whole series of tasks that make up part of your work, and maybe they all happen in different software or using different tools, but the steps are essentially structured the same. Maybe there are one or two checkpoints that vary or require your subjective input, but then it’s back to the pipeline. This is the area that the Anthropic Economic Index calls “augmentative” – where automation and AI could be used in parts, but not all, of the process.
If you own or manage a business or team – where in your business is the “kind” work concentrated? Are there roles that are mostly operational? Remember that jobs and roles usually consist of an array of tasks. As we saw in the examples, some roles are full of tasks that can be automated or augmented by AI (like a web developer or executive assistant), while others can hardly be automated at all (like a security guard or welder). It’s likely that as whatever comes next in AI arrives, we won’t see the wholesale erasure of entire existing “roles” so much as the automation of specific tasks that leads to a shuffling of human capital. You may already be facing the tension of deciding whether to backfill positions as people leave – can we tighten things up and use AI for this? Can we add it to another person’s plate if we give them the right tools? Can we replace two people with one person and a paid ChatGPT account?
There probably aren’t right answers to these questions, and there certainly aren’t easy ones. But if you don’t think through them on your own terms now, you will be forced to think through them later on someone else’s.
I want to leave you with two assignments today.
Assignment one: Start using the tools.
Here’s the caveat – if you’re in one of those roles that is 100% wicked, that this new wave of technology is not going to touch at all – ignore me. If you’re a Dredge Operator, Pile Driver Operator, Motorboat Operator or Logging Equipment Operator – according to a report Microsoft put out in December, you can totally ignore me.
Everyone else: you’ve spent the last hour listening to me yammer on about what AI can do. The best way to understand it is to use it. Not to transform your business – just to see what it feels like. Open Claude or ChatGPT this week and ask it something real. Not a party trick – something from your actual work. Ask it to help you draft an email you’ve been putting off. Ask it to think through a problem you’re stuck on. Ask it to summarize a document you don’t have time to read. It’s a language model – the barrier to entry is a conversation. You’re already good at those.
[SLIDE 28 – QR CODE]
If you could use a more structured starting point — I’ve built something for you. This is a free tool you can scan right now. It’ll open a page on your phone where you can copy a prompt into Claude or ChatGPT that will walk you through analyzing your own work — your kind tasks, your wicked tasks, where AI might help, and where your human judgment is the thing that matters. It takes about fifteen minutes. It could be the most useful thing you do this week.
Assignment two: Protect the wicked work.
The kind work in your business is going to get easier and cheaper. Let it. But the wicked work — the judgment, the relationships, the decision-making, the knowing-your-customer, the making-the-call-when-there’s-no-manual — that’s not getting automated. It’s getting more valuable. Invest there. Get better at the thing you’re already good at. That’s your moat. That’s what no tool replaces.
AI is not going to take your job. But it is going to change what your job means. And if you’re not the one deciding what changes and what doesn’t, somebody else will decide for you.