Two realities are resonating with the leaders I’m connecting with: AI is moving deeper into the everyday fabric of work, and many of the people doing that work are carrying more responsibility, more experience, and more fatigue than our systems ever anticipated.
For this month’s “Humans Over the Loop” leadership article, we’ll look at what it takes to design and lead in a world shaped by AI. Not at the level of catchy slogans and familiar constructs, but at the nexus where humans become the differentiator—deciding whether, when, and how critical choices are made; which decisions must stay human and which can be AI‑assisted; and how we’ll measure whether technology is actually serving people rather than quietly steering them. The piece includes a vignette about capturing the know‑how of an aging manufacturing workforce while embracing AI, rather than letting either slip through our fingers.
This month’s “Humans In the Loop” management article explores how managers, clinicians, operators, and finance leaders can use AI to see more clearly, steer more confidently, and anticipate problems earlier—without turning their organizations into surveillance machines or black boxes. The healthcare vignette in that piece will feel familiar to anyone balancing budgets, flow, and trust on a daily basis.
Alongside these, I’ll share the next chapter of Hope in an Age of Disillusionment and, as a follow-up to Elevate Hope: From Mindset to Framework to Economy, a section titled ‘There’s an app for that’.
Together, these pieces explore a simple question: How will humans lead and manage better in this world of AI while building, deploying, and living alongside increasingly capable systems?
Humans Over the Loop: Leading in a World Shaped by AI
Humans over the loop is becoming a practical way to lead in an AI‑shaped, rapidly evolving world: if we do not keep people at the center, deployment decisions and cost pressures will quietly start to set the terms for us.
From “in the loop” to “over the loop”
For roughly a decade, “humans in the loop” has been the default phrase for safe AI: a person checks outputs and can stop or correct the model when needed. That framing worked when AI showed up as a single tool in one workflow or team.
The reality in 2024–2026 looks different. Many companies now use AI across multiple functions—customer service, marketing, HR, operations—and are layering in agents that can route tickets, draft content, and update systems without a human explicitly asking. In that setting, “Who signs off on this output?” is no longer enough. The better question becomes, “Who is shaping the system this model lives inside—its purpose, guardrails, and incentives?”
Humans over the loop names that upstream work. Leaders define which decisions must stay human, which can be AI‑assisted, and which can be automated within strict boundaries. They ask for concrete metrics: not just “model accuracy,” but “override rate on sensitive decisions,” “time saved per case,” and “impact on employee and customer satisfaction.” Teams then keep adjusting how AI fits into real work so the technology remains a powerful tool, not the quiet author of decisions.
AI is scaling faster than purpose
In many surveys, a majority of organizations now report using AI in at least one business function, but only a minority say they feel mature in how they govern, measure, and integrate it. On the ground, that looks like dozens of pilots and tools, but fuzzy answers to “What is this here to do for people in this context?”
A humans‑over‑the‑loop mindset forces a shift from “Where can we automate?” to more grounded questions:
Where are teams chronically overloaded or stuck?
Where are error rates, cycle times, or burnout scores trending in the wrong direction?
Where would a 20–30% reduction in administrative time unlock more listening, care, or creativity?
In practice, that might mean setting a target like: “Reduce documentation time for clinicians by 30%, while maintaining or improving patient satisfaction and clinical error rates.” Or: “Use AI routing to cut average response times by 25% without lowering first‑contact resolution.” Purpose becomes something you can instrument and track, not just a statement on a slide.
Workforce, work, and what people actually do best
Demographic data points in the same direction. Many countries are seeing a rising share of older adults, increasing pressure on health and care systems, and persistent labor shortages in frontline roles. In health care and manufacturing, for example, burnout and vacancy rates have become board‑level issues, not just HR topics.
If we respond only with cost‑cutting, we risk amplifying the worst of both trends: over‑reliance on automation and under‑investment in human connection and expertise. Humans over the loop invites different design questions and metrics.
Vignette: a manufacturer with an aging workforce
Imagine a mid‑sized manufacturing company where over a third of the skilled technicians are within ten years of retirement. For years, their know‑how has lived in people’s heads and in dog‑eared notebooks.
The leadership team decides to take a humans‑over‑the‑loop approach to AI and knowledge capture. They set three explicit goals:
Cut machine downtime by 20% over two years.
Capture at least 70% of critical troubleshooting recipes from senior technicians before they retire.
Improve injury and incident rates by reducing rushed, last‑minute fixes.
They pair senior technicians with younger colleagues and simple AI tooling on tablets. As the pairs troubleshoot, they talk through their reasoning while the system helps structure and tag steps, symptoms, and root causes. Supervisors review patterns monthly: which fixes worked, which recipes were reused, where the AI suggestions helped, and where they caused confusion. The AI is there to organize and surface options; humans remain the authors and final editors of the playbook.
Over time, they begin to see concrete shifts: fewer repeated breakdowns, shorter time‑to‑repair, and more junior technicians able to handle complex issues with confidence. The metric that lands hardest in the boardroom is not just reduced downtime; it is the visible transfer of knowledge from an aging workforce into a living, human‑curated system.
A more grounded way to use powerful tools
Most teams are not asking for a grand AI strategy. They are trying to solve concrete problems: call hold times, backlogs, error rates, rework, burnout. Humans over the loop offers a simple pattern:
Let AI surface options, patterns, and suggestions.
Make sure humans closest to the work help design and tune tools.
Define explicit “red lines” where a human must review or decide.
Measure not just efficiency, but experience and quality.
In a customer‑service context, that could mean: “Allow AI to draft responses, require human review for all high‑risk categories, target a 20% drop in average handle time, and track changes in customer satisfaction and complaint escalation rates.” In operations, it might be: “Use AI for forecasting and scheduling, but require human sign‑off when forecasts deviate more than a set percentage from historical norms.”
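The red-line pattern described above can be sketched in a few lines of code. This is a minimal, illustrative sketch only; the category names and the 15% deviation threshold are assumptions for the example, not recommended values, and any real deployment would encode its own agreed boundaries.

```python
# Hypothetical sketch of an explicit "red line" policy: the AI may act
# on its own only when a case is low-risk and its recommendation stays
# within agreed bounds; otherwise a human must review. All names and
# thresholds are illustrative assumptions, not a prescribed design.
from dataclasses import dataclass

HIGH_RISK_CATEGORIES = {"billing_dispute", "legal", "account_closure"}
MAX_FORECAST_DEVIATION = 0.15  # require sign-off beyond 15% of the historical norm

@dataclass
class Case:
    category: str
    forecast: float         # AI-proposed value (e.g., staffing level)
    historical_norm: float  # typical value for this period

def requires_human_review(case: Case) -> bool:
    """Return True when the agreed red lines say a person must decide."""
    if case.category in HIGH_RISK_CATEGORIES:
        return True
    deviation = abs(case.forecast - case.historical_norm) / case.historical_norm
    return deviation > MAX_FORECAST_DEVIATION

# A routine case within bounds can proceed AI-assisted;
# a high-risk or out-of-bounds case is escalated to a human.
routine = Case(category="password_reset", forecast=102, historical_norm=100)
risky = Case(category="billing_dispute", forecast=102, historical_norm=100)
```

The point of making the rule this explicit is that it can be versioned, reviewed, and audited, which is exactly what "humans over the loop" asks of guardrails.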
Try this experiment
If you are leading a function or business unit, here is a concrete experiment:
List three decisions in your area that must stay human for the next three years.
List three areas where you would welcome AI support, as long as humans remain clearly over the loop.
For each, write down one metric you would use to tell if the technology is helping or quietly distorting what matters.
Share that list with your leadership team or peers and see whether you agree. Sharing it shows that, as a human over the loop, you are putting language, structure, and practice around the choices you are already facing.
Find out more about FountainBlue’s Humans Over the Loop micro training modules.
Humans In the Loop: Leveraging AI to Better See, Steer, and Anticipate
Humans in the loop is about using AI so you can see more, decide better, and manage people, technologies, and processes with more clarity and care, not less.
Why humans in the loop matters
Many organizations now report using AI in multiple business functions, but far fewer say it is deeply embedded in how they actually run the business day to day. That shows up as a gap between “we have tools” and “we have changed how we manage.”
Humans in the loop matters in that gap. The goal is not to double‑check every output. It is to be present at the moments where judgment, context, and trade‑offs truly matter, and to be able to trace how AI‑supported decisions are being made.
You can see the difference in what teams choose to monitor. Instead of only tracking “model accuracy” or “tickets closed,” they look at:
Override rates on high‑impact recommendations.
Escalation patterns and near‑miss incidents.
The distribution of impact across customers, employees, or communities, not just averages.
AI can highlight patterns and surface anomalies; humans still decide which ones are important and what to do next.
Managing people with better insight, not more control
When managers hear that AI can “show what is really happening,” many worry about sliding into surveillance. A humans‑in‑the‑loop approach uses the same visibility differently.
Consider a support organization where AI shows that one team is handling significantly more complex cases than others, with longer handle times and higher escalation rates. A manager in the loop does not jump straight to discipline. Instead, they:
Validate the pattern with the team.
Ask what is driving the complexity—process, product, training, or something else.
Decide with the team whether to adjust training, staffing, or routing rules.
Track how changes affect resolution times, escalations, and satisfaction scores over the next month or quarter.
The metrics become conversation starters, not control levers: “What changed? What are you seeing? What would help?” The human role is to translate signals into learning and support.
Overseeing technologies and processes with more confidence
AI is now embedded in forecasting, routing, scheduling, pricing, and more. That can increase speed and consistency, but it also adds opacity. Humans in the loop is one way to stay confident that systems are doing what you think they are doing.
In a supply‑chain or operations context, that might look like:
Defining thresholds where a human must review a recommendation—for example, any change that alters inventory targets by more than a certain percentage, or any routing decision that increases delivery times beyond a set window.
Reviewing monthly dashboards that show forecast error, exception volumes, and drift over time.
Sampling decisions where the system overrode a previous human pattern and checking downstream impact on cost, service, and risk.
Those checks become part of the operating rhythm, not an emergency measure.
Vignette: a healthcare operations and finance executive team
Picture a regional health system where the COO and CFO are under pressure to reduce operating costs while improving patient flow and clinician experience. Over the past year, they have added AI into scheduling, bed management, and revenue‑cycle workflows, but results feel uneven and trust is fragile.
They decide to set up a monthly “Humans in the Loop” review for a few critical workflows:
In scheduling, they track metrics like “percentage of AI‑proposed schedules accepted as‑is,” “number of manual overrides,” and “clinician satisfaction with schedules.”
In bed management, they monitor “average time from discharge readiness to actual discharge” and “number of times staff override AI bed assignments due to clinical or family needs.”
In revenue cycle, they watch “AI‑flagged claims versus human‑flagged claims,” “appeal success rates,” and “write‑off trends.”
Each month, a small cross‑functional group—operations, finance, nursing, and IT—reviews the data and a handful of real cases. Where they see useful patterns, they adjust thresholds or rules. Where they see concerning trends, they slow down and ask, “What are we missing?” Over six months, they begin to see tangible shifts: fewer last‑minute scheduling crises, more predictable discharge patterns, improved cash flow—and, importantly, rising trust scores from clinicians and staff about “how AI shows up in my job.”
Keeping people, tech, and process connected
Some of the most useful humans‑in‑the‑loop work happens where people, process, and technology meet. AI might flag that a handoff between two teams is creating a spike in delays or rework. Humans then look at whether the process still makes sense: do roles, incentives, and information flows support the outcome we want?
In practice, this can look like small, repeatable loops:
Try a new routing rule.
Watch key indicators for two to four weeks.
Bring together the people affected to interpret the data.
Decide together whether to lock it in, roll it back, or adjust again.
Over time, organizations that run these loops steadily tend to build more confidence and fewer surprises. They are still experimenting with AI, but in a way that keeps humans connected to what is happening and why.
A quieter, steadier way to run with AI
Humans over the loop is about setting purpose and boundaries. Humans in the loop is about staying close enough to the work that you can see how people, technologies, and processes are actually behaving together, and adjusting with intention.
A small challenge if you are an operations or finance leader:
Choose one workflow where AI is already in play.
Define one “human in the loop” metric to track for the next month—overrides, escalations, or a satisfaction score.
Commit to one short review conversation where you look at the pattern with the people closest to the work.
Notice what you learn and what changes when you are managing with Humans In the Loop.
For more information about FountainBlue’s Humans in the Loop micro training modules, e-mail us or visit fountainblue.biz/training.
There’s an app for that!
When I’m on consulting calls, reading industry analyses, or developing scenario plans for client projects, I’m often struck by the sheer complexity of the challenges we all face. There is so much change, so many shifting tides, so much uncertainty.
As a naturally positive, hopeful, and logical problem-solver, I like to untangle threads, isolate variables, factor in relevant data, and help clients make clear projections and confident decisions.
And in this age of AI, I often find myself thinking: ‘there must be an app for that!’
If there isn’t, there are tools, right? And since I’m a quick learner (and married for decades to an engineer), maybe I can build one myself!
That’s how my Seven Toolkit for Resilient and Agile Organizations came to life, as a featured framework in my book Hope in an Age of Disillusionment. The toolkit empowers clients to explore 125 practical strategies drawn from our strategy, marketing, and program toolkits, and apply them directly to their organizational needs.
But that was just the beginning.
The question of how best to leverage AI—to seize opportunities while addressing real challenges—comes up constantly. So I developed an app featuring 60 AI use cases built around ten key opportunities, such as better decision support, faster innovation cycles, and stronger operational responsiveness, as well as ten critical challenges, including compliance complexity, cybersecurity risk, workflow alignment, and explainability and trust, spanning 17 industries.
Another common question is who should lead the charge—a leader, a manager, or the AI system itself. Or, as many put it: should we Operate, Rebuild, or Lean into AI?
You guessed it—I’ve built apps for those questions and many more.
Curious? Bring me a challenge, and I’ll show you how an app, framework, or decision tool can help you move forward—starting with a no-obligation consulting call.
Chapter 6: The Unfiltered Feed (2005-2010)
The System, Unmasked (The Times)
The years 2005 through 2010 were defined by a sharp duality. The exhilarating technological expansion of Web 2.0 occurred simultaneously with the existential betrayal of the 2008 Financial Crisis.
This crisis served as a brutal unmasking: the financial system was built on opacity and unaccountability. The stark clarity was that the “have-nots” were forced to bear the consequences of reckless actions by financial elites. This economic betrayal fueled pervasive disillusionment.
Against this backdrop of broken trust, the digital world created its own crisis. Visual social platforms began to reward artifice, creating an epidemic of curated perfection that eroded authenticity.
The ultimate crisis was the psychological realization that the public was trapped in systems that profited from systemic deception. The urgency for uncompromised truth became an imperative.
The Entrepreneur’s Perspective (The Voice of Naia)
As a half-Jamaican, half-Chinese American living in Brooklyn in 2005, I felt constantly suspended between worlds. My days were spent inside Manhattan studios where perfection was manufactured frame by frame, and my nights belonged to Brooklyn’s raw creative spaces where artists refused to apologize for the truth. That tension sharpened my mission long before I knew I had one.
The financial crisis hit my family hard. My parents’ retirement savings evaporated because of decisions made by people who would never feel the consequences. Watching them return to work in the same media ecosystem that rewarded artifice felt like a quiet betrayal. I decided that whatever I built next would fight for authenticity.
Brooklyn became the crucible for that commitment. It was more than diverse—it was united by a shared hunger for honesty. Photographers in loft studios, coders working at kitchen tables, musicians, designers, activists, and small business owners were all wrestling with versions of the same challenge: how do you stay real in a world that profits from the opposite?
My early attempts at a tech solution were clumsy. I created filters meant to “enhance” truth, but all they did was produce softer illusions. A photographer tested one of my prototypes and told me, “Naia, this is just a nicer lie.” That feedback stung, but it strengthened my resolve to protect the cause.
The insight emerged slowly and soon became our rallying cry: people did not need help expressing truth — they needed protection from distortion.
That shift in understanding drew the Brooklyn community together, not as scattered helpers but as a shared movement. Each person brought something important. Photographers showed us where manipulation crept in. Coders explored ways to make images tamper-evident. Designers shaped an interface that respected identity. Activists grounded the work in ethics. It no longer felt like I was gathering volunteers; it felt like a community shaping something we all needed.
We named the system Clarus, because its purpose was clarity—to preserve what was real at the moment of capture.
To make that promise real, our coder collective built a simple but powerful safeguard. The moment a photo was taken, Clarus created a unique digital fingerprint based on its exact pixel pattern—a kind of tamper‑evident seal for images—and stored that fingerprint inside the file itself. If even one pixel changed, the fingerprint no longer matched.
The system was lightweight enough for our tiny Brooklyn setup to run, yet strong enough to reveal any attempt to retouch or filter a protected image. It made honesty verifiable — a technical backbone for the truth we were trying to defend.
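The mechanism our coders described maps onto a standard technique: a cryptographic hash over the raw pixel bytes. Clarus itself is fictional, so the sketch below is only an illustration of the general idea, with made-up names and stand-in pixel data; the key property is that changing even one pixel produces a different fingerprint.

```python
# A minimal sketch of a tamper-evident image fingerprint, using a
# standard cryptographic hash (SHA-256) over raw pixel bytes. The
# function names and sample data are illustrative, not a real API.
import hashlib

def fingerprint(pixels: bytes) -> str:
    """Return a hex digest that identifies this exact pixel pattern."""
    return hashlib.sha256(pixels).hexdigest()

def is_untouched(pixels: bytes, sealed_fingerprint: str) -> bool:
    """Verify the image against the fingerprint stored at capture time."""
    return fingerprint(pixels) == sealed_fingerprint

# Capture time: seal the image.
original = bytes([10, 20, 30, 40])   # stand-in for real pixel data
seal = fingerprint(original)

# Later: even a one-byte ("one pixel") change breaks the match.
retouched = bytes([10, 20, 30, 41])
```

Because the hash is cheap to compute and compare, this kind of check is light enough to run on modest hardware, which is consistent with the lightweight setup described in the story.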
Our first measurable win came at a neighborhood arts event. Clarus reduced disputes about image tampering by ninety-two percent. But the real victory was the feeling in the room—artists, organizers, and residents realized they could finally see themselves without distortion.
Momentum grew through our #GETREAL campaign, which began as a simple tag and quickly turned into a rallying cry across Brooklyn. The community saw Clarus not as a product, but as a statement of values.
A Manhattan fashion shoot gave us our first industry test. The models loved it, relieved to be portrayed as themselves. The creative director called it “the first honest lens” she had used in years.
Investors still dismissed us, saying: “No one pays for authenticity”, “People want illusions”, and “The tech isn’t strong enough.”
But Brooklyn believed.
And I believed in Brooklyn.
Clarus wasn’t the invention of a single founder. It was a community choosing honesty over performance—and building technology strong enough to defend it.
The Mentor’s Intervention (The Voice of Mr. Arthur Sterling, Retired Advertising Executive)
For twenty years, my career centered on crafting the flawless external images that dominated the media. I knew exactly how to engineer the illusion—how to polish, soften, distort, and perfect until the truth disappeared. When the financial markets collapsed in 2008, I retired disillusioned not just with the industry but with myself. I felt a quiet guilt for having helped institutionalize the very artifice that now plagued the culture. The perfection I once celebrated had become a hollow currency, and I could no longer pretend I hadn’t been part of the machine that built it.
A photographer I’d known for years introduced me to Naia. He was impressed with how Clarus’s invisible digital signature protected his work. I immediately saw common ground with Naia. We shared a fierce frustration with the phony facades that substituted for authenticity and vowed to make a difference.
I proposed a new path: pivoting from selling small-scale content under the #GETREAL campaign to licensing the core image-verification infrastructure to advertising agencies and major brand studios in my network. This meant a new scope, traversing the nation’s creative centers—from Broadway to Hollywood.
Naia was skeptical. She was proud of her team’s progress, and questioned whether the message could remain true working with higher-profile customers and bigger projects.
Naia consulted her team, and together they committed to the shift, but they wanted to start small and stay transparently true to their founding principles, as personified by the #GETREAL campaign. I was in full agreement, for different reasons.
Better Together
Naia and Mr. Sterling approached the partnership from very different worlds — she carried the moral clarity of Brooklyn’s creative community, and he carried a lifetime of experience shaping the flawless images that defined Manhattan’s advertising industry. Their goals looked different, but at the core, each of them understood the human cost of distortion.
Their first task was to translate Clarus from a grassroots movement into something that major creative centers could adopt without losing its integrity. That required navigating real tension. Sterling’s contacts expected fast delivery and polished presentations, while Naia insisted on transparency and staying loyal to the mission. More than once, Naia pushed back when an agency asked for “a softer version of the truth” or wanted Clarus to be optional. Sterling stepped in, not to override her, but to help articulate why the guardrails mattered.
The next step was to strengthen Clarus for high-volume environments. Naia’s coder collective worked with Sterling’s longtime production partners, testing Clarus on advertising shoots, film stills, and editorial photography. Early pilots showed that the system could verify the authenticity of thousands of images with less than a two percent error rate — a breakthrough for creative houses that struggled to track edits across large campaigns.
Momentum grew quickly. Sterling used his network to open doors in New York and Los Angeles, while Naia leveraged the community credibility Clarus had earned in Brooklyn. Some agencies embraced the technology immediately. Others resisted, worried that verified authenticity might disrupt established workflows or reveal the extent of their digital retouching. Each concern forced the team to refine their onboarding, simplify training, and clarify where Clarus added value rather than friction.
The breakthrough moment came when a major PR firm piloted Clarus for a celebrity-led social impact campaign. Clarus verified every released photo, ensuring that the public saw unaltered images. Engagement rates were 38 percent higher than the agency’s standard campaigns, and customer trust scores rose significantly. For the first time, a large client publicly credited authenticity as a competitive advantage.
The movement that began in Brooklyn now had national recognition. Clarus became the preferred verification layer for agencies that wanted to protect their content, safeguard their talent, and rebuild trust with audiences. The blend of Sterling’s industry reach and Naia’s unwavering mission created a powerful engine for change.
Together, they proved that authenticity was not a limitation or a trend. It was a strategic asset — one with measurable value in an industry built on perception. Clarus scaled because it stayed anchored in the community that created it and because the partnership honored the belief that truth should not have to fight alone.
Hope is Authenticity:
The only authenticity that scales is the truth you dare to live.
Each FountainBlue coaching and consulting client will receive their choice of one of the following complimentary gifts:
a regular or workbook edition of Hope in an Age of Disillusionment
access to FountainBlue’s Hope Toolkit Companion web app
one HOTL or HITL micro training session
For more information, visit our web site, see our author page, or email us now.