Tiago C. Peixoto’s Post

🚨 Help needed: human-in-the-loop hype check

The more I think about it, the more I suspect that blanket calls for “human-in-the-loop” in AI for public services are a first-world comfort blanket. In places where there are no doctors, teachers, or caseworkers, “looping in a human” often just means there is no loop at all.

My hunch: the value of “the loop” depends on context. Sometimes it saves lives; other times it just slows things down. Yet it keeps being sold as a universal must-have, usually by people who never had to wait in line for basic services, but who still shape policy far more than those who are systematically excluded from service provision.

So here’s my ask: help me pressure-test this. Am I being unfair? Or is “human-in-the-loop” just a way for elites to reassure themselves, when the only loop they’re really in is talking to each other at AI-for-Good conferences?

That is why I am keen to hear from others: are there rigorous counterfactuals or experimental studies showing when human oversight truly improves outcomes, and when it simply adds friction? And how much does this depend on context and task type? (And a long BTW: on AI bias, I am particularly interested in cases where AI introduces more bias than humans. Otherwise, the comparative advantage may still be with the machine.)

So: what am I missing? Literature tips, counterexamples, or data that would make me less skeptical are most welcome. Because at the end of the day, “human-in-the-loop” sounds very different when you’re a high-flying AI civil servant, activist, or advocate with private health insurance than when you’re a citizen waiting hours or days in line for a basic service.

Human in the loop only matters if that human has the expertise, agency, and incentive to cause friction. Most often, the human is a liability sponge: cover for the automated decision and its error rate. A great example is the Israeli “human in the loop” for automated target selection: each human spent about 20 seconds per decision and had no specific basis to contest most of them.

I suppose things are much easier legally if there’s a human somewhere in there to blame when something goes wrong 🤷 However, and perhaps counter-intuitively, a human in the loop can make an AI BPO much cheaper and thus feasible. Humans are very versatile (process) nodes, and they can simplify AI pipelines significantly; 100% automation vs 90% automation of a business process can differ by an order of magnitude in terms of dev & deploy costs 😎


I may be a bit simple here, but at work I oversee regulatory decisions that are purely operational. We have supervisors for the people who do that work now. If we had an AI doing that work, it stands to reason it would have a supervisor too, to ensure quality, training, and accountability. My other case is more from a product management perspective: every digital thing that gets launched without a product team behind it winds up degrading. Assuming the product team is human, that means humans in the loop in that way as well.

At least from a legal perspective, you need a human in the loop to be responsible for any liability an AI algorithm may cause. Who signs off on AI decisions? You cannot bring an algorithm to court.

My POV — “human in the loop” is critically important, but should be treated as a design decision, not a moral imperative. To draw an analogy: We know that visual design and branding are important to a good user experience, but we also easily recognize we should compromise those things when we have to deliver a service in low bandwidth environments.

AI is mostly a statistical model: data in, data out, with a black box in the center. It is not intelligent, nor can it actually “learn and apply.” You need a human in the loop with domain expertise both to feed it the correct info and to tell whether it is giving you useful results or slop.


A really helpful law review article that was shared with me when I reshared your post (h/t & thanks Derek Slater): https://scholarship.law.vanderbilt.edu/vlr/vol76/iss2/2/

I always like this article in relation to this challenge. I guess the title says enough: "Just like I thought!" https://onlinelibrary.wiley.com/doi/full/10.1111/puar.13602

Something I’ve not seen in the comments: it also maintains the bureaucrat’s discretion, thus guaranteeing their private sphere of power in that process. It’s also about not losing control over where the magic happens.

