Tiago C. Peixoto · 4mo

🚨 Help needed: human-in-the-loop hype check

The more I think about it, the more I suspect that blanket calls for “human-in-the-loop” in AI for public services are a first-world comfort blanket. In places where there are no doctors, teachers, or caseworkers, “looping in a human” often just means there is no loop at all.

My hunch: the value of “the loop” depends on context. Sometimes it saves lives; other times it just slows things down. Yet it keeps being sold as a universal must-have, usually by people who have never had to wait in line for basic services, but who still shape policy far more than those who are systematically excluded from service provision.

So here’s my ask: help me pressure-test this. Am I being unfair? Or is “human-in-the-loop” just a way for elites to reassure themselves, when the only loop they’re really in is talking to each other at AI-for-Good conferences?

That is why I am keen to hear from others: are there rigorous counterfactuals or experimental studies showing when human oversight truly improves outcomes, and when it simply adds friction? And how much does this depend on context and task type?

(And a long BTW: on AI bias, I am particularly interested in cases where AI introduces more bias than humans.
Otherwise, the comparative advantage may still be with the machine.)

So: what am I missing? Literature tips, counterexamples, or data that would make me less skeptical are most welcome. Because at the end of the day, “human-in-the-loop” sounds very different when you’re a high-flying AI civil servant, activist, or advocate with private health insurance than when you’re a citizen waiting hours or days in line for a basic service.

Sean Martin McDonald · 4mo
Human in the loop only matters if that human has the expertise, agency, and incentive to cause friction. Most often, the human is the liability sponge/cover for the automated decision and its error rate. A great example is the Israeli “human in the loop” for automated target selection: each human got/spent 20 seconds and had no specific basis to contest most decisions.

Vid Štimac · 4mo
I suppose things are much easier legally if there’s a human somewhere in there to blame when something goes wrong 🤷 However, and perhaps counter-intuitively, a human in the loop can make an AI BPO much cheaper and thus feasible. Humans are very versatile (process) nodes, and they can simplify AI pipelines significantly; 100% automation vs 90% automation of a business process can differ by an order of magnitude in terms of dev & deploy costs 😎

David Hume · 4mo
I may be a bit simple here, but at work I oversee regulatory decisions that are just about operations. We have supervisors for the people who do that work now. If we had an AI doing that work, it stands to reason it would have a supervisor too, so as to ensure quality, training, and accountability. My other case would be more from a product management perspective. Every digital thing that gets launched without a product team behind it always winds up degrading.
Assuming the product team is human, that means humans in the loop in that way as well.

Marcello Barisonzi · 4mo
At least from a legal perspective, you need a human in the loop to be responsible for any liability an AI algorithm may cause. Who signs off on AI decisions? You cannot bring an algorithm to court.

Dan Munz · 4mo
My POV: “human in the loop” is critically important, but should be treated as a design decision, not a moral imperative. To draw an analogy: we know that visual design and branding are important to a good user experience, but we also easily recognize that we should compromise on those things when we have to deliver a service in low-bandwidth environments.

Connor Drexler · 4mo
AI is mostly a statistical model relying on data in, data out, with a black box in the center. It is not intelligent, nor can it actually “learn and apply.” You need a human in the loop with domain expertise both to feed it the correct info and to understand whether it is giving you useful results or slop.

Marci Harris · 4mo
Really helpful law review article shared with me when I reshared your post (h/t & thanks Derek Slater): https://scholarship.law.vanderbilt.edu/vlr/vol76/iss2/2/

Colin van Noordt · 4mo
I always like this article in relation to this challenge. I guess the title says enough: "Just like I thought!"
https://onlinelibrary.wiley.com/doi/full/10.1111/puar.13602

Agueda Quiroga · 4mo
We are hosting a free, open InnovateUS workshop on exactly this topic: what it means, in practice, to keep a human in the loop, especially in the public sector: https://www.linkedin.com/posts/innovateus-govlab_youve-heard-it-many-times-the-importance-activity-7364328674004738048-IWqC?utm_source=share&utm_medium=member_desktop&rcm=ACoAAELP3ssBH6H3A1Pi5N297WqDXg63p04ExOU

Alexandre Gomes · 4mo
Something I've not seen in the comments: it also maintains the existence of the bureaucrat's discretion, thus guaranteeing his private sphere of power in that process. It's also about not losing control of where the magic happens.

More Relevant Posts

When (8,050 followers) · 2mo
If your insurer is using AI to decide your claims, shouldn't you be able to use AI to find insurance that actually works for you? Insurers are increasingly using AI algorithms to process claims and prior authorization requests. The problem? Only 1 in 500 claim denials is appealed, and when they are, over half get overturned. Three in five physicians say they're concerned that AI is increasing prior authorization denials. The system is designed to wear you down. The appeal process takes time, money, and expertise most people don't have. Meanwhile, the AI keeps denying. At When, we're using technology differently. Instead of using it to deny coverage, we're using it to help people navigate their coverage options during transitions, whether that's losing a job, aging out of a parent's plan at 26, or bridging to Medicare. California just passed a law requiring human oversight for AI-based coverage denials. That's progress.
But we think the bigger shift is giving people tools to make better choices in the first place, before they're stuck fighting a faceless algorithm. The technology exists. The question is whether we use it for people or against them. #ReadyForWhen #AI #HealthInsurance #COBRA #Benefits https://lnkd.in/gdTmkt5A
California Law Blocks Health Insurers From Denying Claims Through AI (governing.com)

Nitesh Kumar · 3mo
#GenAI. Chatbot + insurance dataset. What if you could ask questions about your data in plain English and get answers instantly, with insights, summaries, or one-word answers? I recently built a prototype chatbot that interacts with a health insurance dataset 📊 (age, BMI, smoking status, children, location, etc. of 50 individuals) and lets you ask:
👉 "Summarize the factors impacting the insurance price"
👉 "What's the average premium for smokers vs non-smokers?"
👉 "How does BMI affect insurance cost?"
🚀 Why this matters: this kind of chatbot bridges the gap between technical data and non-technical users, making analytics accessible, conversational, and instant. Convert your numbers into a story. Use GenAI to create the story behind the numbers within seconds. Upload any file (PDF, Excel, txt, etc.) and start interacting with it.

First Report Managed Care (554 followers) · 3mo
🚀 Implementing AI in Health Insurance: Best Practices for Success
Artificial intelligence (AI) is transforming every corner of the health insurance landscape, from claims processing to member care. But to realize its full potential, health plans need a strategic, scalable approach to AI implementation.
In this final article of our AI series, Deepan Vashi outlines key steps payers should take to maximize value and minimize risk, including:
✅ Developing a “big picture” AI strategy that connects solutions across the value chain
✅ Starting small and benchmarking success to build confidence
✅ Building strong data, governance, and ethics frameworks
✅ Securing employee buy-in and ensuring transparency with stakeholders
Learn how to set your organization up for long-term AI success in health care. 👉 Read the full article: https://lnkd.in/eBPNk99a
#HealthInsurance #ArtificialIntelligence #HealthcareInnovation #DataStrategy #HealthPlans #AIinHealthcare #ManagedCare #DigitalTransformation #HealthTech #FRMC #FirstReportManagedCare
How Health Insurers Can Implement AI Tools (hmpgloballearningnetwork.com)

Vernielle Dicko · 2mo
🔥 The insurance industry just got its ChatGPT moment. Integrity just launched "Ask Integrity®", an AI that instantly answers complex Medicare questions for agents. Here's why this is a game-changer:
❌ Before: Agents spent hours researching plan details, copays, and deductibles
✅ Now: Real-time answers in seconds
❌ Before: Clients waited days for plan comparisons
✅ Now: Instant side-by-side analysis
❌ Before: Agents missed opportunities due to information gaps
✅ Now: AI predicts which clients might want to switch coverage
The platform doesn't just answer questions; it provides:
→ Automated call summaries
→ Client background insights
→ Predictive analytics for better targeting
→ Year-over-year plan change analysis
This is exactly what I've been saying: AI won't replace insurance agents, but agents who use AI will replace those who don't. The 2026 Medicare Annual Enrollment Period is about to be completely different. Agents who adapt now will dominate. Those who wait will be left behind. How is AI already changing your industry?
Integrity Enhances Revolutionary "Ask Integrity®" Platform with Transformational, AI-powered Plan Insights (prnewswire.com)

Chesley Gaddis, CHCC · 2mo
🤖 Here’s a real example of where AI got it wrong. A client of ours recently received a 30% increase on their health insurance renewal after a tough claims year. They came back with an AI-generated stat and asked if we could use it to negotiate the increase down.
⚠️ The problem? AI doesn't know their claims data, risk profile, or loss ratios, all of which actually drive pricing.
💭 AI can be a great thought partner, but it’s a tool, not a truth.
🧭 It can generate ideas, but not insight. It’s a great co-pilot, not an autopilot.
💼 When the stakes are high, whether it’s taxes, legal advice, or employee benefits, the value isn’t in the data. It’s in the discernment.
#employeebenefits #healthinsurance #consulting

Baratunde Thurston · 2mo
It’s been a rare week where AI made me less cynical. This week’s Life With Machines digest is… kinda hopeful? Read it in full: https://lnkd.in/gSHGzEWj Not “sing kumbaya with the robots” hopeful, but “we still have a chance to choose our own future” hopeful. Here’s what stood out 👇
1️⃣ Parental Controls Progress (with a catch)
OpenAI launched “parental controls,” but right now they’re mostly PR. Kids can still bypass them by not inviting their parents to link, by using a burner account, or by using ChatGPT without an account at all! Still, this moment matters because it demonstrates that pressure is working to adjust company behavior, so MORE PRESSURE. And if you care about how young people are shaping their digital futures, check out my collab with Young Futures Org, who are funding real, youth-informed AI solutions.
2️⃣ Pinterest and Instagram Let You Turn Down the Slop!
Pinterest just launched a tuner so you can adjust how much AI-generated content you see. Instagram’s testing something similar. You can’t opt out completely, but you can steer, which is movement toward choice. Also: shoutout to BlueSky and the AT Protocol, which literally lets you choose your own algorithm. That’s closer to what digital democracy could look like, rather than diluted digital feudalism.
3️⃣ Fight AI with AI
Counterforce Health is using AI to help people appeal health insurance denials. Our country made healthcare a business, so health insurers profit by NOT helping us live. Now AI can generate custom appeal letters in minutes, saving people time, money, and sanity. The future I want isn’t one where AI disappears. It’s one where we decide how much of it we let in and how we use it. #AI #TechForGood #DigitalAgency #LifeWithMachines #YoungFutures #HumanConnection #AIethics
Parental Controls, Turning Down Slop, & Fighting Insurance Companies with AI (newsletter.lifewithmachines.media)

Harshanath Poluri · 2mo
Every health insurance claim carries more than data; it carries time, effort, and trust. And too often, that time is lost in slow approvals, manual validation, and administrative loops that hold healthcare back from moving faster. We saw the problem not as a process issue, but as a thinking issue. So we decided to rebuild the system, not to automate it, but to make it intelligent. Our research paper, “AI-Powered Health Insurance Claim Automation Using Generative AI Agents and Workflow Orchestration,” introduces a framework that transforms how TPAs handle claims, turning what once took 7 days into less than 24 hours.
🧠 Reads like a human: OCR + LLMs interpret complex medical and billing data with 96% accuracy.
⚙️ Thinks like an analyst: LangChain-based logic validates policies, detects anomalies, and ensures transparent decisions.
🤖 Acts like a team: n8n orchestrates the entire workflow from hospitals to insurers, seamlessly and autonomously.
The results were more than just efficiency:
✅ 4.6× faster claim processing
✅ 90% lower manual workload
✅ Enhanced decision accuracy and transparency
But the true impact goes beyond numbers: it’s about returning focus to what truly matters: care, clarity, and trust. This work represents a step toward healthcare systems that learn, adapt, and evolve; systems that think with us, not just for us. Grateful to Prasad Anumula PMP®, CISM(Q), LSSBB and RGESIndia for their mentorship, vision, and belief in innovation that moves beyond code into meaningful change. Because the goal isn’t just to build smarter technology; it’s to build a smarter, faster, and more human world. 🌍
#AI #Innovation #GenerativeAI #HealthTech #Automation #LangChain #DigitalHealth #n8n #RGESIndia #Research #Healthcare #FutureOfWork

Jonathan Klonowski, PhD · 3mo (edited)
Push-back against AI automation, especially in America, is rational and justified. In a country where many people live paycheck to paycheck, where most cannot fund a $1,000 emergency, where many do not have an emergency fund to last them even 3 months, where 1 in 5 have crippling medical debt, where 67% have student loan debt of $10k+ (average $37k), where healthcare is tied to an employer, and where there is nearly no social safety net and the current politicians are reducing it even further, people have a reasonable fear of having their jobs supplanted by automation. Automation often causes skill polarization, whereby the only jobs left are low-wage "menial" jobs and specialized jobs for those with the highest skill sets.
Such a swim-or-sink-to-the-bottom environment is destructive to society; is there a world where people have the luxury to feel bold enough to take the risk of trying a new career, upskilling, and approaching change with an open mind? Technologists have to realize that they must address the societal impacts of what they work on, and that their technology is shaped by the interests of those with the money to fund their intellectual escapades. Remember, technology is not some objective pursuit; foremost, society shapes technology. Interesting article that spurred these thoughts, haha: https://lnkd.in/eKtaSAZm
Why Patients Are Flooding Emergency Rooms (time.com)

Lic. Jose de Jesus Rodriguez Jr. · 2mo
Life Insurers + AI: The Race Is On… But Are We Building Responsibly? I just read this insightful piece on InsuranceNewsNet: "Life insurers rush to take advantage of AI's potential." It’s clear that AI is transforming the insurance world, from underwriting and claims to client service and product design. Yet, as someone who lives at the intersection of faith, tech & policy, I can’t help but ask: “Will AI advance economic dignity, or widen the gap between Main Street and Wall Street?” The article points out that 90% of insurers are exploring AI, but only 22% have made it to production. That tells me: the tech is here, but the trust isn’t built yet. In my practice, I see real potential for AI to simplify, not replace, the human touch, helping small businesses and families get the right coverage faster, fairer, and with greater transparency.
💭 I’d love to hear from you:
How are you seeing AI reshape your corner of insurance or financial services?
What safeguards or ethical frameworks are you using to keep “the human” in the loop?
Any success stories (or cautionary tales) you’ve seen in deploying AI responsibly?
Let’s make sure this revolution serves everyone, not just the algorithms.
#Insurance #AI #EthicalTech #FinancialPlanning #EconomicDignity #MainStreetFinance #InsurTech
Life insurers rush to take advantage of AI's potential (insurancenewsnet.com)

More from this author (Tiago C. Peixoto):
How to Guarantee AI Failure: A Field Guide for the Well-Meaning Senior Official
The Inference Divide: The Inequality No One Is Talking About
From Copilots to Complexity: When Generative AI Meets the Public Sector