In Zach Goldfine’s view, it was unconscionable that veterans were waiting roughly 100 days just to learn that their benefits claims were being processed, let alone to actually receive help.
But that was the reality before the Department of Veterans Affairs launched its Content Classification Predictive Service Application Programming Interface (API) last year. More than 1.5 million claims for disability compensation and benefits were submitted annually, 65%-80% of them by mail or fax, and 98.2% of attempts to automatically process the language in those claims were failing.
“A veteran can write whatever, however they think about the injury they suffer. So they might say, ‘My ear hurts and there’s a ringing noise constantly’ … but VA doesn’t give a benefit for ‘my ear hurts and I have ringing constantly’ – it gives a benefit or gives a monthly payment for ‘hearing loss,’” said Goldfine, deputy chief technology officer for Benefits at VA. “So the problem was that veterans were facing an extra five-day delay in getting the decision on their benefits because there was this back-up of claims at that portion of the process where it required a person to make that translation.”
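VA has not published the internals of the Content Classification Predictive Service, but the translation step Goldfine describes, mapping a veteran’s free-text description to a standardized contention such as “hearing loss,” can be sketched in a few lines. The snippet below is a hypothetical, keyword-based stand-in written for illustration only; the contention names, keywords and classify_contention function are assumptions, not VA’s actual model or API.

    # Hypothetical sketch of the "translation" step: mapping free-text claim
    # language to a standardized benefit contention. VA's real service is a
    # machine-learning model behind an API; this keyword lookup only
    # illustrates the task that previously required a person.
    from typing import Optional

    CONTENTION_KEYWORDS = {
        "hearing loss": ["ear hurts", "ringing", "cannot hear", "hard of hearing"],
        "knee condition": ["knee pain", "knee gives out", "torn meniscus"],
    }

    def classify_contention(free_text: str) -> Optional[str]:
        """Return the standardized contention that best matches the free text."""
        text = free_text.lower()
        best_match, best_hits = None, 0
        for contention, keywords in CONTENTION_KEYWORDS.items():
            hits = sum(1 for kw in keywords if kw in text)
            if hits > best_hits:
                best_match, best_hits = contention, hits
        return best_match  # None means a person still has to make the translation

    print(classify_contention("My ear hurts and there's a ringing noise constantly"))
    # -> hearing loss

A production system would presumably rely on a trained classifier with a confidence threshold, routing low-confidence claims to a human reviewer, which fits how Goldfine describes automation easing, rather than replacing, employees’ work.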
After the API was deployed, Goldfine said, the number of successfully automated claims tripled almost overnight, saving VA millions of dollars in time that had been spent translating claims. This year, VA again reduced wait times with a rapidly deployed chatbot that fields questions about COVID-19, including which facilities are open.
This was one example of successful artificial intelligence at agencies highlighted during the Impact Summit Series: Artificial Intelligence, presented by the General Services Administration’s Office of Technology Transformation Services on Thursday. Goldfine said the API case at VA illustrates how AI can make employees’ jobs easier rather than render those employees obsolete – a common fear of organizations and managers wary of implementing AI in their offices. Talk to staff, he said, and expect to hear different concerns from managers than from the lower-level workers doing these rote tasks.
“We all have parts of our job that we don’t like, whether that be like a million emails that we have to respond to or entering certain data elements when we need to take leave – there are always monotonous parts of our jobs no matter what our job is and I would guess most folks wish we could automate those away when and where possible,” he said. “It was manual data reentry in a scenario where, many of them are veterans themselves. They’d rather be spending their time doing things you’d think they’d rather be spending their time doing, like talking with veterans on the phone, understanding what’s going on, getting them information they need.”
Goldfine also stressed building a multidisciplinary team to implement AI, get buy-in and consider the human impacts. System designers, product managers, user researchers, data scientists, software engineers and even policy experts are all critical.
Alka Patel, head of AI Ethics Policy for DoD’s Joint Artificial Intelligence Center, brings a background in engineering and law to her role. She said she combines good engineering principles of design, development, deployment and use with considerations of risk management and government and corporate compliance.
Once DoD adopted its AI ethics principles in February – after two years of work leading up to that – Patel’s responsibility was to “take those higher-level words and definitions and actually make them tactical.”
When it comes to ethical AI, her advice to agencies was to start now and use any existing AI strategy as a framework. And although it may be impossible to predict every scenario or ethical quandary that could arise, some things, like principles, will likely stay firm, while testing and evaluation processes are more susceptible to change as technology evolves.
Seeing ethics as an enabler of AI, rather than a hindrance, is the better mindset, she said. And while simply stating in an award that contractors must comply with DoD AI ethics principles can help from a signaling standpoint, Patel was skeptical that it would achieve the desired objectives.
“I’m very sensitive to ‘dictating’ what those requirements need to be done from an agency perspective. I think that’s a conversation that needs to happen mutually with our contractors or at least have some insight,” she said. “We need to be not so prescriptive but we need to be flexible but still have the fidelity of the content and the criteria we are expecting from the contractors themselves.”
"open" - Google News
November 20, 2020 at 06:03AM
https://ift.tt/330CVo5
Agencies advised to approach AI from an open, collaborative mindset - Federal News Network
"open" - Google News
https://ift.tt/3bYShMr
https://ift.tt/3d2SYUY
Bagikan Berita Ini
0 Response to "Agencies advised to approach AI from an open, collaborative mindset - Federal News Network"
Post a Comment