Future Retirement Success


Politics

Minority groups sound alarm on AI, urge feds to protect ‘equity and civil rights’

By Pete Kasperowicz | June 21, 2023

The growing use of artificial intelligence will likely lead to biased and discriminatory outcomes for minorities and disabled people, several groups warned the federal government this week.

The National Artificial Intelligence Advisory Committee, an interagency group led by the Commerce Department, held a public hearing online Tuesday aimed at informing policymakers about how the government can best manage the use of AI. Most of the witnesses told panelists that bias and discrimination are the biggest fears for the people they represent.

Patrice Willoughby, vice president of policy and legislative affairs at the NAACP, told panelists that technology has already been used as a means to disenfranchise and mislead voters, and said her group worries about AI for the same reason.

‘In that context, we have great concerns about the rise and use of AI in its different platforms,’ she said. ‘The rise of AI and its ease of use really causes us great concern.’

Frank Torres, who works on civil rights issues at the Leadership Conference on Civil and Human Rights, told the panel that the government must take charge and create mechanisms to ensure AI systems are developed and used in ways that promote ‘equity and civil rights.’

‘To realize the full potential of this technology, we have to address and make sure that it’s truly free from bias, that it doesn’t result in discriminatory outcomes,’ he said. ‘It has to be centered on equity and civil rights.’

Several observers have warned that AI has the potential to create discriminatory outcomes because it is trained on biased data sets. Using ‘bad data’ could lead AI systems to reject loan applications or make other decisions that perpetuate existing bias.

In the advisory committee discussion, National Fair Housing Alliance President and CEO Lisa Rice said past technology has shown that bias is something the government needs to be wary of as it regulates AI.

‘Every algorithmic system we’ve analyzed — credit scoring systems, risk-based pricing, automated underwriting, insurance scoring, digital marketing, tenant screening selection, facial recognition, automated valuation and other models — generate bias and inflict untold harm on consumers,’ she said. ‘Machines have simply mimicked and replaced human bias.’

JudeAnne Health, national programs director for the Hispanic Technology & Telecommunications Partnership, said her group is worried that AI based on biased data may ‘exacerbate existing inequalities.’ She called for rigorous federal oversight aimed squarely at fighting bias.

‘Does that oversight include a diverse group of stakeholders, backgrounds, languages, educations?’ she asked. ‘Not just scientists, but sociologists, community members.’

Emily Chi, a senior director at Asian Americans Advancing Justice, said her group is also worried about discrimination that’s ‘already embedded in the data.’ When asked for an example of bias in AI, Chi said some content moderation systems are failing to remove content that’s discriminatory or ‘harmful’ to Asian Americans, and said systems need to be in place to ensure these sorts of oversights are corrected.

‘Going directly to the people who are most vulnerable, who are most oppressed in our society, and understanding how these technologies affect their day to day lived experiences,’ Chi said.

Another witness said the government needs to pay attention to how AI might discriminate against disabled people.

‘We are only going to see the full potential of tech for disabled people realized in all aspects of life if that technology, including AI, is built from the start to be accessible and free from ableism,’ said Maria Town, president and CEO of the American Association of People with Disabilities.

‘While many assume that automated employment decision tools are free from bias because the human element is removed, at AAPD we have seen repeatedly that AI-based tools reflect the preferences of the people who make them and of the larger society that these tools exist within,’ she said.

The advisory committee has held a handful of hearings and will hold a few more over the next week aimed at collecting input that will feed into the Biden administration’s process for determining when and how to regulate AI. Just this week, the White House said several efforts to set federal rules for AI would be coming in the next few weeks, and Congress is also examining how to regulate AI.

Pete Kasperowicz is a politics editor at Fox News Digital.

This post appeared first on FOX NEWS

