Government

The White House lays out extensive AI guidelines for the federal government

It's been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government's use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.

"I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits," Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use "do not endanger the rights and safety of the American people." They have until December 1 to put "concrete safeguards" in place ensuring that the AI systems they employ don't compromise Americans' safety or rights. Otherwise, an agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an "unacceptable" impact on critical operations.

Impact on Americans' rights and safety

Per the policy, an AI system is deemed to impact safety if it "is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of" certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in "a workplace, school, housing, transportation, medical or law enforcement setting."

Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes that the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and "replicating a person’s likeness or voice without express consent."

When it comes to generative AI, the policy stipulates that agencies should assess its potential benefits. They also need to "establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk."

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they're using. "Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed," Harris said. 

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won't harm the public or government operations. If an agency can't disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.

[Image: Vice President Kamala Harris delivers remarks during a campaign event with President Joe Biden in Raleigh, N.C., Tuesday, March 26, 2024. (AP Photo/Stephanie Scarbrough)]

Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each agency appointing a chief AI officer to oversee its use of AI. "This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use," Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures from the public and private sectors, including civil rights leaders, computer scientists, business leaders and legal scholars, helped shape the policy.

The OMB suggests that, under the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight of things like AI fraud detection and diagnostic decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.

"AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity," OMB Director Shalanda Young told reporters. "When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services."

This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act, seriously) is an attempt to protect musicians from deepfakes, i.e., having their voices cloned without permission.

This article originally appeared on Engadget at https://www.engadget.com/the-white-house-lays-out-extensive-ai-guidelines-for-the-federal-government-090058684.html?src=rss


Uber and Lyft are quitting Minneapolis over a driver pay increase

Uber and Lyft plan to end operations in Minneapolis after the city council voted to increase driver pay. The council passed an ordinance on the issue last week and, on Thursday, voted to override a mayoral veto of the measure.

The new rules stipulate that ridesharing companies need to pay drivers at least $1.40 per mile and 51 cents per minute (or $5 a ride, whichever is higher) whenever they're ferrying a passenger. Tips are on top of the minimum pay. According to the Associated Press, the council passed the ordinance to bring driver pay closer to the local minimum wage of $15.57 an hour.

However, Uber and Lyft say they'll end service in the city before the pay increase takes effect on May 1. Lyft says the increase is "deeply flawed," citing a Minnesota study indicating that drivers could meet the minimum wage and still cover health insurance, paid leave and retirement savings at lower rates of $1.21 per mile and 49 cents per minute. “We support a minimum earning standard for drivers, but it should be done in an honest way that keeps the service affordable for riders," spokesperson CJ Macklin told The Verge.

An Uber spokesperson told the publication that the company was disappointed by the council's choice to "ignore the data and kick Uber out of the Twin Cities,” putting around 10,000 drivers out of work. They noted Uber's confidence that by working with drivers and legislators, “we can achieve comprehensive statewide legislation that guarantees drivers a fair minimum wage, protects their independence and keeps rideshare affordable.”

However, Minnesota Governor Tim Walz last year vetoed a bill to boost wages for Uber and Lyft drivers, citing concern over the state becoming one of the most expensive places in the country for ridesharing. Other jurisdictions have mandated minimum driver pay for ridesharing services, including New York City, where the rate starts at about $18 per hour.

If Uber and Lyft follow through on their threat to quit Minneapolis, that could make it harder for people (particularly folks with disabilities and those who can't afford a car of their own) to get around. The rise of ridesharing has upended the taxi industry over the last decade or so: a Minneapolis official says there are now just 39 licensed cab drivers in the city, down from 1,948 in January 2014.

Meanwhile, some upstart ridesharing companies are looking to move in and take over from Lyft and Uber. Empower and Wridz, for instance, have shown interest in starting operations in Minneapolis. Both companies ask drivers to pay a monthly subscription fee to use their platforms and find riders. In return, drivers keep the entire fare.

This article originally appeared on Engadget at https://www.engadget.com/uber-and-lyft-are-quitting-minneapolis-over-a-driver-pay-increase-180041427.html?src=rss
