Aurora wants to have policy on artificial intelligence in place by end of year

The city of Aurora is looking to have a policy governing its use of artificial intelligence by the end of the year.

Michael Pegues, the city’s chief information officer, said recently he would “like to have something in place” by the end of 2024.

To do so, the city will likely look at what other cities across the country have done with policies for artificial intelligence, known as AI, and in particular generative AI, or Gen AI.

The city is looking at programs developed by Seattle and Boston, and what private industry is doing. Pegues said city officials also will solicit opinions from the public.

“We’re not going to reinvent the wheel,” Pegues said. “But we also want to get input from the community. The guidelines have to be fair, equitable and transparent.”

The city recently held a workshop on AI applications facilitated by IDC Research, Inc. and attended by information technology officials from both the private and public sectors, along with education officials, public officials and members of the public safety sector.

While developing AI policies is new, people attending the workshop agreed it is important. According to an IDC Research survey, some 80% of respondents said their overall understanding of AI is low to moderate, but 75% said Gen AI could have a great or at least medium effect on their work.

Gen AI is defined as computers generating new content from previously created content. When prompted, it creates an artifact that resembles the original data.

Artificial intelligence is not new, according to Pegues. Some date it as far back as the 1950s, when British mathematician Alan Turing helped develop early computing machines and envisioned the possibility of AI. He even proposed the Turing Test to assess whether a machine can think like a human.

In 1956, at the Dartmouth Conference, mathematician John McCarthy coined the term “artificial intelligence” to describe the practice of creating human-like machines.

In subsequent decades, advances were made in AI, from the first natural language processing programs in the 1960s to deep learning and reinforcement learning in the 2010s, officials said.

And now, continual advancements have led to ethical considerations about the impact AI has on society.

Those ethical considerations are what stand at the center of any public policy involving the use of Gen AI, officials said.

Pegues talked about using it to search databases or technical documents in response to Freedom of Information Act requests. It could be a timesaver for city workers, he said.

At the same time, there is a fear “of losing that humanity” in documents generated by Gen AI, Pegues said.

After the workshop, Alex Alexandrou, Aurora’s chief management officer, said the city has to identify what AI technologies “we could leverage to improve not only public safety but how we … provide what our residents expect from their local government.”

Pegues said it could help with public safety documents, but he also warned against using AI in a predictive way.

“We don’t want to try to predict,” he said. “But we do want to use it to sift through lots of data.”

There also is fear that using Gen AI could add to what is already a problem in the world of technology: the digital divide. It could become another technology that some have access to and others do not.

Michelle Williams Clark, Aurora’s Equity, Diversity and Inclusion director, pointed out after the workshop that another issue with Gen AI is bias. What it creates is based on the data it is given.

“A couple of things I know is that there’s going to be bias going into the data set and bias going back out,” she said.

Ruthbea Yesner, vice president of Government Insights for IDC Research, said after the workshop that Gen AI crosses into all areas of the community, including government, education and even workforce training.

“So, it’s not really an IT issue only,” she said.

slord@tribpub.com
