World needs to focus more on AI security and safety, says US science envoy

United States science envoy Rumman Chowdhury (left) discussed the areas that need to be addressed in the development and use of AI with CNA’s Yasmin Jonkers.

SINGAPORE: The world has been “hyper focused” on training artificial intelligence data models and on how laws surrounding the technology are written, said the United States’ first science envoy for AI on Thursday (Nov 7).

On the other hand, attention to testing AI for security and safety, as well as how laws can be enforced, is lacking, said Dr Rumman Chowdhury.

Testing is important to create a product that is useful, she told CNA’s Yasmin Jonkers in a wide-ranging interview.

“Whether you are a government, a private entity (or) a startup, you need to do the appropriate testing. Otherwise, we cannot feel comfortable putting this product out in the world,” she said.

She added that the enforceability of laws is an area jurisdictions have struggled with in the tech space since the emergence of social media companies.

She noted that fining such companies for transgressions may not be an effective deterrent because they have ample funds at their disposal. Banning their use outright would not be ideal either, as citizens would be upset, she added.

“That enforceability (and) accountability mechanisms are worth negotiating and understanding, as much as the development of the laws, rules and policies,” she said.

“Usually, people have been hyper focused on what are the laws we write and how we write them and there’s not enough focus on how we can actually enforce these laws,” she said.

However, she added that AI is not the first industry to grapple with the idea of fair and responsible use, noting that other sectors like banking and healthcare also deal with these issues. Governments already have structures in place that could be applicable, she added.

“I do think that it (AI) is such a confusing technology (because it is all-encompassing) and frankly, it’s one of the first times we face a technology that seems better or smarter than us.

“Things will be daunting and intimidating, but we have successfully created structures like this before, and I’m confident we’ll be able to do it again.”

AI BIAS AND DISCRIMINATION

Other aspects of AI that need to be discussed are bias and discrimination, said Dr Chowdhury. 

“I think what’s quite important for a lot of people … is to understand that this technology will not be as ubiquitous as we think it is, unless we deal with kitchen table issues like bias and discrimination,” said the data scientist who runs an organisation providing ethical AI solutions.

She noted that AI models are trained on the data of the internet, which is the data of the Western world. 

“Thinking through applications in Asia will require us thinking through what the data and the sources and the biases are that could be very Western-focused, or just not appropriate for use in this region,” she said.

“What we’re trying to build AI models for, for example, is improving agricultural techniques. What will happen if these models are producing biased output because they don’t understand the crops in the region or the language that’s being spoken?”

She added that the priorities being pushed forward globally around standards and use cases are largely driven by only a few countries.

This means that solutions to improve crop yields in the face of climate conditions, for instance, are not top of mind.

Dr Chowdhury noted that these are problems faced by farmers in nations that do not have a seat at the same table, such as Bangladesh – where her family is originally from – and Vietnam.

“It’s really critical that we focus on what is useful for people and the rest of us … broadband access, connectivity, mobile access, all of these things are important in a population being engaged in the AI future,” she said.

Funding is largely coming from industry, which in turn puts it in the position of agenda setter, noted Dr Chowdhury. She added that philanthropies need to play a bigger role.

BUILDING AN AI TALENT POOL

Amid the rapid evolution of AI, which has emerged as a challenge for even major financial institutions, the world is experiencing a scarcity of technology talent, Drew Propson, head of technology and innovation in financial services at the World Economic Forum, said in a separate interview with CNA’s Olivia Marzuki.

Speaking on the sidelines of the Singapore FinTech Festival on Thursday, Propson noted that new tools are being developed every day, and knowing whether to use them or to build a product in-house can be a dilemma.

“You need to then have a large talent pool that can help you with those analyses of the new tools, and be thoughtful about what you’re building yourself,” she said.

Partnerships among the public and private sectors and academic institutions are key to building a large talent pool, which will ultimately make the financial system stronger, she added.

The foundation to creating a generation of competent AI practitioners is working towards an empathetic and understanding society, said Dr Chowdhury.

Too often, technology is focused on building optimisation in a bubble without taking into account that the real world is a “beautifully messy place”, she said.

“As we think about what educational programmes are needed, how can we build more ethics into computer science education, more computer science into these social sciences, and really merge the two?” she asked.

“What we need is people collaborating across disciplines and respecting each other’s disciplines to understand how to drive value for this technology, for human beings as a whole.”

Dr Chowdhury added however that there is a “long way to go to get value for everybody”.

“It requires a lot of intentional development and building things like government investment into the right kinds of startups, the right kinds of companies,” she said.
