Using AI for social good in Singapore and Asia-Pacific
Google is partnering a UN body in funding the creation of a
research network and bringing together diverse players to tap the benefits of
AI in the region.
A DOZEN years ago, I heard a presentation by the great
Swedish statistician Hans Rosling, who worked in the field of data
visualisation. Rosling dreamed of a dashboard for crises around the world.
"We have dashboards for cars," he said, "but we don't have dashboards
for the most important problems facing mankind."
Today, that dashboard is within our grasp. We're producing
ever more powerful computers and advancing new methods for them to process
information. These tools are beginning to help us understand the crises
unfolding around us. They also help us identify patterns so we can prepare
for, ameliorate, and perhaps even prevent crises of illness, natural disaster
and sustainability. We're at the point where AI
is starting to dramatically improve humanity's ability to solve the sort of
problems Rosling was thinking of.
For centuries, we have used technology to solve our
problems, but we've also had to manage its risks and challenges. To insist
that any new technology be free of risk is to deny the march of human progress
itself. Electricity can be used to power appliances, but it can also start
unintended fires. That doesn't mean we stop using electricity. It means we use
it more responsibly. The AI issue facing us today is essentially the same one
that confronted our ancestors figuring out how to use electricity or fire. How
do we get the good stuff from AI while guarding against its ill effects?
First, the development of AI must be inclusive. Many
technology companies, including Google, have work to do in growing a more
inclusive and diverse workforce. And the tools they use to build AI should
also be available for third-party innovators to use responsibly, in ways that
benefit society. TensorFlow, our open source machine learning framework, is
freely available to all.
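
As a small illustration of that openness (our example, not from the original article), anyone can install TensorFlow and fit a working model in a handful of lines of Python:

    # A minimal sketch: learn y = 2x + 1 from noisy samples with the freely
    # available TensorFlow framework (install with: pip install tensorflow).
    import tensorflow as tf

    xs = tf.random.uniform((256, 1))                               # synthetic inputs
    ys = 2.0 * xs + 1.0 + tf.random.normal((256, 1), stddev=0.05)  # noisy targets

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])        # one linear unit
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
    model.fit(xs, ys, epochs=100, verbose=0)

    print(model.predict(tf.constant([[3.0]])))  # approximately [[7.0]]

Nothing here depends on Google infrastructure; the same code runs on an ordinary laptop.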
We are also committed to the responsible use of data and
technology. Over the years, our teams have emphasised this overriding priority
in developing AI and other advanced technologies. We wanted to develop an
ethical charter to guide our technology development internally and to share our
values externally. This year, we announced a set of AI Principles that
constitute our ethical charter for AI and other advanced technologies at
Google.
These principles guide our decisions on what types of
features to build and research to pursue. As one example, facial recognition
technology has benefits in areas like new assistive technologies and tools to
help find missing persons, with more promising applications on the horizon.
However, like many technologies with multiple uses, facial recognition merits
careful consideration to ensure its use is aligned with our principles and
values, and avoids abuse and harmful outcomes. We continue to work with many
organisations to identify and address these challenges, and unlike some other
companies, Google Cloud has chosen not to offer general-purpose facial
recognition APIs before working through important technology and policy
questions.
The first principle on our list of AI Principles is that the
technology we're developing must be socially beneficial. AI is already
integrated into many of our global apps and services to assist people in daily
life. Apps like Google Translate, for instance, help people communicate across
language barriers. But beyond the good in making life a little easier and more
convenient, AI can also be used to solve bigger problems. In Asia Pacific, our
technology is put to use helping forecast floods in India, conserving
endangered bird populations in New Zealand and countering illegal fishing in
Indonesia.
We recognise that there are many great ideas that don't
materialise for lack of resources. That's why we've also launched the Google AI
Impact Challenge, an open call for non-profit organisations, social enterprises
and research institutions around the globe to share their ideas for using AI to
solve society's challenges. We'll help turn the best ideas into action with
coaching from Google's AI experts and Google.org grant funding from a US$25
million pool.
The development of AI also needs to include people who
aren't computer scientists, developers or researchers. Building AI for social
good means involving all of society in deciding what social good means. The
partnership of governments is especially important because of their critical
role in providing public goods and regulating industries.
Here in Singapore, we've embarked on an exciting project to
use AI to stop the spread of dengue at its source. Verily, a subsidiary of
our parent company Alphabet, partnered the National Environment Agency on
Project Wolbachia, which releases sterile male Wolbachia-infected mosquitoes
to mate with female Aedes aegypti mosquitoes and so reduce their population.
The team employed computer vision algorithms and artificial intelligence to
improve the mosquito-rearing process, sorting infected mosquitoes by sex
hundreds of times faster and with greater accuracy. This greatly increased
the efficiency of the programme, helping to alleviate the problem of dengue
in Singapore.
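
To give a sense of what such a sorting step can look like in code, here is a minimal, hypothetical sketch in Python with TensorFlow; the architecture, image size and data source are our illustrative assumptions, not details of the actual Verily/NEA system:

    # A hypothetical sketch (our illustration, not Verily's or NEA's actual
    # system) of the kind of computer-vision sorter described above: a small
    # convolutional network that classifies a mosquito image as male or
    # female so that only males are released.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(96, 96, 3)),           # assumed image resolution
        tf.keras.layers.Rescaling(1.0 / 255),        # scale pixels to [0, 1]
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(image is female)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Training would use labelled images, e.g. loaded with
    # tf.keras.utils.image_dataset_from_directory("mosquito_images/"),
    # after which the model can score camera frames far faster than a
    # human sorter could.

The gain the article describes comes from exactly this kind of automation: a trained classifier can screen each specimen in milliseconds, where manual sorting under a microscope takes far longer.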
Finally, the right governance frameworks need to be in
place. The development of AI has to be guided by frameworks that enable
technological innovation to grow, while also promoting responsible development
and applications that have a positive impact on society. To address all of
society's concerns rather than the narrow priorities of a particular
constituency, these frameworks have to emerge from collaborative processes that
include government, academia, civil society and industry.
Several Asia Pacific countries are well advanced in
developing governance frameworks for AI. As a region,
however, Asia Pacific lacks a regular and institutionalised collaborative
process to consider this issue. As part of our partnership with UNESCAP, we're
providing a grant to fund the creation of an Asia Pacific AI for Social Good
Research Network. This network will bring together leading academics from the
Association of Pacific Rim Universities to produce research on AI for social
good as well as governance frameworks. It will also be a forum for researchers
to discuss these issues with government, civil society and the private sector.
We hope the AI for Social Good Research Network grows into a
collaborative ecosystem in which Asia Pacific stakeholders shape how AI will
be deployed. The issue of how AI will be developed and used is much too important
to leave in the hands of any single actor. It is up to all of us to make sure
we are engaged in deciding how to responsibly develop this technology,
mitigating the risks of misuse while harnessing its potential.
- The
writer is senior vice-president, global affairs, at Google