Artificial Intelligence is here, and here to stay. At the most basic level, AI denotes the application of (self-learning) algorithms to large data sets. For years now, AI has aroused both fear and excitement, yet its ultimate impact will be determined by us and the governance frameworks we build.
AI Localism, a term coined by Stefaan Verhulst and Mona Sloane, refers to the actions taken by local decision-makers to address the use of AI within a city or community. AI Localism has often emerged because of gaps left by incomplete state, national or global governance frameworks.
AI Localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability.
The speed and scale at which AI is being deployed are uncharted territory, and policy has not caught up. Insufficient knowledge and regulation have led to mistrust and irresponsible behavior. These concerns are valid, but their legitimacy should spur us not to cast aside or reject AI. Instead, they can be used to understand its potentials and pitfalls and to develop new forms of governance.
Mapping the current state of AI Localism around the world. Providing rigorous insight into an emerging phenomenon.
Analyzing what works, what doesn't, and why. Developing an evidence base of AI Localism.
Guiding local leaders towards responsible AI implementation and governance innovation. Training the next generation of AI leaders at the local level.
To fight COVID-19, companies and cities alike have quickly developed or repurposed AI technologies, such as temperature, mask, and social-distance detection, or smartphone tracking to monitor disease spread. While these uses seem crucial in the current global crisis, implementation roadblocks have exposed gaps in AI regulation. This comparative review of current practices worldwide seeks to gain a better understanding of successful AI Localism in the context of COVID-19, so as to inform and guide local leaders and city officials toward best practices.
With national innovation strategies focused primarily on achieving dominance in artificial intelligence, the problem of actually regulating AI applications has received less attention. Fortunately, cities and other local jurisdictions are picking up the baton and conducting policy experiments that will yield lessons for everyone.
Together with the NYU Center for Responsible AI, The GovLab will seek to develop an interactive repository and a set of training modules on Responsible AI approaches at the local level. The NYU Center for Responsible AI is a comprehensive laboratory that is building a future in which responsible AI is the only kind accepted by society. Its Applied Research Lab conducts use-inspired research and builds open-source tools and frameworks for responsible AI, equitable data-sharing, and increased transparency of socio-technical systems. Its Talent and Education Program is developing standardized curricula to educate computer science and data science students, current practitioners in the workforce, and members of the public about responsible AI. Its AI for Good startup program tackles societal problems that are otherwise overlooked in the pursuit of broad capital market opportunities.
Use the form below to share examples of AI Localism or to express your interest in collaborating with us on the AI Localism program.