{"id":1007,"date":"2020-11-01T17:29:52","date_gmt":"2020-11-01T11:59:52","guid":{"rendered":"https:\/\/projects.itforchange.net\/digital-new-deal\/?p=1007"},"modified":"2020-12-30T09:21:13","modified_gmt":"2020-12-30T03:51:13","slug":"imagining-the-ai-we-want-towards-ai-constitutionalism","status":"publish","type":"post","link":"https:\/\/projects.itforchange.net\/digital-new-deal\/2020\/11\/01\/imagining-the-ai-we-want-towards-ai-constitutionalism\/","title":{"rendered":"Imagining the AI We Want: Towards a New AI Constitutionalism"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"1007\" class=\"elementor elementor-1007\" data-elementor-settings=\"[]\">\n\t\t\t\t\t\t<div class=\"elementor-inner\">\n\t\t\t\t\t\t\t<div class=\"elementor-section-wrap\">\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-dc3351f elementor-hidden-tablet elementor-hidden-phone elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"dc3351f\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t\t\t<div class=\"elementor-row\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-688acd6\" data-id=\"688acd6\"
data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap elementor-element-populated\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d8a54ff elementor-widget elementor-widget-text-editor\" data-id=\"d8a54ff\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;none&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p style=\"text-align: center;\"><strong>Imagining the AI We Want: Towards a New AI Constitutionalism<\/strong><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1c1d384 elementor-widget elementor-widget-text-editor\" data-id=\"1c1d384\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;none&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p style=\"text-align: center;\">Jun-E Tan<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-650f2a6 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"650f2a6\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t\t\t<div class=\"elementor-row\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-04024e5\" data-id=\"04024e5\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div
class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-14c45ce\" data-id=\"14c45ce\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap elementor-element-populated\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-1c049b7 elementor-invisible elementor-widget elementor-widget-text-editor\" data-id=\"1c049b7\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeIn&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>Artificial intelligence (AI) technologies promise vast benefits to society but also bring unprecedented risks when abused or misused. As such, a movement towards AI constitutionalism has begun, as stakeholders come together to articulate the values and principles that should inform the development, deployment, and use of AI. This essay outlines the current state of AI constitutionalism. It argues that existing discourses and initiatives center on non-legally binding AI ethics that are overly narrow and technical in their substance, and overlook systemic and structural concerns. Most AI guidelines and value statements come from small and privileged groups of AI experts in the Global North and reflect their interests and priorities, with little or no input from those affected by these technologies.
This essay suggests three principles for an AI constitutionalism rooted in societal and local contexts: viewing AI as a means instead of an end, with an emphasis on clarifying the objectives and analyzing the feasibility of the technology in providing solutions; emphasizing relationality in AI ethics, moving away from an individualistic and rationalistic paradigm; and envisioning an AI governance that goes beyond self-regulation by the industry, and is instead supported by checks and balances, institutional frameworks, and regulatory environments arrived at through participatory processes.<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-589fca0 gallery-spacing-custom elementor-hidden-tablet elementor-hidden-phone elementor-invisible elementor-widget elementor-widget-image-gallery\" data-id=\"589fca0\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeIn&quot;}\" data-widget_type=\"image-gallery.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<div class=\"elementor-image-gallery\">\n\t\t\t<div id='gallery-1' class='gallery galleryid-1007 gallery-columns-3 gallery-size-full'><figure class='gallery-item'>\n\t\t\t<div class='gallery-icon portrait'>\n\t\t\t\t<img width=\"1414\" height=\"2000\" src=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2.png\" class=\"attachment-full size-full\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2.png 1414w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-212x300.png 212w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-724x1024.png 724w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-768x1086.png 768w, 
https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-1086x1536.png 1086w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-17x24.png 17w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-25x36.png 25w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design2-34x48.png 34w\" sizes=\"100vw\" \/>\n\t\t\t<\/div><\/figure><figure class='gallery-item'>\n\t\t\t<div class='gallery-icon portrait'>\n\t\t\t\t<img width=\"1810\" height=\"2560\" src=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-scaled.jpg\" class=\"attachment-full size-full\" alt=\"\" loading=\"lazy\" aria-describedby=\"gallery-1-1032\" srcset=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-scaled.jpg 1810w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-212x300.jpg 212w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-724x1024.jpg 724w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-768x1086.jpg 768w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-1086x1536.jpg 1086w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-1448x2048.jpg 1448w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-17x24.jpg 17w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-25x36.jpg 25w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/IMG_4322-34x48.jpg 34w\" sizes=\"100vw\" \/>\n\t\t\t<\/div>\n\t\t\t\t<figcaption class='wp-caption-text gallery-caption' 
id='gallery-1-1032'>\n\t\t\t\tIllustration by Jahnavi Koganti\n\t\t\t\t<\/figcaption><\/figure><figure class='gallery-item'>\n\t\t\t<div class='gallery-icon portrait'>\n\t\t\t\t<img width=\"1414\" height=\"2000\" src=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3.png\" class=\"attachment-full size-full\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3.png 1414w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-212x300.png 212w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-724x1024.png 724w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-768x1086.png 768w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-1086x1536.png 1086w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-17x24.png 17w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-25x36.png 25w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/Untitled-design3-34x48.png 34w\" sizes=\"100vw\" \/>\n\t\t\t<\/div><\/figure>\n\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-6525ba0\" data-id=\"6525ba0\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6d485b5 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6d485b5\"
data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t\t\t<div class=\"elementor-row\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-efc6bea\" data-id=\"efc6bea\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-c5d8788\" data-id=\"c5d8788\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap elementor-element-populated\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-6233683 elementor-invisible elementor-widget elementor-widget-text-editor\" data-id=\"6233683\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeIn&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p style=\"text-align: left;\"><span lang=\"en-US\">1. Introduction<\/span><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7cba93b elementor-widget elementor-widget-text-editor\" data-id=\"7cba93b\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>The ability of machines to learn from the past and make predictions about the future promises vast improvements to our individual and collective lives. With artificial intelligence (AI), we are able to rapidly detect patterns and anomalies in data, discover new insights, and inform decision-making. 
Better public health and transportation, more efficient services and increased accessibility, climate change mitigation and adaptation, etc. are part of a long list of the potential benefits of AI.<\/p>\n<p>Governments and companies, eager to deploy and employ these technologies, often cite these potential benefits to frame the adoption of AI as a matter of inevitable progress. The possibilities of \u2018AI for good\u2019 are endless, we are told, as long as we provide the machines with enough data to churn. The technology is neutral, we are assured, and AI experts are working on perfecting these systems, complete with ethical considerations, so that negative impacts are minimized. Yet, as more AI-enabled systems are rolled out and adopted, accounts of unintended consequences and intentional abuse continue to accumulate at an alarming pace. Cautionary tales of the unintended consequences of AI abound \u2013 machines exacerbating racial biases,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-1\" href=\"#article-footnote-1007-1\">1<\/a> exam grading algorithms turning out to be hugely erroneous,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-2\" href=\"#article-footnote-1007-2\">2<\/a> and automated social protection schemes failing society\u2019s most vulnerable, leading to death by starvation in extreme cases.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-3\" href=\"#article-footnote-1007-3\">3<\/a> Then there are egregious cases of intentional abuse \u2013 state and non-state actors leveraging AI capabilities to surveil entire populations,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-4\" href=\"#article-footnote-1007-4\">4<\/a> manipulate voter behavior,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-5\" href=\"#article-footnote-1007-5\">5<\/a> or produce highly realistic manipulated audio-visual content (also known as deepfakes) that can undermine the foundations of trust in society.<a 
class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-6\" href=\"#article-footnote-1007-6\">6<\/a><\/p>\n<p>Amidst these promises and anxieties, a movement towards AI constitutionalism has begun in recent years, as stakeholders from the market, state, and civil society put forth visions of what ethical AI should constitute and how these technologies should be governed. By AI constitutionalism, we mean the process of norm-making or the articulation of key values and principles which guide the design, construction, deployment, and usage of AI technologies. The concept is inspired by the more established body of work on digital constitutionalism, defined by Dennis Redeker and his colleagues as \u201ca constellation of initiatives [including declarations, magna cartas, charters, bills of rights, etc.] that have sought to articulate a set of political rights, governance norms, and limitations on the exercise of power on the Internet\u201d,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-7\" href=\"#article-footnote-1007-7\">7<\/a><a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-8\" href=\"#article-footnote-1007-8\">8<\/a><a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-9\" href=\"#article-footnote-1007-9\">9<\/a> which are not only important for political and symbolic reasons, but also for shaping laws and regulations in the digital era.<\/p>\n<p>Indeed, the process of shaping norms is exceedingly important as it entails a reckoning with our collective values. Norms are a sort of moral compass that guide us towards an imagined future. Especially in the context of AI, a nascent technology whose direction and implications are not yet fully known, some big picture questions need to be discussed. What are our goals and principles as a society? Where do we draw the line between possible trade-offs and values that are sacred and must be protected at all costs? What behaviors do we reward or sanction?
And depending on the answers to these questions, what types of AI should we build (or not build) to aid our progress as a civilization?<\/p>\n<p>In this essay, I outline the current state of AI constitutionalism, and provide arguments about why existing discourses and initiatives in this space will not lead us towards a future that is cognizant of human dignity and sustainable development. Based on these arguments, I imagine a new AI constitutionalism that imbues technological discourses with socio-political relevance, thus opening up discussions rooted in specific applications and contexts. Finally, I put forth three principles that should guide future initiatives in AI constitutionalism:<\/p>\n<p>1) AI must be viewed as a \u2018means\u2019 instead of an \u2018end\u2019, <br \/>2) AI ethics must emphasize relationality and context, and <br \/>3) AI governance must go beyond self-regulation by the industry.<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-017b03a elementor-invisible elementor-widget elementor-widget-text-editor\" data-id=\"017b03a\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeIn&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p style=\"text-align: left;\"><span lang=\"en-US\"><strong>2. 
AI ethics: Why it is not enough<\/strong> <\/span><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0d7a640 elementor-widget elementor-widget-text-editor\" data-id=\"0d7a640\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>In the last five years, the area of AI ethics has become increasingly active, with stakeholders at various levels and in different geographic locations issuing policy statements or guidelines on what ethical AI is or should be. Together, these provide a fertile ground for analyzing the underlying priorities and assumptions that mark the current state of AI constitutionalism and shape the character of norm-making in the field.<\/p>\n<p>Anna Jobin and her colleagues at ETH Zurich gathered at least 84 institutional reports or guidance documents on ethical AI in their 2019 analysis of the global landscape of AI ethics guidelines and principles.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-10\" href=\"#article-footnote-1007-10\">10<\/a> Most of these documents come from private companies (22.6 percent), government agencies (21.4 percent), academic and research institutions (10.7 percent), and intergovernmental or supranational organizations (9.5 percent). Prominent examples at the government level include the OECD AI Principles and the European Commission\u2019s Ethics Guidelines for Trustworthy AI. Corporations, civil society, and other multistakeholder groups have also come up with their own non-legally binding positions and manifestos. 
Examples include Google\u2019s AI principles,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-11\" href=\"#article-footnote-1007-11\">11<\/a> the Universal Guidelines for Artificial Intelligence developed by The Public Voice,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-12\" href=\"#article-footnote-1007-12\">12<\/a> the Tenets of Partnership on AI to Benefit People and Society,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-13\" href=\"#article-footnote-1007-13\">13<\/a> and the Beijing AI Principles.<\/p>\n<p>There is some convergence in the values or principles that emerge as paramount in these ethical AI guidelines and statements. In Jobin and her colleagues\u2019 analysis, the most commonly articulated principles are those of transparency, justice and fairness, non-maleficence (causing no harm), responsibility, and privacy. Six others appear less frequently, and in the following order: beneficence (promoting good), freedom and autonomy, trust, dignity, sustainability, and solidarity. However, despite the convergence in the values that are prioritized by existing AI policy documents, the picture becomes increasingly complex when we look beyond the terms themselves, and focus on their interpretation and implementation. At this point, some divergence or lack of consensus begins to emerge.<\/p>\n<p>Most articulations on AI ethics tend to focus on narrow technical problems and fixes. An evaluation by Thilo Hagendorff from the University of T\u00fcbingen<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-14\" href=\"#article-footnote-1007-14\">14<\/a> of 22 ethical AI guidelines finds that the most popular values (such as accountability, explainability, and privacy) tend to be the easiest to operationalize mathematically, while the more systemic problems are overlooked.
These systemic problems, Hagendorff suggests, include the weakening of social cohesion (through filter bubbles and echo chambers, for instance), the political abuse of AI systems, environmental impacts of the technology, and trolley problems (in which there is no clear decision on which choice is more ethical; for instance, having to choose between killing a pedestrian and killing the driver of an autonomous vehicle). Moreover, very little attention is paid to the ethical dilemmas plaguing the industry itself \u2013 the lack of diversity within the AI community or the invisible and precarious labor that goes into enabling AI technologies, such as dataset labeling and content moderation.<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b038d3c elementor-blockquote--skin-border elementor-blockquote--button-color-official elementor-widget elementor-widget-blockquote\" data-id=\"b038d3c\" data-element_type=\"widget\" data-widget_type=\"blockquote.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<blockquote class=\"elementor-blockquote\">\n\t\t\t<p class=\"elementor-blockquote__content\">\n\t\t\t\tTechnology is framed as an inevitable step towards progress; its application is taken for granted regardless of the context.
In other words, being ethical only entails \u201cbuilding better\u201d; \u201cnot building\u201d is not an option.\t\t\t<\/p>\n\t\t\t\t\t<\/blockquote>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-bb0e9ae elementor-widget elementor-widget-text-editor\" data-id=\"bb0e9ae\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>Discussions on AI ethics are also based on certain assumptions and framings \u2013 \u201cmoral backgrounds\u201d according to Daniel Green and his colleagues<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-15\" href=\"#article-footnote-1007-15\">15<\/a> \u2013 which set the scope and direction of AI constitutionalism. Green and his colleagues\u2019 critical review of seven high-profile value statements in ethical AI finds that the discourse is in line with conventional business ethics but sidesteps the imperatives of social justice and considerations of human flourishing. Technology is framed as an inevitable step towards progress; its application is taken for granted regardless of the context. In other words, being ethical only entails \u201cbuilding better\u201d; \u201cnot building\u201d is not an option. Furthermore, scrutiny of the ethicality of AI technologies is restricted to the design level, and does not extend to the business level. A design-level approach to ethical AI, for instance, looks only at reducing the racial bias of facial recognition software, without questioning the ethics of deploying this technology for mass surveillance in the first place. Another implicit assumption is that ethical design is the exclusive domain of experts within the AI community (for instance, tech companies, academics, lawyers). Product users and buyers are just stakeholders who \u201chave AI happen to them\u201d. 
Seemingly ironclad values and principles start to show cracks when these assumptions are questioned. What can we expect from ethical AI that is techno-deterministic and does not take a critical view of what the technology is used for? For whom and in whose interest are AI technologies being built and deployed?<\/p>\n<p>More challenges emerge as we move away from the substantive content of AI ethics discourses and start putting principles into practice. First, AI ethics is, at best, seen as good intentions with no guarantee of good actions, and at worst, criticized as deliberate attempts to ward off hard regulations. Ethics whitewashing is a real concern as corporations eschew regulations and put forth self-formulated ethical guidelines as sufficient for AI governance. In practice, ethical considerations come in only after the top priorities of profit margins, client requirements, and project constraints have been resolved.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-16\" href=\"#article-footnote-1007-16\">16<\/a> It is difficult to rely on the goodwill of corporations, which have arguably co-opted the academic field of AI ethics in an attempt to delay regulations.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-17\" href=\"#article-footnote-1007-17\">17<\/a> The existence of ethical guidelines does not guarantee that companies will be ethical. 
There are well-documented instances of companies resorting to ethics dumping and shirking wherever convenient, most obvious in the precarious work conditions of content moderation workers in the Global South.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-18\" href=\"#article-footnote-1007-18\">18<\/a><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-be91cf1 elementor-blockquote--skin-border elementor-blockquote--button-color-official elementor-widget elementor-widget-blockquote\" data-id=\"be91cf1\" data-element_type=\"widget\" data-widget_type=\"blockquote.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<blockquote class=\"elementor-blockquote\">\n\t\t\t<p class=\"elementor-blockquote__content\">\n\t\t\t\tEthics whitewashing is a real concern as corporations eschew regulations and put forth self-formulated ethical guidelines as sufficient for AI governance. In practice, ethical considerations come in only after the top priorities of profit margins, client requirements, and project constraints have been resolved.\t\t\t<\/p>\n\t\t\t\t\t<\/blockquote>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8fa54c4 elementor-widget elementor-widget-text-editor\" data-id=\"8fa54c4\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p><!-- wp:paragraph {\"align\":\"center\"} --><\/p>\n<p>Mainstream discussions on AI ethics assume that technologies exist in a vacuum, devoid of context. 
These assumptions are often made by a very small and privileged group of people in the Global North,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-19\" href=\"#article-footnote-1007-19\">19<\/a> who do not see the need to engage people outside of their own community even though the tools they build significantly impact the world at large. When AI technologies are designed and deployed without attention to context, systemic harms are amplified, and entire populations, especially in the Global South, can be rendered more vulnerable.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-20\" href=\"#article-footnote-1007-20\">20<\/a> Above all, discussions on ethics remain just that \u2013 discussions \u2013 not legally binding and enforceable. AI ethics, in its current state, does not lead to ethical AI. If we are serious about making technology work for the people and the planet, our efforts towards AI constitutionalism need to look beyond dominant discourses. This is what I attempt to do in the following section.<\/p>\n<p><!-- \/wp:paragraph --><\/p>\n<p><!-- wp:paragraph --><\/p>\n<p><!-- \/wp:paragraph --><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e4f3cb6 elementor-invisible elementor-widget elementor-widget-text-editor\" data-id=\"e4f3cb6\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeIn&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p style=\"text-align: left;\"><strong><span lang=\"en-US\">3. 
Towards a new AI constitutionalism<\/span><\/strong><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e317b10 elementor-widget elementor-widget-text-editor\" data-id=\"e317b10\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>Already, there is mounting resistance against corporations and their maneuvering of ethical self-regulation. Carly Kind, Director of the Ada Lovelace Institute, observes a \u201cthird wave\u201d of AI ethics, following a first wave comprising principles and philosophical debates, and a second wave focusing on narrow, technical fixes. Kind argues that the third wave of AI ethics is less conceptual, more focused on applications, and takes into account structural issues. Research institutes, activists, and advocates have mobilized to effect changes in AI design and use, with some successes such as legislation and moratoria on the use of algorithms for applications such as test grading and facial recognition.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-21\" href=\"#article-footnote-1007-21\">21<\/a> An emerging body of work on \u201cradical AI\u201d aims to expose the power imbalances exacerbated by AI and offer solutions.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-22\" href=\"#article-footnote-1007-22\">22<\/a><\/p>\n<p>The Covid-19 pandemic has laid bare these structural imbalances and triggered a renewed rush towards digitalization, with its associated concerns. Against this backdrop, we have also seen a shift towards a more critical view of AI and its implementation. It is precisely at this point that a new AI constitutionalism, or at least a significantly upgraded one, is needed and possible. 
We must seize this moment to take control of the narrative and determine what is important for our collective future, and how AI can help us achieve this vision. This is particularly urgent for communities that lie outside of the AI power centres, whose views remain underrepresented in global norm-making and standards-setting, and whose contexts may not be understood by those building the technologies and making the ethical decisions that underpin them. Some groups have already rallied together to collect and compile principles important to their communities, such as the Digital Justice Manifesto put together by the Just Net Coalition<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-23\" href=\"#article-footnote-1007-23\">23<\/a> (a global network of civil society actors based mostly in the Global South), and the CARE Principles for Indigenous Data Governance by the Global Indigenous Data Alliance.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-24\" href=\"#article-footnote-1007-24\">24<\/a><\/p>\n<p>Societal constitutionalism is a process of constitutional rule-making that starts from social groups like civil society, representatives from the business community, or multistakeholder coalitions. As noted by Redeker et al.,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-25\" href=\"#article-footnote-1007-25\">25<\/a> the process can be seen in three phases: \u201can initial phase of coming to an agreement about a set of norms by a specific group; a second phase in which these norms become law; and a third phase in which reflection about this builds up to achieving constitutional character\u201d. Thus far, most of the norm-making in AI has been top down, coming from high-level policymakers, transnational Big Tech firms, or small groups of elites at national levels, reflecting the priorities of these groups. 
This is insufficient not only from a democratic point of view, but also because the vast applications of AI across different fields, from agriculture to zoology, necessitate the inputs of field experts who understand local contexts and implications.<\/p>\n<p>A reimagination of AI constitutionalism should move the discourse beyond a purely technological approach to take societal considerations into account. It needs to move from the realm of the abstract to focus on application. Governance norms, political rights, and limitations of power within the field of AI should be democratically deliberated at different levels of a nested societal system and within different political jurisdictions (e.g. city, state, national, regional, international levels). This would allow all stakeholders and interest groups (e.g. professional associations, business associations, civil society networks, grassroots communities) to contribute meaningfully to the governance of AI from their own vantage points. This collective bottom-up approach, I propose, should be underpinned by the following considerations:<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9becea9 elementor-widget elementor-widget-text-editor\" data-id=\"9becea9\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>3.1. 
AI as a means to an end (and not an end in itself)<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3d0371a elementor-widget elementor-widget-text-editor\" data-id=\"3d0371a\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>One prevalent assumption about AI is that it is an inevitable step towards progress, that AI technologies, if built well, can solve any problem. The tech industry\u2019s optimism in this regard is echoed by the state. As a result, AI becomes an end in itself instead of a means to an end. Technological determinism is reflected in the willingness of governments to keep the AI regulatory environment minimalist, in order to not stifle innovation. In the rush to remain competitive in a high-tech, machine-enabled future, governments have outlined national AI strategies to promote research, talent, and investments in the sector, while remaining noncommittal about safeguarding against potential human rights violations.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-26\" href=\"#article-footnote-1007-26\">26<\/a> The possibilities of \u2018AI for good\u2019 begin to fall flat when seen from this perspective. If the objective of AI is indeed to bring social and economic benefits to the people, governments need to prioritize human rights over the needs of the industry and address the thorny issues that result from these technologies, including mass job displacements and a rapid concentration of wealth in the hands of a few.<\/p>\n<p>For AI to be the means to an end, we need to first clarify our objectives and then critically assess if using AI is the best way to achieve them. 
In this, we can follow the lead of vision statements such as the UN Sustainable Development Goals and the Universal Declaration of Human Rights, which have clearly specified objectives, arrived at through extensive international consultations, negotiations, and agreements. The UN SDGs also come with a specific timeline (by 2030) as well as established indicators to help evaluate if the objectives have been met. Additionally, we can draw on relevant national<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-27\" href=\"#article-footnote-1007-27\">27<\/a> and sectoral policies,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-28\" href=\"#article-footnote-1007-28\">28<\/a> or even organizational vision and mission statements which have often gone through contestations and consensus-building by multiple stakeholders. The use of AI needs to be grounded in such clearly stated visions and blueprints for a better society.<\/p>\n<p>Furthermore, it needs to be acknowledged that AI is only one tool in a full range of options, and not all problems can or should be solved by such technologies. In a presentation titled \u2018How to recognize AI snake oil\u2019, Arvind Narayanan of Princeton University argued that while AI has become highly accurate in applications of perception (e.g. content identification, speech to text, facial recognition), and is improving in applications of automating judgment (e.g. spam detection, detection of copyrighted material, content recommendation), applications that promise to predict social outcomes (e.g. predicting criminal recidivism, job performance, terrorist risk) are still \u201cfundamentally dubious\u201d. Justifying the use of the term \u2018snake oil AI\u2019, Narayanan pointed to existing studies that show that AI backed by thousands of features is not substantially better at predicting social outcomes than manual scoring using only a few data points. 
Discussions on AI constitutionalism should, therefore, be grounded in clearly stated objectives and feasibility studies, and must allow room for rejecting AI usage, especially when there are potential risks for stakeholder communities.<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d49dbaf elementor-widget elementor-widget-text-editor\" data-id=\"d49dbaf\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>3.2. AI ethics to emphasize relationality and context<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ec7127d elementor-widget elementor-widget-text-editor\" data-id=\"ec7127d\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>According to Sabelo Mhlambi from Harvard University, Western ethical traditions tend to emphasize \u201crationality\u201d as a prized quality of personhood \u2013 along the lines of \u201cI think, therefore I am\u201d \u2013 where humanness is defined by the individual\u2019s ability to arrive at the truth through logical deduction.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-29\" href=\"#article-footnote-1007-29\">29<\/a> Not only is this an inherently individualistic worldview, it has also been used to justify colonial and racial subjugation based on the belief that certain groups are not rational enough, and therefore, do not deserve to be treated as humans. 
An AI framework that prioritizes rationality and individualism ignores the interconnectedness of our globalized and digitalized world, and serves to exacerbate historical injustices and perpetuate new forms of digital exploitation. The failure to recognize the relationality of people, objects, and events has left us hurtling towards countless crises and avoidable tragedies (such as man-made climate change exacerbated by nations\u2019 inability to coordinate a multilateral response).<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7e13502 elementor-blockquote--skin-border elementor-blockquote--button-color-official elementor-widget elementor-widget-blockquote\" data-id=\"7e13502\" data-element_type=\"widget\" data-widget_type=\"blockquote.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<blockquote class=\"elementor-blockquote\">\n\t\t\t<p class=\"elementor-blockquote__content\">\n\t\t\t\tAn AI framework that prioritizes rationality and individualism ignores the interconnectedness of our globalized and digitalized world, and serves to exacerbate historical injustices and perpetuate new forms of digital exploitation. 
\t\t\t<\/p>\n\t\t\t\t\t<\/blockquote>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-42dc306 elementor-widget elementor-widget-text-editor\" data-id=\"42dc306\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>Scholars of technology and ethics have offered diverse philosophies anchored in relationality \u2013 such as Ubuntu,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-30\" href=\"#article-footnote-1007-30\">30<\/a> Confucianism,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-31\" href=\"#article-footnote-1007-31\">31<\/a> and indigenous epistemologies (e.g. Hawai\u2019i, Cree, and Lakota)<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-32\" href=\"#article-footnote-1007-32\">32<\/a> \u2013 that view ethical behavior in the context of social relationships and relationships with non-human entities such as the environment, or even sentient AI in the future. The moral character of AI must be judged based on its impacts on social relationships and the overall context and environment it interacts with. For example, evaluating AI-powered automated decision-making systems through the ethical lens of Ubuntu, Mhlambi points to a range of ethical risks. 
These include the exclusion of marginalized communities because of biases and non-participatory decision-making, societal fragmentation as a result of the attention economy and its associated features, and inequalities resulting from the rapid concentration of data and resources in the hands of a powerful few.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-33\" href=\"#article-footnote-1007-33\">33<\/a> In contrast, current ethical AI frameworks say very little about extractive business models of surveillance capitalism or the heavy carbon footprint of training AI.<\/p>\n<p>The development and deployment of AI technologies take place in a complex, networked world. Discussions on AI constitutionalism thus need a paradigmatic shift in ethics from the individual to the relational, and must consider issues as diverse as collective privacy and consent, power and decolonization, invisible labor and environmental externalities in AI supply chains, as well as unintended consequences (for instance, when systems interact in unpredictable ways with their particular environments).<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-df31d3e elementor-widget elementor-widget-text-editor\" data-id=\"df31d3e\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>3.3. 
AI governance to go beyond self-regulation by the industry<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b5c902d elementor-widget elementor-widget-text-editor\" data-id=\"b5c902d\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>The tech ethos of \u201cmove fast and break things\u201d becomes much less persuasive if we make the connection that an algorithmic tweak in Facebook can lead to (or prevent) a genocide in Myanmar.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-34\" href=\"#article-footnote-1007-34\">34<\/a> Some friction in the system, by way of checks and balances, is necessary to make sure that any technology released is safe for society, and to guard against AI exceptionalism. Besides safety, AI can have significant systems-level opportunities and threats. An AI Security Map drawn by Jessica Newman at the University of California, Berkeley proposes 20 such areas \u2013 digital\/physical (e.g. malicious use of AI and automated cyberattacks, secure convergence\/integration of AI with other technologies), political (e.g. disinformation and manipulation, geopolitical strategy, and international collaboration), economic (e.g. reduced inequalities, promotion of AI research and development), and social domains (e.g. 
privacy and data rights, sustainability and ecology).<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-35\" href=\"#article-footnote-1007-35\">35<\/a> It is difficult to imagine that self-regulation in the AI industry would carry us through all of these different areas, across different sectoral and geographical contexts.<\/p>\n<p>The World Economic Forum defines governance as \u201cmaking decisions and exercising authority in order to guide the behavior of individuals and organizations\u201d.<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-36\" href=\"#article-footnote-1007-36\">36<\/a> As AI constitutionalism is ultimately about governance of technology, discussions should not stop at AI ethics or be left to experts. Instead, we should explore other mechanisms such as institutional frameworks and regulatory environments to bridge principles and practice. Under the broad ambit of AI constitutionalism, diverse governance issues can be debated at various policy levels \u2013 for example, cross-border data flows and data sovereignty can be discussed at the international level; hard limits against malicious use of AI and data governance frameworks can be discussed at a national level; data privacy, especially in sensitive sectors such as finance and health, can be taken up at a sectoral level.<\/p>\n<p>Broad participation in AI governance can have positive spillover effects such as trust-building, pooling multidisciplinary knowledge, and capacity-building across different domains. For this, a new AI constitutionalism needs to push for stakeholder participation at various levels. Underrepresented nations need to be invited and supported in norm-making initiatives at the international level; civil society must be consulted and engaged at national and city levels. These discussions should not focus only on the technical, and the onus should be on the AI community to make the information accessible to all. 
As a recent report by Upturn and Omidyar Network puts it, non-technical properties of an automated system, such as clarity about its existence, purpose, constitution,<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-37\" href=\"#article-footnote-1007-37\">37<\/a> and impact, can be \u201cjust as important, and often more important\u201d than its technical artifacts (its policies, inputs and outputs, training data, and source code).<a class=\"bfn-footnoteHook\" id=\"article-footnote-hook-1007-38\" href=\"#article-footnote-1007-38\">38<\/a><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e22fb28 elementor-invisible elementor-widget elementor-widget-text-editor\" data-id=\"e22fb28\" data-element_type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeIn&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p style=\"text-align: left;\"><strong><span lang=\"en-US\">4. End reflections<\/span><\/strong><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-eb41de1 elementor-widget elementor-widget-text-editor\" data-id=\"eb41de1\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p>AI constitutionalism needs to be squarely rooted in societal contexts and must make the connections between technology and the traditional fault lines of power and privilege. The resulting discourses will be complex and contested, reflecting the messy realities that the technology is embedded in, rather than the neat lists of values and principles that see the technology in a vacuum. 
The values of AI ethics (such as fairness, accountability, and transparency) will take on different, more consequential meanings when applied at a societal level, challenging actors in the Global North to explore ways to decolonize AI and distribute its benefits based on solidarity, not paternalism.<\/p>\n<p>By lifting AI constitutionalism from its narrow, technological focus to the societal and application level, we will find opportunities for greater participation and a more diverse range of perspectives to shape governance norms, power structures, and political rights in the field of AI. This will make space for actors in the Global South to deliberate on our own AI-enabled future, drawing from our cultural philosophies, and governing AI through our laws and institutional frameworks. It is critical that we claim this space to govern technology, as the unprecedented advances promised by AI can only be fulfilled if it is carefully controlled. Forfeiting this space would leave us stranded with a vastly different outcome of being controlled by technology and those wielding it.<\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3dcb76e elementor-widget elementor-widget-text-editor\" data-id=\"3dcb76e\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p><!-- wp:paragraph {\"align\":\"center\"} --><\/p>\n<div class=\"bfn-footnotes\"><h3 class='bfn-footnotes-title'>Notes<\/h3><ul class=\"bfn-footnotesList\"><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-1\" href=\"#article-footnote-hook-1007-1\">1<\/a> <a 
href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-2\" href=\"#article-footnote-hook-1007-2\">2<\/a> <a href=\"https:\/\/blogs.lse.ac.uk\/impactofsocialsciences\/2020\/08\/26\/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco\/\">https:\/\/blogs.lse.ac.uk\/impactofsocialsciences\/2020\/08\/26\/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-3\" href=\"#article-footnote-hook-1007-3\">3<\/a> <a href=\"https:\/\/www.theguardian.com\/technology\/2019\/oct\/14\/automating-poverty-algorithms-punish-poor\">https:\/\/www.theguardian.com\/technology\/2019\/oct\/14\/automating-poverty-algorithms-punish-poor<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-4\" href=\"#article-footnote-hook-1007-4\">4<\/a> <a href=\"https:\/\/www.nytimes.com\/2019\/04\/14\/technology\/china-surveillance-artificial-intelligence-racial-profiling.html\">https:\/\/www.nytimes.com\/2019\/04\/14\/technology\/china-surveillance-artificial-intelligence-racial-profiling.html<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-5\" href=\"#article-footnote-hook-1007-5\">5<\/a> <a href=\"https:\/\/www.theguardian.com\/technology\/2019\/mar\/17\/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook\">https:\/\/www.theguardian.com\/technology\/2019\/mar\/17\/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-6\" href=\"#article-footnote-hook-1007-6\">6<\/a> <a 
href=\"https:\/\/www.forbes.com\/sites\/robtoews\/2020\/05\/25\/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared\/#50a7daf57494\">https:\/\/www.forbes.com\/sites\/robtoews\/2020\/05\/25\/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared\/#50a7daf57494<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-7\" href=\"#article-footnote-hook-1007-7\">7<\/a> Other researchers have also found it useful to adopt Redeker et al.\u2019s conceptual framework of digital constitutionalism into the context of AI, such as a mapping effort on ethical and rights-based approaches to principles of AI, conducted by Jessica Fjeld et al. in Harvard University\u2019s Berkman Klein Center for Internet and Society. See Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., &amp; Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (SSRN Scholarly Paper ID 3518482). Social Science Research Network. <a href=\"https:\/\/papers.ssrn.com\/abstract=3518482\">https:\/\/papers.ssrn.com\/abstract=3518482<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-8\" href=\"#article-footnote-hook-1007-8\">8<\/a> Redeker, D., Gill, L., &amp; Gasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302\u2013319. <a href=\"https:\/\/doi.org\/10.1177\/1748048518757121\">https:\/\/doi.org\/10.1177\/1748048518757121<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-9\" href=\"#article-footnote-hook-1007-9\">9<\/a> Even though this is by no means the only interpretation of what digital constitutionalism means, it is the definition that I find to be helpful for the purpose of the essay. For more discussions on how others have defined the term, see more at Celeste, E. (2018). 
Digital Constitutionalism: Mapping the Constitutional Response to Digital Technology\u2019s Challenges (SSRN Scholarly Paper ID 3219905). Social Science Research Network. <a href=\"https:\/\/papers.ssrn.com\/abstract=3219905\">https:\/\/papers.ssrn.com\/abstract=3219905<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-10\" href=\"#article-footnote-hook-1007-10\">10<\/a> Jobin, A., Ienca, M., &amp; Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389\u2013399. <a href=\"https:\/\/doi.org\/10.1038\/s42256-019-0088-2\">https:\/\/doi.org\/10.1038\/s42256-019-0088-2<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-11\" href=\"#article-footnote-hook-1007-11\">11<\/a> <a href=\"https:\/\/ai.google\/principles\">https:\/\/ai.google\/principles<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-12\" href=\"#article-footnote-hook-1007-12\">12<\/a> <a href=\"https:\/\/thepublicvoice.org\/ai-universal-guidelines\/\">https:\/\/thepublicvoice.org\/ai-universal-guidelines\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-13\" href=\"#article-footnote-hook-1007-13\">13<\/a> <a href=\"https:\/\/www.partnershiponai.org\/tenets\/\">https:\/\/www.partnershiponai.org\/tenets\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-14\" href=\"#article-footnote-hook-1007-14\">14<\/a> Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99\u2013120. <a href=\"https:\/\/doi.org\/10.1007\/s11023-020-09517-8\">https:\/\/doi.org\/10.1007\/s11023-020-09517-8<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-15\" href=\"#article-footnote-hook-1007-15\">15<\/a> Greene, D., Hoffmann, A. L., &amp; Stark, L. 
(2019, January 8). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Hawaii International Conference on System Sciences 2019 (HICSS-52). <a href=\"https:\/\/aisel.aisnet.org\/hicss-52\/dsm\/critical_and_ethical_studies\/2\">https:\/\/aisel.aisnet.org\/hicss-52\/dsm\/critical_and_ethical_studies\/2<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-16\" href=\"#article-footnote-hook-1007-16\">16<\/a> Orr, W., &amp; Davis, J. L. (2020). Attributions of ethical responsibility by Artificial Intelligence practitioners. Information, Communication &amp; Society, 23(5), 719\u2013735. <a href=\"https:\/\/doi.org\/10.1080\/1369118X.2020.1713842\">https:\/\/doi.org\/10.1080\/1369118X.2020.1713842<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-17\" href=\"#article-footnote-hook-1007-17\">17<\/a> <a href=\"https:\/\/theintercept.com\/2019\/12\/20\/mit-ethical-ai-artificial-intelligence\/\">https:\/\/theintercept.com\/2019\/12\/20\/mit-ethical-ai-artificial-intelligence\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-18\" href=\"#article-footnote-hook-1007-18\">18<\/a> Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy &amp; Technology, 32(2), 185\u2013193. <a href=\"https:\/\/doi.org\/10.1007\/s13347-019-00354-x\">https:\/\/doi.org\/10.1007\/s13347-019-00354-x<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-19\" href=\"#article-footnote-hook-1007-19\">19<\/a> Increasingly, researchers are arguing that \u201cGlobal South\u201d and \u201cGlobal North\u201d are not defined by geographical boundaries. There can be multiple \u201cSouths\u201d, even within developed contexts, where communities are oppressed, under-developed and marginalised. 
Similarly, \u201cNorths\u201d can exist in developing countries among the privileged and powerful. See Arun, C. (2019). AI and the Global South: Designing for Other Worlds (SSRN Scholarly Paper ID 3403010). Social Science Research Network. <a href=\"https:\/\/papers.ssrn.com\/abstract=3403010\">https:\/\/papers.ssrn.com\/abstract=3403010<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-20\" href=\"#article-footnote-hook-1007-20\">20<\/a> Arun, C. (2019). AI and the Global South: Designing for Other Worlds (SSRN Scholarly Paper ID 3403010). Social Science Research Network. <a href=\"https:\/\/papers.ssrn.com\/abstract=3403010\">https:\/\/papers.ssrn.com\/abstract=3403010<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-21\" href=\"#article-footnote-hook-1007-21\">21<\/a> <a href=\"https:\/\/venturebeat.com\/2020\/08\/23\/the-term-ethical-ai-is-finally-starting-to-mean-something\/\">https:\/\/venturebeat.com\/2020\/08\/23\/the-term-ethical-ai-is-finally-starting-to-mean-something\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-22\" href=\"#article-footnote-hook-1007-22\">22<\/a> <a href=\"https:\/\/radicalai.net\/\">https:\/\/radicalai.net\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-23\" href=\"#article-footnote-hook-1007-23\">23<\/a> <a href=\"https:\/\/justnetcoalition.org\/digital-justice-manifesto.pdf\">https:\/\/justnetcoalition.org\/digital-justice-manifesto.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-24\" href=\"#article-footnote-hook-1007-24\">24<\/a> <a href=\"https:\/\/www.gida-global.org\/care\">https:\/\/www.gida-global.org\/care<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-25\" href=\"#article-footnote-hook-1007-25\">25<\/a> Citing from 
Teubner (2012), in Redeker, D., Gill, L., &amp; Gasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302\u2013319. <a href=\"https:\/\/doi.org\/10.1177\/1748048518757121\">https:\/\/doi.org\/10.1177\/1748048518757121<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-26\" href=\"#article-footnote-hook-1007-26\">26<\/a> <a href=\"https:\/\/www.gp-digital.org\/wp-content\/uploads\/2020\/04\/National-Artifical-Intelligence-Strategies-and-Human-Rights%E2%80%94A-Review_April2020.pdf\">https:\/\/www.gp-digital.org\/wp-content\/uploads\/2020\/04\/National-Artifical-Intelligence-Strategies-and-Human-Rights%E2%80%94A-Review_April2020.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-27\" href=\"#article-footnote-hook-1007-27\">27<\/a> An example would be Malaysia\u2019s Shared Prosperity Vision 2030, accessible at <a href=\"https:\/\/www.pmo.gov.my\/wp-content\/uploads\/2019\/10\/SPV2030-summary-en.pdf\">https:\/\/www.pmo.gov.my\/wp-content\/uploads\/2019\/10\/SPV2030-summary-en.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-28\" href=\"#article-footnote-hook-1007-28\">28<\/a> Some examples from Malaysia: the National Policy on Biological Diversity (2016-2025), the Malaysia Education Blueprint (2013-2025), the National Policy on Industry 4.0 (Industry4WRD).<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-29\" href=\"#article-footnote-hook-1007-29\">29<\/a> Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance (No. 2020\u2013009; Carr Center Discussion Paper Series, p. 31). Harvard Kennedy School Carr Center for Human Rights Policy. 
<a href=\"https:\/\/carrcenter.hks.harvard.edu\/files\/cchr\/files\/ccdp_2020-009_sabelo_b.pdf\">https:\/\/carrcenter.hks.harvard.edu\/files\/cchr\/files\/ccdp_2020-009_sabelo_b.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-30\" href=\"#article-footnote-hook-1007-30\">30<\/a> Ibid.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-31\" href=\"#article-footnote-hook-1007-31\">31<\/a> Wong, P.-H. (2012). Dao, Harmony and Personhood: Towards a Confucian Ethics of Technology. Philosophy &amp; Technology, 25(1), 67\u201386.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-32\" href=\"#article-footnote-hook-1007-32\">32<\/a> Lewis, J. E., Arista, N., Pechawis, A., &amp; Kite, S. (2018). Making Kin with the Machines. Journal of Design and Science. <a href=\"https:\/\/doi.org\/10.21428\/bfafd97\">https:\/\/doi.org\/10.21428\/bfafd97<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-33\" href=\"#article-footnote-hook-1007-33\">33<\/a> Mhlambi, S. (2020). From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance (No. 2020\u2013009; Carr Center Discussion Paper Series, p. 31). Harvard Kennedy School Carr Center for Human Rights Policy. 
<a href=\"https:\/\/carrcenter.hks.harvard.edu\/files\/cchr\/files\/ccdp_2020-009_sabelo_b.pdf\">https:\/\/carrcenter.hks.harvard.edu\/files\/cchr\/files\/ccdp_2020-009_sabelo_b.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-34\" href=\"#article-footnote-hook-1007-34\">34<\/a> <a href=\"https:\/\/time.com\/5197039\/un-facebook-myanmar-rohingya-violence\/\">https:\/\/time.com\/5197039\/un-facebook-myanmar-rohingya-violence\/<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-35\" href=\"#article-footnote-hook-1007-35\">35<\/a> Newman, J. C. (2019). Toward AI Security: Global Aspirations for a More Resilient Future (CLTC White Paper Series). Centre for Long-term Cybersecurity. <a href=\"https:\/\/cltc.berkeley.edu\/wp-content\/uploads\/2019\/02\/CLTC_Cussins_Toward_AI_Security.pdf\">https:\/\/cltc.berkeley.edu\/wp-content\/uploads\/2019\/02\/CLTC_Cussins_Toward_AI_Security.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-36\" href=\"#article-footnote-hook-1007-36\">36<\/a> <a href=\"http:\/\/www3.weforum.org\/docs\/WEF_Global_Technology_Governance.pdf\">http:\/\/www3.weforum.org\/docs\/WEF_Global_Technology_Governance.pdf<\/a>.<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-37\" href=\"#article-footnote-hook-1007-37\">37<\/a> A mapping of its technical elements, human participation, governing rules, and how all of these interact. 
<\/li><li class=\"bfn-footnoteItem\"><a class=\"bfn-footnoteRef\" id=\"article-footnote-1007-38\" href=\"#article-footnote-hook-1007-38\">38<\/a> <a href=\"https:\/\/www.data.govt.nz\/assets\/Uploads\/Public-Scrutiny-of-Automated-Decisions.pdf\">https:\/\/www.data.govt.nz\/assets\/Uploads\/Public-Scrutiny-of-Automated-Decisions.pdf<\/a>.<\/li><\/ul><\/div>\n<p><!-- \/wp:paragraph --><\/p>\n<p><!-- wp:paragraph --><\/p>\n<p><!-- \/wp:paragraph --><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<section class=\"elementor-section elementor-inner-section elementor-element elementor-element-196c7e6 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"196c7e6\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t\t\t<div class=\"elementor-row\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-inner-column elementor-element elementor-element-a8b7cd7\" data-id=\"a8b7cd7\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap elementor-element-populated\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-e752716 elementor-widget elementor-widget-image\" data-id=\"e752716\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-image\">\n\t\t\t\t\t\t\t\t\t\t\t\t<img width=\"405\" height=\"453\" src=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/JunE.jpg\" class=\"attachment-large size-large\" alt=\"\" loading=\"lazy\" srcset=\"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/JunE.jpg 405w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/JunE-268x300.jpg 268w, 
https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/JunE-21x24.jpg 21w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/JunE-32x36.jpg 32w, https:\/\/projects.itforchange.net\/digital-new-deal\/wp-content\/uploads\/2020\/11\/JunE-43x48.jpg 43w\" sizes=\"100vw\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-inner-column elementor-element elementor-element-8228cff\" data-id=\"8228cff\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap elementor-element-populated\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-e49f11b elementor-widget elementor-widget-text-editor\" data-id=\"e49f11b\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-text-editor elementor-clearfix\">\n\t\t\t\t<p><span style=\"color: #000000;\">Jun-E Tan is an independent researcher based in Malaysia, currently working on the topic of AI governance in Southeast Asia. Her research interests are broadly anchored in the areas of sustainable development, human rights, and digital communication. More information on her research and projects can be found on her website, https:\/\/jun-etan.com. 
<\/span><\/p>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-33 elementor-top-column elementor-element elementor-element-5ad17a3\" data-id=\"5ad17a3\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-5ca18cb elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5ca18cb\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t\t\t<div class=\"elementor-row\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-f6b3af8\" data-id=\"f6b3af8\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-column-wrap\">\n\t\t\t\t\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Imagining the AI We Want: Towards a New AI Constitutionalism Jun-E Tan Imagining the AI We Want: Towards a New AI Constitutionalism Jun-E Tan Artificial intelligence (AI) technologies promise vast benefits to society but also bring unprecedented risks when abused or misused. 
As such, a movement towards AI constitutionalism has begun, as stakeholders come together &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/projects.itforchange.net\/digital-new-deal\/2020\/11\/01\/imagining-the-ai-we-want-towards-ai-constitutionalism\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Imagining the AI We Want: Towards a New AI Constitutionalism&#8221;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"elementor_header_footer","format":"standard","meta":[],"categories":[4],"tags":[2],"_links":{"self":[{"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/posts\/1007"}],"collection":[{"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/comments?post=1007"}],"version-history":[{"count":24,"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/posts\/1007\/revisions"}],"predecessor-version":[{"id":1572,"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/posts\/1007\/revisions\/1572"}],"wp:attachment":[{"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/media?parent=1007"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/categories?post=1007"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/projects.itforchange.net\/digital-new-deal\/wp-json\/wp\/v2\/tags?post=1007"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}