Brian Tse 谢旻希
Brian Tse (谢旻希) is the Founder and CEO of Concordia AI (安远AI), a Beijing-based social enterprise focused on AI safety and governance. He is also a Policy Affiliate at the Centre for the Governance of AI.
Previously, Brian was a Senior Advisor to the Partnership on AI. He co-edited the book Global Perspectives on AI Governance (《全球视野下的人工智能治理》), published by Tongji University Press. He has served on the program committees of AI safety workshops at AAAI, IJCAI, and ICFEM. He was part of the UNICRI–INTERPOL expert group on the Responsible AI Innovation Toolkit for Law Enforcement, and a founding member of the AI for SDGs Cooperation Network at the Beijing Academy of AI. Brian has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University on global risks and AI foresight.
Writing
Brian's current writing centers on the governance of artificial intelligence.
For AI developers to earn trust from users, civil society, governments, and other stakeholders, they need to move beyond principles to mechanisms for demonstrating responsible behavior. Making and assessing verifiable claims, to which developers can be held accountable, is one step in this direction. This report suggests steps that different stakeholders can take to make it easier to verify claims made about AI systems and their associated development processes. The report received extensive media coverage, including in the Financial Times, VentureBeat, 机器之心 (Synced Review), AI科技评论 (within Leiphone), 中国经营报 (China Business Journal), and 专知人工智能 (zhuanzhi.ai). Brian participated in the April 2019 workshop, co-authored the report, and translated the Chinese version.
From healthcare to education to transportation, AI could improve the delivery of public services. But how can governments position themselves to take advantage of this AI-powered transformation? In this report, Oxford Insights and the International Development Research Centre (IDRC) present the findings of their Government AI Readiness Index to answer this question. Brian was invited to comment on the report as an expert on East Asia (AI Readiness in East Asia: An Emerging Powerhouse).
The report reviews key developments in AI governance in 2019. Fifty experts from 44 institutions contributed to it, including AI scientists, academic researchers, industry representatives, and policy experts. This group of experts covers a wide range of regional developments and perspectives, including those of the United States, Europe, and Asia. The report has been cited in the Montreal AI Ethics Institute's The State of AI Ethics and received extensive media coverage, including 中国科学报 (ScienceNet.cn), 文汇报 (Wen Wei Po), and 澎湃新闻 (The Paper, thepaper.cn). Brian was a co-executive director of the report.
This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development. The materials presented range from blogs to books, with an emphasis on English translations of Mandarin source materials.
This report summarizes the main findings of the 2019 AGI Strategy Meeting held by the Foresight Institute. The meeting sought to map out concrete strategies toward cooperation, both by reframing adversarial coordination topics in cooperative terms and by sketching concrete positive solutions to coordination problems.
Translating
Brian has directly contributed to or advised the translation and publication of the following works.
The Alignment Problem: Machine Learning and Human Values 《人机对齐:如何让人工智能学习人类价值观》
Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.
Technical Countermeasures for Security Risks of Artificial General Intelligence (针对强人工智能安全风险的技术应对策略)
This paper analyzes the sources of the security risks of AGI from the aspects of model uninterpretability, unreliability of algorithms and hardware, and uncontrollability over autonomous consciousness. The authors propose a security risk assessment system for AGI from the aspects of ability, motivation, and behavior. Subsequently, they discuss the defense countermeasures in the research and application stages.
The Precipice: Existential Risk and the Future of Humanity 《危崖:生存性风险和人类的未来》
Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the existential risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies that can safeguard humanity.
Human Compatible: Artificial Intelligence and the Problem of Control 《AI新生:破解人机共存密码——人类最后一个大问题》
If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy.
Life 3.0: Being Human in the Age of Artificial Intelligence《生命3.0:人工智能时代,人类的进化与重生》
This book discusses artificial intelligence and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.
Fu Ying on AI and International Relations (人工智能对国际关系的影响初析)
This essay analyzes how AI changes the international order from the perspective of international structures and norms. It suggests that countries should discuss the future international norms of AI from the perspective of building a community of shared future for mankind and the principle of common security.
Speaking
Select talks and seminars
- “AI Toolkit for Law Enforcement”, The Third Global Meeting on AI for Law Enforcement, hosted by the United Nations Interregional Crime and Justice Research Institute (UNICRI) Centre for AI and Robotics and the International Criminal Police Organization (INTERPOL) Global Complex for Innovation, 2020
- "Towards a Global Community of Shared Future in AGI", Beneficial AGI Conference, Puerto Rico, 2019
- “AI and US-China Relations”, Carnegie-Tsinghua Center for Global Policy, Beijing, 2019
- “AI Safety and Global Cooperation”, UC Berkeley’s Center for Human-Compatible AI Annual Workshop, Asilomar, 2019
- “Responsible AI Development and Global Cooperation”, AI Summit Asia, Hong Kong, 2019
- “The Future of Humanity and China Specialists”, Tsinghua University’s Schwarzman College, Beijing, 2019
- “The Future of Humanity and Asian Philanthropy”, Asian Philanthropy Circle, Singapore, 2019
Select panel discussions
- “AI for Children: Beijing Principles”, Beijing Academy of AI, Beijing, 2020
- “AI and Coronavirus”, Open Austria, 2020
- “AI Governance Forum”, Beijing Academy of AI, Beijing, 2019
- “World Economic Forum Global Shapers Technology and Leadership Summit”, Beijing, 2019
- “Opportunities for Cooperation on AI at Academic and Corporate Levels”, Beneficial AI Conference, Puerto Rico, 2019 (Video, 28 min)
- “AI Industry Immersion”, Tsinghua University’s Schwarzman College, Beijing, 2019
- “Dialogue with Prof. Yew Kwang Ng”, The 11th International Youth Summit on Energy and Climate Change, Shenzhen, 2019
Media
This page focuses on Brian's Chinese-language media coverage.
In June 2023, at the AI Safety and Alignment Forum of the Beijing Academy of AI (BAAI) Conference, fourteen speakers from China and abroad, including "father of deep learning" Geoffrey Hinton, OpenAI founder Sam Altman, and Academician Zhang Bo, held dialogues on human-machine alignment and large models. The Chinese edition of science writer Brian Christian's latest book, The Alignment Problem (《人机对齐》), was also officially launched at the forum. The book was published by Hunan Science and Technology Press, with the translation reviewed by Concordia AI. Concordia AI founder Brian Tse hosted the book launch ceremony.
On June 29, Brian Tse, founder of Concordia AI and Policy Researcher at the University of Oxford's Centre for the Governance of AI, visited the China Institute for Socio-Legal Studies at Shanghai Jiao Tong University. Ji Weidong, Senior Professor of Humanities at Shanghai Jiao Tong University and Dean of the Institute, and Wu Chentao, Professor in the Department of Computer Science and Engineering, held in-depth discussions with the visitors on the safety and alignment of large models and artificial general intelligence.
In June 2023, Concordia AI accompanied Professor Stuart Russell on a visit to Tsinghua University's Institute for AI International Governance (I-AIIG) for discussions on AI safety and international governance.
At the recent 2023 IJCAI–WAIC summit forum "Large Models and the Technological Singularity: Humanities and Science Face to Face", Professor Max Tegmark joined remotely to share his views on why AI research should be paused and how AI can be kept under control. Concordia AI founder Brian Tse served as moderator, while Professor Wu Guanjun, Dean of the School of Politics and International Relations at East China Normal University, and Professor Toby Walsh of the University of New South Wales took part in the Q&A session.
Over the course of a month, Concordia AI and Synced (机器之心) jointly hosted "Towards Safe, Reliable, and Controllable AI", a six-part lecture series that has now successfully concluded.
2020 in Review with Brian Tse
An interview with Synced discussing current developments and future trends in artificial intelligence.
On September 14, 2020, the Beijing Academy of Artificial Intelligence (BAAI), together with academic institutions including the Institute for Artificial Intelligence at Peking University, the Institute for Artificial Intelligence at Tsinghua University, the Institute of Computing Technology, the Institute of Automation, and the Institute of Psychology of the Chinese Academy of Sciences, and Tsinghua University's Institute for AI International Governance, as well as AI companies and industry alliances including Xiaomi, Megvii, TAL Education, Gaosi Education, Geekbang, Qihoo 360, and the New Generation AI Industry Technology Innovation Strategic Alliance, jointly released the Beijing Consensus on Artificial Intelligence for Children, China's first set of AI development principles focused on children.
On August 27, 2019, Fu Ying met at Building 27, Shengyinyuan, Tsinghua University, with a visiting delegation: Allan Dafoe, Director of the Centre for the Governance of AI at the University of Oxford; Jaan Tallinn, a co-founder of Skype; and Brian Tse, Policy Researcher at the Centre. The two sides exchanged views on the international governance of artificial intelligence.
Lu Bin and Zhang Guofu, Vice Deans of the Gaoling School of Artificial Intelligence at Renmin University of China, together with Wang Yiwei, Professor at the School of International Studies, met with the Centre's Director Allan Dafoe, Policy Researcher Brian Tse, and Jaan Tallinn, founding engineer of Skype and founder of the investment firm Ambient Sound Investments.
In May 2019, the Beijing Academy of Artificial Intelligence (BAAI) released the Beijing AI Principles. That same year, the academy held the 2019 BAAI Conference at the China National Convention Center; at the afternoon forum on AI ethics, safety, and governance, AI experts from the European Union, the United Kingdom, the United States, Japan, and China shared their views on the ethical challenges of AI.
"Artificial intelligence should be safe and reliable, and safety covers risks on multiple fronts, including algorithmic vulnerabilities, application-level issues, and the motivations of those using it," said Brian Tse, Policy Researcher at the Center for the Governance of AI at the University of Oxford's Future of Humanity Institute, in an interview with The Paper (澎湃新闻) on the sidelines of the 11th International Youth Summit on Energy and Climate Change, held recently in Shenzhen.
In March 2019, US President Donald Trump signed an executive order launching the "American AI Initiative" to stimulate US government investment in artificial intelligence and promote the development of the American AI industry. Synced (机器之心) invited Brian Tse, Policy Researcher at the Center for the Governance of AI, to share his views on the new American AI plan.
An interview with China Tech Blog, a publication incubated at Tsinghua's Schwarzman College.
© 2022