Research on the Legal Path of Algorithmic Decision Risk Prevention
Citation: CHEN Yang, PEI Yanan. Research on the Legal Path of Algorithmic Decision Risk Prevention[J]. Journal of Chongqing University of Posts and Telecommunications (Social Science Edition), 2021, 33(3): 72-81.
Authors: CHEN Yang, PEI Yanan
Institution: School of Cyber Security and Information Law, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Funding: Youth Fund Project of the Ministry of Education for Humanities and Social Sciences Research, "Research on China's Family Trial System" (18YJC820008); Key Research Project of the Research Center for Cyber Society Development Issues, a Chongqing Key Research Base for Humanities and Social Sciences, "Implications of US and European Cybersecurity Protection Experience for China's Cybersecurity Situation" (K2017SKJD09)
Abstract: With the deepening application of algorithmic decision-making in administrative, judicial, and commercial fields, privacy leakage, algorithmic discrimination, and the erosion of human autonomy have emerged as its three principal risks. To guard against these risks, the first task is to reaffirm, establish, and strengthen awareness of the human being as subject, thereby positioning algorithmic decision-making, conversely, as a tool. On this premise, China should also learn from extraterritorial experience, actively studying the institutional construction and rule design for algorithmic decision-making represented by the United States, the European Union, and Germany, with particular attention to the pertinence and professionalism of their legal institutions and rules. Above all, the response must be grounded in China's own national conditions and attuned to the development of the times: the principle of proportionality should be applied, where appropriate, to strengthen the risk assessment mechanism for algorithmic decision-making, responsible parties should be defined scientifically and reasonably, and laws and regulations on liability for such risks should be clearly formulated, so as to minimize the secondary harm that legal ambiguity inflicts on data subjects.
Keywords: algorithmic decision-making; algorithmic decision-making risk; risk assessment; responsibility regulation
Received: 2020-10-29

Indexed by: Wanfang Data and other databases.