‘Algorithms mean power’… Do they?
2021-08-19 10:25:59
Today, we have entered a murky era that is algorithm-based and data-centric. Yet while murky, it is not dark. We find ourselves amid the cloudiness of data but can still see brightness ahead, and we still have the right to choose where we are heading.
Exposure to ‘algorithmic power’
In an era in which “data means resource,” “data means power,” and even “data means everything,” humans, as social subjects in the traditional sense, are inevitably being digitalized. We now live under the influence of algorithmic power in many respects. Computational code functions as law: it can limit freedom, or make freedom possible. In such an era, every individual risks being rendered one-sided and weightless while algorithmic power gains the upper hand, ascending into a prevailing superpower.
All living creatures are distinctive beings whose daily existence unfolds in vivid, manifold forms. In today’s era, however, when data means everything, everyone is condensed into and projected onto certain screens (mostly screens of commercial interest), their appearance vague but their personal data explicit. Before this kind of superpower, human subjectivity faces the peril of being stripped away: individuals are “compressed” into a set of discrete eigenvalues that indicate their features. In this context, all that matters is that the eigenvalues produced by algorithmic rules satisfy the specific demands of those who deploy them. Commercial organizations, for example, apply algorithmic rules to user information in order to extract common characteristics that serve their profits and revenue growth.
As a newly arisen power, algorithmic power does not treat us as real subjects, but as objects that can be predicted and controlled by mathematical equations. For the owners of algorithmic rules, there is no need to regard ordinary people as real subjects; they are merely a set of ever-changing statistics. As Zheng Ge, a professor of law at Shanghai Jiao Tong University, points out, the intangible intelligent algorithm has altered people’s modes of production, ways of consumption, relations of production, and social relations. At the same time, it erodes civil rights through a technological power that is more obscure, ubiquitous, and diversified. Humans, though possessed of subjective initiative, are compelled to put on a digitalized, virtual cover.
Being forcibly exposed is the reality of contemporary life. In the era of big data, we have no real privacy, since heaps of de-personalized data can be rendered precisely personal through a series of algorithmic deductions and calculations. The reality we face today is that the state of personal identity having been detected, or being detectable, is usually the result of data analysis, before which both individual citizens and the authorities feel powerless. Privacy is not so much “violated” as “discovered”: the data does not infringe upon the public’s privacy; it merely, reasonably and logically, “discovers” it. We can only feel pitifully helpless at the revelation of personal privacy.
Legal algorithm as an option
In view of this, we cannot remain aloof and indifferent. The wild horse of artificial intelligence should be saddled with reason, and the unruliness of algorithms must be regulated by law. An algorithm should have values, and efficiency should not be the sole goal that directs those values. Human values and reason should be injected into the interior of different types of algorithms.
A legal algorithm is a good option that attempts to solve technological problems through technology-based schemes. This means refining current legal knowledge into a more systematic form that machines can understand. The overarching goal is to harness machines by letting them understand legal knowledge, that is, to let algorithms understand the law so as to better utilize algorithms. This helps prevent algorithmic power from expanding excessively. The option of “taking code as law” can be understood as “fighting poison with poison.” Even if this legal approach cannot be widely adopted in the future to regulate artificial intelligence, it can serve as a practical solution in individual cases. Just as the most effective means of preventing a violent act is stronger force, the most viable way to restrict code is stronger code. Certainly, this does not mean rejecting the guiding role of human reason or the constraints of legal regulation.
Guo Liang is a postdoctoral researcher at the Research Center of Science & Technology and Legal Studies at Zhejiang University.