The recent debate between the Associated Press and Facebook about the success of removing content posted by terrorist organizations should be a wake-up call concerning content moderation capabilities on these kinds of platforms. Facebook's data indicates the removal of 99% of terrorism content, while the AP contends that Facebook's success rate is only 38%. The point here is that #machine_learning adds only a limited capability to human content moderation. The current state of the art in #machine_learning in this area is far from meeting expectations; it is a fantasy built around the supposedly magical tool of #artificial_intelligence (AI).
Terrorist networks will continue to exploit advanced technology in the areas of social network mapping and recruitment, benefiting from the #AI arms race. New #AI_technology in drones, among other things, will lead to cheap versions that may easily fall into the hands of terrorists. There is no doubt that terrorist groups like ISIS will attempt to use every possible means to pursue their activities. The gaps in content moderation on social media and communication networks will be opportunities for ISIS and others as well.
#Machine_learning has a technology aspect, a social context, and an industry dimension. As technology, it is both a product of high-tech research and a market for it. The social context is where it touches people's daily lives: there is a growing #AI presence influencing people's socio-economic conditions. This is an evolving phenomenon, and it requires social, political, legal, and ethical evaluation in addition to technological evaluation.
#Machine_learning relies on algorithms known as classifiers. A classifier must be trained on data, and it works better when the differences between categories show up clearly in that data, no matter how massive the dataset is. Because it learns from labeled categories, it is fragile in the face of unforeseen conditions; it has no cognitive ability comparable to a human's in this sense. That is why one should not expect #machine_learning to cope with the complexities of societal and cultural value settings: automated tools tuned for one setting may be fragile in others. However, it is also next to impossible to monitor content at today's scale of social media and related platforms with human capability alone. The need for #machine_learning is obvious.
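To make that brittleness concrete, here is a minimal sketch, assuming scikit-learn is available; all training texts, labels, and the test phrase are invented for illustration, not drawn from any real moderation system. A classifier trained on a small set of labeled examples has no basis to judge content whose wording falls entirely outside its training vocabulary.

```python
# Minimal sketch of classifier brittleness. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set (1 = violating, 0 = benign).
texts = [
    "join our fight and take up arms",
    "support the attack on the enemy",
    "family picnic photos from the weekend",
    "recipe for homemade bread",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Same kind of intent, but every token lies outside the training
# vocabulary: the feature vector is all zeros, so the model falls
# back to roughly its class prior (~50/50) and has no basis to
# flag the post.
print(clf.predict_proba(["rally comrades, bring weapons tonight"]))
```

The failure here is structural, not a matter of tuning: whatever the model learned about its labeled categories simply does not transfer to inputs that look nothing like them.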
Uploaders of content are aware of the deficits of machine-learning-enhanced tools. They develop measures to bypass automated filters, modifying the content until they achieve their goal of staying on the platform as long as possible. Human probes would help automated tools discover such blind spots, but building truly efficient filters may not always be possible. Meanwhile, the industry dimension of machine learning does not like to disappoint its customers: providers may face considerable fines or penalties if they draw government dissatisfaction, even over benign posts. This situation results in over-filtering, which places machine learning on the "artificial" side rather than the desired "intelligence" side of content management.
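As a toy illustration of the evasion tactic described above, consider a naive keyword filter; the banned terms here are placeholders, and no real platform relies on matching this crude. Trivial character substitutions defeat the filter while leaving the message perfectly legible to humans.

```python
# Toy keyword filter and a trivial evasion. Banned terms are placeholders.
BANNED = {"attack", "weapons"}

def naive_filter(post: str) -> bool:
    """Return True if the post should be blocked."""
    return any(term in post.lower() for term in BANNED)

original = "bring weapons to the attack"
obfuscated = "bring w3ap0ns to the att4ck"  # digits swapped in for letters

print(naive_filter(original))    # True  -> blocked
print(naive_filter(obfuscated))  # False -> slips through, meaning intact
```

Statistical classifiers are harder to fool than exact matching, but the arms-race dynamic is the same: each patched blind spot invites the next obfuscation.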