Could On-Demand Artificial Intelligence-based Authentication End The Fake News Menace?
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
Against the backdrop of concocted news and political propaganda proliferating through digital media, fake news has become a ubiquitous and very real menace. While there is a general consensus that fake news must be curbed, regulation by intermediaries providing personal messaging platforms such as WhatsApp, iMessage, Telegram, Cyphr, Wire, Signal and others presents a unique dichotomy.
On one hand, these apps are intended to be a medium of private communication and any interception of user data, at the government’s behest or as an attempt at self-regulation, would be a violation of the app user’s right to privacy. On the other hand, by their inaction, these apps would be mute spectators to fabricated content spreading through their messaging platforms.
Managing Malignant Messages
Constrained by their inability to intercept messages or dilute end-to-end encryption, apps have attempted various techniques to help users curb fake news. Last year, following a spate of lynchings incited by rumours spread over WhatsApp, the Indian Government directed the messaging platform to curb the proliferation of fabricated news through the 'deployment of appropriate technology'.
WhatsApp did not dilute its end-to-end encryption to allow interception of user data, but has run awareness campaigns to help users identify fake news. In its white paper titled 'Stopping Abuse: How WhatsApp fights bulk messaging and automated behaviour', the Facebook-owned platform recently revealed that it was identifying and deleting over a million accounts every month in its efforts to combat malicious content and fake news.
Other apps may be more reluctant to purge their user base, given the impact on their valuations, and may therefore need to look to other alternatives. Since false news propagates faster and farther than genuine news, some have argued that it is incumbent upon apps, as the preferred medium for fake news, to help users identify it.
Interception & Intermediary Liability
Since apps act merely as a conduit for encrypted information passed between users, they are classified as 'intermediaries' under the Information Technology Act, 2000 (IT Act). As long as intermediaries do not initiate the transmission, select its recipient, or modify the information contained in it, the IT Act exempts them from liability for any third-party data (including fabricated news) transmitted through them.
This tightly defined ‘safe harbour exemption’ limits the ability of apps to adopt countermeasures to identify and intercept the dissemination of fake news. If these apps were to selectively block the transmission of news stories identified as fake, the app would fall afoul of the requirements to continue being classified as intermediaries. This would consequently deprive apps of the safe harbour exemption, thereby exposing them to liability for third-party data transmitted via them.
Constrained by these conditions, the measures apps have deployed to combat fake news have thus far been predicated on the user's ability to discern which news is fake and which isn't. With the evolution of artificial intelligence-driven data analytics, it has become increasingly possible to offer 'on-demand' verification of a news story.
The submission of a message for verification must be end-user initiated since apps would lose their status as intermediaries under the IT Act by screening messages or intercepting transmissions. Apps could, therefore, offer an ‘in-app’ functionality that generates an authenticity report at the user’s behest. While the submission of a news story for verification is still hinged on the user’s choice, it eliminates the dependency on the user’s ability to discern authenticity.
While this would not prevent a user from maliciously forwarding false messages, it does nudge users to verify the authenticity of news without much effort before sharing it onwards. It would also increase end-user accountability by diminishing a user's ability to claim that they blindly relied on the authenticity of the news report.
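The user-initiated flow described above can be sketched in a few lines of code. This is a purely illustrative mock, assuming a hypothetical in-app API: the names (`request_verification`, `AuthenticityReport`) and the placeholder heuristic are the author's inventions here, not any real messaging platform's interface; a real service would call a cloud-hosted model rather than a keyword check.

```python
# Hypothetical sketch of user-initiated, on-demand verification.
# All names are illustrative assumptions, not a real app's API.
from dataclasses import dataclass

@dataclass
class AuthenticityReport:
    claim: str
    verdict: str       # "corroborated" or "uncorroborated"
    confidence: float  # 0.0 - 1.0

def request_verification(message_text: str) -> AuthenticityReport:
    """Runs ONLY at the user's explicit request, so the app never
    screens or intercepts messages on its own initiative."""
    # Stand-in for an AI model: a trivial placeholder heuristic
    # that flags common chain-message phrasing.
    suspicious = any(phrase in message_text.lower()
                     for phrase in ("shocking", "forward to everyone"))
    return AuthenticityReport(
        claim=message_text,
        verdict="uncorroborated" if suspicious else "corroborated",
        confidence=0.5 if suspicious else 0.8,
    )

report = request_verification("SHOCKING news! Forward to everyone you know.")
print(report.verdict)  # uncorroborated
```

The key design point is that the user, not the platform, initiates the check, which is what keeps the flow compatible with the intermediary conditions discussed above.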
User Initiated Authenticity Reports
By generating authenticity reports only when a user chooses to extract information from the encrypted ecosystem within the app and submit it for authentication, it is not the platform but the end-user who selects the information in the transmission to be decrypted.
Since apps typically already process user data, developing an artificial intelligence-driven functionality to verify news reports would not be a tall order. If the analysis is decentralized and cloud-based, the feature could also give all users near-real-time corroboration reports, eliminating the need for a user's phone to have the computing capability or high-speed internet access to run the analysis itself.
As apps cannot modify the contents of messages, they would be compelled to allow users to share even news articles that cannot be corroborated. However, once a user submits a news message for authentication and receives an adverse report, apps could mandate that such un-corroborated messages be flagged as 'unverified' when forwarded.
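The forwarding rule described here can be sketched as follows, again under assumed, hypothetical names (`forward`, the message dictionary shape): the point the sketch illustrates is that the platform never alters the message body itself, only attaches a label as metadata at forward time, preserving its intermediary status.

```python
# Hypothetical sketch: flagging un-corroborated messages on forward.
# The message body is left untouched; only a metadata label is added.

def forward(message: dict, adverse_reports: set) -> dict:
    """Return the outgoing copy of a message, labelled 'unverified'
    if it was previously submitted for verification and could not
    be corroborated."""
    flagged = message["id"] in adverse_reports
    return {**message, "label": "unverified" if flagged else None}

msg = {"id": "m1", "body": "Miracle cure found!"}
out = forward(msg, adverse_reports={"m1"})
print(out["label"])  # unverified
```

Because the label lives outside the message body, the platform arguably does not 'modify the information contained in the transmission' in the sense the IT Act conditions contemplate, though that reading would ultimately be for courts to settle.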
This would enable apps to strike a balance: continuing to be classified as intermediaries, thereby avoiding liability for third-party data transmitted through them, while also discharging their ethical responsibility to implement measures that curb the spread of fake news.