The Internet Needs to Be a Safer Place for Everyone
Brand custodians across categories share the same concern: how do they ensure that their brand is not seen alongside objectionable content?
Over the last couple of months, innumerable surveys have taken place, and brand custodians across categories keep asking the same question: how do they ensure that their brand is not seen alongside objectionable content? When probed further, the list of objectionable content starts with the usual pornography and religious bigotry, then extends to the depiction of women and children in viral videos, violence against minorities, verbal abuse, and so on. Their worry is that if their ads appear alongside such content, the potential erosion of brand value is enormous.
According to Comscore – “having an ad appear alongside objectionable content can negatively impact a brand’s reputation and be even more damaging to its bottom line. Consider the classic example of an airline ad poorly placed alongside a news article of a plane crash, or the simple potential for a popular consumer brand’s ad to run adjacent to racially-charged hate speech. Concerns around brand safety keep CMOs up at night, and for good reason. A single ad delivered in unsafe content can result in:
Waste of valuable ad spend on placements unlikely to drive impact
Damage to the brand image
Loss of trust and respect among consumers
An embarrassing public relations nightmare”
Many brands depend on influencers to deliver their brand message to a captive audience, particularly at a time when regular production of content is not possible. In recent times, many influencers on certain apps have been promoting acid attacks on women, rape, misogyny, and other problematic content. These influencers base their value on the number of views their content garners and the followers they collect along the way. Millions of video views only mean that these influencers are able to amplify their viral videos to a huge audience.
A vast majority of the Indian population jumped from having no access to the internet to cheap data almost overnight. We can rejoice at the huge potential audience we have in place, but almost 60% of this audience are merely data consumers who lap up anything that is forwarded to them. Their opinions and ideas are shaped by an echo chamber that reinforces certain images and concepts. Add to that the ability to create anything and publish it without fear of consequence, and what we have is a tinderbox waiting to explode. Self-expression is good, but there is a thin line between the freedom to express and the freedom to abuse.
Data suggests that India is now home to over half a billion internet users, with more than 60% of them from rural India and the rest from urban areas. Interestingly, data also suggests that there are 21% more women online, along with about 71 million children aged between 5 and 12. History has shown that unregulated areas of the internet invariably attract predators.
Given that the majority of the internet audience is relatively new in India, especially in rural areas, digital maturity will develop over time. Until then, however, the industry needs to keep a sharp eye on the quality of content brands are being integrated with and the users who are being served content. There is a dire need for tech platforms to be responsible and build moderation capabilities.
Ad-tech platforms, digital partners, and agencies need to become more adept at equipping themselves with the skills and measures that bring greater accountability. While a few businesses have incorporated ad-fraud solutions that counter fake news and fraudulent advertising, such solutions should form part of broader policies that include brand- and user-safety measures, such as actively blacklisting offending platforms.
More and more brands are now using digital video content to advertise, educate, and entertain. And owing to half-baked content platform policies, brands are at even greater risk of compromising their safety for the large audiences that some of these platforms have to offer.
In a nascent market, it is imperative that creators be assisted with guidelines and policies. Users are at even greater risk because there are no protocols that direct or guide a creator toward deriving engagement from meaningful content that does not hurt sentiments. The constant barrage of a monoculture that sometimes objectifies women needs to be curbed. One way is to create a blacklist of creators and influencers who misbehave and make that list freely available to all brands and their agencies.
Further, content has been a key driver: nine out of 10 users access the internet daily for entertainment and communication. And with the pandemic putting the majority of India on a forced break, leaving users ample time to consume more, there is an urgent need for significant moderation of content.
There is a need for an alliance between India's leading brands, ad-tech platforms, leading agencies, and trade bodies to stop the spread of harmful content, similar to what Unilever and a few other brands have done in the past with the objective of forming strong policies. It is important that all of us, as part of this fast-growing digital environment, make a collaborative effort to put the brakes on harmful content and to help our users develop their digital behaviour through responsible communication.
While several platforms are using machine learning and artificial intelligence, in a country with such diverse sensitivities, human moderation must act as the final checkpoint for all content. Safety for brands as well as users needs to be a strong guarantee in India, not a long-drawn process. Considering that we live in an extremely tech-savvy environment, safety for everyone in the ecosystem should not even be a conversation. It should be a given.