In its latest compliance report, WhatsApp reveals that it banned over two million accounts in India between May and June for “harmful behavior” and other rules violations, including sending a “high and abnormal rate of messages.”
Although the figure represents only a fraction of WhatsApp's 400 million-strong user base in India, it is significant: roughly a quarter of the eight million bans the company hands down globally each month.
Noting that 95% of the accounts were blocked for exceeding limits placed on the number of times messages can be forwarded in the country, the platform said its “top focus” has been to prevent the spread of harmful and unwanted messages.
“The abuse detection operates at three stages of an account’s lifestyle: at registration; during messaging; and in response to negative feedback, which we receive in the form of user reports and blocks,” the report noted.
While stating that user-to-user conversations on the platform remain encrypted and private, WhatsApp said it pays “close attention to user feedback” and engages with a team of specialists and analysts to evaluate “edge cases” and improve effectiveness against misinformation.
In addition to responding to user complaints, WhatsApp said it relied on “behavioral signals” from user accounts, available “unencrypted information,” profile and group photos, and descriptions to identify potential offenders.
WhatsApp said its grievance officer in the country had received over two hundred appeals to overturn account blocks over the month-long period. It also received more than a hundred complaints related to account and product support and user safety.
Under the country's new Information Technology rules, social media and communications platforms must publish monthly reports detailing the actions they have taken. This was the Facebook-owned messaging application's first such report since the rules recently came into effect.
Despite publishing the report, WhatsApp has continued to refuse to disclose the initial sources of fake news, hoaxes and illegal viral messages that the government has blamed for inciting mob violence in the country.
Although the new IT rules include a traceability clause that requires platforms to track and reveal the accounts from which such messages originate, WhatsApp has challenged this obligation in court on the grounds that it would compromise user privacy.
Under this provision, social media platforms have to trace the "originator" of problematic content when required to do so by a court order or official authority. The rules clarify that such an order may only be issued for serious criminal offenses, such as threats to "public order".
In May, the company filed a lawsuit in the Delhi High Court, in the national capital New Delhi, arguing that the provision was a "dangerous invasion of privacy" and would break the app's much-touted end-to-end encryption, which is designed to ensure messages can only be read by the sender and receiver.
In its petition to the court, WhatsApp said it would need to build the ability to identify the initial source of every communication sent on its platform, since it currently has no way to predict which message would prompt an order seeking originator information.
However, the country’s IT ministry countered that the right to privacy was subject to reasonable restrictions and specified that such orders would only be passed for the “purposes of prevention, investigation, punishment” of offenses relating to national security and integrity, public order, child pornography and other serious crimes.
“None of the measures proposed by India will impact the normal functioning of WhatsApp in any manner whatsoever, and for the common users, there will be no impact,” former IT minister Ravi Shankar Prasad had said at the time.
Facebook and Instagram had also submitted compliance reports earlier in the month.