New Delhi: US tech giant Google has released its monthly transparency report, under which it removed 93,550 pieces of content based on 35,191 complaints received from users in August. In addition to acting on user reports, Google removed 651,933 pieces of content in August as a result of automated detection.
The tech giant had received 36,934 complaints from users in July and removed 95,680 pieces of content based on those complaints. It removed 576,892 pieces of content in July as a result of automated detection.
The US-based company made these disclosures as part of compliance with India’s IT rules, which came into force on May 26. The registered complaints pertain to third-party content that is believed to violate local laws or individual rights, and Google qualifies as a significant social media intermediary (SSMI) under the rules.
“Some requests may allege infringement of intellectual property rights, while others claim violations of local laws restricting types of content on grounds such as defamation. When we receive complaints regarding content on our platform, we assess them carefully,” the company said.
Categories under which content was removed in the monthly transparency report
Content removal was based on several categories: copyright (92,750), trademark (721), counterfeit (32), fraud (19), court order (12), graphic sexual content (12), and other legal requests (4). The company explained that a single complaint may specify multiple items, potentially relating to the same or different pieces of content, and that each unique URL in a specific complaint is counted as an individual “item” that is removed.
In addition to what users report, the company invests heavily in technology to fight harmful content online and to detect and remove it from its platforms. “This includes using automated detection processes for some of our products to prevent the dissemination of harmful content such as child sexual abuse material and violent extremist material,” it said.
“We balance privacy and user safety: promptly remove content that violates our Community Guidelines and content policies; restrict content (for example, age-restricted content that may not be appropriate for all audiences); or leave the content live when it does not violate our guidelines or policies,” the company added.
Google said that automated detection enables it to act more quickly and accurately to enforce its guidelines and policies. These removal actions may result in the removal of the content or the termination of a bad actor’s access to the Google service. Under the new IT rules, large digital platforms with more than five million users must publish monthly compliance reports detailing the complaints received and the action taken on them.
The report must also include the number of specific communication links or parts of information that the intermediary has removed or disabled access to in pursuance of any proactive monitoring conducted using automated tools.
What do the Facebook, Instagram and WhatsApp compliance reports show for the month of August?
Recently, Facebook and WhatsApp have also released their compliance reports for the month of August.
Facebook said it “actioned” about 31.7 million pieces of content across 10 violation categories in the country during August, while its photo-sharing platform, Instagram, took action against about 2.2 million pieces across nine categories during the same period.
“Action” refers to the number of pieces of content (such as posts, photos, videos or comments) against which action has been taken for violating the company’s standards. Taking action may include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some viewers with a warning.
Facebook also said that it had received 904 user reports for its app through its Indian grievance mechanism between August 1 and 31. During the same period, Instagram received 106 reports through the mechanism. Facebook-owned WhatsApp said in its report that it banned over two million accounts in India, while the messaging platform received 420 grievance reports in August.
(with inputs from PTI)