Google is cracking down on hate speech and the promotion of hate, TechCrunch reports.
The company has announced a new policy that allows Google+ users to block “hate speech,” or words that “offend, demean, intimidate, threaten, or harass others based on their race, ethnicity, religion, sexual orientation, gender identity, gender expression, national origin, disability, or genetic information.”
Google has not yet published the full policy text, but the feature has already been rolled out across Google+ and Google+ Hangouts.
Hate speech as defined above is not the only language Google+ can block, however.
Google+ can now also block certain terms and phrases that have been identified as “hate-motivated,” meaning terms often used by groups seeking to spread hate.
Google also has a new way for people to flag “hate content,” which allows people to “flag content that incites, advocates for, or encourages violence or harassment against others based solely on the speaker’s gender, race, sexual preference, religion or belief, age, disability or genetic status.”
“It’s important to recognize that these terms and conditions do not apply to content posted by users on our platform or by users who are not directly affiliated with the company,” Google said in a statement.
“Google+ content that is not approved by the Google+ Community Guidelines will not be flagged for removal.”
It’s not the first time Google has made changes to its platform to protect against hate speech.
Last year, the company began banning “hate groups,” or groups that advocate violence against others.
Since then, it has also implemented a new “Hate Safety” policy, which requires Google+ to remove “hate material” that “incites violence, harassment, or other forms of discrimination against individuals or groups on our platforms.”
This new policy is also part of Google’s effort to protect the platform from the spread of hate speech online, which has been exacerbated by the election of President Donald Trump.
“This is an important step toward addressing the spread of anti-Muslim hate and intolerance online,” Google wrote in a blog post about the new policy.
Google has also updated its “Content ID” service to recognize hate speech when it is reported by a third-party and to notify users when hate speech is reported in the “wrong place.”
Google said that “the content ID system will be used to detect hate speech that is reported to us in a way that doesn’t violate our Community Guidelines.”