Twitter’s “Responsible ML” Initiative To Analyze Harm Caused By Artificial Intelligence


Twitter regularly shares updates on its ongoing AI and machine learning projects. The leading microblogging platform, known for its social listening technology, announced in a recent blog post the launch of its Responsible Machine Learning initiative. With the announcement, Twitter reinforced its commitment to building and supporting ethical AI practices and taking “responsibility for our algorithmic decisions.”

To evaluate racial and gender bias in its AI systems, Twitter is launching a new initiative.

Twitter is launching a new initiative called Responsible Machine Learning to investigate whether, and how, its artificial intelligence (AI) and machine learning (ML) systems exhibit racial and gender bias. In its official statement, Twitter said the move is meant to uncover any unintended harm caused by its algorithms, as the social media giant is still early in its AI and ML journey.

The Responsible ML effort is interdisciplinary, drawing on technical, research, trust and safety, and product teams. “This work is led by our ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritize which issues to tackle first,” the company said.

What Is Responsible Machine Learning?

Like many current applications of artificial intelligence and machine learning, “Responsible ML” remains hazily defined, leaving practitioners unsure how to scope it in practice.

Twitter has defined its Responsible ML effort through the following pillars:

• Taking responsibility for its algorithmic decisions

• Equity and fairness of outcomes

• Transparency about decisions and how they are reached

• Enabling agency and algorithmic choice

The outcome of this work will help Twitter understand the impact of ML on its platform and drive deeper analysis of how its AI could harm users. The Responsible ML group will conduct a gender and racial bias analysis of its image-cropping (saliency) algorithm, a fairness study of how recommendations on the “home” timeline differ across racial groups, and an evaluation of its content recommendations.
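As a rough illustration of what such a bias analysis might look like, a minimal sketch, assuming a simple demographic-parity-style metric (the groups, data, and metric choice here are all invented for illustration and are not Twitter's actual methodology), could compare how often the automatic crop keeps the subject in frame across demographic groups:

```python
# Hypothetical sketch: auditing a saliency-cropping algorithm for group disparity.
# All names and data are illustrative; this is not Twitter's actual pipeline.

def crop_center_rate(outcomes):
    """Fraction of images in which the crop kept the subject in frame."""
    return sum(outcomes) / len(outcomes)

# Synthetic audit results: 1 = crop kept the subject, 0 = subject cropped out.
group_a = [1, 1, 0, 1, 1, 1, 0, 1]   # one demographic group (hypothetical)
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # another demographic group (hypothetical)

rate_a = crop_center_rate(group_a)
rate_b = crop_center_rate(group_b)

# Demographic parity difference: a common, simple fairness-gap metric.
parity_gap = abs(rate_a - rate_b)
print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}, gap: {parity_gap:.2f}")
```

A large gap on real audit data would flag the cropping algorithm for closer study; a near-zero gap alone would not prove fairness, since parity is only one of several possible criteria.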

“The most impactful applications of responsible ML will come from how we apply our learnings to build a better Twitter,” the company said. The results will shape how Twitter uses AI and ML; the company noted they could even lead to removing an algorithm or changing its product.

Twitter’s unveiling of its Responsible Machine Learning initiative in April came as no surprise: other big companies, such as Google and Microsoft, have already taken steps toward more ethical, compliant, secure, and human-centered AI, and more are likely to follow suit. Twitter also said it will build explainable ML solutions to inform people about its algorithms and the impact ML has on the platform.
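To give a sense of what an explainable ML output might look like, here is a minimal sketch assuming a simple linear ranking model (the feature names, weights, and values are invented for illustration and are not Twitter's): each feature's contribution to a recommendation score can be reported directly, which is one common route to explainability.

```python
# Hypothetical sketch: per-feature explanation of a linear ranking score.
# Feature names, weights, and values are invented for illustration only.

weights = {"recency": 0.6, "author_followed": 1.2, "engagement": 0.9}
features = {"recency": 0.8, "author_followed": 1.0, "engagement": 0.3}

# For a linear model, each feature's contribution is weight * value,
# and the score is the sum of contributions, so the explanation is exact.
contributions = {name: weights[name] * features[name] for name in weights}
score = sum(contributions.values())

for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

Real ranking systems are typically nonlinear, so production explanations usually rely on approximation techniques rather than this exact decomposition.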

Read more – Announcing Twitter’s Responsible Machine Learning Initiative
