Announcing Twitter’s Responsible Machine Learning Initiative

Responsible Machine Learning development is necessary to get the most out of AI and machine learning undertakings. These practices give end users, data scientists, and engineers working with AI the tools they need to develop, test, and use AI and ML applications in an ethical manner. Almost every major technology firm promotes the value of Responsible Machine Learning, and Twitter is one of them.

Twitter, a social media platform and microblogging service based in the US, has unveiled a project on “responsible machine learning” that would provide algorithmic fairness scores on the platform.

Responsible Machine Learning: What Is It?

Like many other contemporary AI and ML concepts, Responsible ML lacks a crisp definition, which makes it difficult for practitioners to pin down the precise scope of the practice.

Twitter has mentioned the following pillars in an attempt to describe its Responsible ML:

  • Taking responsibility for decisions made by algorithms
  • Fairness and equity of results (illustrated in the sketch after this list)
  • Transparency regarding decisions and the process used to make them
  • Enabling agency and algorithmic choice
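
Twitter has not published how it measures these pillars. As an illustration only, a “fairness of results” check is often expressed as the gap in positive-decision rates between groups (demographic parity). The short Python sketch below assumes a hypothetical binary classifier and a hypothetical group attribute; it is not Twitter’s method.

    import numpy as np

    def demographic_parity_gap(decisions, groups):
        """Absolute difference in positive-decision rates between two groups.

        decisions : array of 0/1 model outputs (hypothetical classifier)
        groups    : array of 0/1 group membership (hypothetical attribute)
        A gap near 0 means both groups receive positive decisions at similar rates.
        """
        decisions = np.asarray(decisions)
        groups = np.asarray(groups)
        rate_a = decisions[groups == 0].mean()
        rate_b = decisions[groups == 1].mean()
        return abs(rate_a - rate_b)

    # Toy data: 60% positive rate for group 0 vs. 40% for group 1 -> gap of 0.2
    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")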

Purpose of the Responsible ML Initiative

The California-based company says the effort seeks to address “the potential negative repercussions of algorithmic decisions” and to make its use of artificial intelligence more transparent.

The announcement comes amid growing concern over the algorithms behind online services, which critics say can amplify violent or extremist content and reinforce racial and gender inequality.

According to Twitter’s ethics and transparency team, using the technology responsibly entails understanding its potential long-term effects.

When Twitter uses machine learning (ML), it can affect hundreds of millions of tweets per day, and occasionally, a system’s behavior may start to deviate from what was intended.
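
Twitter does not say how it detects such deviation. One generic way to notice that a system’s behaviour is changing is to compare the distribution of its scores across time windows, for example with a two-sample Kolmogorov–Smirnov test. The sketch below is our illustration of that idea (the scores and alert threshold are made up), not a description of Twitter’s monitoring.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    # Hypothetical model scores from a reference week and from the current week
    reference_scores = rng.beta(2, 5, size=10_000)
    current_scores = rng.beta(3, 4, size=10_000)  # noticeably shifted distribution

    stat, p_value = ks_2samp(reference_scores, current_scores)
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
    if stat > 0.1:  # alert threshold is arbitrary here; tune per system
        print("Score distribution has shifted; investigate for unintended behaviour drift.")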

To achieve “equity and fairness of outcomes,” the program calls for “bearing responsibility for our algorithmic conclusions.”

“We’re also building explainable ML solutions so one can better understand our algorithms, what influences them, and how they affect what you see on Twitter,” the company said in a statement. Similarly, algorithmic choice is meant to give people more say and control over what Twitter is for them. The company adds that it is only starting to explore this and will share more soon.
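
Twitter has not said which explainability techniques it is building. A common model-agnostic starting point is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below (using scikit-learn only to create a stand-in model and dataset) is a generic illustration, not Twitter’s implementation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
        """Score drop when each feature is shuffled; larger drops mean more influence."""
        rng = np.random.default_rng(seed)
        baseline = metric(y, model.predict(X))
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                X_perm[:, j] = X[rng.permutation(len(X)), j]  # break feature j's link to y
                drops.append(baseline - metric(y, model.predict(X_perm)))
            importances[j] = np.mean(drops)
        return importances

    # Stand-in model and data; in practice the model would be the system under review
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    model = LogisticRegression().fit(X, y)
    print(permutation_importance(model, X, y, accuracy_score))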

Twitter’s ML Ethics, Transparency, and Accountability (META) team, a dedicated group of researchers, data scientists, and engineers, is investigating these ML-related issues.

The team will share its findings with other researchers, and Twitter is also relying on user feedback to develop its Responsible ML project. The company has launched a campaign on the platform to gather public comments, aiming to “increase the industry’s collective grasp of this problem, help us improve our approach, and hold us accountable.”
