In a bid to assess racial and gender bias in its artificial intelligence and machine learning systems, Twitter is launching a new initiative called Responsible Machine Learning. Describing the effort as a long journey still in its early days, Twitter said the initiative will assess any "unintentional harms" caused by its algorithms.

"When Twitter uses machine learning, it can impact hundreds of millions of Tweets per day, and sometimes the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter, and we want to make sure we're studying those changes and using them to build a better product," said Jutta Williams and Rumman Chowdhury from Twitter.

Twitter's Responsible ML working group is interdisciplinary, made up of people from across the company, including technical, research, trust and safety, and product teams. "Leading this work is our Machine Learning Ethics, Transparency and Accountability (META) team, a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritize which issues to tackle first," the company elaborated.

Twitter said it will research and understand the impact of ML decisions, conducting in-depth analyses and studies to assess the existence of potential harms in the algorithms it uses.