Microsoft Launches ‘Deepfake’ Detector to Spot AI-Altered Fake News

The Video Authenticator software probes any photo or each frame of a video, searching for evidence of manipulation that may not be visible to the naked eye. In a blog post, the company announced that it has partnered with the AI Foundation in San Francisco to make the video authentication tool available to political campaigns, news outlets, and others involved in the election process.

Fake posts that appear genuine are a major concern in the upcoming US presidential election, particularly after fake social media posts erupted in huge numbers during the 2016 election.

Microsoft also announced that it has built technology into its Azure cloud computing platform that enables creators of photos or videos to embed data in the background that can later be used to check whether the imagery has been altered. In addition, Microsoft is working with the University of Washington and others to help people become savvier at distinguishing misinformation from reliable facts.
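Microsoft has not published implementation details of the Azure provenance feature, but the general idea of embedding verifiable data alongside media can be illustrated with a minimal hash-comparison sketch. The function name and sample bytes below are hypothetical stand-ins, assuming a simple scheme in which a content digest is recorded at publication time and recomputed by the reader:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Hypothetical sketch: a digest recorded when the media is published.
    return hashlib.sha256(data).hexdigest()

original = b"frame-bytes-of-a-published-video"
recorded = fingerprint(original)  # stored alongside the media at creation

# Later, a reader tool recomputes the digest and compares it to the
# recorded value; any alteration to the bytes changes the digest.
tampered = b"frame-bytes-of-an-edited-video"
print(fingerprint(original) == recorded)  # True: content is unchanged
print(fingerprint(tampered) == recorded)  # False: content was altered
```

A production system would also cryptographically sign the digest so that the recorded value itself cannot be forged, but the core check (recompute and compare) is the same.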