University researchers have created a new machine learning tool that can identify when a domain name has been registered solely to disseminate disinformation, allowing it to be shut down before it has a chance to spread “fake news”.
Real-Time Prediction of Online False Information Purveyors and their Characteristics, a working paper written by Anil Doshi of the UCL School of Management, Sharat Raghavan of the University of California, Berkeley, and William Schmidt of Cornell University, details how sources of misinformation can be identified quickly.
The early detection method combines domain registration data, available at the time a website is created, with browser behavior to determine whether a site is likely to produce false information. Features such as whether the registering party remains private and whether the site was created around the time of a news event are also taken into consideration.
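To illustrate the idea, the registration-time signals described above could feed a simple scoring model. The sketch below is purely illustrative: the feature names, weights, and threshold are assumptions for demonstration, not the actual model from the working paper.

```python
# Illustrative sketch only: a weighted scoring of registration-time
# signals similar in spirit to those the paper describes.
# Feature names, weights, and the threshold are hypothetical.

def false_info_risk_score(domain_features: dict) -> float:
    """Return a risk score in [0, 1] from registration-time signals."""
    weights = {
        "private_registration": 0.4,    # registrant identity kept hidden
        "registered_near_event": 0.35,  # domain created around a major news event
        "short_registration_term": 0.25,  # e.g. registered for only one year
    }
    score = sum(w for feat, w in weights.items() if domain_features.get(feat))
    return min(score, 1.0)

def is_suspect(domain_features: dict, threshold: float = 0.5) -> bool:
    """Flag a domain if its combined risk score crosses the threshold."""
    return false_info_risk_score(domain_features) >= threshold
```

In practice the paper's approach uses machine learning to learn such relationships from data rather than hand-set weights, but the key design point survives in the sketch: every input is available at registration time, before the site publishes anything.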
“Many models that predict false information use the content of articles or behaviors on social media to make their predictions,” Doshi commented. “By the time this data becomes available, it may be too late. These producers are nimble and we need a way to identify them early. By using domain registration data, we can provide an early warning system using data that is arguably difficult for actors to manipulate. Actors who generate false information tend to prefer to stay hidden, and we use that in our model.”
Lies travel fast
The machine learning tool was able to correctly identify 92% of all false information domains and 96.2% of non-false information domains set up in relation to the 2016 US election before they began operating. As malicious actors increasingly exploit major events to spread disinformation, the tool could be of enormous value to online regulators.
In addition, it could help create a safer Internet. Fake news is sometimes used by malicious actors to spread malware, tricking online users into clicking on questionable links that inject malware into their devices. Stopping disinformation at the source could therefore also block a potential exploit.