The botnet is likely a resurgence of a previous porn-bot, which sported the SFW moniker “Pr0nbot” and was uncovered by F-Secure researcher Andy Patel in March. In the course of querying Twitter accounts for specific automated patterns, he found just over 22,000 Twitter bots in on the action, marketing a service dubbed “Dirty Tinder.”
However, a follow-up query a month later found that Twitter had taken action on most of those accounts – only 2,848 remained active and unrestricted in April. That indicates that Twitter's algorithm for finding and dismantling bot and troll accounts was largely working.
Now, though, there’s evidence that the bot-herders are creating Pr0nbot2, with new tactics to evade Twitter’s censors – most notably, pushing out its porn-related scam promotion by way of a pinned tweet rather than including it in the account’s description.
Like the previous botnet, the accounts could be crawled because they follow each other, and the new accounts had text in their descriptions that followed a predictable pattern. Armed with these two traits, Patel created a script that, after 24 hours, had identified just over 20,000 accounts. Eight days later it had found 80,000, of which Patel verified 30,000.
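The crawl described above can be sketched as a breadth-first walk over follow-edges that keeps only accounts whose description matches a pattern. The follower graph, descriptions, and regex below are entirely hypothetical stand-ins – Patel's actual script queried the live Twitter API, and his real matching pattern is not public:

```python
import re
from collections import deque

# Hypothetical follower graph and profile descriptions standing in for
# live Twitter API responses (the real script queried the API).
FOLLOWS = {
    "seed1": ["bot_a", "bot_b"],
    "bot_a": ["bot_b", "bot_c", "human_1"],
    "bot_b": ["bot_c"],
    "bot_c": ["bot_a"],
    "human_1": [],
}
DESCRIPTIONS = {
    "seed1": "hot singles near you 18+",
    "bot_a": "hot singles in your area 18+",
    "bot_b": "lonely singles near you 18+",
    "bot_c": "hot singles in your city 18+",
    "human_1": "dad, runner, coffee enthusiast",
}

# Illustrative regex for the "predictable pattern" in bot descriptions;
# the pattern Patel matched on was different and is not disclosed.
BOT_PATTERN = re.compile(r"singles (near|in) you(r)? .*18\+")

def crawl_botnet(seed):
    """Breadth-first crawl: follow the follow-edges outward from a seed
    account, keeping only accounts whose description matches the pattern."""
    found, queue, seen = set(), deque([seed]), {seed}
    while queue:
        account = queue.popleft()
        if BOT_PATTERN.search(DESCRIPTIONS.get(account, "")):
            found.add(account)
            # Only expand from matching accounts: bots follow other bots.
            for neighbor in FOLLOWS.get(account, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
    return found
```

Because expansion happens only from accounts that match the pattern, non-matching accounts (like the human here) act as dead ends, which is what makes this kind of crawl tractable even on a large graph.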
“I’m fairly confident this rabbit hole goes a lot deeper, but it would have taken weeks to query the next 50,000 accounts, not to mention the countless more that would have been added to the list during that time,” he said in a blog.
Patel created a visual representation of the size distribution of 80,000 nodes and their corresponding communities, the largest of which contained over 1,000 accounts:
“These accounts are way more connected than the older botnet,” Patel noted. “The 20,000 or so accounts identified [in the first day] connected to just over 100 separate communities. With roughly the same amount of accounts, the previous botnet contained over 1,000 communities.”
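The article does not say which community-detection method Patel used; as a simple stand-in, communities can be approximated as connected components of the follow graph, with the size distribution read off from the component sizes. The edge list below is hypothetical:

```python
from collections import defaultdict

# Hypothetical undirected follow-edges among discovered bot accounts;
# the real analysis covered roughly 80,000 nodes.
EDGES = [
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),  # a community of 3
    ("b1", "b2"),                              # a community of 2
    ("c1", "c1"),                              # an isolated account
]

def community_sizes(edges):
    """Group accounts into communities as connected components of the
    follow graph (union-find), returning sizes largest-first."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    groups = defaultdict(set)
    for node in parent:
        groups[find(node)].add(node)
    return sorted((len(g) for g in groups.values()), reverse=True)
```

Fewer, larger components for the same number of accounts is exactly the "way more connected" property Patel describes: 20,000 accounts collapsing into ~100 communities rather than over 1,000.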
In examining surviving accounts from Pr0nbot2, Patel discovered an almost even split between restricted and unrestricted accounts in the new set, indicating that this second iteration of the botnet has managed to partially evade the social network’s bot-detection algorithm.
“Given that these new bots show many similarities to the previously discovered botnet (similar avatar pictures, same URL shortening services, similar usage of the English language), we might speculate that this new set of accounts is being managed by the same entity as those older ones,” Patel said.
Even though some of the accounts had been around for years, they only began the scam activity in the last 21 days, and many switched up hallmark characteristics (one returned from a six-year break from Twitter and switched its language to English; another went from posting in Korean to posting in English after a three-year break).
“The tweets containing shortened URLs date back only 21 days,” Patel said. “My current hypothesis is that the owner of the previous botnet has purchased a batch of Twitter accounts (of varying ages) and has been, at least for the last 21 days, repurposing those accounts to advertise adult dating sites using the new pinned-Tweet approach.”
He added, “A further hypothesis is that said entity is re-tooling based on Twitter’s action against their previous botnet.”
How much of a threat is it? From the standpoint of wanting to dismantle accounts that may be used to spread fake news or other modern information scourges, it could be a problem, he explained.
“For the most part, it seems they’re simply trying to advertise the [linked] adult dating sites,” Patel said. “They do this by liking, retweeting, and following random Twitter accounts at random times, fishing for clicks.”
The researcher added, “This network of accounts seems quite benign, but in theory, it could be quickly repurposed for other tasks including ‘Twitter marketing’ (paid services to pad an account’s followers or engagement), or to amplify specific messages.”