Note that this is "announced," so we don't know when it's actually going to be implemented.
Facebook today announced new features for Messenger that will alert you when messages appear to come from financial scammers or potential child abusers, displaying warnings in the Messenger app that provide tips and suggest you block the offenders. The feature, which Facebook started rolling out on Android in March and is now bringing to iOS, uses machine learning analysis of communications across Facebook Messenger's billion-plus users to identify shady behaviors. But crucially, Facebook says that the detection will occur only based on metadata -- not analysis of the content of messages -- so that it doesn't undermine the end-to-end encryption that Messenger offers in its Secret Conversations feature. Facebook has said it will eventually roll out that end-to-end encryption to all Messenger chats by default.
That default Messenger encryption will take years to implement.
More:
Facebook hasn't revealed many details about how its machine-learning abuse detection tricks will work. But a Facebook spokesperson tells WIRED the detection mechanisms are based on metadata alone: who is talking to whom, when they send messages, with what frequency, and other attributes of the relevant accounts -- essentially everything other than the content of communications, which Facebook's servers can't access when those messages are encrypted. "We can get pretty good signals that we can develop through machine learning models, which will obviously improve over time," a Facebook spokesperson told WIRED in a phone call. They declined to share more details in part because the company says it doesn't want to inadvertently help bad actors circumvent its safeguards.
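To make the metadata-only claim concrete, here is a minimal sketch in Python of what scoring a conversation from metadata alone might look like. Every field name, feature, and the model interface are invented for illustration; Facebook has not published any of its actual internals.

```python
# Hypothetical sketch of metadata-only abuse scoring. None of these feature
# names come from Facebook; they only illustrate the kinds of signals the
# article describes (who messages whom, when, how often, and account
# attributes -- never message content).
from dataclasses import dataclass


@dataclass
class ConversationMetadata:
    sender_account_age_days: int
    recipient_is_minor: bool
    messages_sent_last_24h: int
    distinct_recipients_last_24h: int
    mutual_friends: int
    sender_prior_reports: int   # times other users reported the sender
    sender_prior_blocks: int    # times other users blocked the sender


def to_feature_vector(m: ConversationMetadata) -> list[float]:
    """Flatten the metadata into numeric features for a trained classifier."""
    return [
        float(m.sender_account_age_days),
        1.0 if m.recipient_is_minor else 0.0,
        float(m.messages_sent_last_24h),
        float(m.distinct_recipients_last_24h),
        float(m.mutual_friends),
        float(m.sender_prior_reports),
        float(m.sender_prior_blocks),
    ]


def abuse_risk_score(m: ConversationMetadata, model) -> float:
    """`model` stands in for a pretrained classifier (e.g. anything exposing a
    scikit-learn-style predict_proba); returns an estimated probability of abuse."""
    return model.predict_proba([to_feature_vector(m)])[0][1]
```

The point of the sketch is simply that every input is something Facebook's servers can see even when the message bodies are end-to-end encrypted.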
The company's blog post offers the example of an adult sending messages or friend requests to a large number of minors as one case where its behavioral detection mechanisms can spot a likely abuser. In other cases, Facebook says, it will weigh a lack of connections between two people's social graphs -- a sign that they don't know each other -- or consider previous instances where users reported or blocked someone as a clue that they're up to something shady.
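Those examples translate naturally into simple boolean heuristics. The sketch below mirrors the signals named in the post, with thresholds invented purely for illustration; Facebook has not disclosed how any of these signals are weighted or combined.

```python
# Hypothetical rule-style signals mirroring the examples in the post.
# The thresholds are invented; Facebook has not published any.
def looks_suspicious(
    sender_is_adult: bool,
    minors_contacted_recently: int,  # distinct minors messaged or friend-requested
    mutual_friends: int,
    times_reported: int,
    times_blocked: int,
) -> bool:
    adult_mass_contacting_minors = sender_is_adult and minors_contacted_recently >= 20
    strangers = mutual_friends == 0                      # no social-graph overlap
    bad_history = (times_reported + times_blocked) >= 3  # prior reports or blocks
    # One strong signal, or the combination of two weaker ones, surfaces a warning.
    return adult_mass_contacting_minors or (strangers and bad_history)
```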
One screenshot from Facebook, for instance, shows an alert that asks if a message recipient knows a potential scammer. If they say no, the alert suggests blocking the sender, and offers tips about never sending money to a stranger. In another example, the app detects that someone is using a name and profile photo to impersonate the recipient's friend. An alert then shows the impersonator's and real friend's profiles side-by-side, suggesting that the user block the fraudster.
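The impersonation case could, in principle, be as simple as checking whether a new sender reuses the display name or profile photo of someone already in the recipient's friend list. The sketch below assumes a hypothetical `photo_hash` field (e.g. a perceptual hash of the profile picture); Facebook has not said how it actually matches names or photos.

```python
# Hypothetical check for the impersonation case described above: a new sender
# reusing an existing friend's name or profile photo. All fields are invented.
from dataclasses import dataclass


@dataclass
class Profile:
    user_id: str
    display_name: str
    photo_hash: str  # stand-in for a perceptual hash of the profile photo


def find_impersonated_friend(sender: Profile, friends: list[Profile]) -> Profile | None:
    """Return the friend the sender appears to be imitating, if any."""
    for friend in friends:
        same_name = sender.display_name.casefold() == friend.display_name.casefold()
        same_photo = sender.photo_hash == friend.photo_hash
        if (same_name or same_photo) and sender.user_id != friend.user_id:
            # Show both profiles side by side and suggest blocking the sender.
            return friend
    return None
```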
Details from Facebook.
from Schneier on Security https://ift.tt/2zIkklh