Monday 30 April 2018

Twitter announces new video partnerships with NBCUniversal and ESPN

Twitter is hosting its Digital Content NewFronts tonight, where it’s unveiling 30 renewals and new content deals — the company says that’s nearly twice as many as it announced last year.

Those include partnerships with the big players in media — starting with NBCUniversal, which will be sharing live video and clips from properties including NBC News, MSNBC, CNBC and Telemundo.

Twitter also announced some of the shows it will be airing as part of the ESPN deal announced earlier today: SportsCenter Live (a Twitter version of the network’s flagship) and Fantasy Focus Live (a livestream of the fantasy sports podcast).

Plus, the company said it’s expanding its existing partnership with Viacom with shows like Comedy Central’s Creator’s Room, BET Breaks and MTV News.

During the NewFronts event, Twitter’s head of video Kayvon Beykpour said daily video views on the platform have nearly doubled in the past year. And Kay Madati, the company’s head of content partnerships, described the company as “the ultimate mobile platform where video and conversation share the same screen.”

As Twitter continues to invest in video content, it’s been emphasizing its advantage in live video, a theme that continued in this year’s announcement.

“Twitter is the only place where conversation is tied to video and the biggest live moments, giving brands the unique ability to connect with leaned in consumers who are shaping culture,” said Twitter Global VP of Revenue and Content Partnerships Matthew Derella in a statement. “That’s our superpower.”

During the event, Derella also (implicitly) contrasted Twitter with other digital platforms that have struggled with questions about transparency and whether ads are running in an appropriate environment. Tonight, he said marketers could say goodbye to unsafe brand environments and a lack of transparency: “And we say hello to you being in control of where your video aligns … we say hello to a higher measure of transparency, we say hello to new premium inventory and a break from the same old choices.”

On top of all the new content, Twitter is also announcing new ad programs. There are Creator Originals, a set of scripted series from influencers who will be paired up with sponsored brands. (The program is powered by Niche, the influencer marketing startup that Twitter acquired a few years ago.) And there’s a new Live Brand Studio — as the name suggests, it’s a team that works with marketers to create live video.

Here are some other highlights from the content announcements:

  • CELEBrate, a series from Ellen Digital Studios where people get heartwarming messages from their idols.
  • Delish Food Day and IRL from Hearst Magazines Digital Media.
  • Power Star Live, which is “inspired by the cultural phenomenon of Black Twitter” and livestreamed from the Atlanta University Center, from Will Packer Media.
  • BuzzFeed News is renewing AM to DM until the end of 2018.
  • Pattern, a new brand focused on weather- and science-related news.
  • Programming from the Huffington Post (which, like TechCrunch, is owned by Verizon/Oath), History, Vox and BuzzFeed News that highlights women around the world.

Developing



source https://techcrunch.com/2018/04/30/twitter-newfronts/

WhatsApp CEO Jan Koum quits Facebook due to privacy intrusions

“It is time for me to move on . . . I’m taking some time off to do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate frisbee,” WhatsApp co-founder, CEO, and Facebook board member Jan Koum wrote today. The announcement followed shortly after The Washington Post reported that Koum would leave due to disagreements with Facebook management about WhatsApp user data privacy and weakened encryption. Koum obscured that motive in his note, which says “I’ll still be cheering WhatsApp on – just from the outside.”

Facebook CEO Mark Zuckerberg quickly commented on Koum’s Facebook post about his departure, writing “Jan: I will miss working so closely with you. I’m grateful for everything you’ve done to help connect the world, and for everything you’ve taught me, including about encryption and its ability to take power from centralized systems and put it back in people’s hands. Those values will always be at the heart of WhatsApp.” That comment further tries to downplay the idea that Facebook pushed Koum away by trying to erode encryption.

It’s currently unclear who will replace Koum as WhatsApp’s CEO, and what will happen to his Facebook board seat.

Values Misaligned

Koum sold WhatsApp to Facebook in 2014 for a jaw-dropping $19 billion. Since then, it has more than tripled its user count to 1.5 billion, making the price to turn messaging into a one-horse race seem like a steal. But at the time, Koum and co-founder Brian Acton were assured that WhatsApp wouldn’t have to run ads or merge its data with Facebook’s. So were regulators in Europe, where WhatsApp is most popular.

A year and a half later, though, Facebook pressured WhatsApp to change its terms of service and give users’ phone numbers to its parent company. That let Facebook target those users with more precise advertising, such as by letting businesses upload lists of phone numbers to hit those people with promotions. Facebook was eventually fined $122 million by the European Union in 2017 — a paltry sum for a company earning over $4 billion in profit per quarter.

But the perceived invasion of WhatsApp user privacy drove a wedge between Koum and the parent company. Acton left Facebook in November, and has publicly supported the #DeleteFacebook movement since.

WashPo writes that Koum was also angered by Facebook executives pushing for a weakening of WhatsApp’s end-to-end encryption in order to facilitate its new WhatsApp For Business program. It’s possible that letting multiple team members from a business all interact with its WhatsApp account could be incompatible with strong encryption. Facebook plans to finally make money off WhatsApp by offering bonus services to big companies like airlines, e-commerce sites, and banks that want to conduct commerce over the chat app.

Jan Koum, CEO and co-founder of WhatsApp, speaks at the Digital Life Design conference on January 18, 2016, in Munich, southern Germany. At the innovation conference, high-profile guests spend three days discussing trends and developments relating to digitization. (Photo: TOBIAS HASE/AFP/Getty Images)

Koum was heavily critical of advertising in apps, once telling Forbes that “Dealing with ads is depressing . . . You don’t make anyone’s life better by making advertisements work better.” He vowed to keep them out of WhatsApp. But over the past year, Facebook has rolled out display ads in the Messenger inbox. Without Koum around, Facebook might push to expand those obtrusive ads to WhatsApp as well.

The high-profile departure comes at a vulnerable time for Facebook, with its big F8 developer conference starting tomorrow despite Facebook simultaneously shutting down parts of its dev platform as penance for the Cambridge Analytica scandal. Meanwhile, Google is trying to fix its fragmented messaging strategy, ditching apps like Allo to focus on a mobile carrier-backed alternative to SMS it’s building into Android Messages.

While the News Feed made Facebook rich, it also made it the villain. Messaging has become its strongest suit thanks to the dual dominance of Messenger and WhatsApp. Considering many users surely don’t even realize WhatsApp is owned by Facebook, Koum’s departure over policy concerns isn’t likely to change that. But it’s one more point in what’s becoming a thick line connecting Facebook’s business ambitions to its cavalier approach to privacy.

You can read Koum’s full post below.

It's been almost a decade since Brian and I started WhatsApp, and it's been an amazing journey with some of the best…

Posted by Jan Koum on Monday, April 30, 2018



source https://techcrunch.com/2018/04/30/jan-koum-quits-facebook/

Faster, Fresher, Better: Announcing Link Explorer, Moz's New Link Building Tool

Posted by SarahBird

More link data. Fresher link data. Faster link data.

Today, I’m delighted to share that after eons of hard work, blood, sweat, tears, and love, Moz is taking a major step forward on our commitment to provide the best SEO tools money can buy.

We’ve rebuilt our link technology from the ground up and the data is now broadly available throughout Moz tools. It’s bigger, fresher, and much, much faster than our legacy link tech. And we’re just getting started! The best way to quickly understand the potential power of our revolutionary new link tech is to play with the beta of our Link Explorer.

Introducing Link Explorer, the newest addition to the Moz toolset!

We’ve heard your frustrations with Open Site Explorer and we know that you want more from Moz and your link building tools. OSE has done more than put in its time. Groundbreaking when it launched in 2008, it’s worked long and hard to bring link data to the masses. It deserves the honor of a graceful retirement.

OSE represents our past; the new Link Explorer is our fast, innovative, ambitious future.

Here are some of my favorite things about the Link Explorer beta:

  • It’s 20x larger and 30x fresher than OSE (RIP)
  • Despite its huge index size, the app is lightning fast! I can’t stand waiting so this might be my number-one fav improvement.
  • We’re introducing Link Tracking Lists to make managing your link building efforts a breeze. Sometimes the simple things make the biggest difference, like when they started making vans with doors on each side. You’ll never go back.
  • Link Explorer includes historic data, a painful gap in OSE. Studying your gained/lost linking domains is fast and easy.
  • The new UX surfaces competitive insights much more quickly
  • Increasing the size and freshness of the index improved the quality of Domain Authority and Spam Score. Voilà.

All this, and we’re only in beta.

Dive into your link data now!

Here’s a deeper dive into my favorites:

#1: The sheer size, quality, and speed of it all

We’re committed to data quality. Here are some ways that shows up in the Moz tools:

  • When we collect rankings, we evaluate the natural first page of rankings to ensure that the placement and content of featured snippets and other SERP features are correctly situated (misplacement can happen when rankings are collected in 50- or 100-page batches). This is more expensive, but we think the tradeoff is worth it.
  • We were the first to build a hybrid search volume model using clickstream data. We still believe our model is the most accurate.
  • Our SERP corpus, which powers Keywords by Site, is completely refreshed every two weeks. We actively update up to 15 million of the keywords each month to remove keywords that are no longer being searched and replace them with trending keywords and terms. This helps keep our keyword data set fresh and relevant.

The new Link Explorer index extends this commitment to data quality. OSE wasn’t cutting it and we’re thrilled to unleash this new tech.

Link Explorer is over 20x larger and 30x fresher than our legacy link index. Bonus points: the underlying technology is very cost-efficient, making it much less expensive for us to scale over time. This frees up resources to focus on feature delivery. BOOM!

One of my top pet peeves is waiting. I feel physical pain while waiting in lines and for apps to load. I can’t stand growing old waiting for a page to load (amirite?).

The new Link Explorer app is delightfully, impossibly fast. It’s like magic. That’s how link research should be. Magical.

#2: Historical data showing discovered and lost linking domains

If you’re a visual person, this report gives you an immediate idea of how your link building efforts are going. A spike you weren't expecting could be a sign of spam network monkey business. Deep-dive effortlessly on the links you lost and gained so you can spend your valuable time doing thoughtful, human outreach.

#3: Link Tracking Lists

Folks, this is a big one. Throw out (at least one of... ha. ha.) those unwieldy spreadsheets and get on board with Link Tracking Lists, because these are the future. Have you been chasing a link from a particular site? Wondering if your outreach emails have borne fruit yet? Want to know if you’ve successfully placed a link, and how you’re linking? Link Tracking Lists cut out a huge time-suck when it comes to checking back on which of your target sites have actually linked back to you.

Why announce the beta today?

We’re sharing this now for a few reasons:

  • The new Link Explorer data and app have been available in beta to a limited audience. Even with a quiet, narrow release, the SEO community has been talking about it and asking good questions about our plans. Now that the Link Explorer beta is in broad release throughout all Moz products and the broader Moz audience can play with it, we’re expecting even more curiosity and excitement.
  • If you’re relying on our legacy link technology, this is further notice to shift your applications and reporting to the new-and-improved tech. OSE will be retired soon! We’re making it easier for API customers to get the new data by providing a translation layer for the legacy API.
  • We want and need your feedback. We are committed to building the very best link building tool on the planet. You can expect us to invest heavily here. We need your help to guide our efforts and help us make the most impactful tradeoffs. This is your invitation to shape our roadmap.

Today’s release of our new Link Explorer technology is a revolution in Moz tools, not an evolution. We’ve made a major leap forward in our link index technology that delivers a ton of immediate value to Moz customers and the broader Moz Community.

Even though there are impactful improvements around the corner, this ambitious beta stands on its own two feet. OSE wasn’t cutting it and we’re proud of this new, fledgling tech.

What’s on the horizon for Link Explorer?

We’ve got even more features coming in the weeks and months ahead. Please let us know if we’re on the right track.

  • Link Building Assistant: a way to quickly identify new link acquisition opportunities
  • A more accurate and useful Link Intersect feature
  • Link Alerts to notify you when you get a link from a URL you were tracking in a list
  • Changes to how we count redirects: Currently we don't count links to a redirect as links to the target of the redirect (that's a lot of redirects), but we have this planned for the future (see the sketch after this list).
  • Significantly scaling up our crawling to further improve freshness and size
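
For readers who want the mechanics, here is a minimal Python sketch of what “counting links through redirects” could look like. It is a toy illustration of the idea only, not Moz’s actual crawler code; the URLs and the redirect map are invented for the example:

    # Toy illustration: credit a link to the final target of a redirect chain.
    # The redirect map below is invented for the example.
    redirects = {
        "http://old.example.com/page": "http://new.example.com/page",
    }

    def final_target(url: str, redirect_map: dict, max_hops: int = 10) -> str:
        hops = 0
        while url in redirect_map and hops < max_hops:  # follow 301 chains, avoid loops
            url = redirect_map[url]
            hops += 1
        return url

    # A link pointing at old.example.com/page gets credited to new.example.com/page.
    print(final_target("http://old.example.com/page", redirects))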

Go forth, and explore:

Try the new Link Explorer!

Tomorrow Russ Jones will be sharing a post that discusses the importance of quality metrics when it comes to a link index, and don’t miss our pinned Q&A post answering questions about Domain Authority and Page Authority changes or our FAQ in the Help Hub.

We’ll be releasing early and often. Watch this space, and don’t hold back your feedback. Help us shape the future of Links at Moz. We’re listening!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



source https://moz.com/blog/link-explorer

Is Twitter Worth Your Time? Here’s What New 2018 Data Says About Twitter for Marketing

Misinformation is spreading like wildfire.

On Twitter, it’s no different.

Numerous spam accounts and bots plague Twitter. They share false and misleading information, which has negatively impacted user experience.

The network is now working to correct some of these problems, but only time will tell how the network will fare.

But if new 2018 data is accurate, then the future looks dim.

Researchers at MIT recently released a comprehensive study about “the spread of true and false news online,” which examined over a decade’s worth of data.

They discovered that misinformation reached 1,500 people six times faster than valid information.

This has marketers asking the question, “How do we counteract that?”

Some are even wondering, “Is Twitter worth using?”

To effectively use Twitter and see a return on your efforts, you need to understand how to best use the network for your long-term gain.

Twitter is much different now than it was when it debuted in 2006. It is important for marketers to understand the network’s evolution as well as its current user ecosystem.

Despite these new revelations and the current state of misinformation, I’m going to show you how to get the most out of your Twitter marketing strategy in 2018.

But before deploying your 2018 strategy, you need to understand how Twitter has changed in recent months, so you don’t make the same mistakes you’ve likely made in the past.

How understanding Twitter’s current state can strengthen your business’s marketing strategy

I’m going to guess that Twitter plays some sort of role in your marketing strategy.

A recent study asked respondents, “Which social media platforms do you use to market your business?”

Not surprisingly, Twitter emerged as one of the top platforms.

[Image: Twitter among the top social media platforms used to market a business]

But should it be?

Lately, Twitter has had its fair share of problems.

To start, there are bots.

A Twitter bot is “a software program that sends out automated posts on Twitter.”

Often, these automated posts are tweets. Other times, the bots will automatically respond to user messages that include specific phrases.
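
As a concrete (and purely hypothetical) illustration, here is roughly what such a bot looks like in Python using the third-party Tweepy library; the credentials, trigger phrase and reply text are placeholders, not details of any real account:

    # Minimal sketch of an automated Twitter account ("bot") using Tweepy.
    # All credentials and the trigger phrase below are placeholders.
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    # Send an automated post (a tweet).
    api.update_status("Automated status update.")

    # Automatically respond to mentions that include a specific phrase.
    for mention in api.mentions_timeline(count=20):
        if "order status" in mention.text.lower():
            api.update_status(
                status=f"@{mention.user.screen_name} Thanks! A human will follow up shortly.",
                in_reply_to_status_id=mention.id,
            )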

But is this really a problem? It certainly can be.

Although some bots can be helpful for your business objectives, there has been an influx of bots permeating Twitter’s user base.

Now, there are a lot of them.

In fact, there are an estimated 48 million bots on Twitter, accounting for 15% of Twitter’s total users.

So how many people are actually on Twitter?

Well, at the time of publication, Twitter had 336 million total monthly active users.

[Image: Twitter monthly active users as of April 2018]

Compared to other social media sites like Facebook, YouTube, and Instagram, Twitter isn’t leading when it comes to monthly active users.

And if 15% of these users are actually bots, then that decreases the potential number of people you can market to even further.
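
A quick back-of-the-envelope calculation, taking the two figures above at face value, shows how much that shrinks the addressable audience:

    # Back-of-the-envelope: reachable humans if ~15% of monthly active users are bots.
    monthly_active_users = 336_000_000
    bot_share = 0.15

    estimated_bots = monthly_active_users * bot_share         # 50,400,000
    reachable_humans = monthly_active_users - estimated_bots  # 285,600,000
    print(f"~{estimated_bots:,.0f} bots, ~{reachable_humans:,.0f} human users")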

Bots have started to impact Twitter’s user experience negatively, too.

Bots recently came under scrutiny for playing a part in spreading misinformation in the 2016 election.

Those who create bots can also program them to share spam.

A study from the Pew Research Center found that bots shared links directing traffic to sites across a variety of industries.

[Image: link sharing by automated Twitter accounts]

They also found that “an estimated two-thirds of tweeted links to popular sites are posted by automated accounts – not human beings.”

How’s that for making your brand feel more “human?”

While Twitter is cracking down on bots, many are skeptical that this will help with the increase of misinformation plaguing the platform.

After all, bots aren’t the only reason for a poor Twitter user experience.

In the MIT study I mentioned earlier, the researchers found that humans are more likely than bots to spread fake news.

Twitter might be able to lower the influence of bots, but trying to prevent real people from spreading false information is much harder.

In another recent study, 51% of respondents felt that “the information environment will not be improved by changes designed to reduce the spread of lies and other misinformation online.”

So with the influx of bots, spread of misinformation, and stifled user growth, how should marketers approach their Twitter strategy?

Your strategy needs to evolve with the platform and take advantage of Twitter’s strengths while keeping in mind its weaknesses.

Here are five ways to tailor your Twitter strategy for results in 2018.

Use Twitter for quick, direct customer service interactions and resolutions

We’ve all been there.

You need a piece of information that you can’t find on a business’ website and don’t really want to call them.

“Oh, I’ll just tweet at them, because they’ll probably reply,” you think to yourself.

This is more common than you probably think.

Investing time and resources in your Twitter customer service strategy is important for the long-term growth of your business.

Sometimes, your customers need a bit of TLC. And this is where Twitter can shine.

In fact, 85% of Twitter users said that it’s important that businesses provide customer support on Twitter.

By being responsive on Twitter, you add a level of transparency to your business’ brand. Your business will seem more helpful and approachable.

And, Dove proves it.

In 2017, Dove focused on responding to more tweets, which, in turn, resulted in an increase in positive sentiment from customers.

[Image: difference in answered tweets]

Dove’s net positive sentiment was 41% in the last three months of 2016, and three months later, that sentiment score rose to 43%.

That’s a lot to gain with minimal effort.

You may be wondering, “But how do you provide optimal service through Twitter?”

It’s different for each company, but there are some specific strategies to maximize your responses.

Most companies direct public inquiries to their DMs if any sensitive information needs to be transferred.

[Image: Southwest Airlines “DM us” tweet]

And now, since Twitter’s launch of new Direct Message features, we’re seeing brands build more personalized, one-on-one experiences for customers.

Like Patrón Tequila.

Patrón built the “Bot-Tender” — a chatbot “bartender” that lives in Patrón’s Direct Messages and serves up cocktail recommendations based on the consumer’s preferences.

[Image: Patrón Tequila mixologist tweet]

The “Bot-Tender” drove a 39% click-through rate to the website and a 2.6% click-through rate on the direct message card.

In some instances, it might even make sense to gather additional information about your customers to better manage the issue.

You can even set up a chatbot to accept orders with a hashtag.

For example, Wingstop uses a bot to accept orders from people who tweet ‘@Wingstop #Order’:

[Image: Wingstop “DM your order” prompt]

Now, that’s an example of optimal customer experience that doesn’t rely on a wing and a prayer.

Also, depending on the size of the business or the number of customer inquiries it receives, a company might even have specific accounts solely focused on helping customers.

For example, LinkedIn owns both the handles @LinkedIn and @LinkedInHelp.

Both channels exist for different objectives. @LinkedIn provides general updates, company news and announcements of features, while @LinkedInHelp focuses on customer support.

Both accounts are valuable for LinkedIn’s overarching Twitter strategy.

[Image: LinkedIn Help Twitter account]

Private messages have become a popular way to resolve issues, so the platform has included a feature that enables you to include a “Send a private message” link on a tweet.

[Image: “Send a private message” tweet]

To do so, make sure your account is accepting direct messages from anyone. Begin by accessing your Settings tab.

[Image: Settings tab in Twitter]

Click the “Privacy and Safety” tab on the left side.

[Image: Privacy and safety settings in a Twitter account]

Check the box to “Receive Direct Messages from Anyone.”

[Image: “Receive Direct Messages from Anyone” checkbox]

Find your TwitterID using TweeterID and add it to the end of this link in place of YourTwitterID:

https://twitter.com/messages/compose?recipient_id=YourTwitterID

Now, you can add that URL with your own TwitterID inserted into any tweet, and the “Send a private message” button will appear directing your customers into a private conversation.
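
If you generate these links often, a tiny helper makes the pattern explicit. This is just a convenience sketch of the URL format described above; the example ID is a placeholder, not a real account:

    # Build the "Send a private message" deep link from a numeric Twitter ID.
    # The example ID below is a placeholder.
    def dm_deep_link(twitter_id: str) -> str:
        return f"https://twitter.com/messages/compose?recipient_id={twitter_id}"

    print(dm_deep_link("1234567890"))
    # -> https://twitter.com/messages/compose?recipient_id=1234567890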

Focus on sharing video content for higher engagement with your followers

Sharing video content on Twitter isn’t exactly new.

But over the past few years, Twitter has continuously worked to evolve how your video can be shared and the impact it can generate. (They’re even teasing a Snapchat sharing tool!)

The result? Users are eating it up.

Will you deliver?

If you don’t, it’ll be costly. The stats don’t lie; video views on Twitter have grown 220x what they were 12 months ago.

[Image: Twitter video views growth]

But can video cut through th

source https://blog.kissmetrics.com/is-twitter-worth-your-time/

Facebook is trying to block Schrems II privacy referral to EU top court

Facebook’s lawyers are attempting to block a decision by the High Court in Ireland, where its international business is headquartered, to refer a long-running legal challenge to the EU’s top court.

The social media giant’s lawyers asked the court to stay the referral to the CJEU today, Reuters reports. Facebook is trying to appeal the referral by challenging Irish case law — and wants a stay granted in the meantime.

The case relates to a complaint filed by privacy campaigner and lawyer Max Schrems regarding a transfer mechanism that’s currently used by thousands of companies to authorize flows of personal data on EU citizens to the US for processing. Though Schrems was actually challenging the use of so-called Standard Contractual Clauses (SCCs) by Facebook, specifically, when he updated an earlier complaint on the same core data transfer issue — which relates to US government mass surveillance practices, as revealed by the 2013 Snowden disclosures — with Ireland’s data watchdog.

However, the Irish Data Protection Commissioner decided to refer the issue to the High Court to consider the legality of SCCs as a whole. And earlier this month the High Court decided to refer a series of questions relating to EU-US data transfers to Europe’s top court — seeking a preliminary ruling on a series of fundamental questions that could even unseat another data transfer mechanism, called the EU-US Privacy Shield, depending on what CJEU judges decide.

An earlier legal challenge by Schrems — which was also related to the clash between US mass surveillance programs (which harvest data from social media services) and EU fundamental rights (which mandate that web users’ privacy is protected) — resulted in the previous arrangement for transatlantic data flows being struck down by the CJEU in 2015, after standing for around 15 years.

Hence the current case being referred to by privacy watchers as ‘Schrems II’. You can also see why Facebook is keen to delay another CJEU referral if it can.

According to comments made by Schrems on Twitter the Irish High Court reserved judgement on Facebook’s request today, with a decision expected within a week…

Facebook’s appeal is based on trying to argue against Irish case law — which Schrems says does not allow for an appeal against such a referral, hence he’s couching it as another delaying tactic by the company.

Twitter also sold data access to Cambridge Analytica-linked researcher

Since it was revealed that Cambridge Analytica improperly accessed the personal data of millions of Facebook users, one question has lingered in the minds of the public: What other data did Dr. Aleksandr Kogan gain access to?

Twitter confirmed to The Telegraph on Saturday that GSR, Kogan’s own commercial enterprise, had purchased one-time API access to a random sample of public tweets from a five-month period between December 2014 and April 2015. Twitter told Bloomberg that, following an internal review, the company did not find any access to private data about people who use Twitter.

Twitter sells API access to large organizations or enterprises for the purposes of surveying sentiment or opinion during various events, or around certain topics or ideas.

Here’s what a Twitter spokesperson said to The Telegraph:

Twitter has also made the policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica. This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules.

Obviously, this doesn’t have the same scope as the data harvested about users on Facebook. Twitter’s data on users is far less personal. Location on the platform is opt-in and generic at that, and users are not forced to use their real name on the platform.

Cambridge Analytica tweeted out this morning that the data obtained by Kogan/GSR from Twitter was never purchased or used by Cambridge Analytica.

Still, it shows just how broad the Cambridge Analytica data collection was ahead of the 2016 election.

We reached out to Twitter and will update when we hear back.



source https://techcrunch.com/2018/04/30/twitter-also-sold-data-access-to-cambridge-analytica-researcher/

Europe eyeing bot IDs, ad transparency and blockchain to fight fakes

European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.

This is part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply — by this summer — as part of a wider package of proposals it’s put out, generally aimed at tackling the problematic spread and impact of disinformation online.

The proposals follow an EC-commissioned report last month, by its High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

Bots, fake accounts, political ads, filter bubbles

In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into a greater level of detail on how that might be achieved. Clearly it’s intending for platforms to come up with relevant methodologies.

Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
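
To give a flavor of what criteria-based scoring looks like, here is a deliberately toy Python sketch; the features, thresholds and weights are invented for illustration and are nothing like the richer signal sets real bot-detection tools use:

    # Toy illustration of criteria-based bot scoring, as described above.
    # Features and weights are invented for illustration only.
    def bot_score(account: dict) -> float:
        score = 0.0
        if account["tweets_per_day"] > 50:    # inhumanly high posting volume
            score += 0.3
        if account["followers"] < 10:         # thin social graph
            score += 0.2
        if not account["has_profile_photo"]:  # default avatar
            score += 0.2
        if account["retweet_ratio"] > 0.9:    # almost never posts original content
            score += 0.3
        return min(score, 1.0)                # 0 = human-like, 1 = bot-like

    print(bot_score({"tweets_per_day": 120, "followers": 3,
                     "has_profile_photo": False, "retweet_ratio": 0.95}))  # 1.0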

Another factor here is that given the sophisticated nature of some online disinformation campaigns — the state-sponsored and heavily resourced efforts by Kremlin backed entities such as Russia’s Internet Research Agency, for example — if the focus ends up being algorithmically controlled bots vs IDing bots that might have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.

That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).

Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.

Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.

The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.

We’ve asked for more on its thinking here and will update this story with any response. It looks to be seeking to test the water to see how much of the workings of platforms’ algorithmic blackboxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more fulsome regulations down the line.

Filter bubbles also appear to be informing the Commission’s thinking, as it says it wants platforms to make it easier for users to “discover and access different news sources representing alternative viewpoints” — via tools that let users customize and interact with the online experience to “facilitate content discovery and access to different news sources”.

Though another stated objective is for platforms to “improve access to trustworthy information” — so there are questions about how those two aims can be balanced, i.e. without efforts towards one undermining the other. 

On trustworthiness, the EC says it wants platforms to help users assess whether content is reliable using “indicators of the trustworthiness of content sources”, as well as by providing “easily accessible tools to report disinformation”.

In one of several steps Facebook has taken since 2016 to try to tackle the problem of fake content being spread on its platform, the company experimented with putting ‘disputed’ labels or red flags on potentially untrustworthy information. However, the company discontinued this in December after research suggested negative labels could entrench deeply held beliefs, rather than helping to debunk fake stories.

Instead it started showing related stories — containing content it had verified as coming from news outlets its network of fact checkers considered reputable — as an alternative way to debunk potential fakes.

The Commission’s approach looks to be aligning with Facebook’s rethought approach — with the subjective question of how to make judgements on what is (and therefore what isn’t) a trustworthy source likely being handed off to third parties, given that another strand of the code is focused on “enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation”.

Since 2016 Facebook has been leaning heavily on a network of local third party ‘partner’ fact-checkers to help identify and mitigate the spread of fakes in different markets — including checkers for written content and also photos and videos, the latter in an effort to combat fake memes before they have a chance to go viral and skew perceptions.

In parallel Google has also been working with external fact checkers, such as on initiatives such as highlighting fact-checked articles in Google News and search. 

The Commission clearly approves of the companies reaching out to a wider network of third party experts. But it is also encouraging work on innovative tech-powered fixes to the complex problem of disinformation — describing AI (“subject to appropriate human oversight”) as set to play a “crucial” role for “verifying, identifying and tagging disinformation”, and pointing to blockchain as having promise for content validation.

Specifically it reckons blockchain technology could play a role by, for instance, being combined with the use of “trustworthy electronic identification, authentication and verified pseudonyms” to preserve the integrity of content and validate “information and/or its sources, enable transparency and traceability, and promote trust in news displayed on the Internet”.
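
The Commission doesn’t spell out a design, but the core integrity idea can be sketched in a few lines of Python: chain content hashes so that any later tampering with an item invalidates every entry after it. This is an illustration of the principle only, with invented example content:

    # Minimal content-integrity sketch: each entry's hash covers the previous
    # hash, so editing any item breaks every hash after it. Illustration only.
    import hashlib

    def build_chain(items):
        entries, prev_hash = [], "0" * 64
        for text in items:
            digest = hashlib.sha256((prev_hash + text).encode()).hexdigest()
            entries.append({"content": text, "hash": digest})
            prev_hash = digest
        return entries

    ledger = build_chain(["article v1", "correction v2"])
    # To verify, recompute the hashes from the start; a tampered entry no
    # longer matches, and the chain fails from that point on.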

It’s one of a handful of nascent technologies the executive flags as potentially useful for fighting fake news, and whose development it says it intends to support via an existing EU research funding vehicle: The Horizon 2020 Work Program.

It says it will use this program to support research activities on “tools and technologies such as artificial intelligence and blockchain that can contribute to a better online space, increasing cybersecurity and trust in online services”.

It also flags “cognitive algorithms that handle contextually-relevant information, including the accuracy and the quality of data sources” as a promising tech to “improve the relevance and reliability of search results”.

The Commission is giving platforms until July to develop and apply the Code of Practice — and is using the possibility that it could still draw up new laws if it feels the voluntary measures fail as a mechanism to encourage companies to put the sweat in.

It is also proposing a range of other measures to tackle the online disinformation issue — including:

  • An independent European network of fact-checkers: The Commission says this will establish “common working methods, exchange best practices, and work to achieve the broadest possible coverage of factual corrections across the EU”; and says they will be selected from the EU members of the International Fact Checking Network, which it notes follows “a strict International Fact Checking Network Code of Principles”
  • A secure European online platform on disinformation to support the network of fact-checkers and relevant academic researchers with “cross-border data collection and analysis”, as well as benefitting from access to EU-wide data
  • Enhancing media literacy: On this it says a higher level of media literacy will “help Europeans to identify online disinformation and approach online content with a critical eye”. So it says it will encourage fact-checkers and civil society organisations to provide educational material to schools and educators, and organise a European Week of Media Literacy
  • Support for Member States in ensuring the resilience of elections against what it dubs “increasingly complex cyber threats” including online disinformation and cyber attacks. Stated measures here include encouraging national authorities to identify best practices for the identification, mitigation and management of risks in time for the 2019 European Parliament elections. It also notes work by a Cooperation Group, saying “Member States have started to map existing European initiatives on cybersecurity of network and information systems used for electoral processes, with the aim of developing voluntary guidance” by the end of the year. It also says it will organise a high-level conference with Member States on cyber-enabled threats to elections in late 2018
  • Promotion of voluntary online identification systems with the stated aim of improving the “traceability and identification of suppliers of information” and promoting “more trust and reliability in online interactions and in information and its sources”. This includes support for related research activities in technologies such as blockchain, as noted above. The Commission also says it will “explore the feasibility of setting up voluntary systems to allow greater accountability based on electronic identification and authentication scheme” — as a measure to tackle fake accounts. “Together with other actions aimed at improving traceability online (improving the functioning, availability and accuracy of information on IP and domain names in the WHOIS system and promoting the uptake of the IPv6 protocol), this would also contribute to limiting cyberattacks,” it adds
  • Support for quality and diversified information: The Commission is calling on Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment. The Commission says it will launch a call for proposals in 2018 for “the production and dissemination of quality news content on EU affairs through data-driven news media”

It says it will aim to co-ordinate its strategic comms policy to try to counter “false narratives about Europe” — which makes you wonder whether debunking the output of certain UK tabloid newspapers might fall under that new EC strategy — and also more broadly to tackle disinformation “within and outside the EU”.

Commenting on the proposals in a statement, the Commission’s VP for the Digital Single Market, Andrus Ansip, said: “Disinformation is not new as an instrument of political influence. New technologies, especially digital, have expanded its reach via the online environment to undermine our democracy and society. Since online trust is easy to break but difficult to rebuild, industry needs to work together with us on this issue. Online platforms have an important role to play in fighting disinformation campaigns organised by individuals and countries who aim to threaten our democracy.”

The EC’s next steps now will be bringing the relevant parties together — including platforms, the ad industry and “major advertisers” — in a forum to work on greasing cooperation and getting them to apply themselves to what are still, at this stage, voluntary measures.

“The forum’s first output should be an EU–wide Code of Practice on Disinformation to be published by July 2018, with a view to having a measurable impact by October 2018,” says the Commission. 

The first progress report will be published in December 2018. “The report will also examine the need for further action to ensure the continuous monitoring and evaluation of the outlined actions,” it warns.

And if self-regulation fails…

In a fact sheet further fleshing out its plans, the Commission states: “Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms.”

And for “a few” read: Mainstream social platforms — so likely the big tech players in the social digital arena: Facebook, Google, Twitter.

For potential regulatory actions tech giants need only look to Germany, where a 2017 social media hate speech law introduced fines of up to €50M for platforms that fail to comply with valid takedown requests within 24 hours for simple cases. It’s an example of the kind of scary EU-wide law that could come rushing down the pipe at them if the Commission and EU states decide it’s necessary to legislate.

Justice and consumer affairs commissioner Vera Jourova signaled in January that her preference, on hate speech at least, was to continue pursuing the voluntary approach — though she also said some Member States’ ministers are open to a new EU-level law should the voluntary approach fail.

In Germany the so-called NetzDG law has faced criticism for pushing platforms towards risk aversion-based censorship of online content. And the Commission is clearly keen to avoid such charges being leveled at its proposals, stressing that if regulation were to be deemed necessary “such [regulatory] actions should in any case strictly respect freedom of expression”.

Commenting on the Code of Practice proposals, a Facebook spokesperson told us: “People want accurate information on Facebook – and that’s what we want too. We have invested heavily in fighting false news on Facebook by disrupting the economic incentives for the spread of false news, building new products and working with third-party fact checkers.”

A Twitter spokesman declined to comment on the Commission’s proposals but flagged contributions he said the company is already making to support media literacy — including an event last week at its EMEA HQ.

At the time of writing Google had not responded to a request for comment.

Last month the Commission did further tighten the screw on platforms over terrorist content specifically —  saying it wants them to get this taken down within an hour of a report as a general rule. Though it still hasn’t taken the step to cement that hour ‘rule’ into legislation, also preferring to see how much action it can voluntarily squeeze out of platforms via a self-regulation route.

 



source https://techcrunch.com/2018/04/30/europe-eyeing-bot-ids-ad-transparency-and-blockchain-to-fight-fakes/

Saturday 28 April 2018

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact “thousands” of associated fake ads being run on Facebook as a click-driver for fraud shows, he has now argued, that the company needs to change its entire system.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see.

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested  — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook frequently falling back on pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem, either. All sorts of other issues that Facebook has been blasted for not doing enough about can also be explained as the result of inadequate content review: from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’, as Facebook would call it, should really be the most trivial type of content review problem for the company to fix, because it’s an exceedingly narrow issue involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US-targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it, and Facebook isn’t saying publicly. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with financial ads. We tend to use a basket of features in order to detect these things.”
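His false-positive argument is straightforward probability: even a very accurate matcher, searched against a huge gallery of faces, will produce confident wrong matches. A back-of-the-envelope sketch (the per-comparison error rate is an assumed figure, purely for illustration):

# Back-of-the-envelope: why one-to-many face matching degrades at scale.
# The per-comparison false match rate is an assumed figure for
# illustration, not a published Facebook number.

per_comparison_fmr = 1e-6   # assumed: one false match per million comparisons

for gallery_size in (1_000, 1_000_000, 2_000_000_000):
    # Probability that at least one wrong identity matches: 1 - (1 - p)^N,
    # which climbs rapidly as the search space N grows.
    p_any_false_match = 1 - (1 - per_comparison_fmr) ** gallery_size
    print(f"{gallery_size:>13,} identities -> P(false match) = {p_any_false_match:.3f}")

Against a user-base of two billion, at least one false match becomes effectively certain, which is why Schroepfer frames face matching as one signal in a larger basket rather than a decision-maker on its own.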

Schroepfer’s answer is also interesting given that a security use-case is the first of just two sample ‘benefits’ Facebook presents to users in Europe ahead of the choice it is required (under EU law) to offer on whether to switch facial recognition technology on or keep it turned off, claiming the tech “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’, which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’.

source https://techcrunch.com/2018/04/28/facebooks-dark-ads-problem-is-systemic/

Friday 27 April 2018

Facebook shrinks fake news after warnings backfire

Tell someone not to do something and sometimes they just want to do it more. That’s what happened when Facebook put red flags on debunked fake news. Users who wanted to believe the false stories had their fevers ignited and they actually shared the hoaxes more. That led Facebook to ditch the incendiary red flags in favor of showing Related Articles with more level-headed perspectives from trusted news sources.

But now it’s got two more tactics to reduce the spread of misinformation, which Facebook detailed at its Fighting Abuse @Scale event in San Francisco. Facebook’s director of News Feed integrity Michael McNally and data scientist Lauren Bose held a talk discussing all the ways it intervenes. The company is trying to walk a fine line between censorship and sensibility.

These red warning labels actually backfired and made some users more likely to share, so Facebook switched to showing Related Articles

First, rather than call more attention to fake news, Facebook wants to make it easier to miss these stories while scrolling. When Facebook’s third-party fact-checkers verify an article is inaccurate, Facebook will shrink the size of the link post in the News Feed. “We reduce the visual prominence of feed stories that are fact-checked false,” a Facebook spokesperson confirmed to me.

In the comparison Facebook shared, confirmed-to-be-false news stories on mobile show up with their headline and image rolled into a single smaller row of space, with a Related Articles box underneath showing “Fact-Checker”-labeled stories debunking the original link. A real news article’s image, by contrast, appears about 10 times larger, and its headline gets its own space.

Second, Facebook is now using machine learning to look at newly published articles and scan them for signs of falsehood. Combined with other signals like user reports, high falsehood-prediction scores from the machine learning systems let Facebook prioritize articles in its queue for fact-checkers. That way, the fact-checkers can spend their time reviewing articles that are already likely to be false.

“We use machine learning to help predict things that might be more likely to be false news, to help prioritize material we send to fact-checkers (given the large volume of potential material),” a spokesperson from Facebook confirmed. The social network now works with 20 fact-checkers in several countries around the world, but it’s still trying to find more to partner with. In the meantime, the machine learning will ensure their time is used efficiently.
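Mechanically, that amounts to a priority queue: each new article gets a predicted-falsehood score, user reports push it higher, and fact-checkers pull the highest-scoring item first. A minimal sketch of the workflow (the scoring blend is invented; Facebook hasn't published how it weighs these signals):

import heapq

# Minimal sketch of an ML-prioritized fact-checking queue. The weighting
# of model score versus user reports is invented for illustration.

queue = []

def priority(predicted_falsehood, user_reports):
    """Blend the model's falsehood score with (capped) report volume."""
    return predicted_falsehood + 0.01 * min(user_reports, 50)

def submit(article_id, predicted_falsehood, user_reports):
    # heapq is a min-heap, so negate the priority for highest-first pops.
    heapq.heappush(queue, (-priority(predicted_falsehood, user_reports), article_id))

def next_for_fact_checker():
    """Fact-checkers always review the most suspect article first."""
    return heapq.heappop(queue)[1]

submit("miracle-cure-story", predicted_falsehood=0.92, user_reports=40)
submit("local-sports-recap", predicted_falsehood=0.05, user_reports=0)
print(next_for_fact_checker())   # -> miracle-cure-story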

Bose and McNally also walked the audience through Facebook’s “ecosystem” approach that fights fake news at every step of its development:

  • Account Creation – If accounts are created using fake identities or networks of bad actors, they’re removed.
  • Asset Creation – Facebook looks for similarities to shut down clusters of fraudulently created Pages and inhibit the domains they’re connected to.
  • Ad Policies – Malicious Pages and domains that exhibit signs of misuse lose the ability to buy or host ads, which deters them from growing their audience or monetizing it.
  • False Content Creation – Facebook applies machine learning to text and images to find patterns that indicate risk.
  • Distribution – To limit the spread of false news, Facebook works with fact-checkers. If they debunk an article, its post shrinks in size, Related Articles are appended and Facebook downranks the story in News Feed.

Together, by chipping away at each phase, Facebook says it can reduce the spread of a false news story by 80 percent. Facebook needs to prove it has a handle on false news before more big elections in the U.S. and around the world arrive. There’s a lot of work to do, but Facebook has committed to hiring enough engineers and content moderators to attack the problem. And with conferences like Fighting Abuse @Scale, it can share its best practices with other tech companies so Silicon Valley can put up a united front against election interference.
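That 80 percent figure is consistent with modest per-stage reductions compounding across the pipeline rather than any single silver bullet. Purely as illustrative arithmetic (the per-stage percentages below are invented; only the roughly 80 percent total comes from Facebook):

# Illustrative arithmetic only: the five per-stage reductions are invented;
# the ~80% overall figure is the one Facebook cites.

stage_reductions = {
    "account creation":       0.30,
    "asset creation":         0.25,
    "ad policies":            0.20,
    "false content creation": 0.25,
    "distribution":           0.35,
}

surviving = 1.0
for stage, cut in stage_reductions.items():
    surviving *= 1 - cut   # each stage removes a share of what remains

print(f"spread remaining: {surviving:.1%}")      # ~20.5%
print(f"total reduction:  {1 - surviving:.1%}")  # ~79.5%, in the 80% ballpark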



source https://techcrunch.com/2018/04/27/facebook-false-news/