Monday, 11 November 2019

Facebook finally lets you banish nav bar tabs & red dots

Are those red notification dots on your Facebook home screen driving you crazy? Sick of Facebook Marketplace wasting your screen space? Now you can control what appears in the Facebook app’s navigation bar thanks to a new option called Shortcuts Bar Settings.

Over the weekend, TechCrunch spotted the option to remove certain tabs like Marketplace, Watch, Groups, Events, and Dating, or to just silence their notification dots. In response to our inquiry, Facebook confirms that Shortcuts Bar Settings is now rolling out to everyone, with most iOS users already equipped and Android owners getting it in the next few weeks.

The move could save the sanity and improve the well-being of people who don’t want their Facebook cluttered with distractions. Users already get important alerts, which they can actually control, via their Notifications tab. Constant red notification counts on the home screen are an insidious growth hack, trying to pull people’s attention into random Group feeds, Event wall posts, and Marketplace.

“We are rolling out navigation bar controls to make it easier for people to connect with the things they like and control the notifications they get within the Facebook app,” a Facebook spokesperson tells me.

Back in July 2018, Facebook said it would start personalizing the navigation bar based on the utilities you use most. But the navigation bar seemed more intent on promoting features Facebook wanted to be popular, like its Craigslist competitor Marketplace, which I rarely use, rather than long-standing features like Events, which I access daily.

To use the Shortcuts Bar Settings options, tap and hold any of the shortcuts in your navigation bar, which sits at the bottom of the Facebook home screen on iOS and at the top on Android. A menu will pop up letting you remove that tab entirely, or keep it but disable the red notification count overlays. That clears space in your nav bar for a more peaceful experience.

You’ll also now find the ability to toggle the Marketplace, Groups, Events, and Pages tabs on or off under the three-line More tab -> Settings & Privacy -> Settings -> Shortcuts. Eagle-eyed reverse engineering specialist Jane Manchun Wong spotted in June that Facebook was testing a Notification Dots settings menu, which is now available too.

A Facebook spokesperson admits people should have the ability to take a break from notifications within the app. They tell me Facebook wanted to give users more control so they can have access to what’s relevant to them.

For all of Facebook’s talk about well-being, including its tests of hiding Like counts in its main app and on Instagram (starting this week in the US), there’s still plenty of low-hanging fruit. Better batching of Facebook notifications would be a great step, letting users get a daily digest of Groups or Events posts rather than a constant flurry. Its Time Well Spent dashboard, which counts your minutes on Facebook, should also say how many notifications of each type you get, how many you actually open, and let you disable the most common but useless ones right from there.
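
As a thought experiment, the digest batching suggested above could be as simple as splitting notifications into an urgent bucket and per-type daily summaries. Here is a minimal sketch in Python; the notification types and function names are hypothetical illustrations, not Facebook’s actual API:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Notification:
    kind: str      # hypothetical types, e.g. "group_post", "event_post"
    message: str

# Assumption: only a few notification types are urgent enough to deliver
# immediately; everything else waits for a once-a-day digest.
IMMEDIATE_KINDS = {"friend_request", "comment_on_your_post"}

def batch_into_digest(notifications):
    """Split notifications into (deliver-now, daily digest summary lines)."""
    deliver_now = []
    buckets = defaultdict(list)
    for n in notifications:
        if n.kind in IMMEDIATE_KINDS:
            deliver_now.append(n)
        else:
            buckets[n.kind].append(n)
    # One digest line per low-priority type, instead of a constant flurry.
    digest = [f"{len(items)} new {kind.replace('_', ' ')} notifications"
              for kind, items in buckets.items()]
    return deliver_now, digest

now, digest = batch_into_digest([
    Notification("group_post", "New post in Hiking Club"),
    Notification("group_post", "New post in Book Swap"),
    Notification("friend_request", "Ana sent you a friend request"),
])
# now -> just the friend request; digest -> ["2 new group post notifications"]
```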

If Facebook wants to survive long-term, it can’t piss off users by trapping them in an anxiety-inducing hellscape of growth hacks that benefit the company. The app has become bloated and cramped with extra features over the last 15 years. Facebook could get away with more aggressive cross-promotion of some of these forgotten features as long as it empowers us to hide what we hate.



source https://techcrunch.com/2019/11/11/facebook-shortcut-bar-settings/

Twitter drafts a deepfake policy that would label and warn, but not always remove, manipulated media

Twitter last month said it was introducing a new policy to help fight deepfakes and other “manipulated media”: photos, videos or audio that have been significantly altered to change their original meaning or purpose, or that make it seem like something happened that actually did not. Today, Twitter is sharing a draft of its new policy and opening it up for public input before it goes live.

The policy is meant to address the growing problem with deepfakes on today’s internet.

Deepfakes have proliferated thanks to advances in artificial intelligence that have made it easier to produce convincing fake videos, audio and other digital content. Anyone with a computer and an internet connection can now create this sort of fake media. The technology can be dangerous when used as propaganda, or to make someone believe something is real that is not. In politics, deepfakes can be used to undermine a candidate’s reputation by making them appear to say and do things they never said or did.

A deepfake of Facebook CEO Mark Zuckerberg went viral earlier this year, after Facebook refused to pull down a doctored video, tweeted by President Trump, that showed House Speaker Nancy Pelosi appearing to stumble over her words.

In early October, two members of the Senate Intelligence Committee, Mark Warner (D-VA) and Marco Rubio (R-FL), called on major tech companies to develop a plan to combat deepfakes on their platforms. The senators asked 11 tech companies — including Facebook, Twitter, YouTube, Reddit and LinkedIn — to come up with a plan to develop industry standards for “sharing, removing, archiving, and confronting the sharing of synthetic content as soon as possible.”

Twitter later in the month announced its plans to seek public feedback on the policy. Meanwhile, Amazon joined Facebook and Microsoft in supporting the Deepfake Detection Challenge (DFDC), which aims to develop new approaches for detecting manipulated media.

Today, Twitter is detailing a draft of its deepfakes policy. The company says that when it sees synthetic or manipulated media that’s intentionally trying to mislead or confuse people, it will:

  • place a notice next to Tweets that share synthetic or manipulated media;
  • warn people before they share or like Tweets with synthetic or manipulated media; or
  • add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.

Twitter says if a deepfake could threaten someone’s physical safety or lead to serious harm, it may also remove it.
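
Read together, the draft amounts to a small decision ladder. Purely as an illustration, here is a hypothetical sketch of that logic in Python; this is not Twitter’s published enforcement code, and the predicates are assumptions drawn from the draft’s wording:

```python
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()       # media isn't manipulated, or isn't misleading
    LABEL_AND_WARN = auto()  # notice on the Tweet, warning before share/like, context link
    REMOVE = auto()          # reserved for threats to safety or serious harm

def moderate(is_manipulated: bool, intends_to_mislead: bool,
             threatens_safety_or_serious_harm: bool) -> Action:
    """Hypothetical reading of Twitter's draft manipulated-media policy."""
    if not (is_manipulated and intends_to_mislead):
        return Action.NO_ACTION
    if threatens_safety_or_serious_harm:
        return Action.REMOVE
    return Action.LABEL_AND_WARN

# Example: a misleading deepfake that endangers no one gets labeled, not removed.
assert moderate(True, True, False) is Action.LABEL_AND_WARN
```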

The company is accepting feedback via a survey, as well as on Twitter itself through the #TwitterPolicyFeedback hashtag.

The survey asks questions like whether altered photos and videos should be removed entirely, carry warning labels, or not be touched at all. It asks whether certain actions are acceptable, like hiding tweets or alerting people if they’re about to share a deepfake. And it asks when Twitter should remove a tweet with misleading media. The draft policy says tweets will be removed if they threaten someone’s physical safety, but will otherwise be labeled. The survey suggests other circumstances in which a tweet could be pulled, like if it threatens someone’s mental health, privacy, dignity, property and more.

The survey takes five minutes to complete and is available in English, Japanese, Portuguese, Arabic, Hindi and Spanish.

What isn’t clear, however, is how Twitter will be able to detect the deepfakes published on its platform, given that detection techniques aren’t perfect and often lag behind the newer and more advanced creation methods. On this front, Twitter invites those who want to partner with it on detection solutions to fill out a form.

Twitter is accepting feedback on its deepfakes policy from now until Wednesday, November 27 at 11:59 p.m. GMT. At that time, it will review the feedback received and make adjustments to the policy, as needed. The policy will then be incorporated into Twitter’s Rules with a 30-day notice before the change goes live.

 



source https://techcrunch.com/2019/11/11/twitter-drafts-a-deepfake-policy-that-would-label-and-warn-but-not-remove-manipulated-media/

Friday, 8 November 2019

“Trump should not be our president,” says ex-Facebook CPO Chris Cox

Chris Cox’s motivational speeches were at the heart of Facebook’s new employee orientation. But after 14 years at the social network, the chief product officer left in March amidst an executive shake-up and Facebook’s new plan to prioritize privacy by moving to encrypt its messaging apps. No details on his next projects were revealed.

Now the 37-year-old leader is putting his inspirational demeanor and keen strategic sense to work to protect the environment and improve government. Today at the Wired25 conference, Cox finally shared more about his work advising Acronym, a political technology developer for progressives, and Planet Labs, a climate change-tracking satellite startup. He also explained more about the circumstances of his departure from the social network’s C-suite.

Chris Cox speaks onstage at the WIRED25 Summit at the Commonwealth Club in San Francisco on November 8, 2019. (Photo by Phillip Faraone/Getty Images for WIRED)

Leaving Facebook

On how he felt leaving Facebook, Cox said, “part of the reason I was okay leaving was that after 2016 I’d spent a couple years building out a bunch of the teams that I felt were most important to sort of take the lessons that we learned through some of 2016 and start to put in place institutions that can help the company, be more responsible and be a better communicator on some of the key issues.”

Video: Chris Cox, former Chief Product Officer of Facebook, in conversation with WIRED senior writer Lauren Goode. Posted by WIRED on Friday, November 8, 2019.

As for what specifically drove him to leave, Cox explained: “It wasn’t something where I felt I wanted to spend another 13 years on social media. Mark and I saw things a little bit differently . . . I think we are still investigating as an industry, how do you balance protecting the privacy of people’s information and continuing to keep people safe.”

On whether moving toward encryption was part of that, he said he thinks encryption is “great” and that “It offers an enormous amount of protection,” but noted “it certainly makes some of those things more complicated” on the privacy versus safety balance. He complimented Facebook’s efforts to build ways of catching bad actors even if they’re shielded by encryption. That includes digital literacy initiatives in Brazil and India ahead of elections, and forwarding systems for sending questionable information to fact-checkers. “I think there are pros and cons with these systems and I’m not a hard-liner on any one of them,” Cox said, noting that what Facebook is building is “resonant with what people want.”

Cox was asked about the major debate over whether Facebook should allow political advertising. “We think political advertising can be good and helpful. It often favors up-and-comers versus incumbents.” Still, on fact-checking, he said, “I’m a big fan,” even though Facebook isn’t applying that to political ads. He did note that “I think the company should investigate and is investigating microtargeting . . . if there’s hundreds of variants being run of the creative then it’s tricky to get your arms around what’s being said.” He also advocated for more context in the user interface distinguishing political ads.

Chris Cox speaks at Wired25

Cox’s next projects

Since leaving Facebook, Cox has joined the advisory board of a group called Acronym, which is helping to build out the campaign and messaging technology stack for progressive candidates. “This is an area where my perception is that the progressives have been behind on the ability to develop and use as a team infrastructure that helps you have a good voter file, how to develop messaging — just basic politics in 2019.”

Wired’s Lauren Goode asked if he was aligning himself with progressives, taking a political stance, and whether he could do that while still at Facebook. “Absolutely not,” Cox responded. “And why is that? I think when you’re in a very senior role at a platform, you have a duty to be much more neutral in your politics.”

He then came out with a bold statement: “Trump should not be our president.”

Cox is also advising San Francisco startup Planet Labs, which is using satellite imagery to track climate change. He concluded that the technology industry can lead on both fronts to create the world we want to live in.

source https://techcrunch.com/2019/11/08/chris-cox-since-facebook/

Instagram to test hiding Like counts in US, which could hurt influencers

“We will make decisions that hurt the business if they help people’s well-being and health,” says Instagram CEO Adam Mosseri. To that end, next week Instagram will expand its test of hiding Like counts from everyone but a post’s creator to some users in the United States. But there are major questions about whether the change will hurt influencers.

Mosseri revealed the plan at the Wired25 conference today, saying, “We have to see how it affects how people feel about the platform, how it affects how they use the platform, how it affects the creator ecosystem.”

Mosseri explained that “The idea is to try to depressurize Instagram, make it less of a competition, and give people more space to focus on connecting with the people they love and things that inspire them.” The intention is to “reduce anxiety” and “reduce social comparison.”

Instagram began testing this in April in Canada and expanded it to Ireland, Italy, Japan, Brazil, Australia, and New Zealand in July. Facebook started a similar experiment in Australia in September.

While it seems likely that making Instagram less of a popularity contest might aid the average user, Instagram has to be mindful that it doesn’t significantly decrease creators’ or influencers’ engagement and business success. These content makers are vital to Instagram’s success, since they keep their fan bases coming back day after day, even if users’ friends are growing stale.

A new study by HypeAuditor, reported by Social Media Today, found that influencers across tiers of follower counts almost universally saw their Like counts fall in countries where the hidden Like count test was active. For influencers with 5,000 to 20,000 followers, Likes fell 3% to 15% across all of the tested countries.

Only in Japan, and only for influencers with 1,000 to 5,000 or 100,000 to 1 million followers, did the change lead to a boost in Likes, of 6%. Meanwhile, influencers saw the biggest loss of Likes in the Brazilian market. Those trends could relate to how users in some countries might feel more comfortable Liking something if they don’t know who else has, while users in other countries might rely more on herd mentality to decide what to Like.

If Instagram finds the impact of the test on influencers to be too negative, it may not roll out the change. While Mosseri stated the company isn’t afraid to hurt its own bottom line, impairing the careers of influencers may not be acceptable unless the positive impacts on well-being are significant enough.

https://www.facebook.com/wired/videos/1745568895573311/



source https://techcrunch.com/2019/11/08/instagram-hide-likes-us/


Facebook’s first experimental apps from its ‘NPE Team’ division focus on students, chat & music

This July, Facebook announced a new division called NPE Team, which would build experimental consumer-facing apps, allowing the company to try out new ideas and features to see how people react. Soon thereafter, it tapped former Vine GM Jason Toff to join the team as a product manager. Now the first apps from the NPE Team have quietly launched. One, Bump, is a chat app that aims to help people make new friends through conversations, not appearances. Another, Aux, is a social music listening app.

Aux seems a bit reminiscent of an older startup, Turntable.fm, which closed its doors in 2013. As with Turntable.fm, the idea behind Aux is a virtual DJing experience where people, instead of algorithms, program the music. This concept of crowdsourced DJing also caught on in years past with radio stations that put their audiences in control of the playlist through their mobile apps.

Later, streaming music apps like Spotify experimented with party playlists, and various startups launched their own guest-controlled playlists.

The NPE Team’s Aux app is a slightly different take on this general idea of people-powered playlists.

The app is aimed at school-aged kids and teens who join a party in the app every day at 9 PM. They then choose the songs they want to play and compete for the “AUX” to get theirs played first. At the end of the night, a winner is chosen based on how many “claps” are received.
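
Mechanically, that nightly contest reduces to a most-claps-wins tally. A toy sketch, with made-up names rather than anything from the actual app:

```python
def pick_winner(claps_by_dj: dict) -> str:
    """Return the DJ whose songs drew the most claps at tonight's party."""
    return max(claps_by_dj, key=claps_by_dj.get)

print(pick_winner({"sam": 42, "priya": 57, "lee": 31}))  # -> "priya"
```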

As the app describes it, Aux is a “DJ for Your School” — a title that’s a bit confusing, as it brings to mind music being played over the school’s intercom system, as opposed to a social app for kids who attend school to use in the evenings.

Aux launched on August 8, 2019 in Canada, and has fewer than 500 downloads on iOS, according to data from Sensor Tower. It’s not available on Android. It briefly ranked No. 38 among all Music apps on the Canadian App Store on October 22, which may point to some sort of short campaign to juice the downloads.

The other new NPE Team app is Bump, which aims to help people “make new friends.”

Essentially an anonymous chat app, the idea here is that Bump can help people connect by giving them icebreakers to respond to using text. There are no images, videos or links in Bump — just chats.

Based on the App Store screenshots, the app seems to be intended for college students. The screenshots show questions about “the coolest place” on campus and where to find cheap food. A sample chat shown in the screenshots mentions things like classes and roommate troubles. 

There could be a dating component to the app, as well, as it stresses that Bump helps people make a connection through “dialog versus appearances.” That levels the playing field a bit, compared with other social apps — and certainly dating apps — where the most attractive users with the best photos tend to receive the most attention.

Chats in Bump take place in real time, and you can only message in one chat at a time. There’s also a time limit of 30 seconds to respond to messages, which keeps the chat active. When the chat ends, the app will ask you if you want to keep in touch with the other person. Only if both people say yes will you be able to chat with them again.
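
That reconnect rule is effectively a double opt-in, the same pattern dating apps use for matching. A minimal sketch of how it could work, with hypothetical names; this is not Bump’s actual code:

```python
def can_rematch(opt_ins: dict, user_a: str, user_b: str) -> bool:
    """A chat can reopen only if both participants said yes when it ended."""
    return opt_ins.get(user_a, False) and opt_ins.get(user_b, False)

# Example: one yes isn't enough; this pair never sees each other again.
votes = {"alice": True, "bob": False}
assert can_rematch(votes, "alice", "bob") is False
```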

Bump is available on both iOS and Android and is live in Canada and the Philippines. Bump once ranked as high as No. 252 in Social Networking on the Canadian App Store on September 1, 2019, according to Sensor Tower. However, it’s not ranking at all right now.

What’s interesting is that only one of these NPE Team apps, Bump, discloses in its App Store description that the NPE Team is from Facebook. The other, Aux, doesn’t mention this. However, both do point to an App privacy policy that’s hosted on Facebook.com for those who go digging.

That’s not too different from how Google’s in-house app incubator, Area 120, behaves. Some of its apps aren’t clear about their affiliation with Google, save for a link to Google’s privacy policy. It seems these companies want to see if the apps succeed or fail on their own merit, not because of their parent company’s brand name recognition.

Facebook hasn’t said much about its plans for the NPE Team beyond the fact that its apps will focus on new ways of building community and may be shut down quickly if they’re not useful.

We’ve asked Facebook for comment about the new apps and will update if one is provided.



source https://techcrunch.com/2019/11/08/facebooks-first-experimental-apps-from-its-npe-team-division-focus-on-students-chat-music/

Thursday, 7 November 2019

Legislators from ten parliaments put the squeeze on Facebook

The third session of the International Grand Committee on Disinformation, a multi-nation body composed of legislators concerned about the societal impacts of social media giants, has been taking place in Dublin this week — once again without any senior Facebook management in attendance.

The committee was formed last year after Facebook’s CEO Mark Zuckerberg repeatedly refused to give evidence to a wide-ranging UK parliamentary enquiry into online disinformation and the use of social media tools for political campaigns. That snub encouraged joint working by international parliamentarians over a shared concern that’s also a cross-border regulatory and accountability challenge.

But while Zuckerberg still, seemingly, does not feel personally accountable to international parliaments — even as his latest stand-in at today’s committee hearing, policy chief Monika Bickert, proudly trumpeted the fact that 87 per cent of Facebook’s users are people outside the US — global legislators have been growth hacking a collective understanding of nation-state-scale platforms and the deleterious impacts their data-gobbling algorithmic content hierarchies and microtargeted ads are having on societies and democracies around the world.

Incisive questions from the committee today included sceptical scrutiny of Facebook’s claims and aims for the self-styled ‘Content Oversight Board’ it has said will launch next year — with one Irish legislator querying how the mechanism could possibly be independent of Facebook, as well as wondering how a retrospective appeals body could prevent content-driven harms. (On that, Facebook seemed to claim that most complaints it gets from users are about content takedowns.)

Another question was whether the company’s planned Libra digital currency might not at least partially be an attempt to resolve a reputational risk for Facebook, of accepting political ads in foreign currency, by creating a single global digital currency that scrubs away that layer of auditability. Bickert denied the suggestion, saying the Libra project is unrelated to the disinformation issue and “is about access to financial services”.

Twitter’s recently announced total ban on political and issue ads also faced some critical questioning from the committee, with the company asked whether it will ban environmental groups from running ads about climate change while continuing to take money from oil giants that wish to run promoted tweets on the topic. Karen White, Twitter’s director of public policy, said the company is aware of the concern and is still working through the policy detail ahead of a fuller release due later this month.

But it was Facebook that came in for the bulk of criticism during the session, with Bickert fielding the vast majority of legislators’ questions — almost all of which were sceptically framed and some, including from the only US legislator in the room asking questions, outright hostile.

Google’s rep, meanwhile, had a very quiet hour and a half, with barely any questions fired his way, while Twitter won itself plenty of praise from legislators and witnesses for taking a proactive stance and banning political microtargeting altogether.

The question legislators kept returning to during many of today’s sessions, most of which didn’t involve the reps from the tech giants, is how governments can effectively regulate US-based internet platforms whose profits are fuelled by the amplification of disinformation as a mechanism for driving engagement with their services and ads.

Suggestions varied from breaking up tech giants to breaking down business models that were roundly accused of incentivizing the spread of outrageous nonsense for a pure-play profit motive, including by weaponizing people’s data to dart them with ‘relevant’ propaganda.

The committee also heard specific calls for European regulators to hurry up and enforce existing data protection law — specifically the EU’s General Data Protection Regulation (GDPR) — as a possible short-cut route to shrinking the harms legislators appeared to agree are linked to platforms’ data-reliant tracking for individual microtargeting.

A number of witnesses warned that liberal democracies remain drastically unprepared for the ongoing onslaught of malicious, hypertargeted fakes; that adtech giants’ business models are engineered for outrage and social division as an intentional choice and scheme to monopolize attention; and that even if we’ve now passed “peak vulnerability”, in terms of societal susceptibility to Internet-based disinformation campaigns (purely as a consequence of how many eyes have been opened to the risks since 2016), the activity itself hasn’t yet peaked and huge challenges for democratic nation states remain.

The latter point was made by disinformation researcher Ben Nimmo, director of investigations at Graphika.

Multiple witnesses called for Facebook to be prohibited from running political advertising as a matter of urgency, with plenty of barbed questions attacking its recent policy decision not to fact-check political ads.

Others went further — calling for more fundamental interventions to force reform of its business model and/or divest it of the other social platforms it owns. Given the company’s systematic failure to demonstrate it can be trusted with people’s data, that’s enough reason to break it back up into separate social products, runs the argument.

Former BlackBerry co-CEO Jim Balsillie espoused the view that tech giants’ business models are engineered to profit from manipulation, meaning they inherently pose a threat to liberal democracies. Investor and former Zuckerberg mentor Roger McNamee, who has written a book critical of the company’s business model, called for personal data to be treated as a human right — so it cannot be stockpiled and turned into an asset to be exploited by behavior-manipulating adtech giants.

Also giving evidence today, journalist Carole Cadwalladr, who has been instrumental in investigating the Cambridge Analytica Facebook data misuse scandal, suggested no country should be trusting its election to Facebook. She also decried the fact that the UK is now headed to the polls, for a December general election, with no reforms to its electoral law and with key individuals involved in breaches of electoral law during the 2016 Brexit referendum now in positions of greater power to manipulate democratic outcomes. She too added her voice to calls for Facebook to be prohibited from running political ads.

In another compelling testimony, Marc Rotenberg, president and executive director of the Electronic Privacy Information Center (EPIC) in Washington, DC, recounted the long and forlorn history of attempts by US privacy advocates to win changes to Facebook’s policies to respect user agency and privacy — initially from the company itself, before petitioning regulators to try to get them to enforce promises Facebook had reneged on, yet still getting exactly nowhere.

No more ‘speeding tickets’

“We have spent the last many years trying to get the FTC to act against Facebook and over this period of time the complaints from many other consumer organizations and users have increased,” he told the committee. “Complaints about the use of personal data, complaints about the tracking of people who are not Facebook users. Complaints about the tracking of Facebook users who are no longer on the platform. In fact in a freedom of information request brought by Epic we uncovered 29,000 complaints now pending against the company.”

He described the FTC judgement against Facebook, which resulted in a $5BN penalty for the company in June, as a “historic fine” but also essentially just a “speeding ticket” — because the regulator did not enforce any changes to the company’s business model. So, yet another regulatory lapse.

“The FTC left in place Facebook’s business practices and left at risk the users of the service,” he warned, adding: “My message to you today is simple: You must act. You cannot wait. You cannot wait ten years or even a year to take action against this company.”

He too urged legislators to ban the company from engaging in political advertising — until “adequate legal safeguards are established”. “The terms of the GDPR must be enforced against Facebook and they should be enforced now,” Rotenberg added, calling also for Facebook to be required to divest of WhatsApp — “not because of a great scheme to break up big tech but because the company violated its commitments to protect the data of WhatsApp users as a condition of the acquisition”.

In another particularly awkward moment for the social media giant, Keit Pentus-Rosimannus, a legislator from Estonia, asked Bickert directly why Facebook doesn’t stop taking money for political ads.

The legislator pointed out that it has already claimed revenue related to such ads is incremental for its business, making the further point that political speech can simply be freely posted to Facebook (as organic content); ergo, Facebook doesn’t need to take money from politicians to run ads that lie — since they can just post their lies freely to Facebook.

Bickert had no good answer to this. “We think that there should be ways that politicians can interact with their public and part of that means sharing their views through ads,” was her best shot at a response.

“I will say this is an area we’re here today to discuss collaboration, with a thought towards what we should be doing together,” she added. “Election integrity is an area where we have proactively said we want regulation. We think it’s appropriate. Defining political ads and who should run them and who should be able to and when and where. Those are things that we would like to work on regulation with governments.”

“Yet Twitter has done it without new regulation. Why can’t you do it?” pressed Pentus-Rosimannus.

“We think that it is not appropriate for Facebook to be deciding for the world what is true or false and we think that politicians should have an ability to interact with their audiences. So long as they’re following our ads policies,” Bickert responded. “But again we’re very open to how together we could come up with regulation that could define and tackle these issues.”

tl;dr Facebook could be seen once again deploying a policy minion to push for a ‘business as usual’ strategy that functions by seeking to fog the issues and re-frame the notion of regulation as a set of self-serving (and very low friction) ‘guide-rails’, rather than as major business model surgery.

Bickert was doing this even as the committee was hearing from multiple voices making the equal and opposite point with acute force.

Another of those critical voices was Congressman David Cicilline — a US legislator making his first appearance at the Grand Committee. He closely questioned Bickert on how a Facebook user seeing a political ad that contains false information would know they are being targeted with false information, rejecting repeated attempts to misleadingly reframe his question as just being about general targeting data.

“Again, with respect to the veracity, they wouldn’t know they’re being targeted with false information; they would know why they’re being targeted as to the demographics… but not as to the veracity or the falseness of the statement,” he pointed out.

Bickert responded by claiming that political speech is “so heavily scrutinized there is a high likelihood that somebody would know if information is false” — which earned her a withering rebuke.

“Mark Zuckerberg’s theory that sunlight is the best disinfectant only works if an advertisement is actually exposed to sunlight. But as hundreds of Facebook employees made clear in an open letter last week, Facebook’s advanced targeting and behavioral tracking tools — and I quote — ‘make it hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech’ — end quote — as they know — and I quote — ‘these ads are often so microtargeted that the conversations on Facebook’s platforms are much more siloed than on the other platforms,’” said Cicilline.

“So, Ms Bickert, it seems clear that microtargeting prevents the very public scrutiny that would serve as an effective check on false advertisements. And doesn’t the entire justification for this policy completely fall apart, given that Facebook allows politicians both to run fake ads and to distribute those fake ads only to the people most vulnerable to believing them? So this is a good theory about sunlight, but in fact, in practice, your policies permit someone to make false representations and to microtarget who gets them — and so this big public scrutiny that serves as a justification just doesn’t exist.”

Facebook’s head of global policy management responded by claiming there’s “great transparency” around political ads on its platform — as a result of what she dubbed its “unprecedented” political ad library.

“You can look up any ad in this library and see what is the breakdown on the audience who has seen this ad,” she said, further claiming that “many [political ads] are not microtargeted at all”.

“Isn’t the problem here that Facebook has too much power — and shouldn’t we be thinking about breaking up that power rather than allowing Facebook’s decisions to continue to have such enormous consequences for our democracy?” rejoined Cicilline, not waiting for an answer and instead laying down a critical statement. “The cruel irony is that your company is invoking the protections of free speech as a cloak to defend your conduct which is in fact undermining and threatening the very institutions of democracy it’s cloaking itself in.”

The session was long on questions for Facebook and short on answers containing anything other than the most self-serving substance.

Major GDPR enforcements coming in 2020

During a later session, held without any of the tech giants present and intended to let legislators query the state of play of regulating online platforms, Ireland’s data protection commissioner, Helen Dixon, signalled that no major enforcements will be coming against Facebook et al this year — saying instead that decisions on a number of cross-border cases will be coming in 2020.

Ireland has had a plate stacked high with complaints against tech giants since the GDPR came into force in May 2018. Among the 21 “large scale” investigations into big tech companies that remain ongoing are probes around transparency and the lawfulness of data processing by social media platform giants.

The adtech industry’s use of personal data in the real-time bidding programmatic process is also under the regulatory microscope.

Dixon and the Irish Data Protection Commission (DPC) take center stage as a regulator for US tech giants, given how many of these companies have chosen to site their international headquarters in Ireland — encouraged by business-friendly corporate tax rates.

The DPC has a pivotal role on account of a one-stop-shop mechanism within the regulation that allows a data protection agency with primary jurisdiction over a data controller to take the lead on cross-border data processing cases, with other EU member states’ DPAs able to feed into, but not lead, such a complaint.

Some of the Irish DPC’s probes have already lasted as long as the 18 months since GDPR came into force across the bloc.

Dixon argued today that this is still a reasonable timeframe for enforcing an updated data protection regime, despite signalling further delay before any enforcements in these major cases. “It’s a mistake to say there’s been no enforcement… but there hasn’t been an outcome yet to the large scale investigations we have open, underway into the big tech platforms around lawfulness, transparency, privacy by design and default and so on. Eighteen months is not a long time. Not all of the investigations have been open for 18 months,” she said.

“We must follow due process or we won’t secure the outcome in the end. These companies, they’ve market power but they also have the resources to litigate forever. And so we have to ensure we follow due process, we allow them a right to be heard, we conclude the legal analysis carefully by applying the principles in the GDPR to the scenarios at issue and then we can hope to deliver the outcomes that the GDPR promises.

“So that work is underway. We couldn’t be working more diligently at it. And we will have the first sets of decisions that will start rolling out in the very near term.”

Asked by the committee about the level of cooperation the DPC is getting from the tech giants under investigation she said they are “engaging and cooperating” — but also that they’re “challenging at every turn”.

She also expressed a view that it’s not yet clear whether GDPR enforcement will be able to have a near-term impact on reining in any behaviors found to be infringing the law, given further potential legal push back from platforms after decisions are issued.

“The regulated entities are obliged under the GDPR to cooperate with investigations conducted by the data protection authority, and to date, of the 21 large-scale investigations we have opened into big tech organizations, they are engaging and cooperating. With equal measure they’re challenging at every turn as well, and seeking constant clarifications around due process, but they are cooperating and engaging,” she told the committee.

“What remains to be seen is how the investigations we currently have open will conclude. And whether there will ultimately be compliance with the outcomes of those investigations or whether they will be subject to lengthy challenge and so on. So I think the big question of whether we’re going to be able to near-term drive the kind of outcomes we want is still an open question. And it’s awaiting us as a data protection authority to put down the first final decisions in a number of cases.”

She also expressed doubt about whether the GDPR data protection framework will, ultimately, sum to a tool that can regulate underlying business models that are based on collecting data for the purpose of behavioral advertising.

“The GDPR isn’t set up to tackle business models, per se,” she said. “It’s set up to apply principles to data processing operations. And so there’s a complexity when we come to look at something like adtech or online behavioral advertising in that we have to target multiple actors.”



source https://techcrunch.com/2019/11/07/legislators-from-ten-parliaments-put-the-squeeze-on-facebook/