Thursday 31 January 2019

We dismantle Facebook’s memo defending its “Research”

Facebook published an internal memo today trying to minimize the morale damage of TechCrunch’s investigation that revealed it’d been paying people to suck in all their phone data. Obtained by Business Insider’s Rob Price, the memo from Facebook’s VP of production engineering and security Pedro Canahuati gives us more detail about exactly what data Facebook was trying to collect from teens and adults in the US and India. But it also tries to claim the program wasn’t secret, wasn’t spying, and that Facebook doesn’t see it as a violation of Apple’s policy against using its Enterprise Certificate system to distribute apps to non-employees — despite Apple punishing it for the violation.

For reference, Facebook was recruiting users ages 13-35 to install its Research app and VPN, and to give it root network access so it could analyze all their traffic. It’s pretty sketchy to be buying people’s privacy, and despite being shut down on iOS, the program is still running on Android.

Here we lay out the memo with section-by-section responses to Facebook’s claims challenging TechCrunch’s reporting. Our responses are in bold and we’ve added images.

Memo from Facebook VP Pedro Canahuati

APPLE ENTERPRISE CERTS REINSTATED

Early this morning, we received agreement from Apple to issue a new enterprise certificate; this has allowed us to produce new builds of our public and enterprise apps for use by employees and contractors. Because we have a few dozen apps to rebuild, we’re initially focusing on the most critical ones, prioritized by usage and importance: Facebook, Messenger, Workplace, Work Chat, Instagram, and Mobile Home.

New builds of these apps will soon be available and we’ll email all iOS users for detailed instructions on how to reinstall. We’ll also post to iOS FYI with full details.

Meanwhile, we’re expecting a follow-up article from the New York Times later today, so I wanted to share a bit more information and background on the situation.

What happened?

On Tuesday TechCrunch reported on our Facebook Research program. This is a market research program that helps us understand consumer behavior and trends to build better mobile products.

TechCrunch implied we hid the fact that this is by Facebook – we don’t. Participants have to download an app called Facebook Research App to be involved in the study. They also characterized this as “spying,” which we don’t agree with. People participated in this program with full knowledge that Facebook was sponsoring this research, and were paid for it. They could opt-out at any time. As we built this program, we specifically wanted to make sure we were as transparent as possible about what we were doing, what information we were gathering, and what it was for — see the screenshots below.

We used an app that we built ourselves, which wasn’t distributed via the App Store, to do this work. Instead it was side-loaded via our enterprise certificate. Apple has indicated that this broke their Terms of Service, so it disabled our enterprise certificates which allow us to install our own apps on devices outside of the official app store for internal dogfooding.

Author’s response: To start, “build better products” is a vague way of saying Facebook determines what’s popular so it can buy or build it. Facebook has used competitive analysis gathered by its similar Onavo Protect app and Facebook Research app for years to figure out what apps were gaining momentum and either bring them in or box them out. Onavo’s data is how Facebook knew WhatsApp was sending twice as many messages as Messenger, and that it should invest $19 billion to acquire it.

Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause (which owns uTest) and CentreCode (which owns Betabound) to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly.

TechCrunch has reviewed communications indicating Facebook would threaten legal action if a user spoke publicly about being part of the Research program. While the program had run since 2016, it had never been reported on. We believe that these facts combined justify characterizing the program as “secret”.

The Facebook Research program was called Project Atlas until you signed up

How does this program work?

We partner with a couple of market research companies (Applause and CentreCode) to source and onboard candidates based in India and USA for this research project. Once people are onboarded through a generic registration page, they are informed that this research will be for Facebook and can decline to participate or opt out at any point. We rely on a 3rd party vendor for a number of reasons, including their ability to target a Diverse and representative pool of participants. They use a generic initial Registration Page to avoid bias in the people who choose to participate.

After generic onboarding people are asked to download an app called the ‘Facebook Research App,’ which takes them through a consent flow that requires people to check boxes to confirm they understand what information will be collected. As mentioned above, we worked hard to make this as explicit and clear as possible.

This is part of a broader set of research programs we conduct. Asking users to allow us to collect data on their device usage is a highly efficient way of getting industry data from closed ecosystems, such as iOS and Android. We believe this is a valid method of market research.

Author’s response: Facebook claims it wasn’t “spying”, yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not list the specific data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity”.

The parental consent form from Facebook and Applause lists none of the specific types of data collected or the extent of Facebook’s access. Under “Risks/Benefits”, the form states “There are no known risks associated with this project however you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” It gives parents no information about what data their kids are giving up.

Facebook claims it uses third parties to target a diverse pool of participants. Yet Facebook conducts other user feedback and research programs on its own without the need for intermediaries that obscure its identity, and only ran the program in two countries. It claims to use a generic signup page to avoid biasing who will choose to participate, yet the cash incentive and technical process of installing the root certificate also bias who will participate, and the intermediaries conveniently prevent Facebook from being publicly associated with the program at first glance. Meanwhile, other clients of the Betabound testing platform like Amazon, Norton, and SanDisk reveal their names immediately before users sign up.

Facebook’s ads recruiting teens for the program didn’t disclose its involvement

Did we intentionally hide our identity as Facebook?

No — The Facebook brand is very prominent throughout the download and installation process, before any data is collected. Also, the app name on the device appears as “Facebook Research” — see attached screenshots. We use third parties to source participants in the research study, to avoid bias in the people who choose to participate. But as soon as they register, they become aware this is research for Facebook.

Author’s response: Facebook here admits that users did not know Facebook was involved before they registered.

What data do we collect? Do we read people’s private messages?

No, we don’t read private messages. We collect data to understand how people use apps, but this market research was not designed to look at what they share or see. We’re interested in information such as watch time, video duration, and message length, not the actual content of videos, messages, stories or photos. The app specifically ignores information shared via financial or health apps.

Author’s response: We never reported that Facebook was reading people’s private messages, but that it had the ability to collect them. Facebook here admits that the program was “not designed to look at what they share or see”, but stops far short of saying that data wasn’t collected. Fascinatingly, Facebook reveals that it was closely monitoring how much time people spent on different media types.

Facebook Research abused the Enterprise Certificate system meant for employee-only apps

Did we break Apple’s terms of service?

Apple’s view is that we violated their terms by sideloading this app, and they decide the rules for their platform. We’ve worked with Apple to address any issues; as a result, our internal apps are back up and running. Our relationship with Apple is really important — many of us use Apple products at work every day, and we rely on iOS for many of our employee apps, so we wouldn’t put that relationship at any risk intentionally. Mark and others will be available to talk about this further at Q&A later today.

Author’s response: TechCrunch reported that Apple’s policy plainly states that the Enterprise Certificate program requires companies to “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing” and that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers”. Apple took a firm stance in its statement that Facebook did violate the program’s policies, stating “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple.”

Given Facebook distributed the Research apps to teenagers who never signed tax forms or formal employment agreements, they were obviously not employees or contractors, and most likely used some Facebook-owned service that qualified them as customers. Also, I’m pretty sure you can’t pay employees in gift cards.



source https://techcrunch.com/2019/01/31/facebook-researchgate/

Apple reactivates Facebook’s employee apps after punishment for Research spying

After TechCrunch caught Facebook violating Apple’s employee-only app distribution policy to pay people for all their phone data, Apple invalidated the social network’s Enterprise Certificate as punishment. That deactivated not only the Facebook Research VPN app, but also all of Facebook’s internal iOS apps for workplace collaboration, beta testing and even checking the company lunch or bus schedule. That threw Facebook’s offices into chaos yesterday morning. Now, after nearly two work days, Apple has ended Facebook’s time-out and restored its Enterprise Certificate. That means employees can once again access all their office tools, pre-launch test versions of Facebook and Instagram… and the lunch menu.

A Facebook spokesperson issued this statement to TechCrunch: “We have had our Enterprise Certification, which enables our internal employee applications, restored. We are in the process of getting our internal apps up and running. To be clear, this didn’t have an impact on our consumer-facing services.”

Meanwhile, TechCrunch’s follow-up report found that Google was also violating the Enterprise Certificate program with its own “market research” VPN app called Screenwise Meter that paid people to snoop on their phone activity. After we informed Google and Apple yesterday, Google quickly apologized and took down the app. But apparently in service of consistency, this morning Apple invalidated Google’s Enterprise Certificate too, breaking its employee-only iOS apps.

Google’s internal apps are still broken. Unlike Facebook, which has tons of employees on iOS, Google at least employs plenty of users of its own Android platform, so the disruption may have caused fewer problems in Mountain View than in Menlo Park. “We’re working with Apple to fix a temporary disruption to some of our corporate iOS apps, which we expect will be resolved soon,” said a Google spokesperson. A spokesperson for Apple said: “We are working together with Google to help them reinstate their enterprise certificates very quickly.”

TechCrunch’s investigation found that the Facebook Research app not only installed an Enterprise Certificate on users’ phones and a VPN that could collect their data, but also demanded root network access that allows Facebook to man-in-the-middle their traffic and even decrypt secure transmissions. It paid users ages 13 to 35 $10 to $20 per month to run the app so it could collect competitive intelligence on who to buy or copy. The Facebook Research app contained numerous code references to Onavo Protect, the app Apple banned and pushed Facebook to remove last August, yet Facebook kept on operating the Research data collection program.
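
To make concrete what a trusted root certificate plus VPN-level network access enables, here is a minimal sketch of the general interception technique, written as an addon for mitmproxy (a real open-source HTTPS proxy). It illustrates the category of capability TechCrunch describes, not Facebook’s actual code:

```python
# log_traffic.py -- run with: mitmproxy -s log_traffic.py
# Once a device trusts the proxy's root certificate and routes its
# traffic through the proxy (e.g. via a VPN profile), the proxy can
# decrypt and inspect HTTPS requests in flight.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Every intercepted request is visible in plaintext here,
    # including the host, path, headers and body.
    print(flow.request.method, flow.request.pretty_url)
    if flow.request.urlencoded_form:
        print("  form fields:", dict(flow.request.urlencoded_form))
```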

When we first contacted Facebook, it claimed the Research app and its Enterprise Certificate distribution that sidestepped Apple’s oversight was in line with Apple’s policy. Seven hours later, Facebook announced it would shut down the Research app on iOS (though it’s still running on Android, which has fewer rules). Facebook also claimed that “there was nothing ‘secret’ about this”, challenging the characterization of our reporting. However, TechCrunch has since reviewed communications proving that the Facebook Research program threatened legal action if its users spoke publicly about the app. That sounds pretty “secret” to us.

Then we learned yesterday morning that Facebook hadn’t voluntarily pulled the app; Apple had actually already invalidated Facebook’s Enterprise Certificate, thereby breaking the Research app and the social network’s employee tools. Apple provided this brutal statement, which it in turn applied to Google today:

“We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”

Apple is being likened to a vigilante privacy regulator overseeing Facebook and Google by The Verge’s Casey Newton and The New York Times’ Kevin Roose, perhaps with too much power given they’re all competitors. But in this case, both Facebook and Google blatantly violated Apple’s policies to collect the maximum amount of data about iOS users, including teenagers. That means Apple was fully within its rights to shut down their market research apps. Breaking their employee apps too could be seen as just collateral damage, since they all use the same Enterprise Certificate, or as additional punishment for violating the rules. This only becomes a real problem if Apple steps beyond the boundaries of its policies. But now, all eyes are on how it enforces its rules, whether to benefit its users or beat up on its rivals.



source https://techcrunch.com/2019/01/31/mess-with-the-cook/

Twitter cuts off API access to follow/unfollow spam dealers

Notification spam ruins social networks, diluting the real human interaction. Desperate to gain an audience, users pay services to rapidly follow and unfollow tons of people in hopes that some will follow them back. The services can either automate this process or provide tools for users to generate this spam themselves. Earlier this month, a TechCrunch investigation found over two dozen follow-spam companies were paying Instagram to run ads for them. Instagram banned all the services in response and vowed to hunt down similar ones more aggressively.

ManageFlitter’s spammy follow/unfollow tools

Today, Twitter is stepping up its fight against notification spammers. Three of these services — ManageFlitter, Statusbrew and Crowdfire — stopped working earlier today, as spotted by social media consultant Matt Navarra.

TechCrunch inquired with Twitter about whether it had enforced its policy against those companies. A spokesperson provided this comment: “We have suspended these three apps for having repeatedly violated our API rules related to aggressive following & follow churn. As a part of our commitment to building a healthy service, we remain focused on rapidly curbing spam and abuse originating from use of Twitter’s APIs.” These apps will cease to function since they’ll no longer be able to programmatically interact with Twitter to follow or unfollow people or take other actions.

Twitter’s policies specify that “Aggressive following (Accounts who follow or unfollow Twitter accounts in a bulk, aggressive, or indiscriminate manner) is a violation of the Twitter Rules.” This is to prevent a ‘tragedy of the commons’ situation. These services and their customers exploit Twitter’s platform, worsening the experience of everyone else to grow these customers’ follower counts. We dug into these three apps and found they each promoted features designed to help their customers spam Twitter users.
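
As an illustration of what enforcing against “follow churn” can look like, here is a minimal sketch that flags accounts whose combined follow/unfollow volume is aggressive. The event format and daily cap are invented for the example, since Twitter’s real detection logic isn’t public:

```python
from collections import Counter

# Hypothetical one-day event log of (account, action) pairs.
events = (
    [("churn_app_user", "follow")] * 900
    + [("churn_app_user", "unfollow")] * 850
    + [("normal_user", "follow")] * 20
)

def flag_follow_churn(events, daily_cap=400):
    """Return accounts whose combined follow/unfollow volume exceeds the cap."""
    counts = Counter(acct for acct, action in events
                     if action in ("follow", "unfollow"))
    return [acct for acct, n in counts.items() if n > daily_cap]

print(flag_follow_churn(events))  # ['churn_app_user']
```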

ManageFlitter’s site promotes how “Following relevant people on Twitter is a great way to gain new followers. Find people who are interested in similar topics, follow them and often they will follow you back.” For $12 to $49 per month, customers can use this feature shown in the GIF above to rapidly follow others, while another feature lets them check back a few days later and rapidly unfollow everyone who didn’t follow them back.

Crowdfire had already gotten in trouble with Twitter for offering a prohibited auto-DM feature and tools specifically for generating follow notifications. Yet it only changed its functionality to dip just beneath the rate limits Twitter imposes. It seems it preferred charging users up to $75 per month to abuse the Twitter ecosystem rather than accept that what it was doing was wrong.

StatusBrew details how “Many a time when you follow users, they do not follow back . . . thereby, you might want to disconnect with such users after let’s say 7 days. Under ‘Cleanup Suggestion’ we give you a reverse sorted list of the people who’re Not Following Back”. It charges $25 to $416 per month for these spam tools. After losing its API access today, StatusBrew posted a confusing half-mea culpa, half-“it was our customers’ fault” blog post announcing it will shut down its follow/unfollow features.

Twitter tells TechCrunch it will allow these companies to “apply for a new developer account and register a new, compliant app” but the existing apps will remain suspended. I think they deserve an additional time-out period. But still, this is a good step towards Twitter protecting the health of conversation on its platform from greedy spam services. I’d urge the company to also work to prevent companies and sketchy individuals from selling fake followers or follow/unfollow spam via Twitter ads or tweets.

When you can’t trust that someone who follows you is real, the notifications become meaningless distractions, faith in finding real connection sinks, and we become skeptical of the whole app. It’s the users that lose, so it’s the platforms’ responsibility to play referee.



source https://techcrunch.com/2019/01/31/dont-buy-twitter-followers/

Facebook just removed a new wave of suspicious activity linked to Iran

Facebook just announced its latest round of “coordinated inauthentic behavior” takedowns, this time out of Iran. The company took down 262 Pages, 356 accounts, three Facebook groups and 162 Instagram accounts that exhibited “malicious-looking indicators” and patterns that identify the activity as potentially state-sponsored or otherwise deceptive and coordinated.

As Facebook Head of Cybersecurity Policy Nathaniel Gleicher noted in a press call, Facebook coordinated closely with Twitter to discover these accounts, and by collaborating early and often the company “[was] able to use that to build up our own investigation.” Today, Twitter published a postmortem on its efforts to combat misinformation during the US midterm election last year.

Example of the content removed

As the Newsroom post details, the activity affected a broad swath of areas around the globe:

“There were multiple sets of activity, each localized for a specific country or region, including Afghanistan, Albania, Algeria, Bahrain, Egypt, France, Germany, India, Indonesia, Iran, Iraq, Israel, Libya, Mexico, Morocco, Pakistan, Qatar, Saudi Arabia, Serbia, South Africa, Spain, Sudan, Syria, Tunisia, US, and Yemen. The Page administrators and account owners typically represented themselves as locals, often using fake accounts, and posted news stories on current events… on topics like Israel-Palestine relations and the conflicts in Syria and Yemen, including the role of the US, Saudi Arabia, and Russia.”

Today’s takedown is the result of an internal investigation linking the newly discovered activity to other content out of Iran late last year. Remarkably, the activity Facebook flagged today dates back to 2010.

The Iranian activity was not focused on creating real-world events, as we’ve seen in other cases. In many cases, the content “repurposed” reporting from Iranian state media and spread ideas that could benefit Iran’s positions on various geopolitical issues. Still, Facebook declined to link the newly identified activity to Iran’s government directly.

“Whenever we make an announcement like this we’re really careful,” Gleicher said. “We’re not in a position to directly assert who the actor is in this case, we’re asserting what we can prove.”



source https://techcrunch.com/2019/01/31/facebook-iran-2019/

Facebook users who quit the social network for a month feel happier

New research out of Stanford and New York University took a look at what happens when people step back from Facebook for a month.

Through Facebook, the research team recruited 2,488 people who averaged an hour of Facebook use each day. After assessing their “willingness to accept” the idea of deactivating their account for a month, the study assigned eligible participants to an experimental category that would deactivate their accounts or a control group that would not.

Over the course of the month-long experiment, researchers monitored compliance by checking participants’ profiles. The participants self-reported a rotating set of well-being measures in real time, including happiness, what emotion a participant felt over the last 10 minutes and a measure of loneliness.
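
In analysis terms, a randomized deactivation experiment like this boils down to comparing average well-being between the deactivated and control groups. Here is a minimal sketch of that comparison using scipy, with invented numbers rather than the study’s actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 0-10 happiness scores; the deactivated group is shifted up slightly.
control = rng.normal(6.0, 1.5, size=1200)
deactivated = rng.normal(6.2, 1.5, size=1200)

# Difference in means plus a two-sample t-test for significance.
t, p = stats.ttest_ind(deactivated, control)
print(f"effect = {deactivated.mean() - control.mean():.2f} points, p = {p:.4f}")
```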

As the researchers report, leaving Facebook correlated with improvements on well-being measures. They found that the group tasked with quitting Facebook ended up spending less time on other social networks too, instead devoting more time to offline activities like spending time with friends and family (good) and watching television (maybe not so good). Overall the group reported that it spent less time consuming news in general.

The group that quit Facebook also reported less time spent on the social network after the study-imposed hiatus was up, suggesting that the break might have given them new insight into their own habits.

“Reduced post-experiment use aligns with our finding that deactivation improved subjective well-being, and it is also consistent with the hypotheses that Facebook is habit forming… or that people learned that they enjoy life without Facebook more than they had anticipated,” the paper’s authors wrote.

There are a few things to be aware of with the research. The paper notes that subjects were told they would “keep [their] access to Facebook Messenger.” Though the potential impact of letting participants remain on Messenger isn’t mentioned again, it sounds like they were still freely using one of the platform’s main functions, though perhaps one with fewer potential negative effects on mood and behavior.

Unlike some recent research, this study was conducted by economics researchers. That’s not unusual for social psych-esque stuff like this, but it does inform aspects of the method, the measures used and the perspective.

Most important for a bit more context, the research was conducted in the run-up to the 2018 U.S. midterm elections. That fact is likely to have informed participants’ attitudes around social media, both before and after the election.

While the participants reported that they were less informed about current events, they also showed evidence of being less politically polarized, “consistent with the concern that social media have played some role in the recent rise of polarization in the US.”

In an era of ubiquitous threats to quit the world’s biggest social network, the fact remains that we mostly have no idea what our online habits are doing to our brains and behavior. Given that, we also don’t know what happens when we step back from social media environments like Facebook and give our brains a reprieve. With its robust sample size and fairly thorough methodology, this study provides us a useful glimpse into those effects. For more insight into the research, you can read the full paper here.



source https://techcrunch.com/2019/01/31/stanford-nyu-econ-facebook-study/

Digital influencers and the dollars that follow them

Animated characters are as old as human storytelling itself, dating back thousands of years to cave drawings that depict animals in motion. It was really in the last century, however — a period bookended by the first animated short film in 1908 and Pixar’s success with computer animation with Toy Story from 1995 onward — that animation leapt forward. Fundamentally, this period of great innovation sought to make it easier to create an animated story for an audience to passively consume in a curated medium, such as a feature-length film.

Our current century could be set for even greater advances in the art and science of bringing characters to life. Digital influencers — virtual or animated humans that live natively on social media — will be central to that undertaking. Digital influencers don’t merely represent the penetration of cartoon characters into yet another medium, much as they sprang from newspaper strips to TV and the multiplex. Rather, digital humans on social media represent the first instance in which fictional entities act in the same plane of communication as you and I — regular people — do. Imagine if stories about Mickey Mouse were told over a telephone or in personalized letters to fans. That’s the kind of jump we’re talking about.

Social media is a new storytelling medium, much as film was a century ago. As with film then, we have yet to transmit virtual characters to this new medium in a sticky way.

Which isn’t to say that there aren’t digital characters living their lives on social channels right now. The pioneers have arrived: Lil’ Miquela, Astro, Bermuda and Shudu are prominent examples. But they are still only notable for their novelty, not yet their ubiquity. They represent the output of old animation techniques applied to a new medium. This TechCrunch article did a great job describing the current digital influencer landscape.

So why haven’t animated characters taken off on social media platforms? It’s largely an issue of scale — it’s expensive and time-consuming to create animated characters and to depict their adventures. One 2017 estimate stated that a 60- to 90-second animation took about six weeks to create. An episode of animated TV takes around 13 months to produce, typically with large teams in South Korea doing much of the animation legwork. That pace simply doesn’t work in a medium that calls for new original content multiple times a day.

Yet the technical piece of the puzzle is falling into place, which is primarily what I want to talk about today. Traditionally, virtual characters were created by a team of experts — not scalable — in the following way:

  • Create a 3D model
  • Texture the model and add additional materials
  • Rig the 3D model skeleton
  • Animate the 3D model
  • Introduce character into desired scene
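
As a rough illustration of why this traditional pipeline is slow, here is a schematic Python sketch that models the five stages as sequential handoffs. The class and function names are invented for illustration; real pipelines live in tools like Maya or Blender:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """Accumulates the artifacts of the traditional five-stage CGI pipeline."""
    mesh: list = field(default_factory=list)       # 1. 3D model
    textures: dict = field(default_factory=dict)   # 2. materials
    skeleton: list = field(default_factory=list)   # 3. rig
    keyframes: list = field(default_factory=list)  # 4. animation
    scene: str = ""                                # 5. placement

def model(c):    c.mesh = ["head", "torso", "limbs"]; return c
def texture(c):  c.textures = {"skin": "skin_map.png"}; return c
def rig(c):      c.skeleton = ["spine", "arm_l", "arm_r"]; return c
def animate(c):  c.keyframes = [(0, "idle"), (24, "wave")]; return c
def place(c, where): c.scene = where; return c

# Each call below stands in for days of specialist work, done in strict
# sequence; a bottleneck at any stage stalls the whole character.
influencer = place(animate(rig(texture(model(Character())))), "beach_sunset")
print(influencer)
```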

Today, there are generally three types of virtual avatar: realistic high-resolution CGI avatars, stylized CGI avatars and manipulated video avatars. Each has its strengths and pitfalls, and the fast-approaching world of scaled digital influencers will likely incorporate aspects of all three.

The digital influencers mentioned above are all high-resolution CGI avatars. It’s unsurprising that this tech has breathed life into the most prominent digital influencers so far — this type of avatar offers the most creative latitude and photorealism. You can create an original character and have her carry out varied activities.

The process for their creation borrows most from the old-school CGI pipeline described above, though accelerated through the use of tools like Daz3D for animation, Moka Studio for rigging, and Rokoko for motion capture. It’s old wine in new bottles. Naturally, it shares the same bottlenecks as the old-school CGI pipeline: creating characters in this way consumes a lot of time and expertise.

Though researchers, like Ari Shapiro at the University of Southern California Institute for Creative Technologies, are currently working on ways to automate the creation of high-resolution CGI avatars, that bottleneck remains the obstacle for digital influencers entering the mainstream.

Stylized CGI avatars, on the other hand, have entered the mainstream. If you have an iPhone or use Snapchat, chances are you have one. Apple, Samsung, Pinscreen, Loom.ai, Embody Digital, Genies and Expressive.ai are just some of the companies playing in this space. These avatars, while likely to spread ubiquitously à la Bitmoji before them, are limited in scope.

While they extend the ability to create an animated character to anyone who uses an associated app, that creation and personalization is circumscribed: the avatar’s range is limited for the purposes of what we’re discussing in this article. It’s not so much a technology for creating new digital humans as it is a tool for injecting a visual shorthand for someone into the digital world. You’ll use it to embellish your Snapchat game, but storytellers will be unlikely to use these avatars to create a spiritual successor to Mickey Mouse and Buzz Lightyear (though they will be a big advertising / brand partnership opportunity nonetheless).

Video manipulation — you probably know it as deepfakes — is another piece of tech that is speeding virtual or fictional characters into the mainstream. As the name implies, however, it’s more about warping reality to create something new. Anyone who has seen Nicolas Cage’s striking features dropped onto Amy Adams’ body in a Superman film will understand what I’m talking about.

Open-source packages like this one allow almost anyone to create a deepfake (with some technical knowhow — your grandma probably hasn’t replaced her time-honored Bingo sessions with some casual deepfaking). It’s principally used by hobbyists, though recently we’ve seen startups like Synthesia crop up with business use cases. You can use deepfake tech for mimicry, but we haven’t yet seen it used for creating original characters. It shares some of the democratizing aspects of stylized CGI avatars, and there are likely many creative applications for the tech that simply haven’t been realized yet.

While none of these technology stacks on their own currently enable digital humans at scale, when combined they may make up the wardrobe that takes us into Narnia. Video manipulation, for example, could be used to scale realistic high-res characters like Lil’ Miquela through accelerating the creation of new stories and tableaux for her to inhabit. Nearly all of the most famous animated characters have been stylized, and I wouldn’t bet against social media’s Snow White being stylized too. What is clear is that the technology to create digital influencers at scale is nearing a tipping point. When we hit that tipping point, these creations will transform entertainment and storytelling.



source https://techcrunch.com/2019/01/31/digital-influencers-and-the-dollars-that-follow-them/

Leaked TikTok ad deck suggests it has 17M+ MAUs in Europe

An advertising pitch deck used by fast-growing short form video sharing app TikTok has leaked, providing a snapshot of usage in its biggest markets in Europe.

The pitch deck was obtained by Digiday which says it was sent to a large (unnamed) European ad agency.

Metrics and gender breakdowns for the UK, France, Germany, Spain and Italy are included in the deck. The slides are dated November 2018.

Germany and France come out as the top European markets for the video sharing app, according to the deck, with 4.1M+ and 4M+ monthly active users respectively, and an average of 6.5BN and 5BN video views.

Next is the UK, with 3.7M+ users (and 5BN video views); followed by Spain with 2.7M+ users (and 3BN video views); and Italy with 2.4M+ users (and 3BN views).
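
Notably, the headline 17M+ figure appears to be the sum of these five markets: since each per-country number is a floor (hence the “+”), the total clears 17M. A quick sanity check:

```python
# MAUs in millions, per the leaked November 2018 slides.
maus_millions = {"Germany": 4.1, "France": 4.0, "UK": 3.7, "Spain": 2.7, "Italy": 2.4}

# 16.9 -- each entry is an "M+" floor, so the true total is 17M+.
print(round(sum(maus_millions.values()), 1))
```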

Last summer Beijing’s ByteDance, the company behind TikTok, said the app had passed 500 million monthly active users worldwide.

Analyst estimates suggest it’s had around 800M downloads in total since launching in fall 2016.

Usage stepped up in 2017, though, after ByteDance shelled out to acquire rival lip-sync video app Musical.ly — paying between $800M and $1BN to bag and merge its 60M (mostly US) users.

In the UK, France and Germany TikTok users open the app an average of 8 times per day, according to the leaked deck, vs 6 times in Italy and Spain.

While UK users clock up the most time spent in the app, with an average of 41 minutes per day; followed by France (40 minutes); Germany (39 minutes); Italy (34 minutes); and Spain (31 minutes).

Users of the app skew female across all five markets but the skew is greatest in Italy and Spain, which both have a 65:35 female to male ratio.

The smallest skew is in Germany where the female to male ratio of users is 54:46.

The pitch deck also details ad formats TikTok is selling in the region, covering four ad products and how they are measured.

The listed ad products are: Brand takeover; in-feed native video; hashtag challenge; and Snapchat-style 2D lens filters for photos — with 3D and AR lenses listed as “coming soon” (2019, per another slide).

The slides do not include prices for the ad formats, but Digiday cites one media buyer who told it the company is charging $10 CPMs for fixed buys. It says another media exec told it agencies are being given different rates, with the person having heard higher prices for the brand takeover ad unit, for example.
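
For context on what a $10 CPM implies: CPM is the cost per thousand impressions, so a fixed buy’s price scales linearly with the impressions purchased. A quick worked example:

```python
def campaign_cost(impressions: int, cpm_usd: float = 10.0) -> float:
    """CPM = cost per 1,000 impressions."""
    return impressions / 1000 * cpm_usd

# A 5M-impression buy at a $10 CPM costs $50,000.
print(campaign_cost(5_000_000))  # 50000.0
```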

We’ve reached out to TikTok for comment.



source https://techcrunch.com/2019/01/31/leaked-tiktok-ad-deck-suggests-it-has-17m-maus-in-europe/

Social media should have ‘duty of care’ towards kids, UK MPs urge

Social media platforms are being urged to be far more transparent about how their services operate and to make “anonymised high-level data” available to researchers so the technology’s effects on users — and especially on children and teens — can be better understood.

The calls have been made in a report by the UK parliament’s Science and Technology Committee which has been looking into the impacts of social media and screen use among children — to consider whether such tech is “healthy or harmful”.

“Social media companies must also be far more open and transparent regarding how they operate and particularly how they moderate, review and prioritise content,” it writes.

Concerns have been growing about children’s use of social media and mobile technology for some years now, with plenty of anecdotal evidence and also some studies linking tech use to developmental problems, as well as distressing stories connecting depression and even suicide to social media use.

The committee writes that its dive into the topic was hindered by “the limited quantity and quality of academic evidence available”. But it also asserts: “The absence of good academic evidence is not, in itself, evidence that social media and screens have no effect on young people.”

“We found that the majority of published research did not provide a clear indication of causation, but instead indicated a possible correlation between social media/screens and a particular health effect,” it continues. “There was even less focus in published research on exactly who was at risk and if some groups were potentially more vulnerable than others when using screens and social media.”

The UK government expressed its intention to legislate in this area, announcing a plan last May to “make social media safer” — promising new online safety laws to tackle concerns.

The committee writes that it’s therefore surprised the government has not commissioned “any new, substantive research to help inform its proposals”, and suggests it get on and do so “as a matter of urgency” — with a focus on identifying people at risk of experiencing harm online and on social media; the reasons for the risk factors; and the longer-term consequences of children’s exposure to the tech.

It further suggests the government should consider what legislation is required to improve researchers’ access to this type of data, given platforms have failed to provide enough access for researchers of their own accord.

The committee says it heard evidence of a variety of instances where social media could be “a force for good” but also received testimonies about some of the potential negative impacts of social media on the health and emotional wellbeing of children.

“These ranged from detrimental effects on sleep patterns and body image through to cyberbullying, grooming and ‘sexting’,” it notes. “Generally, social media was not the root cause of the risk but helped to facilitate it, while also providing the opportunity for a large degree of amplification. This was particularly apparent in the case of the abuse of children online, via social media.

“It is imperative that the government leads the way in ensuring that an effective partnership is in place, across civil society, technology companies, law enforcement agencies, the government and non-governmental organisations, aimed at ending child sexual exploitation (CSE) and abuse online.”

The committee suggests the government commission specific research to establish the scale and prevalence of online CSE — pushing it to set an “ambitious target” to halve reported online CSE in two years and “all but eliminate it in four”.

A duty of care

A further recommendation will likely send a shiver down tech giants’ spines, with the committee urging that a duty of care principle be enshrined in law for social media users under 18 years of age, to protect them from harm when on social media sites.

Such a duty would up the legal risk stakes considerably for user generated content platforms which don’t bar children from accessing their services.

The committee suggests the government could achieve that by introducing a statutory code of practice for social media firms, via new primary legislation, to provide “consistency on content reporting practices and moderation mechanisms”.

It also recommends a requirement in law for social media companies to publish detailed Transparency Reports every six months.

It also calls for a 24-hour takedown law for illegal content, saying that platforms should have to review reports of potentially illegal content, take a decision on whether to remove, block or flag it, and relay that decision to the individual or organisation who reported it — all within 24 hours.

Germany already legislated for such a law, back in 2017 — though in that case the focus is on speeding up hate speech takedowns.

In Germany social media platforms can be fined up to €50 million if they fail to comply with the NetzDG law, as it is known by its truncated German name. (The EU executive has also been pushing platforms to remove terrorist-related material within an hour of a report, suggesting it too could legislate on this front if they fail to moderate content fast enough.)

The committee suggests the UK’s media and telecoms regulator, Ofcom, would be well placed to oversee how illegal content is handled under any new law.

It also recommends that social media companies use AI to identify and flag to users (or remove as appropriate) content that “may be fake” — pointing to the risk posed by new technologies such as “deep fake videos”.

More robust systems for age verification are also needed, in the committee’s view. It writes that these must go beyond “a simple ‘tick box’ or entering a date of birth”.

Looking beyond platforms, the committee presses the government to take steps to improve children’s digital literacy and resilience, suggesting PSHE (personal, social and health) education should be made mandatory for primary and secondary school pupils — delivering “an age-appropriate understanding of, and resilience towards, the harms and benefits of the digital world”.

Teachers and parents should also not be overlooked, with the committee suggesting training and resources for teachers and awareness and engagement campaigns for parents.



source https://techcrunch.com/2019/01/31/social-media-should-have-duty-of-care-towards-kids-uk-mps-urge/

Social media should have “duty of care” towards kids, UK MPs urge

Social media platforms are being urged to be far more transparent about how their services operate and to make “anonymised high-level data” available to researchers so the technology’s effects on users — and especially on children and teens — can be better understood.

The calls have been made in a report by the UK parliament’s Science and Technology Committee which has been looking into the impacts of social media and screen use among children — to consider whether such tech is “healthy or harmful”.

“Social media companies must also be far more open and transparent regarding how they operate and particularly how they moderate, review and prioritise content,” it writes.

Concerns have been growing about children’s use of social media and mobile technology for some years now, with plenty of anecdotal evidence and also some studies linking tech use to developmental problems, as well as distressing stories connecting depression and even suicide to social media use.

Although the committee writes that its dive into the topic was hindered by “the limited quantity and quality of academic evidence available”. But it also asserts: “The absence of good academic evidence is not, in itself, evidence that social media and screens have no effect on young people.”

“We found that the majority of published research did not provide a clear indication of causation, but instead indicated a possible correlation between social media/screens and a particular health effect,” it continues. “There was even less focus in published research on exactly who was at risk and if some groups were potentially more vulnerable than others when using screens and social media.”

The UK government expressed its intention to legislate in this area, announcing a plan last May to “make social media safer” — promising new online safety laws to tackle concerns.

The committee writes that it’s therefore surprised the government has not commissioned “any new, substantive research to help inform its proposals”, and suggests it get on and do so “as a matter of urgency” — with a focus on identifying people at risk of experiencing harm online and on social media; the reasons for the risk factors; and the longer-term consequences of the tech’s exposure on children.

It further suggests the government should consider what legislation is required to improve researchers’ access to this type of data, given platforms have failed to provide enough access for researchers of their own accord.

The committee says it heard evidence of a variety of instances where social media could be “a force for good” but also received testimonies about some of the potential negative impacts of social media on the health and emotional wellbeing of children.

“These ranged from detrimental effects on sleep patterns and body image through to cyberbullying, grooming and ‘sexting’,” it notes. “Generally, social media was not the root cause of the risk but helped to facilitate it, while also providing the opportunity for a large degree of amplification. This was particularly apparent in the case of the abuse of children online, via social media.

“It is imperative that the government leads the way in ensuring that an effective partnership is in place, across civil society, technology companies, law enforcement agencies, the government and non-governmental organisations, aimed at ending child sexual exploitation (CSE) and abuse online.”

The committee suggests the government commission specific research to establish the scale and prevalence of online CSE — pushing it to set an “ambitious target” to halve reported online CSE in two years and “all but eliminate it in four”.

A duty of care

A further recommendation will likely send a shiver down tech giants’ spines, with the committee urging a duty of care principle be enshrined in law for social media users under 18 years of age to protect them from harm when on social media sites.

Such a duty would up the legal risk stakes considerably for user generated content platforms which don’t bar children from accessing their services.

The committee suggests the government could achieve that by introducing a statutory code of practice for social media firms, via new primary legislation, to provide “consistency on content reporting practices and moderation mechanisms”.

It also recommends a requirement in law for social media companies to publish detailed Transparency Reports every six months.

It is also for a 24 hour takedown law for illegal content, saying that platforms should have to review reports of potentially illegal content and take a decision on whether to remove, block or flag it — and reply the decision to the individual/organisation who reported it — within 24 hours.

Germany already legislated for such a law, back in 2017 — though in that case the focus is on speeding up hate speech takedowns.

In Germany social media platforms can be fined up to €50 million if they fail to comply with the law, known by its truncated German name, NetzDG. (The EU executive has also been pushing platforms to remove terrorist-related material within an hour of a report, suggesting it too could legislate on this front if they fail to moderate content fast enough.)

The committee suggests the UK’s media and telecoms regulator, Ofcom, would be well-placed to oversee how illegal content is handled under any new law.

It also recommends that social media companies use AI to identify and flag to users (or remove as appropriate) content that “may be fake” — pointing to the risk posed by new technologies such as “deep fake videos”.

More robust systems for age verification are also needed, in the committee’s view. It writes that these must go beyond “a simple ‘tick box’ or entering a date of birth”.

Looking beyond platforms, the committee presses the government to take steps to improve children’s digital literacy and resilience, suggesting PSHE (personal, social, health and economic) education should be made mandatory for primary and secondary school pupils — delivering “an age-appropriate understanding of, and resilience towards, the harms and benefits of the digital world”.

Teachers and parents should also not be overlooked, with the committee suggesting training and resources for teachers and awareness and engagement campaigns for parents.



source https://techcrunch.com/2019/01/31/social-media-should-have-duty-of-care-towards-kids-uk-mps-urge/

Wednesday 30 January 2019

Facebook plans new products as Instagram Stories hits 500M users/day

Roughly half of Instagram’s 1 billion users now use Instagram Stories every day. That 500 million daily user count is up from 400 million in June 2018. Meanwhile, 2 million advertisers are now buying Stories ads across Facebook’s properties.

CEO Mark Zuckerberg called Stories the last big game-changing feature from Facebook, but after concentrating on security last year, it plans to ship more products that make “major improvements” in people’s lives.

During today’s Q4 2018 earnings call, Zuckerberg outlined several areas where Facebook will push new products this year:

  • Encryption and ephemerality will be added to more features for security and privacy
  • Messaging features will make Messenger and WhatsApp “the center of [your] social experiences”
  • WhatsApp payments will expand to more countries
  • Stories will gain new private sharing options
  • Groups will become an organizing function of Facebook on par with friends & family
  • Facebook Watch will become mainstream this year as video is moved there from the News Feed, Zuckerberg expects
  • Augmented and virtual reality will be improved, and Oculus Quest will ship this spring
  • Instagram commerce and shopping will get new features

Zuckerberg was asked about Facebook’s plan to unify the infrastructure to allow encrypted cross-app messaging between Facebook Messenger, Instagram, and WhatsApp, as first reported by NYT’s Mike Isaac. Zuckerberg explained that the plan wasn’t about a business benefit, but supposedly to improve the user experience. Specifically, it would allow Marketplace buyers and sellers in countries where WhatsApp dominates messaging to use that app to chat instead of Messenger. And for Android users who use Messenger as their SMS client, the unification would allow those messages to be sent with encryption too. He sees expanding encryption here as a way to decentralize Facebook and keep users’ data safe by never having it on the company’s servers. However, Zuckerberg says this will take time and could be a “2020 thing”.

Facebook says it now has 2.7 billion monthly users across the Facebook family of apps: Facebook, Instagram, Messenger, and WhatsApp. However, Facebook CFO David Wehner says “Over time we expect family metrics to play the primary role in how we talk about our company and we will eventually phase out Facebook-only community metrics.” That shows Facebook is self-conscious about how its user base is shifting away from its classic social network and towards Instagram and its messaging apps. Family-only metrics could mask how teens are slipping away.



source https://techcrunch.com/2019/01/30/instagram-stories-500-million/

Facebook shares shoot up after strong Q4 earnings despite data breach

Facebook managed to beat Wall Street’s estimates in its Q4 earnings amidst a constant beat-down in the press. Facebook hit 2.32 billion monthly users, up 2.2 percent from 2.27 billion last quarter, speeding up its growth rate. Facebook climbed to 1.52 billion daily active users, up from 1.49 billion last quarter, for a 2 percent growth rate that dwarfed last quarter’s 1.36 percent.

Facebook earned $16.91 billion off all those users with a $2.38 GAAP earnings per share. Those numbers handily beat Wall Street’s expectations of $16.39 billion in revenue and $2.18 GAAP earnings per share, plus 2.32 billion monthly and 1.51 billion daily active users. Facebook’s daily to monthly user ratio, or stickiness, held firm at 66 percent where it’s stayed for years, showing those still on Facebook aren’t using it much less.
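For reference, the “stickiness” figure is just daily actives divided by monthly actives; a quick back-of-the-envelope check of the reported numbers (our arithmetic, not Facebook’s) lines up with that 66 percent:

\[
\text{stickiness} = \frac{\text{DAU}}{\text{MAU}} = \frac{1.52\ \text{billion}}{2.32\ \text{billion}} \approx 0.655 \approx 66\%
\]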

Facebook shares had closed today at $150.42 but shot up over 9 percent following the record revenue and profit announcements to hover around $162. A big 30 percent year-over-year boost in average revenue per user in North America fueled those gains. Yet the share price is still way down from the $186 it traded at a year ago and its peak of $217 in July.

CEO Mark Zuckerberg went beyond his usual intro to the earnings report where he assures investors things are going well and highlights new opportunities. This quarter he noted “We’ve fundamentally changed how we run our company to focus on the biggest social issues, and we’re investing more to build new and inspiring ways for people to connect.”

Squeezing Money From The Olds

Facebook managed to grow its DAU in both the critical US & Canada and Europe markets, where it earns the most money, after stagnation or shrinkage in previous quarters. The fact that Facebook is no longer dwindling in its most lucrative markets is surely contributing to its share price climb. Facebook’s monthly active user count plateaued in North America but roared up in Europe. The picture was also shored up by a reversal of last quarter’s decline in Rest Of World average revenue per user, which fell 4.7 percent in Q3 but bounced back with 16.5 percent growth in Q4.


Facebook raked in $6.8 billion in profit this quarter as it slowed down hiring and only grew headcount 5 percent from 33,606 to 35,587. It seems Facebook has gotten to a comfortable place with its security staff-up in the wake of election interference, fake news, and content moderation troubles. Its revenue is up 30 percent year-over-year while profits grew 61 percent, which is pretty remarkable for a 15-year-old technology company.

But morale isn’t quite as rosy. It’s been a brutal quarter for Facebook. At least its swifter user growth rates show Facebook survived its biggest ever data breach without scaring off too many people. Meanwhile it’s continuously struggled with scandals like hiring opposition research firm Definers, and it saw its new teen app Lasso largely flop. Facebook will have to convince investors it knows how to win back the next generation, or at least keep squeezing a lot more money out of the last one like it did in Q4.



source https://techcrunch.com/2019/01/30/facebook-earnings-q4-2018/