Wednesday, 31 July 2019

Snapchat launches ‘instant’ tool for creating vertical ads

Snapchat is hoping to attract new advertisers (and make advertising easier for the ones already on the platform) with the launch of a new tool called Instant Create.

Some of these potential advertisers may not be used to creating ads in the smartphone-friendly vertical format that Snapchat has popularized, so Instant Create is designed to make the process as simple as possible.

Executives at parent organization Snap discussed the tool during last week’s earnings call (in which the company reported that its daily active users increased to 203 million).

“Just this month we started testing our new Instant Create on-boarding flow, which generates ads for businesses in three simple steps from their existing assets, be it their app or their ecommerce storefront,” said CEO Evan Spiegel.

Now the product is moving from testing to availability for all advertisers using Snapchat’s self-serve Ads Manager.

Snapchat Instant Create

Those three steps that Spiegel mentioned involve identifying the objective of a campaign (website visits, app installs or app visits), entering your website address and finalizing your audience targeting.

You can upload your creative assets if you want, but that’s not required since Instant Create will also import images from your website. And Snap notes that you won’t need to do any real design work, because there’s “a streamlined ad creation flow that leverages our most popular templates and simplified ad detail options, enabling you to publish engaging creative without additional design resources.”

The goal is to make Snapchat advertising accessible to smaller advertisers who may not have the time or resources to try to understand new ad formats. After all, on that same earnings call, Chief Business Officer Jeremi Gorman said, “We believe the single biggest driver for our revenue in the short to medium term will be increasing the number of active advertisers using Snapchat.”

Instant Create is currently focused on Snapchat’s main ad format, Snap Ads. You can read more in the company’s blog post.



source https://techcrunch.com/2019/07/31/snapchat-instant-create/

Tuesday, 30 July 2019

Former Cambridge Analytica director, Brittany Kaiser, dumps more evidence of Brexit’s democratic trainwreck

A UK parliamentary committee has published new evidence fleshing out how membership data was passed from UKIP, a pro-Brexit political party, to Leave.EU, a Brexit supporting campaign active in the 2016 EU referendum — via the disgraced and now defunct data company, Cambridge Analytica.

In evidence sessions last year, during the DCMS committee’s enquiry into online disinformation, it was told by both the former CEO of Cambridge Analytica, and the main financial backer of the Leave.EU campaign, the businessman Arron Banks, that Cambridge Analytica did no work for the Leave.EU campaign.

Documents published today by the committee clearly contradict that narrative — revealing internal correspondence about the use of a UKIP dataset to create voter profiles to carry out “national microtargeting” for Leave.EU.

They also show CA staff raising concerns about the legality of the plan to model UKIP data to enable Leave.EU to identify and target receptive voters with pro-Brexit messaging.

The UK’s 2016 in-out EU referendum saw the voting public narrowly vote to leave — by 52:48.

New evidence from Brittany Kaiser

The evidence, which includes emails between key Cambridge Analytica employees and those of Leave.EU and UKIP, has been submitted to the DCMS committee by Brittany Kaiser — a former director of CA (who you may just have seen occupying a central role in Netflix’s The Great Hack documentary, which digs into links between the Trump campaign and the Brexit campaign).

“As you can see with the evidence… chargeable work was completed for UKIP and Leave.EU, and I have strong reasons to believe that those datasets and analysed data processed by Cambridge Analytica as part of a Phase 1 payable work engagement… were later used by the Leave.EU campaign without Cambridge Analytica’s further assistance,” writes Kaiser in a covering letter to committee chair, Damian Collins, summarizing the submissions.

Kaiser gave oral evidence to the committee at a public hearing in April last year.

At the time she said CA had been undertaking parallel pitches for Leave.EU and UKIP — as well as for two insurance brands owned by Banks — and had used membership survey data provided by UKIP to build a model for pro-Brexit voter personality types, with the intention of it being used “to benefit Leave.EU”.

“We never had a contract with Leave.EU. The contract was with the UK Independence party for the analysis of this data, but it was meant to benefit Leave.EU,” she said then.

The new emails submitted by Kaiser back up her earlier evidence. They also show there was discussion of drawing up a contract between CA, UKIP and Leave.EU in the fall before the referendum vote.

In one email — dated November 10, 2015 — CA’s COO & CFO, Julian Wheatland, writes that: “I had a call with [Leave.EU’s] Andy Wigmore today (Arron’s right hand man) and he confirmed that, even though we haven’t got the contract with the Leave written up, it’s all under control and it will happen just as soon as [UKIP-linked lawyer] Matthew Richardson has finished working out the correct contract structure between UKIP, CA and Leave.”

Another item Kaiser has submitted to the committee is a separate November email from Wigmore, inviting press to a briefing by Leave.EU — entitled “how to win the EU referendum” — an event at which Kaiser gave a pitch on CA’s work. In this email Wigmore describes the firm as “the worlds leading target voter messaging campaigners”.

In another document, CA’s Wheatland is shown in an email thread ahead of that presentation telling Wigmore and Richardson “we need to agree the line in the presentations next week with regards the origin of the data we have analysed”.

“We have generated some interesting findings that we can share in the presentation, but we are certain to be asked where the data came from. Can we declare that we have analysed UKIP membership and survey data?” he then asks.

UKIP’s Richardson replies with a negative, saying: “I would rather we didn’t, to be honest” — adding that he has a meeting with Wigmore to discuss “all of this”, and ending with: “We will have a plan by the end of that lunch, I think”.

In another email, dated November 10, sent to multiple recipients ahead of the presentation, Wheatland writes: “We need to start preparing Brittany’s presentation, which will involve working with some of the insights David [Wilkinson, CA’s chief data scientist] has been able to glean from the UKIP membership data.”

He also asks Wilkinson if he can start to “share insights from the UKIP data” — as well as asking “when are we getting the rest of the data?”. (In a later email, dated November 16, Wilkinson shares plots of modelled data with Kaiser — apparently showing the UKIP data now segmented into four blocks of brexit supporters, which have been named: ‘Eager activist’; ‘Young reformer’; ‘Disaffected Tories’; and ‘Left behinds’.)

In the same email Wheatland instructs Jordanna Zetter, an employee of CA’s parent company SCL, to brief Kaiser on “how to field a variety of questions about CA and our methodology, but also SCL. Rest of the world, SCL Defence etc” — asking her to liaise with other key SCL/CA staff to “produce some ‘line to take’ notes”.

Another document in the bundle appears to show Kaiser’s talking points for the briefing. These make no mention of CA’s intention to carry out “national microtargeting” for Leave.EU — merely saying it will conduct “message testing and audience segmentation”.

“We will be working with the campaign’s pollsters and other vendors to compile all the data we have available to us,” is another of the bland talking points Kaiser was instructed to feed to the press.

“Our team of data scientists will conduct deep-dive analysis that will enable us to understand the electorate better than the rival campaigns,” is one more unenlightening line intended for public consumption.

But while CA was preparing to present the UK media with a sanitized false narrative to gloss over the individual voter targeting work it actually intended to carry out for Leave.EU, behind the scenes concerns were being raised about how “national microtargeting” would conflict with UK data protection law.

Another email thread, started November 19, highlights internal discussion about the legality of the plan — with Wheatland sharing “written advice from Queen’s Counsel on the question of how we can legally process data in the UK, specifically UKIP’s data for Leave.eu and also more generally”. (Although Kaiser has not shared the legal advice itself.)

Wilkinson replies to this email with what he couches as “some concerns” regarding shortfalls in the advice, before going into detail on how CA is intending to further process the modelled UKIP data in order to individually microtarget brexit voters — which he suggests would not be legal under UK data protection law “as the identification of these people would constitute personal data”.

He writes:

I have some concerns about what this document says is our “output” – points 22 to 24. Whilst it includes what we have already done on their data (clustering and initial profiling of their members, and providing this to them as summary information), it does not say anything about using the models of the clusters that we create to extrapolate to new individuals and infer their profile. In fact it says that our output does not identify individuals. Thus it says nothing about our microtargeting approach typical in the US, which I believe was something that we wanted to do with leave eu data to identify how each their supporters should be contacted according to their inferred profile.

For example, we wouldn’t be able to show which members are likely to belong to group A and thus should be messaged in this particular way – as the identification of these people would constitute personal data. We could only say “group A typically looks like this summary profile”.

Wilkinson ends by asking for clarification ahead of a looming meeting with Leave.EU, saying: “It would be really useful to have this clarified early on tomorrow, because I was under the impression it would be a large part of our product offering to our UK clients.” [emphasis ours]

Wheatland follows up with a one line email, asking Richardson to “comment on David’s concern” — who then chips into the discussion, saying there’s “some confusion at our end about where this data is coming from and going to”.

He goes on to summarize the “premises” of the advice he says UKIP was given regarding sharing the data with CA (and afterwards the modelled data with Leave.EU, as he implies is the plan) — writing that his understanding is that CA will return: “Analysed Data to UKIP”, and then: “As the Analysed Dataset contains no personal data UKIP are free to give that Analysed Dataset to anyone else to do with what they wish. UKIP will give the Analysed Dataset to Leave.EU”.

“Could you please confirm that the above is correct?” Richardson goes on. “Do I also understand correctly that CA then intend to use the Analysed Dataset and overlay it on Leave.EU’s legitimately acquired data to infer (interpolate) profiles for each of their supporters so as to better control the messaging that leave.eu sends out to those supporters?

“Is it also correct that CA then intend to use the Analysed Dataset and overlay it on publicly available data to infer (interpolate) which members of the public are most likely to become Leave.EU supporters and what messages would encourage them to do so?

“If these understandings are not correct please let me know and I will give you a call to discuss this.”

About half an hour later another SCL Group employee, Peregrine Willoughby-Brown, joins the discussion to back up Wilkinson’s legal concerns.

“The [Queen’s Counsel] opinion only seems to be an analysis of the legality of the work we have already done for UKIP, rather than any judgement on whether or not we can do microtargeting. As such, whilst it is helpful to know that we haven’t already broken the law, it doesn’t offer clear guidance on how we can proceed with reference to a larger scope of work,” she writes without apparent alarm at the possibility that the entire campaign plan might be illegal under UK privacy law.

“I haven’t read it in sufficient depth to know whether or not it offers indirect insight into how we could proceed with national microtargeting, which it may do,” she adds — ending by saying she and a colleague will discuss it further “later today”.

It’s not clear whether concerns about the legality of the microtargeting plan derailed the signing of any formal contract between Leave.EU and CA — even though the documents imply data was shared, even if only during the scoping stage of the work.

“The fact remains that chargeable work was done by Cambridge Analytica, at the direction of Leave.EU and UKIP executives, despite a contract never being signed,” writes Kaiser in her cover letter to the committee on this. “Despite having no signed contract, the invoice was still paid, not to Cambridge Analytica but instead paid by Arron Banks to UKIP directly. This payment was then not passed onto Cambridge Analytica for the work completed, as an internal decision in UKIP, as their party was not the beneficiary of the work, but Leave.EU was.”

Kaiser has also shared a presentation of the UKIP survey data, which bears the names of three academics: Harold Clarke, University of Texas at Dallas & University of Essex; Matthew Goodwin, University of Kent; and Paul Whiteley, University of Essex, which details results from the online portion of the membership survey — aka the core dataset CA modelled for targeting Brexit voters with the intention of helping the Leave.EU campaign.

(At a glance, this survey suggests there’s an interesting analysis waiting to be done of the choice of target demographics for the current blitz of campaign message testing ads being run on Facebook by the new (pro-brexit) UK prime minister Boris Johnson and the core UKIP demographic, as revealed by the survey data… )


Call for Leave.EU probe to be reopened

Ian Lucas, MP, a member of the DCMS committee, has called for the UK’s Electoral Commission to re-open its investigation into Leave.EU in view of “additional evidence” from Kaiser.

We reached out to the Electoral Commission to ask if it will be revisiting the matter.

An Electoral Commission spokesperson told us: “We are considering this new information in relation to our role regulating campaigner activity at the EU referendum. This relates to the 10 week period leading up to the referendum and to campaigning activity specifically aimed at persuading people to vote for a particular outcome.

“Last July we did impose significant penalties on Leave.EU for committing multiple offences under electoral law at the EU Referendum, including for submitting an incomplete spending return.”

Last year the Electoral Commission also found that the official Vote Leave Brexit campaign broke the law by breaching election campaign spending limits. It channelled money to a Canadian data firm linked to Cambridge Analytica to target political ads on Facebook’s platform, via undeclared joint working with a youth-focused Brexit campaign, BeLeave.

Six months ago the UK’s data watchdog also issued fines against Leave.EU and Banks’ insurance company, Eldon Insurance — having found what it dubbed as “serious” breaches of electronic marketing laws, including the campaign using insurance customers’ details to unlawfully send almost 300,000 political marketing messages.

A spokeswoman for the ICO told us it does not have a statement on Kaiser’s latest evidence but added that its enforcement team “will be reviewing the documents released by DCMS”.

The regulator has been running a wider enquiry into use of personal data for social media political campaigning. And last year the information commissioner called for an ethical pause on its use — warning that trust in democracy risked being undermined.

And while Facebook has since applied a thin film of ‘political ads’ transparency to its platform (which researchers continue to warn is not nearly transparent enough to quantify political use of its ads platform), UK election campaign laws have yet to be updated to take account of the digital firehoses now (il)liberally shaping political debate and public opinion at scale.

It’s now more than three years since the UK’s shock vote to leave the European Union — a vote that has so far delivered three years of divisive political chaos, despatching two prime ministers and derailing politics and policymaking as usual.

Leave.EU

Many questions remain over a referendum that continues to be dogged by scandals — from breaches of campaign spending; to breaches of data protection and privacy law; and indeed the use of unregulated social media — principally Facebook’s ad platform — as the willing conduit for distributing racist dogwhistle attack ads and political misinformation to whip up anti-EU sentiment among UK voters.

Dark money, dark ads — and the importing of US-style campaign tactics into the UK, circumventing election and data protection laws by the digital platform backdoor.

This is why the DCMS committee’s preliminary report last year called on the government to take “urgent action” to “build resilience against misinformation and disinformation into our democratic system”.

The very same minority government, struggling to hold itself together in the face of Brexit chaos, failed to respond to the committee’s concerns — and has now been replaced by a cadre of the most militant Brexit backers, who are applying their hands to the cheap and plentiful digital campaign levers.

The UK’s new prime minister, Boris Johnson, is demonstrably doubling down on political microtargeting.

source https://techcrunch.com/2019/07/30/brittany-kaiser-dumps-more-evidence-of-brexits-democratic-trainwreck/

Brittany Kaiser dumps more evidence of Brexit’s democratic trainwreck

A UK parliamentary committee has published new evidence fleshing out how membership data was passed from UKIP, a pro-Brexit political party, to Leave.EU, a Brexit supporting campaign active in the 2016 EU referendum — via the disgraced and now defunct data company, Cambridge Analytica.

In evidence sessions last year, during the DCMS committee’s enquiry into online disinformation, it was told by both the former CEO of Cambridge Analytica, and the main financial backer of the Leave.EU campaign, the businessman Arron Banks, that Cambridge Analytica did no work for the Leave.EU campaign.

Documents published today by the committee clearly contradict that narrative — revealing internal correspondence about the use of a UKIP dataset to create voter profiles to carry out “national microtargeting” for Leave.EU.

They also show CA staff raising concerns about the legality of the plan to model UKIP data to enable Leave.EU to identify and target receptive voters with pro-Brexit messaging.

The UK’s 2016 in-out EU referendum saw the voting public narrowing voting to leave — by 52:48.

New evidence from Brittany Kaiser

The evidence, which includes emails between key Cambridge Analytica, employees of Leave.EU and UKIP, has been submitted to the DCMS committee by Brittany Kaiser — a former director of CA (who you may just have seen occupying a central role in Netflix’s The Great Hack documentary, which digs into links between the Trump campaign and the Brexit campaign).

“As you can see with the evidence… chargeable work was completed for UKIP and Leave.EU, and I have strong reasons to believe that those datasets and analysed data processed by Cambridge Analytica as part of a Phase 1 payable work engagement… were later used by the Leave.EU campaign without Cambridge Analytica’s further assistance,” writes Kaiser in a covering letter to committee chair, Damian Collins, summarizing the submissions.

Kaiser gave oral evidence to the committee at a public hearing in April last year.

At the time she said CA had been undertaking parallel pitches for Leave.EU and UKIP — as well as for two insurance brands owned by Banks — and had used membership survey data provided by UKIP to built a model for pro-brexit voter personality types, with the intention of it being used “to benefit Leave.EU”.

“We never had a contract with Leave.EU. The contract was with the UK Independence party for the analysis of this data, but it was meant to benefit Leave.EU,” she said then.

The new emails submitted by Kaiser back up her earlier evidence. They also show there was discussion of drawing up a contract between CA, UKIP and Leave.EU in the fall before the referendum vote.

In one email — dated November 10, 2015 — CA’s COO & CFO, Julian Wheatland, writes that: “I had a call with [Leave.EU’s] Andy Wigmore today (Arron’s right hand man) and he confirmed that, even though we haven’t got the contract with the Leave written up, it’s all under control and it will happen just as soon as [UKIP-linked lawyer] Matthew Richardson has finished working out the correct contract structure between UKIP, CA and Leave.”

Another item Kaiser has submitted to the committee is a separate November email from Wigmore, inviting press to a briefing by Leave.EU — entitled “how to win the EU referendum” — an event at which Kaiser gave a pitch on CA’s work. In this email Wigmore describes the firm as “the worlds leading target voter messaging campaigners”.

In another document, CA’s Wheatland is shown in an email thread ahead of that presentation telling Wigmore and Richardson “we need to agree the line in the presentations next week with regards the origin of the data we have analysed”.

“We have generated some interesting findings that we can share in the presentation, but we are certain to be asked where the data came from. Can we declare that we have analysed UKIP membership and survey data?” he then asks.

UKIP’s Richardson replies with a negative, saying: “I would rather we didn’t, to be honest” — adding that he has a meeting with Wigmore to discuss “all of this”, and ending with: “We will have a plan by the end of that lunch, I think”.

In another email, dated November 10, sent to multiple recipients ahead of the presentation, Wheatland writes: “We need to start preparing Brittany’s presentation, which will involve working with some of the insights David [Wilkinson, CA’s chief data scientist] has been able to glean from the UKIP membership data.”

He also asks Wilkinson if he can start to “share insights from the UKIP data” — as well as asking “when are we getting the rest of the data?”. (In a later email, dated November 16, Wilkinson shares plots of modelled data with Kaiser — apparently showing the UKIP data now segmented into four blocks of brexit supporters, which have been named: ‘Eager activist’; ‘Young reformer’; ‘Disaffected Tories’; and ‘Left behinds’.)

In the same email Wheatland instructs Jordanna Zetter, an employee of CA’s parent company SCL, to brief Kaiser on “how to field a variety of questions about CA and our methodology, but also SCL. Rest of the world, SCL Defence etc” — asking her to liaise with other key SCL/CA staff to “produce some ‘line to take’ notes”.

Another document in the bundle appears to show Kaiser’s talking points for the briefing. These make no mention of CA’s intention to carry out “national microtargeting” for Leave.EU — merely saying it will conduct “message testing and audience segmentation”.

“We will be working with the campaign’s pollsters and other vendors to compile all the data we have available to us,” is another of the bland talking points Kaiser was instructed to feed to the press.

“Our team of data scientists will conduct deep-dive analysis that will enable us to understand the electorate better than the rival campaigns,” is one more unenlightening line intended for public consumption.

But while CA was preparing to present the UK media with a sanitized false narrative to gloss over the individual voter targeting work it actually intended to carry out for Leave.EU, behind the scenes concerns were being raised about how “national microtargeting” would conflict with UK data protection law.

Another email thread, started November 19, highlights internal discussion about the legality of the plan — with Wheatland sharing “written advice from Queen’s Counsel on the question of how we can legally process data in the UK, specifically UKIP’s data for Leave.eu and also more generally”. (Although Kaiser has not shared the legal advice itself.)

Wilkinson replies to this email with what he couches as “some concerns” regarding shortfalls in the advice, before going into detail on how CA is intending to further process the modelled UKIP data in order to individually microtarget brexit voters — which he suggests would not be legal under UK data protection law “as the identification of these people would constitute personal data”.

He writes:

I have some concerns about what this document says is our “output” – points 22 to 24. Whilst it includes what we have already done on their data (clustering and initial profiling of their members, and providing this to them as summary information), it does not say anything about using the models of the clusters that we create to extrapolate to new individuals and infer their profile. In fact it says that our output does not identify individuals. Thus it says nothing about our microtargeting approach typical in the US, which I believe was something that we wanted to do with leave eu data to identify how each their supporters should be contacted according to their inferred profile.

For example, we wouldn’t be able to show which members are likely to belong to group A and thus should be messaged in this particular way – as the identification of these people would constitute personal data. We could only say “group A typically looks like this summary profile”.

Wilkinson ends by asking for clarification ahead of a looming meeting with Leave.EU, saying: “It would be really useful to have this clarified early on tomorrow, because I was under the impression it would be a large part of our product offering to our UK clients.” [emphasis ours]

Wheatland follows up with a one line email, asking Richardson to “comment on David’s concern” — who then chips into the discussion, saying there’s “some confusion at our end about where this data is coming from and going to”.

He goes on to summarize the “premises” of the advice he says UKIP was given regarding sharing the data with CA (and afterwards the modelled data with Leave.EU, as he implies is the plan) — writing that his understanding is that CA will return: “Analysed Data to UKIP”, and then: “As the Analysed Dataset contains no personal data UKIP are free to give that Analysed Dataset to anyone else to do with what they wish. UKIP will give the Analysed Dataset to Leave.EU”.

“Could you please confirm that the above is correct?” Richardson goes on. “Do I also understand correctly that CA then intend to use the Analysed Dataset and overlay it on Leave.EU’s legitimately acquired data to infer (interpolate) profiles for each of their supporters so as to better control the messaging that leave.eu sends out to those supporters?

“Is it also correct that CA then intend to use the Analysed Dataset and overlay it on publicly available data to infer (interpolate) which members of the public are most likely to become Leave.EU supporters and what messages would encourage them to do so?

“If these understandings are not correct please let me know and I will give you a call to discuss this.”

About half an hour later another SCL Group employee, Peregrine Willoughby-Brown, joins the discussion to back up Wilkinson’s legal concerns.

“The [Queen’s Counsel] opinion only seems to be an analysis of the legality of the work we have already done for UKIP, rather than any judgement on whether or not we can do microtargeting. As such, whilst it is helpful to know that we haven’t already broken the law, it doesn’t offer clear guidance on how we can proceed with reference to a larger scope of work,” she writes without apparent alarm at the possibility that the entire campaign plan might be illegal under UK privacy law.

“I haven’t read it in sufficient depth to know whether or not it offers indirect insight into how we could proceed with national microtargeting, which it may do,” she adds — ending by saying she and a colleague will discuss it further “later today”.

It’s not clear whether concerns about the legality of the microtargeting plan derailed the signing of any formal contract between Leave.EU and CA — even though the documents imply data was shared, even if only during the scoping stage of the work.

“The fact remains that chargeable work was done by Cambridge Analytica, at the direction of Leave.EU and UKIP executives, despite a contract never being signed,” writes Kaiser in her cover letter to the committee on this. “Despite having no signed contract, the invoice was still paid, not to Cambridge Analytica but instead paid by Arron Banks to UKIP directly. This payment was then not passed onto Cambridge Analytica for the work completed, as an internal decision in UKIP, as their party was not the beneficiary of the work, but Leave.EU was.”

Kaiser has also shared a presentation of the UKIP survey data, which bears the names of three academics: Harold Clarke, University of Texas at Dallas & University of Essex; Matthew Goodwin, University of Kent; and Paul Whiteley, University of Essex, which details results from the online portion of the membership survey — aka the core dataset CA modelled for targeting Brexit voters with the intention of helping the Leave.EU campaign.

(At a glance, this survey suggests there’s an interesting analysis waiting to be done of the choice of target demographics for the current blitz of campaign message testing ads being run on Facebook by the new (pro-brexit) UK prime minister Boris Johnson and the core UKIP demographic, as revealed by the survey data… )

[gallery ids="1862050,1862051,1862052"]

Call for Leave.EU probe to be reopened

Ian Lucas, MP, a member of the DCMS committee has called for the UK’s Electoral Commission to re-open its investigation into Leave.EU in view of “additional evidence” from Kaiser.

We reached out to the Electoral Commission to ask if it will be revisiting the matter.

An Electoral Commission spokesperson told us: “We are considering this new information in relation to our role regulating campaigner activity at the EU referendum. This relates to the 10 week period leading up to the referendum and to campaigning activity specifically aimed at persuading people to vote for a particular outcome.

“Last July we did impose significant penalties on Leave.EU for committing multiple offences under electoral law at the EU Referendum, including for submitting an incomplete spending return.”

Last year the Electoral Commission also found that the official Vote Leave Brexit campaign broke the law by breaching election campaign spending limits. It channelled money to a Canadian data firm linked to Cambridge Analytica to target political ads on Facebook’s platform, via undeclared joint working with a youth-focused Brexit campaign, BeLeave.

Six months ago the UK’s data watchdog also issued fines against Leave.EU and Banks’ insurance company, Eldon Insurance — having found what it dubbed as “serious” breaches of electronic marketing laws, including the campaign using insurance customers’ details to unlawfully to send almost 300,000 political marketing messages.

A spokeswoman for the ICO told us it does not have a statement on Kaiser’s latest evidence but added that its enforcement team “will be reviewing the documents released by DCMS”.

The regulator has been running a wider enquiry into use of personal data for social media political campaigning. And last year the information commissioner called for an ethical pause on its use — warning that trust in democracy risked being undermined.

And while Facebook has since applied a thin film of ‘political ads’ transparency to its platform (which researchers continue to warn is not nearly transparent enough to quantify political use of its ads platform), UK election campaign laws have yet to be updated to take account of the digital firehoses now (il)liberally shaping political debate and public opinion at scale.

It’s now more than three years since the UK’s shock vote to leave the European Union — a vote that has so far delivered three years of divisive political chaos, despatching two prime ministers and derailing politics and policymaking as usual.

Leave.EU

Many questions remain over a referendum that continues to be dogged by scandals — from breaches of campaign spending rules; to breaches of data protection and privacy law; and indeed the use of unregulated social media — principally Facebook’s ad platform — as the willing conduit for distributing racist dogwhistle attack ads and political misinformation to whip up anti-EU sentiment among UK voters.

Dark money, dark ads — and the importing of US-style campaign tactics into the UK, circumventing election and data protection laws by the digital platform backdoor.

This is why the DCMS committee’s preliminary report last year called on the government to take “urgent action” to “build resilience against misinformation and disinformation into our democratic system”.

The very same minority government, struggling to hold itself together in the face of Brexit chaos, failed to respond to the committee’s concerns — and has now been replaced by a cadre of the most militant Brexit backers, who are applying their hands to the cheap and plentiful digital campaign levers.

The UK’s new prime minister, Boris Johnson, is demonstrably doubling down on political microtargeting.

source https://techcrunch.com/2019/07/30/brittany-kaiser-dumps-more-evidence-of-brexits-democratic-trainwreck/


Advanced Hack: How to Improve Your SEO in Less Than 30 Minutes


I’ve been testing a new SEO hack and it works no matter how old or how new your site is.

Heck, you can have barely any links, and I’ve found it to work as well.

Best of all, unlike most SEO changes, it doesn’t take months or years to see results from this… you can literally see results in less than 30 minutes.

And here’s what’s crazy: I had my team crawl 10,000 sites to see how many were leveraging this SEO technique, and only 17 of them were.

In other words, your competition doesn’t know about this yet!

So what is this hack that I speak of?

Google’s ever-changing search results

Not only is Google changing its algorithm on a regular basis, but it also tests out new design elements.

For example, if you search for “food near me”, you’ll not only see a list of restaurants but also their ratings.

food near me

And if you look up a person, Google may show you a picture of that person and a quick overview.

elon musk

Over the years, Google has adapted its search results to give you the best experience. For example, if you search “2+2” Google will show you the answer of “4” so you don’t have to click through and head over to a webpage.

2 plus 2

But you already know this.

Now, what’s new, and what almost no one is using yet, are FAQ rich results and Answer Cards.

Here’s what I mean… if you search “digital marketing” you’ll see that I rank on Google. But my listing doesn’t look like most people’s…

digital marketing

As you can see from the image above, Google has pulled FAQ rich results from my site.

And best of all, I was able to pull it off in less than 30 minutes. That’s how quickly Google picked it up and adjusted their SERP listing.

Literally all within 30 minutes.

And you can do the same thing with Answer Cards anytime you have pages built around questions and answers.

qa example

So how can you do this?

Picking the right markup

Before we get this going with your site, you have to pick the right schema markup.

FAQPage schema is used when you offer a frequently asked questions page or have a product page that contains frequently asked questions about the product itself. This makes you eligible for a collapsible menu under your SERP listing that shows each question and, when clicked, reveals its answer.

faq rich result

It can also make you eligible for the FAQ Action shown on Google Assistant, which can potentially help you get noticed by people using voice search to find an answer!

faq action

Q&A schema is used when people can contribute different answers to a question and vote for the answer they think is best. This provides rich result cards under your SERP listing, showing all the answers with the top answer highlighted.

qa rich result

Once you understand what each is used for, note that Google also has additional guidelines on when you can and can’t use these schemas:

Google’s guidelines

Google has a list of FAQpage schema guidelines.

Only use FAQPage if your page has a list of questions with answers. If your page has a single question and users can submit alternative answers, use QAPage instead. Here are some examples:

Valid use cases:

  • An FAQ page written by the site itself with no way for users to submit alternative answers
  • A product support page that lists FAQs with no way for users to submit alternative answers 

Invalid use cases:

  • A forum page where users can submit answers to a single question
  • A product support page where users can submit answers to a single question
  • A product page where users can submit multiple questions and answers on a single page

Additional guidelines:

  • Don’t use FAQPage for advertising purposes.
  • Make sure each Question includes the entire text of the question and each answer includes the entire text of the answer. The entire question text and answer text may be displayed.
  • Question and answer content may not be displayed as a rich result if it contains any of the following types of content: obscene, profane, sexually explicit, graphically violent, promotion of dangerous or illegal activities, or hateful or harassing language.
  • All FAQ content must be visible to the user on the source page.

And here are the guidelines for Q&A schema:

Only use the QAPage markup if your page has information in a question and answer format, which is one question followed by its answers.

Users must be able to submit answers to the question. Don’t use QAPage markup for content that has only one answer for a given question with no way for users to add alternative answers; instead, use FAQPage. Here are some examples:

Valid use cases:

  • A forum page where users can submit answers to a single question
  • A product support page where users can submit answers to a single question 

Invalid use cases:

  • An FAQ page written by the site itself with no way for users to submit alternative answers
  • A product page where users can submit multiple questions and answers on a single page
  • A how-to guide that answers a question
  • A blog post that answers a question
  • An essay that answers a question

Additional guidelines:

  • Don’t apply QAPage markup to all pages on a site or forum if not all the content is eligible. For example, a forum may have lots of questions posted, which are individually eligible for the markup. However, if the forum also has pages that are not questions, those pages are not eligible.
  • Don’t use QAPage markup for FAQ pages or pages where there are multiple questions per page. QAPage markup is for pages where the focus of the page is a single question and its answers.
  • Don’t use QAPage markup for advertising purposes.
  • Make sure each Question includes the entire text of the question and each Answer includes the entire text of the answer.
  • Answer markup is for answers to the question, not for comments on the question or comments on other answers. Don’t mark up non-answer comments as an answer.
  • Question and answer content may not be displayed as a rich result if it contains any of the following types of content: obscene, profane, sexually explicit, graphically violent, promotion of dangerous or illegal activities, or hateful or harassing language.

If your content meets these guidelines, the next step is to figure out how to implement the schema onto your website and which type to use.
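
To make the decision rules above concrete, here is a small sketch of how you might encode them. The function and its flags are my own illustration, not anything Google provides, and it simplifies the guidelines down to the two main eligibility questions:

```python
def pick_schema_type(users_can_submit_answers, multiple_questions_on_page):
    """Apply the FAQPage vs. QAPage guidelines above (illustrative only).

    Returns the schema.org type to mark up, or None when neither fits.
    """
    if users_can_submit_answers and not multiple_questions_on_page:
        # A single question where visitors add their own answers, e.g. a forum thread
        return "QAPage"
    if not users_can_submit_answers:
        # Site-authored questions and answers with no user submissions
        return "FAQPage"
    # Multiple questions plus user-submitted answers is invalid for both markups
    return None

# An FAQ page written by the site itself -> FAQPage
print(pick_schema_type(users_can_submit_answers=False, multiple_questions_on_page=True))
# A forum question with user answers -> QAPage
print(pick_schema_type(users_can_submit_answers=True, multiple_questions_on_page=False))
```

Edge cases such as how-to guides (which Google excludes from QAPage regardless) still need a human read of the guidelines.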

How do I implement schema, and which type should I use?

There are two ways to implement it… either through JSON-LD or Microdata.

I recommend choosing one style and sticking to it throughout your webpage, and I also recommend not using both types on the same page.

JSON-LD is what Google recommends wherever possible, and Google has been in the process of adding support for more markup-powered features. JSON-LD can be added to the header of your page and takes very little time to implement.

The other option is Microdata, which involves coding the schema into your site’s HTML elements. This can be a challenging process, but for some odd reason, I prefer it. Below are examples of how each works.

FAQPage Schema JSON-LD:

<html>
<head>
<title>Digital Marketing Frequently Asked Questions (FAQ) – Neil Patel</title>
</head>
<body>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is digital marketing?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Digital marketing is any form of marketing products or services that involves electronic device"
    }
  }]
}
</script>
</body>
</html>

FAQPage Schema Microdata:

<html itemscope itemtype="https://schema.org/FAQPage">
<head>
<title>Digital Marketing Frequently Asked Questions (FAQ) – Neil Patel</title>
</head>
<body>
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
  <h3 itemprop="name">What is digital marketing?</h3>
  <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
    <div itemprop="text">
      <p>Digital marketing is any form of marketing products or services that involves electronic device.</p>
    </div>
  </div>
</div>
</body>
</html>

Q&A Schema JSON-LD:

{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "Can I tie my shoe with one hand?",
    "text": "I currently have taken a hobby to do many actions with one hand and I'm currently stuck on how to tie a shoe with one hand. Is it possible to tie my shoe with one hand?",
    "answerCount": 2,
    "upvoteCount": 20,
    "dateCreated": "2019-07-23T21:11Z",
    "author": {
      "@type": "Person",
      "name": "Expert at Shoes"
    },
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "It is possible to tie your shoe with one hand by using your teeth to hold the other lace",
      "dateCreated": "2019-11-02T21:11Z",
      "upvoteCount": 9000,
      "url": "https://example.com/question1#acceptedAnswer",
      "author": {
        "@type": "Person",
        "name": "AnotherShoeMan"
      }
    },
    "suggestedAnswer": [
      {
        "@type": "Answer",
        "text": "It is not possible to tie your shoe with one hand",
        "dateCreated": "2019-11-02T21:11Z",
        "upvoteCount": 2,
        "url": "https://example.com/question1#suggestedAnswer1",
        "author": {
          "@type": "Person",
          "name": "Best Shoe Man"
        }
      }
    ]
  }
}

Q&A Schema Microdata:

<div itemprop="mainEntity" itemscope itemtype="https://schema.org/Question">
  <h2 itemprop="name">Can I tie my shoe with one hand?</h2>
  <div itemprop="upvoteCount">20</div>
  <div itemprop="text">I currently have taken a hobby to do many actions with one hand and I'm currently stuck on how to tie a shoe with one hand. Is it possible to tie my shoe with one hand?</div>
  <div>asked <time itemprop="dateCreated" datetime="2019-07-23T21:11Z">July 23 '19 at 21:11</time></div>
  <div itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">Expert at Shoes</span></div>
  <div>
    <div><span itemprop="answerCount">2</span> answers</div>
    <div><span itemprop="upvoteCount">20</span> votes</div>
    <div itemprop="acceptedAnswer" itemscope itemtype="https://schema.org/Answer">
      <div itemprop="upvoteCount">9000</div>
      <div itemprop="text">
        It is possible to tie your shoe with one hand by using your teeth to hold the other lace.
      </div>
      <a itemprop="url" href="https://example.com/question1#acceptedAnswer">Answer Link</a>
      <div>answered <time itemprop="dateCreated" datetime="2019-11-02T22:01Z">Nov 2 '19 at 22:01</time></div>
      <div itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">AnotherShoeMan</span></div>
    </div>
    <div itemprop="suggestedAnswer" itemscope itemtype="https://schema.org/Answer">
      <div itemprop="upvoteCount">2</div>
      <div itemprop="text">
        It is not possible to tie your shoe with one hand
      </div>
      <a itemprop="url" href="https://example.com/question1#suggestedAnswer1">Answer Link</a>
      <div>answered <time itemprop="dateCreated" datetime="2019-11-02T21:11Z">Nov 2 '19 at 21:11</time></div>
      <div itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">Best Shoe Man</span></div>
    </div>
  </div>
</div>

When you are implementing this on your website, feel free to use the templates above and modify them with your content.
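
If you maintain more than a handful of pages, you can also generate the FAQPage template from your existing question/answer pairs instead of editing it by hand. Here is a minimal sketch using only Python’s standard library; the build_faq_jsonld helper is my own name, not part of any SEO tool:

```python
import json

def build_faq_jsonld(qa_pairs):
    """Build an FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

pairs = [
    ("What is digital marketing?",
     "Digital marketing is any form of marketing products or services that involves electronic devices."),
]

# json.dumps always emits straight quotes, avoiding the smart-quote errors that
# word processors introduce when templates are copied and pasted by hand.
markup = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(
    build_faq_jsonld(pairs), indent=2
)
print(markup)
```

The printed script tag can be pasted directly into the page.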

If you are unsure whether your code is correctly implemented, use Google’s Structured Data Testing Tool: you can paste in your code snippet or the URL of the page where you implemented the schema, and it will tell you whether you did it right.

Plus, it will give you feedback on any errors or issues with your code.
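
Before reaching for Google’s tool, you can also catch the most common mistakes locally, such as smart quotes breaking the JSON. This rough pre-check is my own sketch (precheck_faq_jsonld is a hypothetical helper, not a Google API) and is no substitute for the Structured Data Testing Tool:

```python
import json

REQUIRED_QUESTION_FIELDS = {"@type", "name", "acceptedAnswer"}

def precheck_faq_jsonld(raw):
    """Return a list of problems in an FAQPage JSON-LD string (empty = no obvious issues)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Smart quotes pasted from a word processor are a common cause of this
        return ["invalid JSON: %s" % exc]
    problems = []
    if data.get("@type") != "FAQPage":
        problems.append("@type should be FAQPage")
    for i, question in enumerate(data.get("mainEntity", [])):
        missing = REQUIRED_QUESTION_FIELDS - set(question)
        if missing:
            problems.append("question %d missing fields: %s" % (i, sorted(missing)))
    return problems

good = ('{"@context":"https://schema.org","@type":"FAQPage","mainEntity":'
        '[{"@type":"Question","name":"What is digital marketing?",'
        '"acceptedAnswer":{"@type":"Answer","text":"..."}}]}')
print(precheck_faq_jsonld(good))  # []
```

An empty list means the markup at least parses and has the fields Google’s guidelines call for; the official tools still have the final say.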

google structure data testing

You can also try Google’s Rich Results Test. This will give you a preview of how your structured data will look in the search results!

google rich snippet

Getting results in under 30 minutes

Once you make the changes to any page that you think is a good fit, you’ll want to log into Google Search Console and enter the URL of the page you modified in the top search bar.

add url

You’ll then want to have Google crawl that page so they can index the results. All you have to do is click “request indexing”.

request indexing

And typically within 10 minutes, you’ll notice it kick in and when you perform a Google search you’ll see your updated listing.

Now the key to making this work is to do this with pages and terms that already rank on page 1. That’s where I’ve seen the biggest improvement.

Will Schema get me to rank for People Also Ask and Featured Snippets?

Will this help with People Also Ask and Featured Snippets? So far, there has been no correlation between schema markup and People Also Ask or Featured Snippets, and you do not need this markup to be featured in them.

Optimizing your content for this will not hurt you, though, and can potentially improve your chances of appearing there.

Google has been testing out how they can show these types of Q&A, FAQ, and How-To results and looking at structured data to help understand them.

It’s better to be early to the game and help Google understand your pages, as well as possibly participating in any of Google’s experiments.

snippet

Will this get me on voice search?

With more and more people using mobile devices to find answers to questions, this is a very relevant question!

Especially considering predictions that over half of searches on Google will come from voice search in the near future.

Voice search pulls most of its answers from featured snippets.

And adding structured data on your website increases the chances of getting you into featured snippets, which increases the chance of you getting featured on voice search.

Conclusion

This simple hack can potentially increase the visibility of your brand and help improve the authority of your website. It’s a simple solution that can take a single day to implement across your main question, product, or FAQ page.

I’ve been using it heavily for the last week or so and as long as I pick keywords that I already rank on page 1 for, I am seeing great results.

And as I mentioned above, when my team analyzed 10,000 sites we found only 17 using FAQ or Q&A schema. In other words, less than 1% of sites are using this, which means if you take advantage now, you’ll have a leg up on your competition.

So what do you think about this tactic? Are you going to use it?

The post Advanced Hack: How to Improve Your SEO in Less Than 30 Minutes appeared first on Neil Patel.



source https://neilpatel.com/blog/faq-schema/

Monday, 29 July 2019

A guide to Virtual Beings and how they impact our world

Money from big tech companies and top VC firms is flowing into the nascent “virtual beings” space. Mixing the opportunities presented by conversational AI, generative adversarial networks, photorealistic graphics, and creative development of fictional characters, “virtual beings” envisions a near-future where characters (with personalities) that look and/or sound exactly like humans are part of our day-to-day interactions.

Last week in San Francisco, entrepreneurs, researchers, and investors convened for the first Virtual Beings Summit, where organizer and Fable Studio CEO Edward Saatchi announced a grant program. Corporates like Amazon, Apple, Google, and Microsoft are pouring resources into conversational AI technology, chip-maker Nvidia and game engines Unreal and Unity are advancing real-time ray tracing for photorealistic graphics, and in my survey of media VCs one of the most common interests was “virtual influencers”.

The term “virtual beings” gets used as a catch-all categorization of activities that overlap here. There are really three separate fields getting conflated though:

  1. Virtual Companions
  2. Humanoid Character Creation
  3. Virtual Influencers

These can overlap — there are humanoid virtual influencers for example — but they represent separate challenges, separate business opportunities, and separate societal concerns. Here’s a look at these fields, including examples from the Virtual Beings Summit, and how they collectively comprise this concept of virtual beings:

Virtual companions

Virtual companions are conversational AI that build a unique 1-to-1 relationship with us, whether to provide friendship or utility. A virtual companion has personality, gauges the personality of the user, retains memory of prior conversations, and uses all that to converse with humans like a fellow human would. They seem to exist as their own being even if we rationally understand they are not.

Virtual companions can exist across 4 formats:

  1. Physical presence (Robotics)
  2. Interactive visual media (social media, gaming, AR/VR)
  3. Text-based messaging
  4. Interactive voice

While pop culture depictions of this include Her and Ex Machina, nascent real-world examples are virtual friend bots like Hugging Face and Replika as well as voice assistants like Amazon’s Alexa and Apple’s Siri. The products currently on the market aren’t yet sophisticated conversationalists or adept at engaging with us as emotional creatures but they may not be far off from that.



source https://techcrunch.com/2019/07/29/a-guide-to-virtual-beings/

Facebook and YouTube’s moderation failure is an opportunity to deplatform the platforms

Facebook, YouTube, and Twitter have failed their task of monitoring and moderating the content that appears on their sites; what’s more, they failed to do so well before they knew it was a problem. But their incidental cultivation of fringe views is an opportunity to recast their role as the services they should be rather than the platforms they have tried so hard to become.

The struggles of these juggernauts should be a spur to innovation elsewhere: While the major platforms reap the bitter harvest of years of ignoring the issue, startups can pick up where they left off. There’s no better time to pass someone than when they’re standing still.

Asymmetrical warfare: Is there a way forward?

At the heart of the content moderation issue is a simple cost imbalance that rewards aggression by bad actors while punishing the platforms themselves.

To begin with, there is the problem of defining bad actors in the first place. This is a cost that must be borne from the outset by the platform: With the exception of certain situations where they can punt (definitions of hate speech or groups for instance), they are responsible for setting the rules on their own turf.

That’s a reasonable enough expectation. But carrying it out is far from trivial; you can’t just say “here’s the line; don’t cross it or you’re out.” It is becoming increasingly clear that these platforms have put themselves in an uncomfortable lose-lose situation.

If they have simple rules, they spend all their time adjudicating borderline cases, exceptions, and misplaced outrage. If they have more granular ones, there is no upper limit on the complexity and they spend all their time defining it to fractal levels of detail.

Both solutions require constant attention and an enormous, highly-organized and informed moderation corps, working in every language and region. No company has shown any real intention to take this on — Facebook famously contracts the responsibility out to shabby operations that cut corners and produce mediocre results (at huge human and monetary cost); YouTube simply waits for disasters to happen and then quibbles unconvincingly.



source https://techcrunch.com/2019/07/29/facebook-and-youtubes-moderation-failure-is-an-opportunity-to-deplatform-the-platforms/

Europe’s top court sharpens guidance for sites using leaky social plug-ins

Europe’s top court has made a ruling that could affect scores of websites that embed the Facebook ‘Like’ button and receive visitors from the region.

The ruling by the Court of Justice of the EU states such sites are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

The ruling is significant because, as currently seems to be the case, Facebook’s Like buttons transfer personal data automatically, when a webpage loads — without the user even needing to interact with the plug-in — which means if websites are relying on visitors’ ‘consenting’ to their data being shared with Facebook they will likely need to change how the plug-in functions to ensure no data is sent to Facebook prior to visitors being asked if they want their browsing to be tracked by the adtech giant.

The background to the case is a complaint against online clothes retailer, Fashion ID, by a German consumer protection association, Verbraucherzentrale NRW — which took legal action in 2015 seeking an injunction against Fashion ID’s use of the plug-in which it claimed breached European data protection law.

Like ’em or loathe ’em, Facebook’s ‘Like’ buttons are an impossible-to-miss component of the mainstream web. Though most Internet users are likely unaware that the social plug-ins are used by Facebook to track what other websites they’re visiting for ad targeting purposes.

Last year the company told the UK parliament that between April 9 and April 16 the button had appeared on 8.4M websites, while its Share button social plug-in appeared on 931K sites. (Facebook also admitted to 2.2M instances of another tracking tool it uses to harvest non-Facebook browsing activity — called a Facebook Pixel — being invisibly embedded on third party websites.)

The Fashion ID case predates the introduction of the EU’s updated privacy framework, GDPR, which further toughens the rules around obtaining consent — meaning it must be purpose specific, informed and freely given.

Today’s CJEU decision also follows another ruling a year ago, in a case related to Facebook fan pages, when the court took a broad view of privacy responsibilities around platforms — saying both fan page administrators and host platforms could be data controllers. Though it also said joint controllership does not necessarily imply equal responsibility for each party.

In the latest decision the CJEU has sought to draw some limits on the scope of joint responsibility, finding that a website where the Facebook Like button is embedded cannot be considered a data controller for any subsequent processing, i.e. after the data has been transmitted to Facebook Ireland (the data controller for Facebook’s European users).

The joint responsibility specifically covers the collection and transmission of Facebook Like data to Facebook Ireland.

“It seems, at the outset, impossible that Fashion ID determines the purposes and means of those operations,” the court writes in a press release announcing the decision.

“By contrast, Fashion ID can be considered to be a controller jointly with Facebook Ireland in respect of the operations involving the collection and disclosure by transmission to Facebook Ireland of the data at issue, since it can be concluded (subject to the investigations that it is for the Oberlandesgericht Düsseldorf [German regional court] to carry out) that Fashion ID and Facebook Ireland determine jointly the means and purposes of those operations.”

Responding to the judgement in a statement attributed to its associate general counsel, Jack Gilbert, Facebook told us:

Website plugins are common and important features of the modern Internet. We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools. We are carefully reviewing the court’s decision and will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.

The company said it may make changes to the Like button to ensure websites that use it are able to comply with Europe’s GDPR.

Though it’s not yet clear what those changes could be: for example, whether Facebook will change the code of its social plug-ins to ensure no data is transferred at the point a page loads. (We’ve asked Facebook and will update this report with any response.)

Facebook also points out that other tech giants, such as Twitter and LinkedIn, deploy similar social plug-ins — suggesting the CJEU ruling will apply to other social platforms, as well as to thousands of websites across the EU where these sorts of plug-ins crop up.

“Sites with the button should make sure that they are sufficiently transparent to site visitors, and must make sure that they have a lawful basis for the transfer of the user’s personal data (e.g. if just the user’s IP address and other data stored on the user’s device by Facebook cookies) to Facebook,” Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, told TechCrunch.

“If their lawful basis is consent, then they’ll need to get consent before deploying the button for it to be valid — otherwise, they’ll have done the transfer before the visitor has consented

“If relying on legitimate interests — which might scrape by — then they’ll need to have done a legitimate interests assessment, and kept it on file (against the (admittedly unlikely) day that a regulator asks to see it), and they’ll need to have a mechanism by which a site visitor can object to the transfer.”

“Basically, if organisations are taking on board the recent guidance from the ICO and CNIL on cookie compliance, wrapping in Facebook ‘Like’ and other similar things in with that work would be sensible,” Brown added.

Also commenting on the judgement, Michael Veale, a UK-based researcher in tech and privacy law/policy, said it raises questions about how Facebook will comply with Europe’s data protection framework for any further processing it carries out of the social plug-in data.

“The whole judgement to me leaves open the question ‘on what grounds can Facebook justify further processing of data from their web tracking code?'” he told us. “If they have to provide transparency for this further processing, which would take them out of joint controllership into sole controllership, to whom and when is it provided?

“If they have to demonstrate they would win a legitimate interests test, how will that be affected by the difficulty in delivering that transparency to data subjects?’

“Can Facebook do a backflip and say that for users of their service, their terms of service on their platform justifies the further use of data for which individuals must have separately been made aware of by the website where it was collected?

“The question then quite clearly boils down to non-users, or to users who are effectively non-users to Facebook through effective use of technologies such as Mozilla’s browser tab isolation.”

How far a tracking pixel could be considered a ‘similar device’ to a cookie is another question to consider, he said.

The tracking of non-Facebook users via social plug-ins certainly continues to be a hot-button legal issue for Facebook in Europe — where the company has twice lost in court to Belgium’s privacy watchdog on this issue. (Facebook has continued to appeal.)

Facebook founder Mark Zuckerberg also faced questions about tracking non-users last year, from MEPs in the European Parliament — who pressed him on whether Facebook uses data on non-users for any other uses vs the security purpose of “keeping bad content out” that he claimed requires Facebook to track everyone on the mainstream Internet.

MEPs also wanted to know how non-users can stop their data being transferred to Facebook? Zuckerberg gave no answer, likely because there’s currently no way for non-users to stop their data being sucked up by Facebook’s servers — short of staying off the mainstream Internet.



source https://techcrunch.com/2019/07/29/europes-top-court-sharpens-guidance-for-sites-using-leaky-social-plug-ins/

Saturday, 27 July 2019

The Great Hack tells us data corrupts 

This week professor David Carroll, whose dogged search for answers to how his personal data was misused plays a focal role in The Great Hack: Netflix’s documentary tackling the Facebook-Cambridge Analytica data scandal, quipped that perhaps a follow-up would be more punitive for the company than the $5BN FTC fine announced the same day.

The documentary — which we previewed ahead of its general release Wednesday — does an impressive job of articulating for a mainstream audience the risks for individuals and society of unregulated surveillance capitalism, despite the complexities involved in the invisible data ‘supply chain’ that feeds the beast. Most obviously by trying to make these digital social emissions visible to the viewer — as mushrooming pop-ups overlaid on shots of smartphone users going about their everyday business, largely unaware of the pervasive tracking it enables.

Facebook is unlikely to be a fan of the treatment. In its own crisis PR around the Cambridge Analytica scandal it has sought to achieve the opposite effect; making it harder to join the data-dots embedded in its ad platform by seeking to deflect blame, bury key details and bore reporters and policymakers to death with reams of irrelevant detail — in the hope they might shift their attention elsewhere.

Data protection itself isn’t a topic that naturally lends itself to glamorous thriller treatment, of course. No amount of slick editing can transform the close and careful scrutiny of political committees into seat-of-the-pants viewing for anyone not already intimately familiar with the intricacies being picked over. And yet it’s exactly such thoughtful attention to detail that democracy demands. Without it we are all, to put it proverbially, screwed.

The Great Hack shows what happens when vital detail and context are cheaply ripped away at scale, via socially sticky content delivery platforms run by tech giants that never bothered to sweat the ethical detail of how their ad targeting tools could be repurposed by malign interests to sow social discord and/or manipulate voter opinion en masse.

Or indeed used by an official candidate for high office in a democratic society that lacks legal safeguards against data misuse.

But while the documentary packs in a lot over an almost two-hour span — retelling the story of Cambridge Analytica’s role in the 2016 Trump presidential election campaign; exploring links to the UK’s Brexit leave vote; and zooming out to show a little of the wider impact of social media disinformation campaigns on various elections around the world — the viewer is left with plenty of questions. Not least the ones Carroll repeats towards the end of the film: What information had Cambridge Analytica amassed on him? Where did they get it from? What did they use it for? — apparently resigning himself to never knowing. The disgraced data firm chose to declare bankruptcy and fold back into its shell rather than hand over the stolen goods and its algorithmic secrets.

There’s no doubt over the other question Carroll poses early on in the film — could he delete his information? The lack of control over what’s done with people’s information is the central point around which the documentary pivots. The key warning being there’s no magical cleansing fire that can purge every digitally copied personal thing that’s put out there.

And while Carroll is shown able to tap into European data rights — purely by merit of Cambridge Analytica having processed his data in the UK — to try and get answers, the lack of control holds true in the US. Here, the absence of a legal framework to protect privacy is shown as the catalyzing fuel for the ‘great hack’ — and also shown enabling the ongoing data-free-for-all that underpins almost all ad-supported, Internet-delivered services. tl;dr: Your phone doesn’t need to listen to you if it’s tracking everything else you do with it.

The film’s other obsession is the breathtaking scale of the thing. One focal moment is when we hear another central character, Cambridge Analytica’s Brittany Kaiser, dispassionately recounting how data surpassed oil in value last year — as if that’s all the explanation needed for the terrible behavior on show.

“Data’s the most valuable asset on Earth,” she monotones. The staggering value of digital stuff is thus fingered as an irresistible, manipulative force also sucking in bright minds to work at data firms like Cambridge Analytica — even at the expense of their own claimed political allegiances, in the conflicted case of Kaiser.

If knowledge is power and power corrupts, the construction can be refined further to ‘data corrupts’, is the suggestion.

The filmmakers linger long on Kaiser, which can seem to humanize her — as they show what appear to be vulnerable or intimate moments. Yet they do this without ever entirely getting under her skin or allowing her role in the scandal to be fully resolved.

She’s often allowed to tell her narrative from behind dark glasses and a hat — which has the opposite effect on how we’re invited to perceive her. Questions about her motivations are never far away. It’s a human mystery linked to Cambridge Analytica’s money-minting algorithmic blackbox.

Nor is there any attempt by the filmmakers to mine Kaiser for answers themselves. It’s a documentary that spotlights mysteries and leaves questions hanging up there intact. From a journalist’s perspective that’s an inevitable frustration. Even as the story itself is much bigger than any one of its constituent parts.

It’s hard to imagine how Netflix could commission a straight-up sequel to The Great Hack, given its central framing of Carroll’s data quest being combined with key moments of the Cambridge Analytica scandal. Large chunks of the film are composed of footage capturing scrutiny and reactions to the story as it unfolded in real time.

But in displaying the ruthlessly transactional underpinnings of social platforms where the world’s smartphone users go to kill time, unwittingly trading away their agency in the process, Netflix has really just begun to open up the defining story of our time.



source https://techcrunch.com/2019/07/27/the-great-hack-tells-us-that-data-corrupts/

Friday, 26 July 2019

Muzmatch adds $7M to swipe right on Muslim-majority markets

Muzmatch, a matchmaking app for Muslims, has just swiped a $7 million Series A on the back of continued momentum for its community-sensitive approach to soulmate searching for people of the Islamic faith.

It now has more than 1.5 million users of its apps, across 210 countries, swiping, matching and chatting online as they try to find “the one.”

The funding, which Muzmatch says will help fuel growth in key international markets, is jointly led by U.S. hedge fund Luxor Capital and Silicon Valley accelerator Y Combinator — the latter having previously selected Muzmatch for its summer 2017 batch of startups. 

Last year the team also took in a $1.75 million seed led by Fabrice Grinda’s FJ Labs, YC and others.

We first covered the startup two years ago when its founders were just graduating from YC. At that time there were two of them building the business: Shahzad Younas and Ryan Brodie — perhaps an unlikely pairing in this context, given Brodie’s lack of a Muslim background. He joined after meeting Younas, who had earlier quit his job as an investment banker to launch Muzmatch. Brodie got excited by the idea and the early traction for the MVP. The pair went on to ship a relaunch of the app in mid-2016, which helped snag them a place at YC.

So why did Younas and Brodie unmatch? All the remaining founder can say publicly is that its investors are buying Brodie’s stake. (Meanwhile, in a note on LinkedIn — celebrating what he dubs the “bittersweet” news of Muzmatch’s Series A — Brodie writes: “Separate to this raise I decided to sell my stake in the company. This is not from a lack of faith — on the contrary — it’s simply the right time for me to move on to startup number 4 now with the capital to take big risks.”)

Asked what’s harder, finding a steady co-founder or finding a life partner, Younas responds with a laugh. “With myself and Ryan, full credit, when we first joined together we did commit to each other, I guess, a period of time of really going for it,” he ventures, reaching for the phrase “conscious uncoupling” to sum up how things went down. “We both literally put blood, sweat and tears into the app, into growing what it is. And for sure without him we wouldn’t be as far as we are now, that’s definitely true.”

“For me it’s a fantastic outcome for him. I’m genuinely super happy for him. For someone of his age and at that time of his life — now he’s got the ability to start another startup and back himself, which is amazing. Not many people have that opportunity,” he adds.

Younas says he isn’t looking for another co-founder at this stage of the business, though he notes they have just hired a CTO — “purely because there’s so much to do that I want to make sure I’ve got a few people in certain areas.”

The team has grown from just four people seven months ago to 17 now. With the Series A the plan is to further expand headcount to almost 30.

“In terms of a co-founder, I don’t think, necessarily, at this point it’s needed,” Younas tells TechCrunch. “I obviously understand this community a lot. I’ve equally grown in terms of my role in the company and understanding various parts of the company. You get this experience by doing — so now I think definitely it helps having the simplicity of a single founder and really guiding it along.”

Despite the co-founders parting ways, there’s no doubting Muzmatch’s momentum. Aside from solid growth of its user base (it was reporting ~200,000 two years ago), its press release touts 30,000+ “successes” worldwide — which Younas says translates to people who have left the app and told it they did so because they met someone on Muzmatch.

He reckons at least half of those left in order to get married — and for a matchmaking app, that is the ultimate measure of success.

“Everywhere I go I’m meeting people who have met on Muzmatch. It has been really transformative for the Muslim community where we’ve taken off — and it is amazing to see, genuinely,” he says, suggesting the real success metric is “much higher because so many people don’t tell us.”

Nor is he worried about being too successful, despite 100 people a day leaving because they met someone on the app. “For us that’s literally the best thing that can happen because we’ve grown mostly by word of mouth — people telling their friends ‘I met someone on your app.’ Muslim weddings are quite big, a lot of people attend and word does spread,” he says.

Muzmatch was already profitable two years ago (and still is, for “some” months, though that’s not been a focus), which has given it leverage to focus on growing at a pace it’s comfortable with as a young startup. But the plan with the Series A cash is to accelerate growth by focusing attention internationally on Muslim-majority markets versus an early focus on markets, including the U.K. and the U.S., with Muslim-minority populations.

This suggests potential pitfalls lie ahead for the team to manage growth in a sustainable way — ensuring scaling usage doesn’t outstrip their ability to maintain the “safe space” feel the target users need, while at the same time catering to the needs of an increasingly diverse community of Muslim singles.

“We’re going to be focusing on Muslim-majority countries where we feel that they would be more receptive to technology. There’s slightly less of a taboo around finding someone online. There’s culture changes already happening, etc.,” he says, declining to name the specific markets they’ll be fixing on. “That’s definitely what we’re looking for initially. That will obviously allow us to scale in a big way going forward.

“We’ve always done [marketing] in a very data-driven way,” he adds, discussing his approach to growth. “Up til now I’ve led on that. Pretty much everything in this company I’ve self taught. So I learnt, essentially, how to build a growth engine, how to scale and optimize campaigns, digital spend, and these big guys have seen our data and they’re impressed with the progress we’ve made, and the customer acquisition costs that we’ve achieved — considering we really are targeting quite a niche market… Up til now we closed our Series A with more than half our seed round in our accounts.”

Muzmatch has also laid the groundwork for the planned international push, having already fully localized the app — which is live in 14 languages, including right-to-left languages like Arabic.

“We’re localized and we get a lot of organic users everywhere but obviously once you focus on a particular area — in terms of content, in terms of your brand etc. — then it really does start to take off,” adds Younas.

The team’s careful catering to the needs of its target community — via things like manual moderation of every profile and an optional chaperoning feature for in-app chats, rather than just ripping out a “Tinder for Muslims” clone — can surely take some credit for helping to grow the market for Muslim matchmaking apps overall.

“Shahzad has clearly made something that people want. He is a resourceful founder who has been listening to his users and in the process has developed an invaluable service for the Muslim community, in a way that mainstream companies have failed to do,” says YC partner Tim Brady in a supporting statement. 

But the flip side of attracting attention and spotlighting a commercial opportunity means Muzmatch now faces increased competition — such as from the likes of Dubai-based Veil: A rival matchmaking app that has recently turned heads with a “digital veil” feature that applies an opaque filter to all profile photos, male and female, until a mutual match is made.

Muzmatch also lets users hide their photos, if they choose. But it has resisted imposing a one-size-fits-all template on the user experience — exactly in order that it can appeal more broadly, regardless of the user’s level of religious adherence (it has even attracted non-Muslim users with a genuine interest in meeting a life partner).

Younas says he’s not worried about fresh faces entering the same matchmaking app space — couching it as a validation of the market.

He’s also dismissive of gimmicky startups that can often pass through the dating space, usually on a fast burn to nowhere… though he is expecting more competition from major players, such as Tinder-owner Match, which he notes has been eyeing up some of the same geographical markets.

“We know there’s going to be attention in this area,” he says. “Our goal is to basically continue to be the dominant player but for us to race ahead in terms of the quality of our product offering and obviously our size. That’s the goal. Having this investment definitely gives us that ammo to really go for it. But by the same token I’d never want us to be that silly startup that just burns a tonne of money and ends up nowhere.”

“It’s a very complex population, it’s very diverse in terms of culture, in terms of tradition,” he adds of the target market. “We so far have successfully been able to navigate that — of creating a product that does, to the user, marries technology with respecting the faith.”

Feature development is now front of mind for Muzmatch as it moves into the next phase of growth, and as — Younas hopes — it has more time to focus on finessing what its product offers, having bagged investment by proving product market fit and showing traction.

“The first thing that we’re going to be doing is an actual refreshing of our brand,” he says. “A bit of a rebrand, keeping the same name, a bit of a refresh of our brand, tidying that up. Actually refreshing the app, top to bottom. Part of that is looking at changes that have happened in the — call it — ‘dating space’. Because what we’ve always tried to do is look at the good that’s happening, get rid of the bad stuff, and try and package it and make it applicable to a Muslim audience.

“I think that’s what we’ve done really well. And I always wanted to innovate on that — so we’ve got a bunch of ideas around a complete refresh of the app.”

Video is one area they’re experimenting with for future features. TechCrunch’s interview with Younas takes place via a video chat using what looks to be its own videoconferencing platform (correction: it was using a video room powered by appear.in), though there’s not currently a feature in Muzmatch that lets users chat remotely via video.

Its challenge on this front will be implementing richer comms features in a way that a diverse community of religious users can accept.

“I want to — and we have this firmly on our roadmap, and I hope that it’s within six months — be introducing or bringing ways to connect people on our platform that they’ve never been able to do before. That’s going to be key. Elements of video is going to be really interesting,” says Younas, teasing their thinking around video.

“The key for us is how do we do [videochat] in a way that is sensible and equally gives both sides control. That’s the key.”

Nor will it just be “simple video.” He says they’re also looking at how they can use profile data more creatively, especially for helping more private users connect around shared personality traits.

“There’s a lot of things we want to do within the app of really showing the richness of our profiles. One thing that we have that other apps don’t have are profiles that are really rich. So we have about 22 different data points on the profile. There’s a lot that people do and want to share. So the goal for us is how do we really try and show that off?

“We have a segment of profiles where the photos are private, right, people want that anonymity… so the goal for us is then saying how can we really show your personality, what you’re about in a really good way. And right now I would argue we don’t quite do it well enough. We’ve got a tonne of ideas and part of the rebrand and the refresh will be really emphasizing and helping that segment of society who do want to be private but equally want people to understand what they’re about.”

Where does he want the business to be in 12 months’ time? With a more polished product and “a lot of key features in the way of connecting the community around marriage — or just community in general.”

In terms of growth, the aim is at least 4x from where they are now.

“These are ambitious targets. Especially given the amount that we want to re-engineer and rebuild but now is the time,” he adds. “Now we have the fortune of having a big team, of having the investment. And really focusing and finessing our product… Really give it a lot of love and really give it a lot of the things we’ve always wanted to do and never quite had the time to do. That’s the key.

“I’m personally super excited about some of the stuff coming up because it’s a big enabler — growing the team and having the ability to really execute on this a lot faster.”



source https://techcrunch.com/2019/07/26/muzmatch-adds-7m-to-swipe-right-on-muslim-majority-markets/