Posted by Simon Penson
Preface
This post serves a dual purpose: it's a practical guide to the realities of preparing for voice right now, but equally it's a rallying call to ensure our industry has a full understanding of just how big, disruptive, and transformational it will be — and that, as a result, we need to stand ready.
My view is that voice is not just an add-on, but an entirely new way of interacting with the machines that add value to our lives. It is the next big era of computing.
Brands and agencies alike need to be at the forefront of that revolution. For my part, that begins with investing in the creation of a voice team.
Let me explain just how we plan to do that, and why it’s being actioned earlier than many will think necessary…
Jump to a section:
Why is voice so important?
When is it coming in a big way?
Who are the big players?
Where do voice assistants get their data from?
How do I shape my strategy and tactics to get involved?
What skill sets do I need in a "voice team?"
Introduction
"The times, they are a-changing."
– Bob Dylan
Back in 1964, that revered folk-and-blues singer could never have imagined just what that would mean in the 21st century.
As we head into 2018, we're nearing a voice interface-inspired inflection point the likes of which we haven't seen before. And if the world’s most respected futurist is to be believed, it’s only just beginning.
Talk to Ray Kurzweil, a Director of Engineering at Google and the man Bill Gates says is the "best person to predict the future," and he’ll tell you that we are entering a period of huge technological change.
For those working across search and many other areas of digital marketing, change is not uncommon. Seismic events, such as the initial rollouts of Panda and Penguin, reminded those inside the industry just how painful it is to be unprepared for the future.
At best, it tips everything upside down. At worst, it kills those agencies or businesses stuck behind the curve.
It’s for exactly this reason that I felt compelled to write a post all about why I'm building a voice team at Zazzle Media, the agency I founded here in the UK, as stats from BrightEdge reveal that 62% of marketers still have no plans whatsoever to prepare for the coming age of voice.
I’m also here to argue that while the growth traditional search agencies saw through the early 2000s is over, similar levels of expansion are up for grabs again for those able to seamlessly integrate voice strategies into an offering focused on the client or customer.
Winter is coming!
Based on our current understanding of technological progress, it's easy to rest on our laurels. Voice interface adoption is still in its very early stages, and Moore’s Law gave technological advancement a (relatively) predictable trajectory, giving us time to take our positions. But that era is now behind us.
According to Kurzweil’s thesis on the growth of technology (the Law of Accelerating Returns),
"we won’t experience 100 years of progress in the 21st century – it will be more like 20,000 years."
Put another way, he explains that technology does not progress in a linear way. Instead, it progresses exponentially.
"30 steps linearly get you to 30. One, two, three, four, step 30 you're at 30. With exponential growth, it's one, two, four, eight. Step 30, you're at a billion," he explained in a recent Financial Times interview.
In other words, we're going to see new tech landing and gaining traction faster than we ever thought possible, as this chart proves:
Above, Kurzweil illustrates how we’ll be able to produce computational power as powerful as a human brain by 2023. By 2037 we’ll be able to do it for less than one cent. Just 15 years later, computers will be more powerful than the entire human race combined. Powerful stuff, and proof of the need for action as voice and the wider AI paradigm take hold.
Voice
So, what does that mean right now? While many believe voice is still a long way off, one point of view says it's already here — and those fast enough to grab the opportunity will grow exponentially with it. Indeed, Google itself says more than 20% of all searches are already voice-led, and will reach 50% by 2020.
Let’s first deal with understanding the processes required before then moving onto the expertise to make it happen.
What do we need to know?
We’ll start with some assumptions. If you are reading this post, you already have a good understanding of the basics of voice technology. Competitors are joining the race every day, but right now the key players are:
- Microsoft Cortana – Available on Windows, iOS, and Android.
- Amazon Alexa – Voice-activated assistant that lives on Amazon audio gear (Echo, Echo Dot, Tap) and Fire TV.
- Google Assistant – Google’s voice assistant powers Google Home as well as sitting across its mobile and voice search capabilities.
- Apple Siri – Native voice assistant for all Apple products.
And major assistants coming soon:
- Samsung Bixby – Native voice assistant for Samsung products.
- (Yet to be named) Facebook assistant – They already have M for Messenger, and Mark Zuckerberg is personally testing "Jarvis AI" in his home.
All of these exist to allow consumers the ability to retrieve information without having to touch a screen or type anything.
That has major ramifications for those who rely on traditional typed search and a plethora of other arenas, such as the fast-growing Internet of Things (IoT).
In short, voice allows us to access everything from our personal diaries and shopping lists to answers to our latest questions and even to switch our lights off.
Why now?
Apart from the tidal wave of tech now supporting voice, there is another key reason for investing in voice now — and it's all to do with the pace at which voice is actually improving.
In a recent Internet usage study by KPCB, Andrew Ng, chief scientist at Chinese search engine Baidu, was asked what it was going to take to push voice out of the shadows and into its place as the primary interface for computing.
His point was that at present, voice is "only 90% accurate" and therefore the results are sometimes a little disappointing. This slows uptake.
But he sees that changing soon, explaining that "As speech recognition accuracy goes from, say, 95% to 99%, all of us in the room will go from barely using it today to using it all the time. Most people underestimate the difference between 95% and 99% accuracy — 99% is a game changer..."
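A quick back-of-the-envelope calculation shows why those last few percentage points matter so much. Assuming (simplistically) that per-word accuracy is independent across a spoken query:

```python
# Why 99% per-word accuracy is a game changer: the chance of a whole
# query being transcribed with zero errors rises sharply with each
# extra point of accuracy. (Assumes per-word errors are independent,
# which is a simplification.)
for per_word_accuracy in (0.90, 0.95, 0.99):
    error_free = per_word_accuracy ** 10  # a 10-word spoken query
    print(f"{per_word_accuracy:.0%} per word -> "
          f"{error_free:.0%} chance the full query is error-free")
```

At 95% per-word accuracy, only around 60% of 10-word queries come through flawlessly; at 99%, that jumps to roughly 90%, which goes a long way toward explaining Ng's point about everyday usability.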
When will that happen? In the chart below we see Google’s view on this question, predicting we will be there in 2018!
Is this the end for search?
It is also important to point out that voice is an additional interface and will not replace any of those that have gone before it. We only need to look back at history to see how print, radio, and TV continue to play a part in our lives alongside the latest information interfaces.
Moz founder Rand Fishkin made this point in a recent Whiteboard Friday, explaining that while voice search volumes may well overtake typed terms, demand for traditional SERP and typed results will continue to grow too, simply because overall search usage keeps growing.
The key will be creating a channel strategy as well as a method for researching both voice and typed opportunity as part of your overall process.
What’s different?
The key difference when considering voice opportunity is to think about the conversational nature that the interface allows. For years we've been used to having to type more succinctly in order to get answers quickly, but voice does away with that requirement.
Instead, we are presented with an opportunity to ask, find, and discover the things we want and need using natural language.
This means that we will naturally lengthen the phrases we use to find the stuff we want — and early studies support this assumption.
In a study by Microsoft and covered by the brilliant Purna Virji in this Moz post from last year, we can see a clear distinction between typed and voice search phrase length, even at this early stage of conversational search. Expect this to grow as we get used to interacting with voice.
The evidence suggests that will happen fast too. Google’s own data shows us that 55% of teens and 40% of adults use voice search daily. Below is what they use it for:
While it is easy to believe that voice only extends to search, it's important to remember that the opportunity is actually much wider. Below we can see results from a major 2016 Internet usage study into how voice is being used:
Clearly, the lion's share is related to search and information retrieval, with more than 50% of actions relating to finding something local to go/see/do (usually on mobile) or using voice as an interface to search.
But an area sure to grow is the leisure/entertainment sector. More on that later.
The key question remains: How exactly do you tap into this growing demand? How do you become the choice answer above all those you compete with?
With such a vast array of devices, the answer is a multi-faceted one.
Where is the data coming from?
To answer the questions above, we must first understand where the information is being accessed from and the answer, predictably, is not a simple one. Understanding it, however, is critical if you are to build a world-class voice marketing strategy.
To make life a little easier, I’ve created an at-a-glance cheat sheet to guide you through the process.
In it, you'll find an easy-to-follow table explaining where each of the major voice assistants (Siri, Cortana, Google Assistant, and Alexa) retrieve their data from so you can devise a plan to cover them all.
The key takeaway from that research? Interestingly, Bing has every opportunity to steal a big chunk of market share from Google and, at least at present, is the key search engine to optimize for if voice "visibility" is the objective.
Bing is more important now.
Of all the Big Four in voice, three (Cortana, Siri, and Alexa) default to Bing search for general information retrieval. Given that Facebook (also a former Bing search partner) is also joining the fray, Google could soon find itself in a place it's not entirely used to being: alone.
Now, the search giant usually finds a way to pull back market share, but for now a marketer’s focus should be on Microsoft’s search engine, with Google as a secondary player.
Irrespective of which engine you prioritize, there are two key areas to focus on: featured snippets and local listings.
Featured snippets
The search world has been awash with posts and talks on this area of optimization over recent months as Google continues to push ahead with the roll out of the feature-rich SERP real estate.
For those that don’t know what a "snippet" is, there’s an example below, shown for a search for "how do I get to sleep":
Not only is this incredibly valuable traditional search real estate (as I’ve discussed in an earlier blog post), but it's a huge asset in the fight for voice visibility.
Initial research by experts such as Dr. Pete Meyers tells us, clearly, that Google Assistant is pulling its answers from snippet content for anything with any level of complexity.
Simple answers — such as those for searches about sports results, the weather, and so forth — are answered directly. But for those that require expertise it defaults to site content, explaining where that information came from.
At present, it's unclear how Google plans to help us understand and attribute these kinds of visits. But according to Google's Gary Illyes, voice-query reporting is imminent within Search Console.
Measurement will clearly be an important step in selling any voice strategy proposal upwards, and in providing individual site or brand evidence that the medium is growing and deserving of investment.
User intent and purchase
Such data will also help us understand how voice alters such things as the traditional conversion funnel and the propensity to purchase.
We know how important content is in the traditional user journey, but how will it differ in the voice world? There's sure to be a rewrite of many rules we've come to know well from the "typed Internet."
Applying some logic to the challenge, it's clear that there's a greater degree of value in searches showing some level of immediacy: people searching via home assistants or mobiles for the location of something, or for its opening times and dates.
Whereas with typed search we see greater value in simple phrases that we call "head terms," the world is much more complex in voice. Below we see a breakdown of words that will trigger searches in voice:
To better understand this, let’s examine a potential search "conversation."
If we take a product search example for, let’s say, buying a new lawn mower, the conversation could go a little like this:
[me] What’s the best rotary lawn mower for under £500?
[voice assistant] According to Lawn Mower Hut there are six choices [reads out choices]
Initially, voice will struggle to understand how to move to the next logical question, such as:
[voice assistant] Would you like a rotary or cylinder lawn mower?
Or, better still…
[voice assistant] Is your lawn perfectly flat?
[me] No.
[voice assistant] OK, then I’d suggest a rotary mower. You have two choices: the McCulloch M46-125WR or the BMC Lawn Racer.
In this scenario, our voice assistant has connected the dots and asks the next relevant question to help narrow the search in a natural way.
Natural language processing
Doing this, however, requires a step up in computer processing, a challenge being worked on as we speak in a bid to deliver the next level of voice search.
To solve the challenge requires the use of so-called Deep Neural Networks (DNNs), interconnected layers of processing units designed to mimic the neural networks in the brain.
DNNs can work with everything from speech and images to sequences of words and even location data, classifying those inputs into categories.
They rely on the input of truckloads of data to learn how best to bucket those things, and that data pile will grow exponentially as the adoption of voice accelerates.
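To make the idea of "classifying inputs into categories" concrete, here's a toy illustration only: a single-hidden-layer network in Python/NumPy that learns to bucket bag-of-words query vectors into invented intent categories. The vocabulary, labels, and training queries are all made up for this sketch; real assistants use vastly larger networks trained on enormous datasets.

```python
# Toy sketch: a tiny neural network that buckets bag-of-words query
# vectors into made-up intent categories ("local", "research",
# "purchase"). Illustrative only -- not how any production assistant
# is actually built.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["nearest", "shop", "review", "best", "cheapest", "price"]
LABELS = ["local", "research", "purchase"]

def vectorize(query):
    """Turn a query into a 0/1 bag-of-words vector over VOCAB."""
    words = set(query.lower().split())
    return np.array([1.0 if w in words else 0.0 for w in VOCAB])

# Tiny labelled training set: query -> intent category
train = [("nearest shop", "local"),
         ("best review", "research"),
         ("cheapest price", "purchase")]
X = np.stack([vectorize(q) for q, _ in train])
Y = np.eye(len(LABELS))[[LABELS.index(lbl) for _, lbl in train]]

W1 = rng.normal(0.0, 0.5, (len(VOCAB), 8))   # input -> hidden layer
W2 = rng.normal(0.0, 0.5, (8, len(LABELS)))  # hidden -> output layer

def forward(x):
    h = np.tanh(x @ W1)
    z = h @ W2
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return h, e / e.sum(axis=-1, keepdims=True)  # softmax probabilities

# Plain gradient descent on softmax cross-entropy loss
for _ in range(500):
    h, p = forward(X)
    dz = (p - Y) / len(X)            # output-layer gradient
    dh = (dz @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= h.T @ dz
    W1 -= X.T @ dh

def classify(query):
    _, p = forward(vectorize(query)[None, :])
    return LABELS[int(p.argmax())]

print(classify("nearest lawn mower shop"))
```

The point of the sketch is the shape of the process, not the scale: the network is only ever as good as the labelled data it's fed, which is why the data pile matters so much.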
What that will mean is that voice assistants can converse with us in the same way as a clued-up shop assistant, further negating the need for in-store visits in the future and a much more streamlined research process.
In this world, we start to paint a very different view of the "keywords" we should be targeting, with deeper and more exacting phrases winning the battle for eyeballs.
As a result, the long tail’s rise in prominence continues at pace, and data-driven content strategies really do move to the center of the marketing plan as the reward for creating really specific content increases.
We also see a greater emphasis placed on keywords that may not be on top of the priority list currently. If we continue to work through our examples, we can start to paint a picture of how this plays out…
In our lawnmower purchase example, we're at a stage where two options have been presented to us (the McCulloch and the BMC Racer). In a voice 1.0 scenario, where we have yet to see DNNs develop enough to know the next relevant question and answer, we might ask:
[me] Which has the best reviews?
And the answer may be tied to a third-party review conclusion, such as…
[voice assistant] According to Trustpilot, the McCulloch has a 4.5-star rating versus a 3.5-star rating for the BMC lawn mower.
Suddenly, third-party reviews become more valuable than ever, both as a conversion optimization opportunity and as part of a strategy that creates content to own the SERP for keyword phrases including "review" or "top rated."
And where would we naturally go from here? The options are either directly to conversion, via some kind of value-led search (think "cheapest McCulloch M46-125WR"), or to a location-based one ("nearest shop with a McCulloch M46-125WR") to allow a "test drive."
Keyword prioritization
This single journey gives us some insight into how the interface could shape our thinking on keyword prioritization and content creation.
Pieces that help a user either make a decision or perform an action around such trigger words and phrases will attract greater interest and traffic from voice. Based on the journey above, examples could include:
- "best" and "top rated"
- "review" and "reviews"
- "cheapest"
- "nearest"
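As a hedged illustration of how that prioritization might work in practice, here is a simple sketch that scores keywords from a research list by how many voice trigger words they contain. The trigger list and keyword list below are invented for the example, using words drawn from this post:

```python
# Crude voice-prioritization sketch: score each keyword phrase by how
# many trigger words it contains, then sort the list by that score.
# TRIGGERS and keywords are illustrative, drawn from examples in the post.
TRIGGERS = {"best", "top", "rated", "review", "reviews",
            "cheapest", "nearest", "how", "what", "where"}

def voice_score(keyword):
    """Count trigger words in a keyword phrase -- a rough priority signal."""
    return sum(1 for word in keyword.lower().split() if word in TRIGGERS)

keywords = [
    "rotary lawn mower",
    "best rotary lawn mower reviews",
    "nearest shop with a McCulloch M46-125WR",
]
for kw in sorted(keywords, key=voice_score, reverse=True):
    print(voice_score(kw), kw)
```

A real workflow would of course weight these signals alongside search volume and intent data, but even a rough score like this helps surface the conversational, long-tail phrases voice favors.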
source https://moz.com/blog/voice-strategy-guide