In the past year alone, the search industry has evolved at an unprecedented pace.
With every development, update, and expansion, however, comes a new series of problems and challenges that every SEO or webmaster must overcome.
In this article, I have highlighted a variety of issues that have recently come under discussion, or that are simply common SEO issues webmasters face on a daily basis, including problems caused by search engines themselves and issues created through SEO strategies.
Google showing third-party product review sites for branded terms inside the knowledge panel
It can sometimes be the case that Google will show users product reviews from third-party sites for branded terms.
This can be especially painful when a brand is paying for PPC and a knowledge panel from another site appears during a search.
Matthew Howells-Barby, Director of Acquisition at HubSpot, pointed this issue out, as it is one that he has also encountered for one of HubSpot’s branded terms.
It doesn’t seem to follow any pattern; I couldn’t replicate the same issue for other CRM software.
As we can see, when a user searches for “HubSpot CRM”, a knowledge panel from FinancesOnline.com is shown, which could lead the user away from the site they actually require: in this case, HubSpot.
Competitors can also appear for your brand terms if they are bidding on them in paid search. You can read about this tactic, and why it might not always be the best idea, in this guide by Ad Espresso.
Sites suffering from keyword cannibalisation issues
This is largely similar to the issue mentioned above: if a site creates a lot of content that caters to the same keywords, it can result in serious cannibalisation issues.
For instance, if you have supporting articles that discuss the products sold on your site, the end result could be that these pages rank higher than the actual product pages.
This can cause a series of issues, as your link hierarchy could weaken, your site traffic could be diluted, and you could lose sales.
In this example, for the keyword “build model portfolio”, a builder page and a guide page are ranking next to each other.
Mixed intent can also be a cause: when Google cannot figure out the exact search intent, it might display similar pages for a given query.
Furthermore, Google could deindex your product page if it finds that the content is too similar to the supporting article, and for some reason, it thinks the supporting page is more important.
As you can imagine, keyword cannibalisation is a serious issue, especially for e-commerce websites, but there are, thankfully, a variety of solutions for solving it.
If you’re looking for a faster way to see your cannibalised keywords, you can use Sistrix’s built-in feature.
Sistrix has an awesome filter to check cannibalisation.
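If you would rather check for cannibalisation yourself, here is a minimal sketch in Python. It assumes you have exported query-to-page data (for instance, from Search Console's performance report) as simple pairs; the example queries and paths are invented for illustration. Any query that more than one of your pages ranks for is flagged:

```python
from collections import defaultdict

def find_cannibalised_keywords(rows):
    """Given (query, page) pairs, return the queries for which
    more than one page on the site is ranking."""
    pages_per_query = defaultdict(set)
    for query, page in rows:
        pages_per_query[query].add(page)
    # Only queries served by two or more distinct pages are cannibalised
    return {q: sorted(p) for q, p in pages_per_query.items() if len(p) > 1}

rows = [
    ("build model portfolio", "/portfolio-builder"),
    ("build model portfolio", "/guides/model-portfolio"),
    ("crm software", "/crm"),
]
print(find_cannibalised_keywords(rows))
# {'build model portfolio': ['/guides/model-portfolio', '/portfolio-builder']}
```

This only tells you that two pages compete for a query; deciding which page should win, and whether to consolidate, redirect, or re-optimise the loser, is still a manual call.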
The rise of zero-click searches

After the above chart was published, there were some rather unpleasant tweets about the death of SEO, along with plenty of fearmongering.
The reality is that even though the number of zero-click results is increasing, I feel it doesn’t really hurt businesses that much; you have to remember, especially, that most of these results could be informational.
Don’t be afraid: SEO is not dead. But we do have to remember that these changes are making SEO more difficult.
The only solution here would be to ensure that you offer accurate, compelling, and high-quality content to your users while considering technical SEO strategies, such as the implementation of structured data.
As you can now use structured data in FAQs and other elements of your site, you can help your content get featured in rich results, often referred to as “position zero” by those working in digital marketing.
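As a sketch of what FAQ structured data looks like (the question and answer text here are invented for illustration), the markup is a block of FAQPage JSON-LD embedded in the page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is keyword cannibalisation?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "It occurs when multiple pages on a site compete for the same query."
    }
  }]
}
</script>
```

The questions and answers in the markup should match content that is actually visible on the page, and you can validate the block before publishing it.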
Shrinking SEO landscape
This ties in perfectly with the issue mentioned above, as when users conduct searches within Google, they are no longer faced with merely organic and paid results.
Over the past few years, Google has begun implementing a vast range of rich results that are designed to give users the information that they need as quickly as possible. These include:
Numbered list snippet
Bulleted list snippet
Table featured snippet
YouTube featured snippet
The knowledge graph
Local map packs
In the above example, you can clearly see that Google is showing various rich results for the query, so capturing the first spot in the organic SERPs means very little.
This meme clearly reflects my sentiments.
For sites that don’t embrace technical SEO, or even paid search, it means that they will struggle to get the same reach or exposure that they would have attained as little as five years ago.
It was reported as far back as 2018 that some knowledge panels are beginning to look more like featured snippets.
This can be quite confusing, as to the untrained eye, it might give the impression that a knowledge panel is a featured snippet, or vice versa.
The issue here is that if the difference between the two becomes less distinct, it can create confusion for both users and webmasters over what panel is being provided to offer information and what panel is being offered to present a product or service.
Google no longer supports the noindex directive in robots.txt
The search engine said: “In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019.”
Instead of using the directive, webmasters must therefore use alternative techniques, including:
Implementing noindex in robots meta tags.
Using 404 and 410 HTTP status codes.
Using password protection to hide a page.
Implementing disallow rules in robots.txt.
Using the Search Console Remove URL tool.
It’s worth noting that the latter option will only remove the desired URL on a temporary basis.
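As a sketch of the first of these alternatives, the noindex directive goes into a robots meta tag in the page’s head:

```html
<!-- In the <head> of any page you want kept out of Google's index -->
<meta name="robots" content="noindex">
```

For non-HTML resources, the same directive can be sent as an HTTP response header instead: `X-Robots-Tag: noindex`. Note that a robots.txt `Disallow` rule only blocks crawling, not indexing, so a disallowed page can still appear in results if it is linked to from elsewhere.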
The Rich Results Test will replace the Structured Data Testing Tool
The latter is, of course, a brilliant tool for ensuring that you are implementing clean and accurate markup on your site so that it can feature in rich results.
There have, however, been several reports that there are inconsistencies between the two tools regarding the highlighting of errors and warnings within inputted markup.
As Google Search Console is in the middle of a significant revamp, this risks a period where the ability to accurately evaluate structured data becomes clouded.
There’s no doubt that inconsistencies will improve over time, but for now, it is worth double and triple-checking markup before it is implemented.
Ads in Google Assistant
After testing the feature in February, Google confirmed in April that Assistant now provides some answers to users in the form of ads.
Writing in The Keyword, Danielle Buckley, Google’s Senior Product Manager, said: “For some questions, the most helpful response might be showing you links to a variety of sources from across the web to learn more. In these cases, you’ll see the full set of search results from the web.”
She continued, saying that: “When relevant, these results may include the existing ads that you’d see on Search today.”
As Google Assistant grows in popularity (it is currently installed on over a billion devices), it is increasingly lucrative to get content featured in Google Assistant.
The issue, however, is that Google seems to be increasingly monetising its results, and it means that users might not be aware that the information they are receiving is from an advertisement.
Over time it might also result in more advertisements appearing over organic answers as Google looks to increase its revenues.
Ads in Google My Business listing
It was reported recently that Google has started showing ads in Google My Business pages.
I think Google went a bit too far with this one. I’m surprised there has been no backlash from the SEO community, and in the long run this can hurt local businesses, as bigger brands will be able to buy their way into competitors’ listings.
Spam in Google My Business
Google My Business is a brilliant way for local businesses to reach potential customers in their area.
Launched in 2014, the service is used by millions of companies and is now an integral part of online marketing strategies for small businesses.
A study carried out in February found that 46% of marketers often see spam in Google My Business. Furthermore, 45% of marketers said that the issue makes it harder for businesses to rank in local listings.
Here is an interesting piece of GMB spam that I noticed recently. Unlike normal spam, where a company or individual creates a fake listing with an address for a commercial query, this business uses a different tactic.
By using a trading name, this company is able to create a GMB listing for a commercial query. You can’t blame them, since Google is allowing it to happen. I’ve reached out to Danny Sullivan (Google’s public liaison) about the issue.
A closer inspection reveals this company is using an exact matching trading name to bypass the GMB rules.
If you spot multiple cases, you also have the option to submit an Excel spreadsheet. Google also published an article in June 2019 explaining how the search engine combats fake businesses within Google Maps.
Emoji spamming in SERPs
Perhaps a little more light-hearted than most kinds of spam found in search, but emojis featured within title tags and meta descriptions have always received mixed reactions from digital marketers.
Some advocate that in the right time and place, emojis are powerful tools, while others believe that they are a little too childish to be taken seriously.
Emoji spamming is not new. Back in 2015, Google stopped emojis from showing up in search results. In recent times, however, Google has allowed emojis back into the SERPs, and people are already taking advantage of it.
When asked whether emojis conform to Google’s guidelines in 2016, John Mueller said: “So we generally try to filter those out from the snippets, but it is not the case that we would demote a web site or flag it as web spam if it had emojis or other symbols in the description.”
Emojis will be filtered out by Google, however, if they’re considered misleading, look too spammy, or are completely out of place.
However, there are still instances where we can see emojis misused in the SERPs. Here is a classic example of multiple sites using emojis in their meta descriptions.
As mentioned earlier, it’s worth remembering that meta descriptions can change depending on the query inputted by the user.
Google setting the canonicals automatically
It is often the case that a page can be reached through multiple URLs. If a webmaster does not identify the best URL for Google to use in search, the search engine will try to identify the best one itself.
Although this can be very useful, it is not always the case that Google chooses the URL that you want to use.
You can check which URL Google is using by inputting the page address into the URL Inspection tool within Search Console. Here, it will show you the canonical that Google has selected.
There are, however, multiple ways that you can identify canonical URLs for Google to use, including:
Using the rel="canonical" link tag: By using this you can map an infinite number of duplicate pages, although this technique can be a complex method of mapping on larger websites.
Using a rel=canonical HTTP header: Using a rel=canonical header in your page response will let you map an infinite number of duplicate pages, although again, this can be harder with larger websites or on sites that change URLs often.
Using the sitemap: You can identify the canonical URL in your site’s sitemap, which is easy to do, although this method is not quite as powerful as using the rel=canonical link tag.
Using a 301 redirect: This will tell Googlebot that a URL is a better version than a given URL, though this should only be used when deprecating a duplicate page.
Using an AMP variant: If one of your variant pages is an AMP page, follow the AMP guidelines to indicate the canonical page.
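The first two options above can be sketched as follows; the product URL here is a placeholder for illustration:

```html
<!-- In the <head> of each duplicate page, pointing at the preferred URL -->
<link rel="canonical" href="https://www.example.com/products/widget">
```

The HTTP header equivalent, which is useful for non-HTML files such as PDFs, looks like `Link: <https://www.example.com/products/widget>; rel="canonical"` in the page response.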
Google indexing unwanted PDF documents

Having your PDFs indexed by Google is not always bad; you might, for example, want to host complex and extensive instruction manuals for your products, or documents like user manuals and restaurant menus.
The problem, however, is when Google crawls and indexes PDF documents that you don’t want to be found through search engines.
For instance, if you have duplicate PDFs or ones that contain similar content or information to your webpages, you would not want them competing in the search engine results. Furthermore, you don’t want any sensitive PDF files in your server to be indexed and show up in Google search.
You can prevent PDF documents from being indexed by adding an X-Robots-Tag: noindex header to the HTTP response that serves the file. Files that are already indexed will disappear from the SERPs within a few weeks. You can remove PDFs more quickly by using the URL Removal tool in Google Search Console.
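The header has to be set at the server level. As a minimal sketch on an Apache server with mod_headers enabled (nginx has an equivalent `add_header` directive), you could attach the header to every PDF like this:

```apacheconf
# Serve all PDF files with a noindex directive
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```

You can verify the header is being sent by fetching a PDF and inspecting the response headers before relying on it.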
It’s worth noting that Google will also use your PDF data in featured snippets when possible.
I hope you found this article helpful. If you have any constructive criticism or feedback, please leave a comment, and if you liked the article, give it a share. Thanks.