
Frustrating SEO problems


In the past year alone, the search industry has evolved at an unprecedented pace.

With every development, update, and expansion, however, comes a new set of problems and challenges that every SEO or webmaster must overcome.

In this article, I have highlighted a variety of issues that have recently come under discussion, as well as common SEO problems that webmasters face on a daily basis, including problems caused by search engines themselves and issues created by SEO strategies.

Google showing third-party product review sites for branded terms inside the knowledge panel

Sometimes Google will show users product reviews from third-party sites for branded terms.

This can be especially painful when a brand happens to be paying for PPC and a knowledge panel appears from another site during a search.

Matthew Howells-Barby, Director of Acquisition at HubSpot, pointed this issue out, as it is one that he has also encountered for one of HubSpot’s branded terms.

It doesn’t seem to follow any pattern, as I couldn’t replicate the same issue for other CRM software.


Really Google?

As we can see, when a user searches for “HubSpot CRM”, a knowledge panel from FinancesOnline.com is shown, which could lead the user away from the site they actually want, in this case HubSpot.

Competitors can also appear for your brand terms if they are bidding on them in paid search. You can read about this tactic, and why it might not always be the best idea, in this guide by Ad Espresso.

Sites suffering from keyword cannibalisation issues

This is a similar issue to the one mentioned above: if a site creates a lot of content that targets the same keywords, it can result in serious cannibalisation issues.

For instance, if you have supporting articles that discuss the products sold on your site, the end result could be that these pages rank higher than the actual product pages.

This can cause a series of issues, as your link hierarchy could weaken, your site traffic could be diluted, and you could lose sales.


In this example, for the keyword “build model portfolio”, a builder page and a guide page are ranking next to each other.

This can also be down to mixed intent, where Google cannot figure out the exact search intent and ends up displaying similar pages for a given query.

Furthermore, Google could deindex your product page if it finds that the content is too similar to the supporting article, and for some reason, it thinks the supporting page is more important.

As you can imagine, keyword cannibalisation is a serious issue, especially for e-commerce websites, but there are, thankfully, a variety of ways to solve it.

If you’re looking for a faster way to see your cannibalised keywords, you can use Sistrix’s built-in feature.


Sistrix has an awesome filter to check cannibalization

There are other ways too, and I have recently gone into great depth on solving keyword cannibalisation issues in this article for G2 Learning.

Searches offer zero-click results

It was reported in June that up to 49% of searches result in zero clicks, with organic clicks outnumbering paid clicks at a ratio of 12:1.

The research was carried out by marketing analytics company, Jumpshot, and the firm found that searches with zero clicks had risen over the last three years.

In the first quarter of 2019, the data showed that 48.96% of US searches in Google had resulted in zero clicks, representing a 12% increase from the first quarter of 2016.

Furthermore, 5.9% of all searches ended with users heading to other Google-owned websites, a figure that rose to 12% when only searches that resulted in a click were counted.

This should be concerning to all webmasters and SEOs, especially when we consider that 12% of clicks lead to another Google-affiliated property.

It is often said that Google is working its way to becoming the source of information, rather than the gateway to it, as we have seen with the travel sector.

After this research was published, there were some rather unpleasant tweets about the death of SEO, along with plenty of fearmongering.

The reality is that even though the number of zero-click results is increasing, I don’t feel it hurts businesses as much as people claim, especially when you remember that most of these searches are likely to be informational.

Don’t be afraid: SEO is not dead. But we have to remember that these changes are making SEO more difficult.

The only solution here would be to ensure that you offer accurate, compelling, and high-quality content to your users while considering technical SEO strategies, such as the implementation of structured data.

As you can now use structured data for FAQs and other elements of your site, you can help your content get featured in rich results, often referred to as “position zero” by those working in digital marketing.
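As a rough illustration, here is a minimal sketch of FAQPage structured data in JSON-LD; the question and answer text are placeholders, so swap in the real FAQs that appear on your page.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Is SEO dead?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "No, but zero-click searches and rich results mean the tactics have to change."
          }
        }
      ]
    }
    </script>

Once the markup is on the page, you can run it through the Rich Results Test to confirm it is eligible.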

Shrinking SEO landscape

This ties in perfectly with the issue mentioned above, as when users conduct searches within Google, they are no longer faced with merely organic and paid results.

Over the past few years, Google has begun implementing a vast range of rich results that are designed to give users the information that they need as quickly as possible. These include:

  • Rich answers
  • Rich snippet
  • Numbered list snippet
  • Bulleted list snippet
  • Table featured snippet
  • YouTube featured snippet
  • The knowledge graph
  • Image carousels
  • Video carousels
  • Local map packs
  • Top stories
[Image: a search results page dominated by rich results]

In the above example, you can clearly see that Google is showing various rich results for the query, so capturing the first spot in the organic SERPs means very little.

[Image: a meme that clearly reflects my sentiments]

Sites that don’t embrace technical SEO, or even paid search, will struggle to get the same reach or exposure that they would have attained as little as five years ago.

This is especially important as Google has also tested double and even triple featured results in the past.


Here is an example of a double featured snippet taking over a large amount of space.

Learn more about developing your search presence in this Google Search guide.

Knowledge panels designed like featured snippets

It was reported as far back as 2018 that some knowledge panels are beginning to look more like featured snippets.

This can be quite confusing, as to the untrained eye, it might give the impression that a knowledge panel is actually a featured snippet, or vice versa.

The issue here is that if the difference between the two becomes less distinct, it can create confusion for both users and webmasters over which panel is there to provide information and which is there to present a product or service.

Google no longer supports the noindex directive in robots.txt

From 1 September 2019, Google will no longer support the noindex directive in robots.txt.

The search engine said: “In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019.”

This means that webmasters must instead use alternative techniques, including:

  • Implementing noindex in robots meta tags (see the sketch below).
  • Using 404 and 410 HTTP status codes.
  • Using password protection to hide a page.
  • Implementing disallow rules in robots.txt.
  • Using the Search Console Remove URLs tool.

It’s worth noting that the last option will only remove the desired URL on a temporary basis.
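To make the first and fourth options more concrete, here is a minimal sketch; the /internal-reports/ path is purely a made-up example, and note that a robots.txt disallow blocks crawling rather than indexing, so a disallowed URL can still appear in results if it is linked from elsewhere.

    Robots meta tag, placed in the <head> of the page you want kept out of the index:

        <meta name="robots" content="noindex">

    Disallow rule in robots.txt:

        User-agent: *
        Disallow: /internal-reports/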

The Rich Results Test will replace the Structured Data Testing Tool

It was announced at Google I/O this year that the Rich Results Test is going to replace the Structured Data Testing Tool.

The latter is, of course, a brilliant tool for ensuring that you are implementing clean and accurate markup on your site so that it can feature in rich results.

There have, however, been several reports of inconsistencies between the two tools in how they highlight errors and warnings in submitted markup.

As Google Search Console is in the middle of a significant revamp, this risks a period where the ability to accurately evaluate structured data becomes clouded.

There’s no doubt that inconsistencies will improve over time, but for now, it is worth double and triple-checking markup before it is implemented.

Ads in Google Assistant

After testing the feature in February, Google confirmed in April that Google Assistant now provides answers to users in the form of ads.

Writing in The Keyword, Danielle Buckley, Google’s Senior Product Manager, said: “For some questions, the most helpful response might be showing you links to a variety of sources from across the web to learn more. In these cases, you’ll see the full set of search results from the web.”

She continued, saying that: “When relevant, these results may include the existing ads that you’d see on Search today.”

As Google Assistant grows in popularity (it is currently installed on over a billion devices), it is increasingly lucrative to get content featured in Google Assistant.

The issue, however, is that Google seems to be increasingly monetising its results, and it means that users might not be aware that the information they are receiving is from an advertisement.

Over time, it might also result in more advertisements appearing above organic answers as Google looks to increase its revenue.

Ads in Google My Business listing

It was reported recently that Google has started showing ads in Google My Business pages.

I think Google went a bit too far with this one. I’m surprised there has been no backlash from the SEO community, and in the long run this can hurt local businesses, as bigger brands will be able to buy their way into their competitors’ listings.

Spam in Google My Business

Google My Business is a brilliant way for local businesses to reach potential customers in their area.

Launched in 2014, the service is used by millions of companies and is now an integral part of online marketing strategies for small businesses.

In recent years, however, Google My Business has become increasingly susceptible to spam and even fraudulent activity.

A study carried out in February found that 46% of marketers often see spam in Google My Business. Furthermore, 45% of marketers said that the issue makes it harder for businesses to rank in local listings.

[Image: a Google My Business listing created using an exact-match trading name]

Here is an interesting example of GMB spam that I noticed recently. Unlike normal spam, where a company or individual creates a fake listing with an address for a commercial query, this business uses a different tactic.

By using a trading name, this company is able to create a GMB listing for a commercial query. You can’t blame them, since Google is allowing it to happen. I’ve reached out to Danny Sullivan (Google’s public liaison) about it.

A closer inspection reveals that this company is using an exact-match trading name to bypass the GMB rules.

If you spot spam in Maps or Google My Business, however, you can take action, as Google released a Business Redressal Complaint Form in February, where cases are reviewed in accordance with published guidelines.

For users who spot multiple cases, you also have the option to submit an Excel spreadsheet. Google also published an article in June 2019, explaining how the search engine combats fake businesses within Google Maps.

Emoji spamming in SERPs

Perhaps a little more light-hearted than most kinds of spam found in search, but emojis featured within title tags and meta descriptions have always received mixed reactions from digital marketers.

Some advocate that in the right time and place, emojis are powerful tools, while others believe that they are a little too childish to be taken seriously.

Emoji spamming is not new. Back in 2015, Google stopped emojis from showing up in search results. In recent times, however, Google has allowed emojis back into the SERPs, and people are already taking advantage of it.

When asked whether emojis conform to Google’s guidelines in 2016, John Mueller said: “So we generally try to filter those out from the snippets, but it is not the case that we would demote a web site or flag it as web spam if it had emojis or other symbols in the description.”

Earlier in the year, Mueller clarified that Google does not support emojis within search results, although they might be displayed if they are relevant to a query (for example, if a query asks about emojis within meta descriptions).

Emojis will be filtered out by Google, however, if they’re considered misleading, look too spammy, or are completely out of place.

However, there are still instances where emojis are misused in the SERPs.


Here is a classic example of multiple sites using emojis in their meta descriptions.
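For clarity, the emojis are simply dropped into the normal title and description markup; this made-up example is the sort of thing Google tends to filter out.

    <title>⭐⭐⭐⭐⭐ Best Pizza in London ✅ Order Now 🍕</title>
    <meta name="description" content="🔥 Hot deals 🔥 Free delivery 🚀 Rated #1 by customers ⭐">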

It’s also worth remembering that meta descriptions can change depending on the query entered by the user.

Google setting the canonicals automatically

It is often the case that a page can be reached through multiple URLs. If a webmaster does not identify the best URL for Google to use in search, the search engine will try to pick the best one itself.

Although this can be very useful, Google does not always choose the URL that you want it to use.

You can check which URL Google is using by inputting the page address into the URL Inspection tool within Search Console. Here, it will show you the canonical that Google has selected.

[Image: the Google-selected canonical shown in the URL Inspection tool]

There are, however, multiple ways that you can identify canonical URLs for Google to use, including:

  • Using the rel="canonical" link tag: This lets you map an infinite number of duplicate pages, although it can become complex to maintain on larger websites (see the sketch below).
  • Using a rel="canonical" HTTP header: Adding the header to your page response also lets you map an infinite number of duplicate pages, although again this can be harder on larger websites or on sites whose URLs change often.
  • Using the sitemap: You can identify the canonical URL in your site's sitemap, which is easy to do, although this method is a weaker signal than the rel="canonical" link tag.
  • Using a 301 redirect: This tells Googlebot that the redirect target is a better version of the URL, though it should only be used when retiring a duplicate page.
  • Using an AMP variant: If one of your variant pages is an AMP page, follow the AMP guidelines to indicate the canonical (non-AMP) page.

You can also read more about how to implement the above techniques in this Search Console Help guideline.
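As a quick sketch of the first two methods, with example.com URLs used purely as placeholders:

    Method 1, the rel="canonical" link tag in the <head> of each duplicate page:

        <link rel="canonical" href="https://www.example.com/products/blue-widget/">

    Method 2, rel="canonical" sent as an HTTP header in the page response:

        Link: <https://www.example.com/products/blue-widget/>; rel="canonical"

The header form is particularly handy for non-HTML files, such as PDFs, where there is no <head> to add a link tag to.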

Google picking its own meta description and ignoring yours

If a page contains a badly written meta description or one that is too wordy, Google can decide to ignore the one chosen by a webmaster. 

Other triggers include incorrect source code, an outdated Google cache, or suspected search-term manipulation.

Google can decide which snippet to show based on:

  • The existing meta description in the HTML source code.
  • The on-page copy.
  • The Open Directory Project (ODP) data.

Usually, however, the replacement snippet consists of text taken from the page itself, which is not always useful from a search perspective.

For example, the following snippet for an Amazon webpage appears to have been chosen by Google:

[Image: the snippet Google generated for an Amazon webpage]

As you can see, however, this offers little useful information for users, and such a description would probably not work well for any other website aside from Amazon.

Although it is hard to influence which meta description Google might choose, there are basic guidelines that you can abide by in the hopes that the search engine will use your preferred meta description.
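For reference, the description itself is just a tag in the page's <head>; the copy below is purely illustrative.

    <meta name="description" content="A rundown of the most frustrating SEO problems webmasters face, and what you can do about each of them.">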

[Image: the meta description set for my page]

That said, a meta description can still be affected by what search terms have been used by a user.

Here is an example. As you can see above, I have set my own meta description, but if I search for ‘outsource seo suganthan’, Google changes the snippet to better serve the user.

[Image: the different snippet Google shows for the query ‘outsource seo suganthan’]

PDFs getting crawled and indexed by Google

It might be news to some webmasters that Google can crawl and index a site’s PDF files, but the search engine has had the ability to do this for nearly 18 years.

Having your PDFs indexed by Google is not always bad; you might, for example, want documents such as detailed product manuals or restaurant menus to be found in search.

The problem, however, is when Google crawls and indexes PDF documents that you don’t want to be found through search engines.

For instance, if you have duplicate PDFs, or ones that contain similar content or information to your webpages, you would not want them competing in the search engine results. Furthermore, you don’t want any sensitive PDF files on your server to be indexed and show up in Google search.

You can prevent PDF documents from being indexed by adding an X-Robots-Tag: noindex header to the HTTP response that serves the file. Files that are already indexed will disappear from the SERPs within a few weeks. You can also remove PDFs quickly by using the URL Removal tool in Google Search Console.
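As a hedged example, assuming an Apache server with mod_headers enabled, a rule like the following in the site configuration or .htaccess would add that header to every PDF; if you run a different server, such as nginx, the equivalent directive will differ.

    <FilesMatch "\.pdf$">
      Header set X-Robots-Tag "noindex"
    </FilesMatch>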

It’s worth noting that Google will also use your PDF data in featured snippets when possible.

I hope you found this article helpful. If you have any constructive criticism or feedback, please leave a comment, and if you liked the article, give it a share. Thanks.

