What is the worst kind of broken link?

Broken internal links that stop crawlers from finding pages on your site are generally worse than outbound dead links, which are all but inevitable on large sites with plenty of content and external links. It is always worth finding and fixing broken outbound links, but it is not a critical issue unless the majority of them are broken. That said, it is probably one of the next best things to fix once critical issues are solved, as it shows you are keeping on top of your content; leaving dead links in place is a sign of editorial neglect.

We would usually bunch infinite redirect loops in with the worst kind of broken links, as they are very similar in effect, and arguably slightly worse.

When a redirect is in place and page A links to B, which redirects to C, it is not technically a broken link, as the chain still resolves, and updating the link to point directly to the end destination is a minor fix. Doing so improves page speed and maximises link equity transfer, but it is not as serious as a broken link. In fact, when people use URL shortening tools they are willingly creating exactly this kind of chain.
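
As a rough illustration of how broken internal and outbound links can be surfaced, here is a minimal crawler sketch in Python. The start URL is a placeholder and the use of the requests and BeautifulSoup libraries is an assumption, not part of the original answer.

    # Minimal sketch of a broken-link checker. Assumes the requests and
    # beautifulsoup4 packages are installed; the start URL is a placeholder.
    from urllib.parse import urljoin, urlparse
    import requests
    from bs4 import BeautifulSoup

    START_URL = "https://www.example.com/"   # hypothetical site
    MAX_PAGES = 200                          # keep the crawl small

    def crawl_for_broken_links(start_url, max_pages=MAX_PAGES):
        domain = urlparse(start_url).netloc
        to_visit, seen, broken = [start_url], set(), []
        while to_visit and len(seen) < max_pages:
            url = to_visit.pop()
            if url in seen:
                continue
            seen.add(url)
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                broken.append((url, "request failed"))
                continue
            if resp.status_code >= 400:
                broken.append((url, resp.status_code))
                continue
            # Only parse and follow links on pages within the same domain.
            if urlparse(url).netloc != domain:
                continue
            soup = BeautifulSoup(resp.text, "html.parser")
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"])
                if link.startswith("http"):
                    to_visit.append(link)
        return broken

    if __name__ == "__main__":
        for url, problem in crawl_for_broken_links(START_URL):
            print(problem, url)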

Why should agencies examine domain ownership?

Gathering information that verifies domain ownership, such as the registrant name (the official owner of the site) and the registrar's address details, is important because, if there is ever a dispute, these details will be critical in proving ownership.

A domain privacy service could in theory hijack the domain by claiming to be the real owner, so it is important to check whether it is a trustworthy organisation, for example whether it is operated by an ICANN-accredited registrar. This is a useful check, as it guarantees a certain level of accountability. Any ownership dispute must follow ICANN procedure.

It is worth noting the expiry date, not just for security but because Google is thought to place slightly more trust in domains registered for longer periods; spammers tend to register for a single year, expecting their domain to be banned, to save money. There has been evidence suggesting registration length is one of many factors considered when ranking a domain, so it is worth including on any audit report even if you do not personally take steps to extend the registration period.
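
As an illustration of how these registration details might be pulled into an audit report, here is a rough Python sketch that shells out to the system whois command. The domain is a placeholder and the presence of a whois binary on the machine is an assumption.

    # Rough sketch: pull registrant, registrar and expiry details for an audit.
    # Assumes a system `whois` command is available; the domain is a placeholder.
    import subprocess

    DOMAIN = "example.com"  # hypothetical domain being audited
    FIELDS = ("Registrant", "Registrar", "Expiry", "Expiration")

    def whois_summary(domain):
        raw = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
        # Keep only the lines that mention ownership or expiry details.
        return [line.strip() for line in raw.splitlines()
                if any(field.lower() in line.lower() for field in FIELDS)]

    if __name__ == "__main__":
        for line in whois_summary(DOMAIN):
            print(line)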

How to test server uptime?

Testing server uptime can be done, in theory, over a period of around a week by attempting to load the site every few seconds from various parts of the world. There are services that do this for you, but it can also be arranged by signing up with a few different hosts and writing a script yourself, which is well worth the effort: the PageSpeed details that people normally focus on in technical SEO reports pale in comparison to server speed and outage issues, which Google's PageSpeed tool rightly flags as a priority when they occur.
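
A minimal uptime-polling sketch in Python, assuming the requests library and a placeholder URL; a real setup would run copies of this from servers in several regions and log the results somewhere central.

    # Minimal uptime poller: request the site at a fixed interval and log
    # failures and response times. Assumes the requests library; the URL,
    # interval and duration are placeholders.
    import time
    import requests

    URL = "https://www.example.com/"   # hypothetical site being monitored
    INTERVAL_SECONDS = 30
    DURATION_SECONDS = 7 * 24 * 3600   # roughly one week

    def poll(url, interval, duration):
        end = time.time() + duration
        while time.time() < end:
            started = time.time()
            try:
                resp = requests.get(url, timeout=10)
                elapsed = time.time() - started
                print(f"{time.ctime()}  {resp.status_code}  {elapsed:.2f}s")
            except requests.RequestException as exc:
                print(f"{time.ctime()}  DOWN  {exc}")
            time.sleep(interval)

    if __name__ == "__main__":
        poll(URL, INTERVAL_SECONDS, DURATION_SECONDS)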

Factors such as whether the site is up and how fast it responds, in seconds, will vary by location around the world. Generally, a UK-based host is faster for UK and European customers than a US host. Content delivery networks attempt to solve this, but they can go offline themselves and their quality varies between regions. They can also dilute asset attribution: for instance, a picture served from cdn.xyz is less useful for building the authority of blaa.com, even though blaa.com may own the picture, if it uses cdn.xyz to serve images without masking the domain properly.

If you find a site with three practically identical pages, why is this a problem and what is the solution?

If a site has three practically identical pages, crawlers will have difficulty identifying the canonical version of the page, which can prevent the correct version from ranking as highly as it should. The equity advantage of a single distinct page over a set of duplicated pages is that people will all be linking to the same URL rather than the link equity being spread across many. Once the issue has been corrected, ten links would point to one URL instead of one link to each of ten copies, consolidating the link equity into a single, far stronger page. Because the content is more unique, it also carries less of a spam signal and a higher content quality score. Note that if the pages were somewhat more distinct but still too similar for comfort, 'keyword cannibalisation' becomes the issue instead.

Solution

One potential solution is to use the rel="canonical" tag to identify the correct page. This is the right approach if separate pages are genuinely needed for site functionality or personalisation, but often the duplication exists in error, because some versions are older or the site code is messy, in which case the best solution (where possible) is to pick a canonical (preferred) version and 301 redirect the others to it. Canonical tags consolidate equity well but not perfectly, whereas 301 redirects are more definitive and universally respected.
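
As a rough illustration, the sketch below checks which canonical URL each page in a suspected duplicate set declares; the URLs and the use of the requests library are assumptions for the example.

    # Rough sketch: report the rel="canonical" URL declared by each page in a
    # suspected duplicate set. Assumes the requests library; URLs are placeholders.
    import re
    import requests

    DUPLICATE_SET = [                      # hypothetical near-identical pages
        "https://www.example.com/red-widgets",
        "https://www.example.com/red-widgets?ref=home",
        "https://www.example.com/products/red-widgets",
    ]

    # A simplistic pattern; real pages may order the attributes differently.
    CANONICAL_RE = re.compile(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']', re.I)

    for url in DUPLICATE_SET:
        html = requests.get(url, timeout=10).text
        match = CANONICAL_RE.search(html)
        print(url, "->", match.group(1) if match else "no canonical declared")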

What is the problem with A>B>C redirects?

The problem with redirecting page A to B to C is that you are slowing crawlers down and giving users a poor experience by taking them through several hops to reach the actual content. The way to solve it is to redirect page A directly to C (keeping B>C in place as before). If the link from A to B is one you have editorial control over, you should also update it to point straight to C.
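
A small sketch of how a redirect chain can be traced so that A can be repointed at the final destination; the start URL and the requests library are assumptions.

    # Small sketch: follow a redirect chain hop by hop and report the final
    # destination that page A should point to directly. Assumes the requests
    # library; the start URL is a placeholder.
    import requests
    from requests.compat import urljoin

    START = "https://www.example.com/page-a"   # hypothetical redirected URL

    def trace_chain(url, max_hops=10):
        hops = [url]
        for _ in range(max_hops):
            resp = requests.get(url, allow_redirects=False, timeout=10)
            if resp.status_code in (301, 302, 303, 307, 308):
                url = urljoin(url, resp.headers["Location"])
                hops.append(url)
                if hops.count(url) > 1:    # infinite loop detected
                    break
            else:
                break
        return hops

    if __name__ == "__main__":
        chain = trace_chain(START)
        print(" -> ".join(chain))
        print("Point the original link directly at:", chain[-1])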

What are the most common technical SEO mistakes?

Broken redirects

Broken redirects are certainly a major SEO mistake. They range from unnecessary hops from one URL to another to another, which slow the final page load but do not cause a major problem, right through to infinite redirect loops, which completely break the user experience and can get sites de-indexed entirely, as if banned.

Duplicate content

Duplicate content is definitely a common one, and while its technical aspect is small, it is still something people often do not think about until they learn that Google penalises sites for it. It can get quite technical when considering what percentage of each page is duplicated versus unique content, how many consecutive words are repeated, and so on.

Broken links

Another very common technical SEO mistake is broken links, i.e. links that simply point to a non-existent location that may once have existed or may be a typo. Redirects often then kick in to send people who land on error pages to more useful pages, and this can get messy. Broken links return '404 – page not found' unless the server has bigger issues or redirects have kicked in to handle the error. There are other errors too, such as 500 (server error), which is common on messy, large ecommerce sites and is terrible for SEO. '403 – forbidden' is another common one, which means the server is refusing access to the content (note this is different from content merely blocked by robots.txt).
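
As an illustration, a quick sketch that reports the status code for a list of URLs so 404s, 403s and 500s can be spotted in bulk; the URLs and the requests library are assumptions.

    # Quick sketch: report the HTTP status code for a list of URLs so that
    # 404, 403 and 500 responses can be spotted in bulk. Assumes the requests
    # library; the URL list is a placeholder.
    import requests

    URLS = [
        "https://www.example.com/",
        "https://www.example.com/old-page",
        "https://www.example.com/admin/",
    ]

    for url in URLS:
        try:
            resp = requests.get(url, timeout=10, allow_redirects=False)
            print(resp.status_code, url)
        except requests.RequestException as exc:
            print("FAILED", url, exc)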

Spam signals

Dangerous spam signals such as hidden text, especially hidden links to external sites, are about as bad an error as one can make, yet they are still very common. Tiny-text external links and low-contrast, hard-to-read links are almost as bad and are also widespread.

Failure to add pages to the menu

Failure to add pages to the menu or other internal linking structure is also a prevalent issue, and it can cause solid content not to rank at all. A similar problem is seen where the homepage is a splash page with a minimal design of its own: there is a large site behind it, but no link from the homepage to the inner pages.

Old site pages still available when new pages are live

Having an old site's pages still live when the new pages have launched is also a common problem. Old URLs should be dealt with, usually by redirecting them to their new equivalents, rather than left stale, because search engines still send traffic to those old pages.

Non-responsive layout

Here is a huge one: a non-responsive layout (i.e. not a mobile-friendly design). Google indexes sites based on how the mobile version loads and performs, so even if you have superb content that renders correctly on desktop, you may still lose rankings if your site loads poorly on mobile devices.
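
A crude indicator, not a full mobile-friendliness test: the sketch below checks whether pages declare a responsive viewport meta tag; the URLs and the requests library are assumptions.

    # Crude sketch: flag pages that do not declare a responsive viewport meta tag.
    # This is only one small indicator of mobile-friendliness, not a full test.
    # Assumes the requests library; URLs are placeholders.
    import re
    import requests

    URLS = ["https://www.example.com/", "https://www.example.com/products"]
    VIEWPORT_RE = re.compile(r'<meta[^>]+name=["\']viewport["\']', re.I)

    for url in URLS:
        html = requests.get(url, timeout=10).text
        status = "has viewport tag" if VIEWPORT_RE.search(html) else "NO viewport tag"
        print(status, "-", url)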

SEO time management

Another key technical SEO error is spending too much time on trivial technical issues when there is more rewarding work to be done, such as creating new content or promoting it.

Missing keywords

We cannot miss the classic: not getting the relevant keywords onto the page. The biggest mistake in SEO is having no site, or no crawlable site; the next is having no keywords on your pages. This kind of top-level 'biggest mistake, next biggest mistake' ranking is rarely spelled out; SEO is a complex, messy industry that few people take the time to organise to that level.

Are meta descriptions outdated or worthwhile?

Meta descriptions are certainly still worthwhile. Their main purpose is to boost click-through rate (CTR) in search results, and they can be effective at this. They can also be used as a branding tool, getting certain messages in front of people who see the result but do not click, and they can assist with legal compliance issues. They may also have a small influence on rankings, but CTR is the main reason to write them.
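
For illustration, a small sketch that extracts each page's meta description and flags missing or over-long ones; the length threshold, URLs and use of the requests library are assumptions.

    # Small sketch: extract each page's meta description and flag missing or
    # over-long ones. Assumes the requests library; the URLs and the rough
    # length threshold are placeholders.
    import re
    import requests

    URLS = ["https://www.example.com/", "https://www.example.com/services"]
    # A simplistic pattern; real pages may order the attributes differently.
    DESC_RE = re.compile(
        r'<meta[^>]+name=["\']description["\'][^>]+content=["\']([^"\']*)["\']', re.I)
    MAX_LENGTH = 160   # rough character budget, an assumption

    for url in URLS:
        html = requests.get(url, timeout=10).text
        match = DESC_RE.search(html)
        if not match:
            print("MISSING description -", url)
        elif len(match.group(1)) > MAX_LENGTH:
            print(f"TOO LONG ({len(match.group(1))} chars) -", url)
        else:
            print("OK -", url)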

Is buying links outdated or worthwhile?

For a 100% white-hat campaign, buying links is never recommended, as Google can pick up on it and penalise the site accordingly. Most agencies do buy links in some form, though; they know it is strictly against Google's rules, so instead of paying cash they usually offer something of value and avoid explicitly asking for a dofollow link, even though that is the goal. They play a numbers game, knowing that if they give free products to bloggers to try and review, the bloggers will usually place a good link in the review. Some simply offer to write for the blog and slip links in at their own editorial discretion, presenting them as a useful part of the article.

Ultimately, though, if you were offered a link from adobe.com's homepage, with all its link equity, at a cost of $1m for one year, and you were confident enough in your SEO to know that the link would make your site popular enough to earn many millions in that first year, it is really a no-brainer, so long as the deal is confidential and the link does not point to a domain that must be protected at all costs. As long as link equity has a value, there will be a market for trading it, and people who can earn a good profit from it.

Owned media is far more reliable than 'rented' media, though, and paying for a link on someone else's site is renting. Acquiring that site and linking from it to your own is therefore a better strategy if the numbers add up, provided you keep the two sites independently registered, hosted, link-profiled and so on.

Are XML sitemaps outdated or worthwhile?

XML sitemaps are still one of the main features in Google Search Console; however, their true value is hotly debated in the SEO industry. They are critical for the 1% of sites whose main pages are inaccessible to crawlers, but of little to no value to sites that are easily crawled naturally. This is a contentious view, though, and opinions differ from one SEO professional to another.

That applies only to basic sitemaps, which are simply lists of URLs with a last-modified date. There are plenty of advanced features for special metadata that are sometimes worth using, though, if you have complex content formats. Another point to consider is that the vast majority of people who include a last-modified date auto-generate it incorrectly, so they are feeding Google false information and would be better off leaving it out. Google knows this and takes sitemap data with a huge pinch of salt.
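
As a simple illustration of the basic format, here is a sketch that writes a minimal XML sitemap from a list of URLs, using only the standard library; the URLs and last-modified dates are placeholders.

    # Simple sketch: write a minimal XML sitemap (just URLs plus a last-modified
    # date) using only the standard library. The URLs and dates are placeholders.
    import xml.etree.ElementTree as ET

    PAGES = [  # hypothetical (URL, last-modified) pairs
        ("https://www.example.com/", "2024-01-15"),
        ("https://www.example.com/about", "2023-11-02"),
    ]

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=NS)
    for loc, lastmod in PAGES:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)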

Is PageRank sculpting outdated or worthwhile?

PageRank is essentially generic authority or ranking power. It is gradually declining in significance as bounce-rate analysis improves, but links still have a significant impact on rankings, and that impact is mostly due to PageRank. It is also partly due to the transfer of keyword relevance (which is itself weighted by PageRank).

Something to note is that people in the white-hat scene, especially big brands with a reputation to consider, usually snarl at the idea of PageRank sculpting, because Google advises focusing on users rather than robots and it is something of a grey-hat tactic. But Google says a lot of things to deter non-white-hat SEO, and that does not mean only white-hat approaches work; Google even includes a PageRank-sculpting feature within its heavily promoted sitemaps technology. The idea of being user-focused rather than robot-focused is a good one for white-hat sites, but the ultimate approach is to treat users and robots as one, accounting for crawler flow in the same way we account for user flow around the site. If content is important, both crawlers and humans should reach it quickly; if it is trivial, the back pages are a good place for it.

What skills and traits make a good technical SEO expert?

A good technical SEO expert relies on a range of skills and personal traits to identify areas for improvement in site optimisation. The main discipline people demand in a dedicated technical SEO role, besides a general understanding of best practice, is a solid grasp of coding. This mainly means frontend technologies (HTML, CSS, JavaScript), but server-side languages are also impressive when applying for technical SEO positions, and familiarity with a variety of CMSs (WordPress, Joomla, Magento, etc.) is extremely useful for agency work.

What is the purpose of technical SEO?

Technical SEO is often poorly defined and can refer to several disciplines, so it can be helpful to break it down into “onsite technical SEO” and “offsite technical SEO”.

Onsite technical SEO

The first purpose of onsite technical SEO is to permit basic access to your content (not blocking it outright) so that search crawlers can add your pages to their index. The second purpose, which is often overlooked, is to make that content easier and more rewarding for search crawlers to access. This can be done in a number of ways, such as reducing file sizes, improving the content-to-code ratio, organising inter-page relationships more efficiently, and so on.
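
As a basic illustration of the access side of this, the sketch below uses Python's standard robots.txt parser to confirm that key URLs are not blocked for a given crawler; the site, paths and user agent string are placeholders.

    # Basic sketch: confirm that key URLs are not blocked by robots.txt for a
    # given crawler, using only the standard library. The site, paths and
    # user agent string are placeholders.
    from urllib.robotparser import RobotFileParser

    SITE = "https://www.example.com"
    KEY_PATHS = ["/", "/products/", "/blog/important-article"]
    USER_AGENT = "Googlebot"

    parser = RobotFileParser()
    parser.set_url(SITE + "/robots.txt")
    parser.read()

    for path in KEY_PATHS:
        allowed = parser.can_fetch(USER_AGENT, SITE + path)
        print("allowed" if allowed else "BLOCKED", "-", SITE + path)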

Offsite technical SEO

Offsite technical SEO is largely concerned with link detox: identifying toxic links pointing at your site and either removing them or disavowing them (asking Google to ignore them). These links can range from unnatural links, for instance purchased links, to backlinks from spam or malware sites.
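
As a small illustration, the sketch below writes a disavow file in the plain-text format Google's disavow tool accepts (one URL or domain: entry per line, # for comments); the toxic domains and URLs listed are placeholders.

    # Small sketch: write a disavow file in the plain-text format accepted by
    # Google's disavow tool (one URL or "domain:" entry per line, "#" comments).
    # The toxic domains and URLs below are placeholders.
    TOXIC_DOMAINS = ["spam-directory.example", "malware-widgets.example"]
    TOXIC_URLS = ["https://blog.example.net/paid-links-page"]

    lines = ["# Disavow file generated for an example audit"]
    lines += [f"domain:{domain}" for domain in TOXIC_DOMAINS]
    lines += TOXIC_URLS

    with open("disavow.txt", "w", encoding="utf-8") as handle:
        handle.write("\n".join(lines) + "\n")

    print("\n".join(lines))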