Designing for SEO: Friendly Design and Development

In this section of the guide, we focus on specific technical guidelines for SEO-friendly design: building new web pages, or modifying existing ones, so that they are structured for both search engines and users alike.


As we already mentioned, search engines are still limited in how they crawl the web and interpret the content.


SEO-friendly design is a skill we need to develop.


The way a web page appears to us is very different from how it looks to a search engine.


When planning your website, you should share this guide with your programmers, designers, or anyone involved in the site construction.


One of your primary objectives in web development is to make sure that your site does not look dated.

Make sure it provides the information and functionality needed to convert your visitors into paying customers.


Trends are continually evolving; you need to stay ahead of them to bring new users to your website.

Let’s start by looking at the importance of indexable content.

For your site to perform better in the search engine rankings, your most important content should be in HTML text format.

Java applets, Flash files, and other non-text content are often ignored or improperly indexed by search engine crawlers, despite the vast advances in crawling technology.

To make sure that all the content you display on your website is visible to search engines, you will need to place that content as HTML text somewhere on the page.

There are some more advanced methods available for those who need greater flexibility in formatting or visual display:

For example, you can provide alt text for images: assign images in GIF, JPEG, or PNG format an alt attribute in the HTML, giving the search engines a text description of the visual content.

You can also supplement search boxes with navigation and crawlable links.
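As a minimal sketch (the file name and alt text are hypothetical), a crawler-friendly image looks like this:

<img src="golden-retriever.jpg" alt="Golden retriever puppy playing in the garden">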

How your pages look to the search engines

Indexing content is a complex task, and significant problems can arise in the process, so you should double-check that everything on your website is correctly indexed.

You can use tools like Google’s cache, the Moz toolbar, or SEO-browser.com to see which elements of your content are visible and indexable by the search engines.

If a website is built entirely in Flash, the search engines will not be able to index any of its text content.

Any web page without any HTML text will have a tough time ranking in the search results.

This is why you should use SEO tools to double-check that the pages you are building are truly visible to the search engines.

Search engine crawl-able link structures

“Spiders,” “Crawlers,” or “Bots” are nothing to be afraid of. They are just programs that search engines use to index the content found on websites. 

What we need to do is make sure that those programs can do their job as easily as possible.

You can even tell those programs which of your web pages should or should not be indexed by configuring your robots.txt file correctly.
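As a minimal robots.txt sketch (the /private/ path and sitemap URL are hypothetical examples), this tells compliant crawlers to skip one directory while crawling everything else:

User-agent: *
Disallow: /private/

Sitemap: https://incomeseoreview.com/sitemap.xml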

As we mentioned before, spiders speak HTML.

Images, Flash files, Java applets, and other non-text content are all invisible to the spiders.

Don’t forget these basic pointers for helping the search engines understand the non-text parts of your website:

  • Video and audio: include a written transcript of any video or audio files.
  • Images: use the alt attribute and a title in the HTML.

Build roads for the spiders

Basically, what you are doing is building roads for the spiders with a crawlable link structure.

You should have that in mind when structuring your site.

If you have pages that are hidden, without any links pointing at them, you are making it very hard for the spiders to find that content.

You should have direct and crawl-able links pointing to the pages you want to show up in the search engine results.

Just as search engines need to see content to list pages in their massive keyword-based indexes, they need the links they find on pages to discover that content in the first place.


Designing your website with a crawl-able link structure is one way of letting the crawlers run freely throughout your site and find all the content that you have.

In the following example, we have illustrated how this problem can happen on your website:

(Diagram: a crawler reaches homepage A and follows links to pages B and E, but pages C and D have no inbound links.)

In this example, Google’s crawler has reached your homepage, labeled A, and it sees links pointing to pages B and E.

The problem is that pages C and D could be of immense importance to your website, but Google’s crawlers have no way to reach them, or even to know that they exist, because there are no crawlable links pointing to pages C and D for the search engine to follow.

To the search engine they simply don’t exist, and all the hard work you put into creating great content, doing good keyword research, and all that marketing is wasted, just because Google’s crawlers cannot reach your pages.
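A minimal sketch of the fix (the page paths are hypothetical): give every important page at least one plain HTML link, for example in a navigation block on homepage A:

<nav>
  <a href="/page-b/">Page B</a>
  <a href="/page-c/">Page C</a>
  <a href="/page-d/">Page D</a>
  <a href="/page-e/">Page E</a>
</nav>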

Making a web page human-friendly is making it search engine friendly

When designing for SEO, your primary objective is still to make your pages human-readable and appealing.

Nevertheless, that is no reason to neglect proper page structure and formatting, which is what lets the search engines crawl your pages correctly.

Your <head> section is mainly for the bots; it contains your <title> tag and meta description. You want to make sure that those tags are unique for each page of your website.

This is what makes it possible for search engines to distinguish between the different pages on your website.

The <body> is what your site visitors can see, and you need to carefully structure it in an easy-to-read and pleasing way.

Your main headline is your <h1>; do not neglect it. Use a keyword-rich headline that gets straight to the point, so that both readers and search engines get a good picture of what the article is all about.

This is why I recommend using only one <h1> tag per page; this way you will not confuse the search engines.

Other body elements include <h2> to <h6> subheadings, bullet points and numbered lists, etc.

You should use those elements correctly so that the search engines know what your page is all about, and what is most important.
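To make this concrete, here is a minimal sketch of that structure (the title, description, and headings are hypothetical placeholders):

<head>
  <title>A Unique, Descriptive Page Title</title>
  <meta name="description" content="A unique summary of this page's content.">
</head>
<body>
  <h1>One Keyword-Rich Main Headline</h1>
  <h2>A Supporting Subheading</h2>
  <ul>
    <li>A bullet point that structures the content</li>
  </ul>
</body>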

Popularity contest. I want to be found!

When you create a website, you are entering into a popularity contest, and you are on the last spot.

Search engines need backlinks, which are links from other sites to yours, as an indication of authority and popularity.

Search engines use this information to rank web pages; the more credible backlinks you can get, the higher your website will most likely rank on the search engine results page (SERP).

Moreover, once you do, you will be findable by your ready-to-buy customers.

It would be incredible to get related one-way links from authority sites like CNN, USA Today, the BBC, etc.

Not only are these hard to get, but they also need to be related to your niche. Try not to engage in spammy link-building activities.

The composition of a link

(Diagram: the anatomy of a link.)

The link tag is of great use to us because it can contain images, text, or any other type of object, providing a clickable area of the page that takes users to another page.

Those links are the navigational elements of the Internet and are called hyperlinks. In the picture above, the "<a" tag indicates the start of a link.

From the link’s referral location, the browser can tell where the link is pointing.

For example, in the illustration above the URL http://www.incomemoney.com is referenced.

The anchor text is the visible portion of the link that the visitor sees; it describes the page the link points to. The "</a>" tag closes the link.
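Putting those parts together, the complete link from the illustration would read something like this (the anchor text is a hypothetical example):

<a href="http://www.incomemoney.com">Visit Income Money</a>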

Common Reasons Why Web Pages May Not Be Reachable

  1. Submission-required forms: if your website requires users to complete an online form before accessing specific content, this limitation also affects search engines. Since crawlers cannot complete the form, they will not be able to access your protected content.
  2. Forms: these can include a password-protected login or a survey. In either case, search engine crawlers will not attempt to submit forms, so any content or links that are accessible only via a form are simply invisible to the search engines.
  3. JavaScript and links: if you use JavaScript for links on your website, you will find that search engines either do not crawl them or give very little weight to the links embedded in it. Try to replace JavaScript links with standard HTML links so that the crawlers have no problem indexing your content (see the sketch after this list).
  4. Links pointing to pages blocked by your robots.txt: the meta robots tag and the robots.txt file both allow a site owner to restrict crawler access to specific pages. Sometimes the webmaster intends to block a particular type of robot but unintentionally blocks the search engine crawlers as well.
  5. Iframes, or frames: technically, links in both iframes and frames are crawlable, but they present structural issues for the search engines regarding organization and following. Unless you are an advanced user with an excellent technical understanding of how search engines index and follow links in frames, you are better off avoiding them.
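As a minimal sketch of point 3 (the URL is a hypothetical example), instead of a JavaScript-only link such as:

<span onclick="window.location='https://incomeseoreview.com/blog/'">Blog</span>

use a standard HTML link that every crawler can follow:

<a href="https://incomeseoreview.com/blog/">Blog</a>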



Common Reasons Why Web Pages May Not Be Reachable Part 2

  1. Search forms and robots do not mix: as I mentioned before, search forms are not very accessible to search engines. Some people believe that if they place a search box on their site, the search engines will be able to find everything that visitors search for. Unfortunately, that is not the case; crawlers do not perform searches to find content, which leaves millions of pages inaccessible and lost until a crawled page links to them.
  2. Java, Flash, and other plug-ins are not the place to put links: links that are embedded in Java, Flash, and other plug-ins are invisible to the search engines, and therefore hidden from user search queries.
  3. Hundreds or even thousands of links: search engines will only crawl so many links on a given page. This is a necessary restriction to cut down on spam and preserve rankings. If your page has hundreds of links on it, you are at risk of not getting all of them crawled and indexed. Avoid these pitfalls by having clean, crawlable HTML links that allow the spiders to access your content.


Rel="no-follow" are no-follow links, bad?

Links can have lots of different attributes.

The search engines ignore almost all of them; the main attribute that they find important is the rel="nofollow" attribute.

Let’s use the following example:

<a href=" https://incomeseoreview.com/" rel="nofollow">No Juice For You!</a>

In this example, adding the rel="nofollow" attribute to the link tag tells the search engines that we do not want this link to be interpreted as an endorsement of the target page.

Nofollow literally means that we do not want the search engine to follow the link; that does not mean the search engine will never follow it.

The no-follow tag was introduced as a way to help stop automated blog comment, guestbook, and link-injection spam.

However, it has evolved over time into a way of telling the search engines that we do not want to pass link value through that particular link.

Links tagged with the no-follow attribute are interpreted a little differently by each of the search engines, but one thing is clear: they do not pass as much link juice as standard links do.

Although they do not pass as much link juice as do-follow links, no-followed links are a good thing to have on your website, because they help you build a natural link profile.

A website that has lots of inbound links will naturally also have many no-followed links.



Google’s no-follow policy

Google has stated that in most cases, they do not follow no-follow links and that those links do not transfer any link value to PageRank or anchor text.

Basically, using nofollow makes Google drop the target links from its overall graph of the web.

No-follow links carry no weight and are interpreted as simple HTML text; it is as if the link never existed.

That being said, many people still believe that even a no-follow link from a high-authority site will be interpreted as a sign of trust by the search engines.

Bing & Yahoo! no-follow policy

Bing has also stated that it does not include no-follow links in its link graph, but its crawlers might still use no-follow links as a way to find new pages.

This means that they may still follow the links, but they do not use them in the ranking calculations.


Keyword usage and targeting

Keywords are essential to the search process.

They are in fact the building blocks of language and search.

The entire field of information retrieval, including web-based search engines like Google, is built on keywords.

The search engine bots crawl and index the contents of pages around the web, to keep track of the enormous number of web pages that exist.

Rather than keeping the keyword-based index in one database, search engines create millions and millions of smaller databases, each centered on a particular keyword term or phrase.

This allows the search engines to reach the data they need in a mere fraction of a second.

Moreover, if you want your page to have a chance of ranking in the search results for “money,” you have to make sure that you use the keyword “money” in content that is crawlable by the search engines.


How keywords dominate search

We now have a firm grasp that keywords dominate the way we communicate our search intent and the way we interact with the search engines.

When we enter words to search for, the search engines look those keywords up in their databases and retrieve the pages that are related to the search.

The search engine will examine the order of the words, the spelling, punctuation, and capitalization to gather additional information that helps it retrieve the right pages and rank them accordingly.

Search engines then measure how the keywords are being used on pages to determine the relevance of a particular document to the search.

Thus, one of the best strategies to optimize a page’s ranking is to ensure that the keywords you want to rank for are used in the page title, text, and metadata.

Think of it this way: when you make your keywords more specific, you narrow down the competition in the search results.

This way you are improving your chances of achieving a higher ranking.


Keyword abuse

Since the beginning of the Internet, people have abused keywords in a misguided effort to manipulate the search engines.

They used tactics like "stuffing" keywords into text, URLs, meta tags, and links.

Nowadays these tactics do more harm than good to your website.

In the early days, search engines relied almost entirely on keyword usage.

Today, search engines are very complex, and although they cannot read and comprehend text as well as a human, they are getting pretty close.

Keywords best practices

The best practice is to use your keywords in a natural and strategic way. If your page targets the keyword phrase “chicken nuggets,” then you should include content about chicken nuggets.

For example, you can tell the story of how they were created; you can even recommend other complementary foods.

On the other hand, if you simply put the words “chicken nuggets” into a page without relevant content, for example a page about cats, then all of your efforts to rank for “chicken nuggets” are wasted.

You want to use keywords to rank highly for the content that people are searching for, and that your site is providing them.


On-page optimization

Keyword usage and targeting are part of the search engine ranking algorithms, so we can apply some practical techniques for keyword usage that help us create well-optimized pages.

When working on our websites, we use the following process:

Use the keyword phrase:

  • In the title tag at least once, as close to the beginning of the title tag as possible.
  • Near the top of the page.
  • At least two or three times, with some variations, in the body copy; a few more times if you have a lot of text content.
  • At least once in the alt attribute of an image on the page. This will not only help with web search but also with image search, which can occasionally bring some traffic.
  • Once in the URL.
  • At least once in the meta description tag.

Furthermore, try to avoid using the keywords in the link text pointing to other pages on your site, because this can cause a Google penalty through a process known as keyword cannibalization.
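Putting the whole process together, here is a minimal sketch for a page targeting “chicken nuggets” (the URL, file name, and copy are hypothetical):

<!-- URL: https://incomeseoreview.com/chicken-nuggets/ -->
<head>
  <title>Chicken Nuggets: An Easy Homemade Recipe</title>
  <meta name="description" content="Learn to make crispy chicken nuggets at home in under 30 minutes.">
</head>
<body>
  <h1>Homemade Chicken Nuggets</h1>
  <p>Chicken nuggets are easier to make than you think...</p>
  <img src="chicken-nuggets.jpg" alt="Crispy homemade chicken nuggets">
</body>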


Title tags: what’s their role?

The title element of a page is meant to be an accurate, precise description of a page's content.

This matters both for user experience and for search engine optimization; in a way, you are designing the title for SEO.

Title tags tell search engines and users what any given page on your website is all about.

You should make the title tag as precise and accurate as possible in relation to the content of the page.

One can even say that the title tag is the boldest, most noticeable element in the search results, and it is a significant factor in whether a user clicks on your result or not.

Since title tags are such an essential part of the SEO process, keep in mind the following best practices for title tag creation.

The recommendations below cover all the critical steps for optimizing title tags for both search engines and users.

You need to be mindful of the length

You need to understand that search engines display only the first 65 to 75 characters of the title tag in the search results.

These are also the general limits allowed by most social media sites, so sticking to this limit is a wise choice.

However, if you are targeting multiple keywords, for example long-tail keywords, and it is vital to have them in your title tag, then you can go a little longer.

You want to have your most important keywords close to the front.

The closer your keywords are to the start of the title tag, the more helpful they will be for ranking, and the more likely users are to click on them in the search results.
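For instance, a title tag following these rules might look like this (the keywords and brand name are hypothetical):

<title>Chicken Nuggets Recipe - Quick Homemade Nuggets | BrandName</title>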

How should you write a great title tag?


From an SEO perspective, the title tag should contain the keywords you wish to rank for.

You should use the most important keywords at the beginning, followed by the second most important, and if possible try to include your brand name.

We should all write title tags for humans, not for the search engines.

That being said, you could use the following reference as a guide:

Be mindful of the length: as covered above, stick to the 65 to 75 characters that search engines display.

Use branding

It is an excellent practice to end every title tag with a mention of the brand name, as this helps increase brand awareness and creates a higher click-through rate among people who are familiar with your brand.

On some pages, such as the homepage, it can instead make sense to place the brand name at the beginning of the title tag.

The words at the beginning of the title tag carry more weight with the search engines, so be strategic with the keywords you are trying to rank for.

Readability and emotional impact

Let’s face it: title tags should be descriptive and readable.

The title tag is your visitor’s first interaction with your brand and should give the most favorable impression possible.

Try to create a compelling title tag that will grab attention on the search results page, attracting more visitors to your site.

You should think about the entire user experience.

Keyword placement: put your most important keywords first in your title tag, leaving the least important words for the end.


Don’t stuff keywords into your title tags: if you are trying to rank for everything, or are repeating words over and over again, you will most likely get penalized by the search engines.


Do not duplicate title tags: you should have a different title tag for every page; if you replicate your title tags, it will have a negative effect on your search visibility.




Meta-tags

Meta-tags were originally intended as a proxy for information about a site’s content.

We can see several of the primary meta-tags listed below, along with a brief description of their use.

Let’s start with the meta-robots tag, which can be used to control search engine crawler activity on a per-page level.

There are several ways to use meta robots to control how search engines treat a page:

Index/no-index: these values tell the search engines whether the page should be crawled and kept in the search engine’s index for retrieval.

If you choose “noindex,” the page will be excluded from the index.

Search engines assume by default that they should index all pages, so using the “index” value is an unnecessary step.

Follow/no-follow: this tells the engines whether the links on the page should be crawled.

If you use the “nofollow” value, the search engines will disregard the links on the page for discovery, rankings, or both.

By default, all pages are presumed to have the “follow” attribute.

Example: <meta name="robots" content="noindex, nofollow">

No-archive: used to restrict search engines from saving a cached copy of the page.

Why and when would you want to prevent Google from saving and showing your pages?

  • When you want to prevent scrapers from ripping your content.
  • For e-commerce sites.
  • For paid membership sites.
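If you decide you need it, the tag is a one-line addition to the page’s <head> (a minimal sketch):

<meta name="robots" content="noarchive">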

Although Google does not seem to penalize sites for using no-archive on their pages, some people will view your site with suspicion.

Many people will think that you are cloaking your web pages.

There are significant benefits to having Google cache your pages: for example, people can still access your pages even if your site is down.

Google also provides a text-only version of the page that shows us how Google sees your page.

No-snippet: signals the search engines that they should refrain from displaying a descriptive block of text next to the page title and URL in the search results.

Noodp/noydir: specialized tags that tell the search engines not to grab a descriptive snippet about the page from the Open Directory Project or the Yahoo! Directory for display in the search results.

The X-Robots-Tag HTTP header directive also accomplishes these same objectives.


The meta-descriptions


The meta description, commonly called the meta description attribute or tag, is the HTML element that describes and summarizes the content of your page, both for the benefit of users and for the search engines.

While this metadata is not as relevant as it used to be, it still plays a significant role in on-page SEO.

We can say that the meta description is an attribute within your meta-tags that helps describe your page.

This information serves a few purposes, one of which is to supply the snippet of text that appears on the SERPs when a user performs a search query that your page ranks for.

You should always write the meta description for each web page, because if you do not, the search engines will piece together a brief description from whatever they find on your site.

Meta-description best practices

It is recommended that your meta description be 275 characters or less in length. Meta description text can have a significant impact on your search engine optimization efforts.

The meta description tag serves the function of advertising copy; it gives your readers a small preview of what your page is all about.

It is a significant part of search marketing.

You should put some effort into creating a compelling, readable description using relevant keywords, as this will result in a much higher click-through rate from searchers to your page.
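As a minimal sketch (the copy is a hypothetical example), the tag sits in the page’s <head>:

<meta name="description" content="Learn how to make crispy homemade chicken nuggets in under 30 minutes, with step-by-step photos and tips.">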

Meta-keywords: the meta-keywords tag has lost its value and has fallen into disuse.

Meta Refresh, Meta Revisit-after, Meta Content-type, and others:

These meta-tags still have some uses for search engine optimization, but they are far less critical to the process, so you do not need to put much effort into them.


URL structures

Let’s talk about first impressions: when it comes to your website, your URLs are the first thing that Google and your users will see.

Also, they are the foundation for an efficient site hierarchy, since you want to pass link juice throughout your domain and direct users to the desired destinations.

This is one thing you should get right from the start: not only can it be extremely difficult to correct later, but if you do not plan, you could create endless redirect loops.

Moreover, your users and even the search engines will not like those.

When planning your URL structure, aim for a blend of usability and accessibility factors, along with good SEO practices.

Use your keywords

Every time you create a brand new page on your website, it should have a purpose.

Whether it is informational or for a new product you are selling, you should have an excellent reason for its existence, and that reason should be apparent.

You want this page to be discovered by the search engine crawlers, and by the right users.

So you should do some keyword research and include the terms relevant to the content of the page.

URL SEO guidelines

Employ empathy towards the user

When choosing your URL, put yourself in your user’s place: by looking at your URL alone, see if you can quickly and accurately predict the content you expect to find on the page.

If so, then your URL is appropriately descriptive.

You cannot describe everything in your URL, but if you can convey the main idea, then you are off to a good start.

The shorter the URL, the better

It is important to have a descriptive URL, but the shorter your URL, the easier it is for users to copy and paste it into emails and blog posts, and even to remember it.

Using keywords is important, but if you overuse them, you risk a penalty.

Let’s say that your page targets a specific term or phrase; you should include that particular term in the URL.

Keep in mind that you do not want to go overboard with over-optimization.

If you try to stuff in multiple keywords for SEO purposes, their impact will be diluted, and you can trigger spam filters.

Go for human readable

A great URL is human-readable and does not have lots of parameters, numbers, or symbols.

Try to transform dynamic URLs into more readable static versions.

An example of a dynamic URL will be something like this:

https://incomeseoreview.com/blog?id=123

While the more readable static version would be something like this:

https://incomeseoreview.com/SEO-Great-Content/

Even a single dynamic parameter in the URL can potentially result in lower overall ranking and indexing.

Hyphens are a great way to separate words

Keep in mind that not all web applications can interpret separators like plus signs (+), underscores (_), or spaces; instead, use the hyphen character (-) to separate words in a URL, as in the “SEO-Great-Content” example above.


Canonicalization and duplicate versions of content

Duplicate content is one of the most significant problems any website can face.

Search engines have evolved tremendously in the last few years; the new algorithms allow them to crack down on pages with thin or duplicate content, which results in lower rankings for your website.

Canonicalization is when two or more duplicate versions of a web-page appear on different URLs.

This is a common problem with modern Content Management Systems.

Let’s say, for example, that you have a web page and you have created a print-optimized version of it.

Duplicate content because of CMS

Now let’s say you have uploaded that print version to several websites.

Now, the search engines will see that duplicate content and will have to decide which version of the content they will show to the users.

The search engines try to choose which version of the duplicate content they should show to provide the best user experience.

That is why the search engine will try to show only the version it believes to be the original.

With canonicalization, you organize all of your content in such a way that every unique piece has exactly one URL.

Let’s say you have multiple pages that are competing with each other; you have the option to aggregate them into one single page.

Not only will this make them stop competing with each other, but it will also create a stronger relevancy and popularity signal.

This is a preferred strategy to improve your rankings in the search engines.
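When you cannot merge the duplicates, one standard mechanism (a minimal sketch, reusing the example URL from earlier) is a canonical link element in the <head> of each duplicate page, pointing at the preferred URL:

<link rel="canonical" href="https://incomeseoreview.com/SEO-Great-Content/">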

Rich snippets

Do you remember seeing a five-star rating in search results?

Well, you see those five stars because the search engines received information from rich snippet markup embedded in the web page.

Rich snippets are powered by structured data, which allows webmasters to mark up content; it is a way to provide certain types of information to the search engines.

Rich snippets are not a required element of search engine friendly design, but their use is growing: many webmasters are starting to use them to improve the user experience and gain an advantage in some circumstances.

The idea is to structure your data in such a way that the search engines can quickly identify what type of content it is.

Schema.org provides examples of data that can benefit immensely from structured markup: products, reviews, recipes, and so on.

Moreover, search engines are including structured data in the search results, especially in the case of user reviews (the familiar stars) and author profiles.

You can consult excellent online resources to learn more about rich snippets, including Schema.org, Google’s Rich Snippet Testing Tool, and the MozBar.
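As a minimal sketch of the kind of markup involved (the product name and rating values are hypothetical), a schema.org review snippet can be embedded in the page as JSON-LD:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "27"
  }
}
</script>

You can then run the page through Google’s Rich Snippet Testing Tool to check how the markup is read.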


Protecting your hard work

Scrapers are looking to take your rankings.

Yes: on top of designing for SEO, we also have to worry about scrapers trying to steal our hard work.

The web is filled with these little scrapers, programmed by unscrupulous webmasters whose business and traffic models depend on ripping content from other websites and reusing it on their own domains.

When someone copies and republishes your content on other sites, they are using a practice we call scraping, and scrapers actually perform very well in the search engine rankings, sometimes even outranking the original sites.

Let’s say you are publishing some content; it could be in any feed format, such as RSS or XML.

You need to make sure that you ping the major blogging and tracking services, for example Google, Yahoo!, and so on.


It is straightforward to find pinging instructions for services like Google and Yahoo directly on their websites.

You can even use a service like Pingomatic to help you automate the process.

Most scrapers on the Internet republish content without really editing it or adding anything to it.

This is an advantage to you, because you can include links back to your website, and more specifically to the post you have created, ensuring that the search engines see that most of the copies link back to your site.

This way, the search engines can tell that you are the most probable creator of the post.

However, how can you do this without too much fuss?

Luckily for you, it is not difficult: you just need to use absolute, rather than relative, links in your internal linking structure. Instead of:

<a href="../">Home</a>

You would instead use:

<a href="https:// incomeseoreview.com">Home</a>

By doing this, you outsmart the scraper that blindly copies your content, because the links keep pointing to your site.

Of course, there are more advanced ways to protect your website against scraping, but even those are not entirely bulletproof.

You should expect that the more popular your site becomes, the more often you will find people trying to steal and republish your content.


Sergio