Welcome To Fusion SEO Delhi

Fusion SEO Delhi provides the best SEO and web development services in India and worldwide.

What is SEO?

SEO is an acronym for "Search Engine Optimization". It is the process of improving your website's visibility on popular search engines such as Google, Yahoo, and Bing.

Who We Are

We are dedicated to our work and highly professional towards our clients. We keep working until our clients are 100% satisfied.

Our Team

We have a small, diverse team of professionals working on online development and marketing.

Our Customer Support

We provide dedicated customer support to our clients so that we can build a strong, long-term professional relationship.

Welcome to Fusion SEO Delhi

Fusion SEO Services Provider is one of the best SEO service providers based in Delhi. We provide our services not only in Delhi but all over India and worldwide. We also offer new website creation and maintenance services to our clients. For new customers, we set up a basic campaign structure that explains our SEO services and how they can help expand their business.

We pride ourselves on being friendly and approachable, creating long-lasting relationships with our clients. Our high-quality, innovative approach and affordable prices separate us from typical web design and software companies. We believe in 100% customer satisfaction, so we deliver quality services to every client. We have many satisfied customers all over India.

An SEO Guide to HTTP Status Codes

One of the most important assessments in any SEO audit is determining what hypertext transfer protocol status codes (or HTTP Status Codes) exist on a website.
These codes can become complex, often turning into a hard puzzle that must be solved before other tasks can be completed.
For instance, if you put up a page that all of a sudden disappears with a 404 not found status code, you would check server logs for errors and assess what exactly happened to that page.
If you are working on an audit, other status codes can be a mystery, and further digging may be required.
These codes are segmented into different types (a quick way to check any URL's code by hand is sketched after the list):
  • 1xx status codes are informational codes.
  • 2xx codes are success codes.
  • 3xx codes are redirection codes (redirects).
  • 4xx codes are client error codes: the request failed on the client side.
  • 5xx codes are server error codes: the request failed because of a server error.
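If you want to spot-check how a given URL responds before (or instead of) running a full crawl, a few lines of script will print the code and its class. This is a minimal sketch assuming Node.js 18+ (for the built-in fetch); the URLs are placeholders.

  // status-check.js - print the HTTP status code and its class for a few example URLs
  const urls = [
    'https://www.example.com/',
    'https://www.example.com/some-old-page',
  ];

  (async () => {
    for (const url of urls) {
      // redirect: 'manual' keeps 3xx responses visible instead of silently following them
      const res = await fetch(url, { method: 'HEAD', redirect: 'manual' });
      const codeClass = Math.floor(res.status / 100) + 'xx';
      console.log(res.status, '(' + codeClass + ')', url);
    }
  })();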

1xx Informational Status Codes

These codes are informational in nature and usually have no real-world impact for SEO.

100 – Continue

Definition: The initial part of the request has been received and has not yet been rejected by the server; the client should continue with the request.
SEO Implications: None
Real World SEO Application: None

101 – Switching Protocols

Definition: The origin server understands and is willing to comply with the client's request, indicated via the Upgrade header field, to switch to a different application protocol on the same connection.
SEO Implications: None
Real World SEO Application: None

102 – Processing

Definition: This is a response code between the server and the client that is used to inform the client side that the request to the server was accepted, although the server has not yet completed the request.
SEO Implications: None
Real World SEO Application: None

2xx Success Status Codes

A 200 OK status code tells you that a request to the server was successful. This is mostly only visible server-side; in the real world, visitors will never see this status code.
SEO Implications: A page is loading perfectly fine, and no action should be taken unless there are other considerations (such as during the execution of a content audit, for example).
Real-World SEO Application: If a page has a status code of 200 OK, you don’t really need to do much to it if this is the only thing you are looking at. There are other applications involved if you are doing a content audit, for example.
However, that is beyond the scope of this article, and you should already know whether or not you will need a content audit based on initial examination of your site.

How to Find All 2xx Success Codes on a Website via Screaming Frog

There are two ways in Screaming Frog that you can find 2xx HTTP success codes: through the GUI, and through the bulk export option.
Method 1 – Through the GUI
  1. Crawl your site using the settings that you are comfortable with.
  2. All of your site URLs will show up at the end of the crawl.
  3. Look for the Status Code column. Here, you will see all 200 OK, 2xx based URLs.
How to find 2xx HTTP success codes through the ScreamingFrog GUI
Method 2 – The Bulk Export Option
  1. Crawl your site using the settings that you are comfortable with.
  2. Click on Bulk Export.
  3. Click on Response Codes.
  4. Click on 2xx Success Inlinks.
How to find 2xx HTTP success codes through the ScreamingFrog Bulk Export

201 – Created

This status code will tell you that the server request has been satisfied and that the end result was that one or multiple resources were created.

202 – Accepted

This status means that the server request was accepted to be processed, but the processing has not been finished yet.

203 – Non-Authoritative Information

A transforming proxy modified a successful payload from the origin server’s 200 OK response.

204 – No Content

The server has successfully fulfilled the request, and there is no additional content to send in the response payload body.

205 – Reset Content

This is similar to the 204 response code, except that it requires the client that sent the request to reset the document view.

206 – Partial Content

The server has successfully fulfilled a range request for the target resource, transferring one or more parts of the page that correspond to the satisfiable ranges found in the request's Range header field.

207 – Multi-Status

In situations where multiple status codes might be appropriate, this multi-status response conveys information about more than one resource.

3xx Redirection Status Codes

3xx codes denote redirects, from temporary to permanent, and 3xx redirects are an important part of preserving SEO value.
That's not their only use, however. They tell Google whether a page redirect is permanent or temporary.
In addition, a redirect can be used to point visitors and crawlers away from pages of content that are no longer needed.

301 – Moved Permanently

These are permanent redirects. For any site migrations, or other situations where you have to transfer SEO value from one URL to another on a permanent basis, these are the status codes for the job.
How Can 301 Redirects Impact SEO?
Google has said several things about the use of 301 redirects and their impact. John Mueller has cautioned about their use.
“So for example, when it comes to links, we will say well, it’s this link between this canonical URL and that canonical URL- and that’s how we treat that individual URL.
In that sense it’s not a matter of link equity loss across redirect chains, but more a matter of almost usability and crawlability. Like, how can you make it so that Google can find the final destination as quickly as possible? How can you make it so that users don’t have to jump through all of these different redirect chains. Because, especially on mobile, chain redirects, they cause things to be really slow.
If we have to do a DNS lookup between individual redirects, kind of moving between hosts, then on mobile that really slows things down. So that’s kind of what I would focus on there.
Not so much like is there any PageRank being dropped here. But really, how can I make it so that it’s really clear to Google and to users which URLs that I want to have indexed. And by doing that you’re automatically reducing the number of chain redirects.”
It is also important to note here that not all 301 redirects will pass 100% link equity. From Roger Montti’s reporting:
“A redirect from one page to an entirely different page will result in no PageRank being passed and will be considered a soft 404.”
John Mueller also mentioned previously:
“301-redirecting for 404s makes sense if you have 1:1 replacement URLs, otherwise we’ll probably see it as soft-404s and treat like a 404.”
What matters here is how closely the new page matches the topic of the old one: "the 301 redirect will pass 100% PageRank only if the redirect was a redirect to a new page that closely matched the topic of the old page."

302 – Found

302s are also known as temporary redirects, rather than permanent redirects. They are a cousin of the 301 redirect, with one important difference: they are only temporary.
You may find 302s instead of 301s on sites where these redirects have been improperly implemented.
Usually, they are done by developers who don’t know any better.
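How you actually issue these redirects depends on your server stack. As a hedged sketch only, this is roughly what permanent versus temporary redirects look like in a Node.js/Express server (the routes and destination URLs are made up for illustration):

  const express = require('express');
  const app = express();

  // Permanent move: signals that SEO value should transfer to the new URL
  app.get('/old-product', (req, res) => {
    res.redirect(301, '/new-product');
  });

  // Temporary move: the original URL should stay indexed
  app.get('/spring-sale', (req, res) => {
    res.redirect(302, '/current-promotion');
  });

  app.listen(3000);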
The other 3xx redirection status codes that you may come across include:

300 – Multiple Choices

The requested resource has more than one version, each with its own identifier. Information about these versions is provided so that the user can select the one they want.

303 – See Other

The user agent is redirected to another resource, usually identified by a URL in the Location header field. The intention behind this redirect is to provide an indirect response to the initial request.

304 – Not Modified

A conditional GET or HEAD request would have resulted in a 200 OK response, except that the condition evaluated to false because the resource has not been modified. This applies mostly to GET or HEAD requests.

305 – Use Proxy

This is now deprecated, and has no SEO impact.

307 – Temporary Redirect

This temporary redirection status code indicates that the target page temporarily resides at a different URL. It also tells the user agent that it must NOT change the request method when it automatically redirects to that URL.

308 – Permanent Redirect

Mostly the same as a 301 permanent redirect, except that the client must not change the request method (for example, a POST must remain a POST).

4xx Client Error Status Codes

4xx client error status codes tell us that something is not loading at all, and why.
While the differences between the individual codes are subtle, the end result is the same: the page does not load. These errors are worth fixing and should be among the first things assessed as part of any website audit.
  • Error 400 Bad Request
  • 403 Forbidden
  • 404 Not Found
These are the most common client errors an SEO will encounter: the 400, 403, and 404 errors. They simply mean that the resource is unavailable and unable to load.
Whether it's due to a temporary server outage or some other reason doesn't really matter. What matters is the end result of the bad request: your pages are not being served by the server.
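What the fix looks like depends on why the URL broke. If a page has a true 1:1 replacement, a 301 to that replacement is usually the right call; if the content is genuinely gone, serving an honest 404 page is better than redirecting everything to the homepage. A rough Express sketch with hypothetical routes:

  const express = require('express');
  const app = express();

  // A page that moved and has an exact replacement: 301 to the new URL
  app.get('/blog/old-slug', (req, res) => res.redirect(301, '/blog/new-slug'));

  // Anything that matches no route: return a real 404 so crawlers can drop the URL
  app.use((req, res) => {
    res.status(404).send('<h1>Page not found</h1><p>Try the <a href="/">homepage</a>.</p>');
  });

  app.listen(3000);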

How to Find 4xx Errors on a Website via Screaming Frog

There are two ways to find 4xx errors that are plaguing a site in Screaming Frog – through the GUI, and through bulk export.
Screaming Frog GUI Method
  1. Crawl your site using the settings that you are comfortable with.
  2. Click on the down arrow to the right.
  3. Click on response codes.
  4. Filter by Client Error (4xx).
Screaming Frog Bulk Export Method
  1. Crawl your site with the settings you are familiar with.
  2. Click on Bulk Export.
  3. Click on Response Codes.
  4. Click on Client error (4xx) Inlinks.
How to find 4xx error codes - ScreamingFrog Bulk Export
There are other 4xx errors that you may come across, including:
  • 401 – Unauthorized
  • 402 – Payment Required
  • 405 – Method Not Allowed
  • 406 – Not Acceptable
  • 407 – Proxy Authentication Required
  • 408 – Request Timeout
  • 409 – Conflict
  • 410 – Gone
  • 411 – Length Required
  • 412 – Precondition Failed
  • 413 – Payload Too Large
  • 414 – Request-URI Too Long
  • 415 – Unsupported Media Type
  • 416 – Requested Range Not Satisfiable
  • 417 – Expectation Failed
  • 418 – I’m a teapot
  • 421 – Misdirected Request
  • 422 – Unprocessable Entity
  • 423 – Locked
  • 424 – Failed Dependency
  • 426 – Upgrade Required
  • 428 – Precondition Required
  • 429 – Too Many Requests
  • 431 – Request Header Fields Too Large
  • 444 – Connection Closed Without Response
  • 451 – Unavailable For Legal Reasons
  • 499 – Client Closed Request

5xx Server Error Status Codes

All of these errors imply that there is something wrong at the server level that is preventing the full processing of the request.
The end result, in most of the cases that matter to SEOs, is that the page does not load and will not be available to the client-side user agent requesting it.
This can be a big problem for SEOs.

How to Find 5xx Errors on a Website via Screaming Frog

Again, using Screaming Frog, there are two methods you can use to get to the root of the problems being caused by 5xx errors on a website: a GUI method and a Bulk Export method.
Screaming Frog GUI Method for Unearthing 5xx Errors
  1. Crawl your site using the settings that you are comfortable with.
  2. Click on the dropdown arrow on the far right.
  3. Click on “response codes”.
  4. Click on Filter > Server Error (5xx)
  5. Select Server Error (5xx).
  6. Click on Export
Screaming Frog Bulk Export Method for Unearthing 5xx Errors
How to find 5xx error codes - ScreamingFrog Bulk Export
  1. Crawl your site using the settings you are comfortable with.
  2. Click on Bulk Export.
  3. Click on Response Codes.
  4. Click on Server Error (5xx) Inlinks.
This will give you all of the 5xx errors that are presenting on your site.
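If you want to verify these findings outside of Screaming Frog, a short script can pull your sitemap and flag anything that is not returning a success code. A sketch assuming Node.js 18+; the sitemap URL is a placeholder and the <loc> extraction is deliberately naive.

  // error-audit.js - list URLs from a sitemap that respond with a 4xx or 5xx status
  const SITEMAP_URL = 'https://www.example.com/sitemap.xml'; // placeholder

  (async () => {
    const xml = await (await fetch(SITEMAP_URL)).text();
    const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);

    for (const url of urls) {
      const res = await fetch(url, { redirect: 'manual' });
      if (res.status >= 400) {
        console.log(res.status, url); // 4xx = client error, 5xx = server error
      }
    }
  })();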
There are other 5xx HTTP status codes that you may come across, including the following:
  • 500 – Internal Server Error
  • 501 – Not Implemented
  • 502 – Bad Gateway
  • 503 – Service Unavailable
  • 504 – Gateway Timeout
  • 505 – HTTP Version Not Supported
  • 506 – Variant Also Negotiates
  • 507 – Insufficient Storage
  • 508 – Loop Detected
  • 510 – Not Extended
  • 511 – Network Authentication Required
  • 599 – Network Connect Timeout Error

Making Sure That HTTP Status Codes Are Corrected On Your Site Is a Good First Step

When it comes to making a site fully crawlable, one of the first priorities is making sure that all the content pages you want search engines to know about are accessible to crawlers. This means making sure that all of those pages return a 200 OK status.
Once that is complete, you will be able to move forward with more SEO audit improvements as you assess priorities and additional areas that need to be improved.
“A website’s work is never done” should be an SEO’s mantra. There is always something that can be improved on a website that will result in improved search engine rankings.
If someone says that their site is perfect and needs no further changes, then I have a million-dollar bridge in Florida to sell you.

Image Credits
Featured Image: Paulo Bobita
All screenshots taken by author

Originally, this was posted on Search Engine Journal.

SEO Tips - How To Best Use Forums & QA Sites Effectively for Improving SERP


An SEO strategy is crafted around many activities, including website optimization, on-page optimization, off-page optimization, and more. Off-page optimization is one of the most important SEO activities, and a professional SEO performs many tasks as part of it. One of the most effective off-page SEO activities is participating in forums and QA (question and answer) websites such as Quora. However, many SEO practitioners still use the old-school tactic of posting three posts in a forum just to get the benefit of a forum signature. This is an obsolete, outdated technique and doesn't provide the required benefits. In fact, it opens the door to a possible Google penalty that will harm the website and its rankings in Google. This article shares a few quick tips and best practices an SEO professional can follow to benefit from forum posting and QA activity, along with the benefits of these activities.
Best practices for using forums and QA sites to support your SEO activities:
Steps to follow for effective use of forums and QA sites:
  • Join relevant forums or QA sites with an active community. There is no point in joining a fashion forum when you are selling digital marketing services. Likewise, there is no point in joining a forum where there is no audience, or where everyone is only trying to sell their own stuff.
  • Fill out your profile completely. Don't forget to add your company website, blog, and social profile links.
  • Find active or recent threads. There is no point in digging up dead threads and answering them.
  • Read the question and its answers carefully. See whether the question has already been answered or is still open for possible solutions.
  • If the question is not answered yet, use your knowledge or research to answer it with as much detail as possible, including reference links where relevant. Don't hesitate to link to other websites as references.
  • See if anyone has questioned your answer or asked for more details. Reply to them and take part in the ongoing discussion.
  • Ask questions that make sense. Don't ask questions just to start a new thread; ask questions that spark the interest of others.
  • Contribute actively.
The takeaway: join a few relevant forums, QA sites, and communities; be an active contributor; and participate regularly in discussions.

What are the benefits of the above-mentioned practices?
You will establish yourself as an expert, and people will come to see you as a reliable source of solutions. This encourages them to reach out when they are looking for the services you offer, which generates leads.
So you will get:
  • User engagement
  • Branding
  • Increased flow of relevant visitors to your website
  • Increased lead generation
  • Increased lead conversion

What must SEO professionals stop doing?
SEOs (search engine optimizers) must stop following old-school forum practices for off-page optimization, as they will harm your website or your client's website. Below are a few commonly followed bad practices:
  • Joining hundreds of forums (relevant and irrelevant)
  • Giving one-line answers
  • Giving random or copy-pasted answers
  • Digging up dead threads
  • Never looking back at threads you have answered
  • Mass forum link building
Please understand that these types of activities can get your website penalized under Google's Penguin algorithm.

Advanced Insights on Google Indexing, Crawling, and JavaScript Rendering


This blog post is a summary of the "Deliver search-friendly JavaScript-powered websites" session at Google I/O 2018, with an e-commerce lens applied plus a few personal opinions thrown in. This talk is so important that I thought it was worth its own blog post. The presentation describes how Google crawls and indexes sites, including how it deals with client-side rendered sites as made common by frameworks like React, Vue, and Angular. (Client-side rendering refers to when JavaScript in the browser (the client) forms the HTML to display on the page, as distinct from server-side rendering, where code on the web server forms the HTML to be returned to the browser for display.)
This discussion is particularly relevant to e-commerce websites that have a Progressive Web App (PWA) built with a technology such as React/Vue/Angular that want all the product and category pages indexed.
Crawling, Rendering, and Indexing
How does Google collect web content to index? This consists of three steps that work together: crawling, rendering, and indexing.
Crawling is the process of retrieving pages from the web. The Google crawler follows <a href="…"> links on pages to discover other pages on a site. Sites can have a robots.txt file to block particular pages from being crawled and a sitemap.xml file to explicitly list URLs that the site would like to have indexed.
For example, an e-commerce site might put all product pages into the sitemap.xml file in case products are not reachable by crawling. (For example, if JavaScript is used for the category navigation UI, there may be no <a href="…"> links in the HTML for the crawler to discover the product pages.) An e-commerce site may also block crawling of the checkout page using robots.txt, as that page does not contain valuable content to index.
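As a hedged illustration of that setup, a robots.txt along these lines keeps crawlers out of the checkout flow and points them at the sitemap (the paths are examples only):

  User-agent: *
  Disallow: /checkout/
  Disallow: /cart/

  Sitemap: https://www.example.com/sitemap.xml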
To play well with crawlers, sites should have a canonical URL for each page so that a crawler can determine if two URLs lead to the same content. (A site might have multiple URLs that return the same page. One of the URLs should be nominated as the “canonical” URL.)
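In practice this is usually just a link element in the head of every variant of a page, all pointing at the one URL you want indexed; for example (the URL is a placeholder):

  <!-- Served on /shoes?color=red, /shoes?sort=price, and /shoes alike -->
  <link rel="canonical" href="https://www.example.com/shoes">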
There was a period of time where the rage for client side rendered pages was to use ‘#’ (and ‘#!’ for an even shorter period of time) as a way to distinguish multiple pages. This worked with the “back” button in the browser history. Normally following ‘#’ links causes the browser to stay on the current page, which is the desired behavior with client side rendering. However to index PWA pages (e.g. product pages), the modern norm is to use the JavaScript browser history API, allowing different URLs to be recorded in the browser history without having to reload the current page. This is the best approach to use for your site to be indexed as Googlebot (and most other crawlers) ignore what comes after the ‘#’ character on the assumption that the ‘#’ identifies a different place to start the user on a page (the original purpose of ‘#’), not that the page will be different.
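Concretely, a client-side router built on the History API gives each product page a real, crawlable URL instead of a '#' fragment. A browser-side sketch; renderProductPage stands in for whatever rendering call your framework provides:

  // Navigate to a product without a full page reload, but with a real URL
  function showProduct(productId) {
    renderProductPage(productId); // app-specific client-side rendering (placeholder)
    history.pushState({ productId }, '', '/product/' + productId);
  }

  // Re-render the correct page when the user presses Back or Forward
  window.addEventListener('popstate', (event) => {
    if (event.state && event.state.productId) {
      renderProductPage(event.state.productId);
    }
  });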
After a page is retrieved by a crawler, indexing extracts all the content to send off to the search indexes. This is also when <a href="…"> links to other pages are identified and sent back to the crawler to add to its queue of pages to retrieve.
One useful tip – if your page uses JavaScript to capture button clicks (without <a href="…"> markup), use <a href="…" onclick="…"> so the indexer will still see the URL, even though the user click will be intercepted by the onclick JavaScript handler.
Another tip – you can also use <noscript><a href="…">…</a></noscript> to embed other links you want crawled, but don't want displayed.
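Putting both tips into markup form (the URLs and the openCategory handler are placeholders): the href keeps the link visible to the indexer even though the click is intercepted by JavaScript, and the noscript block exposes extra links to crawlers without displaying them to users whose browsers run JavaScript.

  <!-- Crawlable link whose click is handled in JavaScript -->
  <a href="/category/shoes" onclick="openCategory(event, 'shoes'); return false;">Shoes</a>

  <!-- Links you want crawled but not shown when JavaScript is enabled -->
  <noscript>
    <a href="/category/shoes?page=2">Shoes page 2</a>
  </noscript>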
Rendering is a step between crawling and indexing, created by the challenge of client side rendered pages. If a page is server side rendered, the crawler will have all of the content to be indexed already – no further rendering is required. If the page is client side rendered, JavaScript must be run to form the DOM (the HTML for the page) before the indexer can do its job.
At Google, that rendering is currently done using a farm of machines running Chrome 41, a somewhat old version of Chrome. This will be updated at some stage (maybe late 2018). That means if the JavaScript on a site uses newer JavaScript features today, it will fail to render.
A second problem is client side rendering takes up more CPU. Rather than doing such rendering in real time, Google currently sends available markup immediately for indexing, and then also sends the page to a secondary queue for additional processing by running the JavaScript on the page. Spare CPU capacity is used to perform such rendering, which could result in a client side rendered page being delayed by multiple days before its content is available for the indexer. (No time guarantees are provided – you can imagine the queue getting longer if multiple major sites rolled out new PWA support at the same time.) The old version of the page is then replaced by the enriched version of the page when available. This makes client side rendering less desirable for sites with frequent updates – the index may continuously lag behind the current content. It also means crawled links to other pages on a site may take multiple crawl iterations, each one incurring a potentially multi-day delay (if the pages are not all listed in the sitemap.xml file).
Another issue with client side rendering is not all non-Google crawlers support running the JavaScript to do client side rendering. Thus some indexers may not pick up all the content on your site.
So how best to build a PWA that can also be indexed?
Server Side, Client Side, Dynamic, and Hybrid / Universal Rendering
Server side rendering, as mentioned before, is where the web server returns all the HTML ready for display. This provides a fast first page load experience for users, is very friendly to indexers, but by definition is not a PWA.
Client side rendering of pages, in comparison, requires all the relevant JavaScript files to be downloaded, parsed, and executed before the HTML to display is available. There are lots of clever tools around that try to break up the JavaScript into smaller files so the code can be downloaded incrementally as the user traverses from page to page on a site. Client side rendering is often slower for the first page, but faster for subsequent pages once JavaScript and CSS files start to get cached in the browser.
Dynamic rendering is introduced in the presentation where a web server looks at the User-Agent header and then returns a server side rendered page when the Google crawler fetches a page and a client-side rendered version for normal users. (The server side rendered page can probably be relatively plain looking, but should contain the same content as the client side rendered page.) You just look for “Googlebot” (or equivalent for other crawlers) in the User-Agent header to work out if the request is coming from a crawler. (For extra safety you can also perform a reverse DNS lookup on the inbound IP address to make sure it is coming from the Googlebot crawler.)
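A rough sketch of that User-Agent check as Express middleware; the serverRender function below is a stand-in for whatever pre-rendering your stack provides (Rendertron, a headless Chrome render, or framework SSR), and the bot list is illustrative:

  const express = require('express');
  const app = express();

  // Placeholder: in a real setup this would return fully rendered HTML for the URL
  async function serverRender(path) {
    return '<html><body><h1>Pre-rendered content for ' + path + '</h1></body></html>';
  }

  app.get('*', async (req, res) => {
    const userAgent = req.headers['user-agent'] || '';
    const isCrawler = /Googlebot|bingbot|DuckDuckBot|Baiduspider/i.test(userAgent);

    if (isCrawler) {
      res.send(await serverRender(req.path)); // crawlers get server-rendered HTML
    } else {
      res.sendFile('index.html', { root: __dirname + '/public' }); // users get the client-rendered app shell
    }
  });

  app.listen(3000);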
Hybrid / Universal rendering is also becoming more widely supported by frameworks such as React, Vue, and Angular. Hybrid rendering is where the web server performs server side rendering of the first page (resulting in faster page display in the browser, as well as simplifying the job for crawlers) then uses client side rendering for subsequent pages. Today, this is easiest to implement when the web server runs JavaScript. (Magento for example runs PHP on the server side, which makes it harder to server side render React components as planned in the upcoming PWA Studio.)
Projects like VueStorefront.io and FrontCommerce do this today, and it could be added to PWA Studio in the future or by a helpful community member.
Other Tools
There are other tools that can be worth checking out.
  • Puppeteer is a JavaScript library that can control a headless version of Chrome, enabling interesting automation projects.
  • Rendertron is an open source middleware project which can act as a proxy in front of your web site, doing client side rendering and returning the resultant page.
  • The Google Search Console allows you to explore how Google indexes your site. It has a number of new tools such as "show me my page as Googlebot sees it" which is useful for debugging. It also contains a tool to see how mobile-friendly a website is (you should test multiple pages on your website). This can also be useful to see if robots.txt is blocking files you thought were not necessary, but which negatively affect the rendering of a page by Googlebot. (There is a desktop tool as well.)
Other Gotchas
Some other common issues that arise when pages are crawled include:
  • If you lazy load images using JavaScript, the images may not be found and included in the image search indexes. You can consider using <noscript><img src="…"></noscript> to include references to such images without displaying them, or embedding "Structured Data" markup on the page (see the markup sketch after this list).
  • Infinite scroll style applications that load more content as you scroll down the page (using JavaScript) require thought as to how much of the page Googlebot should see for indexing purposes. One approach is to have the longer page but hide part of it using CSS, or to create separate pages for Google to index.
  • Make sure your pages are performant. Google will timeout and skip pages that are too slow to return.
  • Make sure your pages don’t assume the user first visited the home page (to set up “browser data” or similar). Googlebot performs stateless requests – no state from previous requests is retained, to mimic what a user landing on the site will see.
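For the lazy-loading point above, the fallback markup might look like the following (the image path is an example): the real image is swapped in by JavaScript as the user scrolls, while the noscript copy keeps it discoverable by crawlers that don't execute that JavaScript.

  <!-- JavaScript copies data-src into src when the image scrolls into view -->
  <img data-src="/media/red-running-shoe.jpg" alt="Red running shoe" class="lazy">
  <noscript>
    <img src="/media/red-running-shoe.jpg" alt="Red running shoe">
  </noscript>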
Conclusions
If you care about your site being visible in search indexes such as Google, and you are going to build a PWA, you need to think about how it is going to be indexed. If you need indexes to be updated promptly, the current best practice is to have the first page of the PWA server side rendered (using Hybrid/Universal rendering). This will work across the widest range of crawlers, with an additional benefit of the first page (normally) being faster to display (a traditional weakness of pure client side rendered solutions). Luckily the major PWA frameworks have Universal rendering support to reduce the effort required to get this going, as long as you can run a web server with JavaScript support.

Length of search results snippets Decreased - Google Confirmed


Google has confirmed that, only about five months after increasing the length of search results snippets, it has now decreased their length again. Danny Sullivan of Google wrote, "Our search snippets are now shorter on average than in recent weeks." He added that they are now "… slightly longer than before a change we made last December."
Google told Search Engine Land in December that writing meta descriptions doesn’t change with longer search snippets, telling webmasters back then that there is “no need for publishers to suddenly expand their meta description tags.”
Sullivan said, "There is no fixed length for snippets. Length varies based on what our systems deem to be most useful." He added that Google will not state a new maximum length for the snippets because they are generated dynamically.
RankRanger's tracking tool puts the new average length of the description snippet field on desktop at around 160 characters, down from around 300+ characters, while mobile search results snippets are now down to an average of around 130 characters.

If you went ahead and already lengthened your meta descriptions, should you go back and shorten them now? Google’s advice is to not focus too much on these, as many of the snippets Google chooses are dynamic anyway and not pulled from your meta descriptions. In fact, a recent study conducted by Yoast showed most of the snippets Google shows are not from the meta description, but rather they are from the content on your web pages.