Tuesday, December 30, 2008

Google Gift for a New Year - PR Updates

Hi All,

This is a nice gift from Google: Google updated PR today. This morning when I came to the office I noticed changes in PR, and this is probably good news for every new website we have launched.

If anyone else has noticed the PR update, let us know.

Regards,
Bhavesh Goswami.

Friday, December 5, 2008

More about 404, 301, 302 and 304 – HTTP Status Codes

Hi All,

It can be tricky to tell search engines which pages to crawl and which not to crawl. This comes up on both small and large websites for several reasons: you may have removed an old page from your website, moved an old page to a new URL, updated a page's content, and so on.

If you are cleaning up your website and want to remove outdated or unwanted pages, it is advisable to return a 404 Page Not Found status for each removed page. This tells search engines that the page they are looking for no longer exists on the website. Don't serve a plain 404 error page for your outdated URLs; make your 404 page user friendly through your hosting control panel. Most web hosting companies let you customize the 404 page.
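
On an Apache server, for example, a single line in your .htaccess file can point visitors to a friendly 404 page; a minimal sketch, where /not-found.html is a hypothetical page you would create yourself:

# Serve a custom, user-friendly page whenever a URL returns 404
ErrorDocument 404 /not-found.html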

If you have moved an existing page that your users might still be looking for, you can use a 301 or 302 response code to let search engines know the page has moved to a new destination and to redirect users to the new page. If you are reorganizing your website, it is better to use a 301 redirect to tell search engines the page has moved permanently to its new location.

A 302 (temporary) redirect is useful when your website is updated frequently and users can find the older content somewhere else on your site for the time being.
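
On Apache, both kinds of redirect can be declared with one line each in .htaccess; a minimal sketch with hypothetical paths:

# 301: the page has moved permanently
Redirect 301 /old-page.htm http://www.example.com/new-page.htm
# 302: the content lives elsewhere only temporarily
Redirect 302 /latest-offer.htm http://www.example.com/archive/offer.htm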

If your website is static and you have not updated it for a long time, you can tell search engines not to download the pages again, saving bandwidth and HTTP requests. You can send a '304 Not Modified' HTTP response to tell the search engine that the page has not changed since its last visit.
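
In practice the exchange looks like this: the crawler sends an If-Modified-Since header with the date of its last download, and if nothing has changed the server replies with a body-less 304; an illustrative trace with example values:

GET /about.html HTTP/1.1
Host: www.example.com
If-Modified-Since: Sat, 29 Nov 2008 19:43:31 GMT

HTTP/1.1 304 Not Modified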

Regards,
Bhavesh Goswami.

Wednesday, September 17, 2008

Google Reveals: Duplicate Content Penalty

Hi,

On my SEO blog I only want to post the hot and important topics that will be useful to my readers. One more important piece of news has come from the Google search team: Google has revealed its secrets about the duplicate content penalty and how to get rid of it.

There are certain things that might affect your website's search engine ranking:

1. Do not create multiple pages with the same content.
2. Do not republish content from another website; write your own content and add value to it.
3. Avoid "cookie cutter" pages.
4. Affiliate websites with no original content have a hard time getting ranked. Try to add value to your website with a forum, a blog, reviews, etc.
5. Do not create multiple URLs with the same content, for example www.example.com/fragrance-brand-kelvin-price-0-10.htm and another page with the same information at www.example.com/fragrance-price-0-10-brand-kelvin.htm. This type of URL lowers your website's performance.

What Google does when it finds duplicate content:

1. If Google finds duplicate content caused by URL parameters, it groups those URLs into one bunch.
2. It selects the best URL from that group, the one that provides the most value to the user.
3. It then checks the link popularity of that URL.

Submit your website to Google Webmaster Tools and create an XML sitemap to tell Google which pages you want to rank.
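
A sitemap is just an XML file listing the URLs you care about; a minimal sketch with hypothetical URLs and dates:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2008-09-17</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/fragrance-brand-kelvin-price-0-10.htm</loc>
  </url>
</urlset>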

You can also block such duplicate pages using a wildcard entry in robots.txt.
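
For instance, if the duplicates are all the reordered-parameter URLs from the example above, one wildcard rule can keep every crawler away from them; a sketch you would adapt to your own URL pattern:

User-agent: *
# Block the duplicate ordering, e.g. /fragrance-price-0-10-brand-kelvin.htm
Disallow: /fragrance-price-*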

Regards,
Bhavesh Goswami.

Sunday, August 10, 2008

What is Google Infinite Space and how do you get rid of it?

Hi,

Torrey Hoffman from the Google Webmaster team explained that while crawling, Googlebot sometimes finds websites with a very large number of links that provide no useful content (information) to the crawler; this is called an "infinite space". Crawling these kinds of URLs wastes bandwidth, and Googlebot does not like to crawl such links. If your website has these unnecessary links, Google may fail to crawl the original content that you actually want Googlebot to index.

Google recently started to notify webmasters about this problem through Google Webmaster Tool Message Center.

Shopping websites are a classic example of infinite space, as they provide many ways to filter results: by brand name, price range, store name, color, size, gender, age, etc. As a result, Googlebot finds thousands of unnecessary links.

To rectify this problem you can use a robots meta tag to tell Google and other search engines not to index such pages. You can add the following robots meta tag to keep these unnecessary links out of the index:

<meta name="robots" content="noindex, nofollow">

This meta tag tells search engines not to index the page and not to follow any of the links on it.

Regards,
Bhavesh Goswami.

Monday, June 23, 2008

Duplicate Content Issue – Important Google Tips

Hi,

The hottest topic among webmasters is the duplicate content issue. Every webmaster (including me) is always thinking about how to get rid of duplicate content, and about what actually counts as duplicate content in the eyes of a search engine.

Sven Naumann from the Google Search Quality team gave some tips on duplicate content and on what is considered duplicate content. Here are the basic tips that Google recommends.

Sven Naumann mentioned that there are mainly two types of duplicate content:

Same-domain duplicate content:
If a website has the same content on multiple pages, Google considers it duplicate content.
What will Google do once it finds duplicate content? The Google blog clearly states that if Google finds duplicate pages during crawling, it filters them and shows only one result in its index.
You can avoid duplicate content issues by blocking such pages with robots.txt: let Google crawl the page you want listed in the search results and block the others using noarchive or robots.txt.
The same content written in different languages does not count as duplicate content.
If you have restructured your site, use 301 redirects through your .htaccess file (see the sketch after this section). You can also set the preferred domain feature available in Webmaster Tools.
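
For a site-wide restructuring, mod_rewrite in .htaccess can map a whole old section onto the new one; a minimal sketch, assuming Apache and hypothetical /blog/ and /articles/ paths:

RewriteEngine On
# Permanently map every URL under /blog/ to the same name under /articles/
RewriteRule ^blog/(.*)$ /articles/$1 [R=301,L]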

Cross-domain duplication:
If someone copies your content and places it on their own website, Google tries to identify the original copy of the content and gives more weight to the original site than to the copy.
Also, if you syndicate your content, ask your syndication partners to put a link back to your original article.

Bhavesh Goswami.

Friday, June 6, 2008

Wild Card Support ($ and *) - Google, Yahoo and MSN Robots.txt Exclusion Protocol

Hi All SEOs,

I am sure all the webmasters (SEOs) reading this blog know about robots.txt and how to use it. With robots.txt you can block any URL, path, or directory that you don't want search engines to crawl; you can even block crawlers from your entire site. A few weeks ago the major search engines, Google, Yahoo and MSN, all announced that they now support wildcards. Here I want to discuss wildcard support: what a wildcard is, why it is useful, and how to use it.

$ wildcard support – The $ sign tells the crawler to match everything up to the end of a URL. With $ support a webmaster can block whole classes of URLs, so you no longer need to list every file you want blocked in robots.txt. You can block files matching a specific pattern; for example, you can target a particular file extension such as PDF, and search engines will not access those files or include them in their database.

The $ sign is used to block certain file types. For example, if you want to block files with the .pdf extension, write the following in your robots.txt file:

User-agent: Googlebot
Disallow: /*.pdf$

* wildcard support – The * sign tells the crawler to match any sequence of characters. The * wildcard can block whole URL patterns, for example when you don't want search engines to crawl URLs with session IDs or other extraneous parameters. Just specify the pattern you don't want indexed using the wildcard and you are done; there is no need to build a long list of URLs.

You can use the * sign to block URLs with session IDs. For example, to block such URLs, write the following in your robots.txt file:

User-agent: *
Disallow: /*?

This will block all URLs containing a question mark, which covers session IDs and any other query parameters.

Regards,
Bhavesh Goswami.

Wednesday, May 28, 2008

Yahoo Search Index and Ranking Algorithm Updates

Hi All,

I don't know whether this is good news or bad news for SEO, but Priyanka Garg and Sharad Varma from the Yahoo Search team said that Yahoo is going to roll out some changes to its crawling, indexing and ranking algorithms, and that during this process your website's ranking may fluctuate.

Regards,
Bhavesh.

Wednesday, May 21, 2008

Some Basic but Important SEO (Search Engine Optimization) Tactics

Hi All,

SEO is one of the most effective drivers of success for any online business. SEO is a living, breathing process that keeps changing and never ends. Before you start an SEO campaign for your website, first create a plan and set goals. Consider the following things when you start optimizing:

>> Title of the page
>> Meta tags
>> Content
>> Website Navigation System
>> Graphics of website
>> Anchor Text
>> Links
>> Location of keywords
>> Frequency of keywords

These are the main factors in website optimization. The success of your website depends heavily on your keywords or keyword phrases. Never start with highly competitive or generic keywords; try to optimize for the more specific keyword phrases that internet users are actually searching for.

Meta tags matter a great deal for the success of your website, although many other factors are important too. Use descriptive, keyword-rich meta tags on your website.

Website navigation must be easy to understand for both users and search engines. If you want to use Flash-based links, don't forget to also put simple text links somewhere on the page, so search engines can find all of your pages easily and include more of them in their index.

When starting an SEO campaign, prioritize your main pages. For example, if you have 10 pages you want to optimize but the most important one is your index page, set your goal for that page first. Give the highest priority to the pages that will generate revenue.

Keyword location means placing your most important keywords throughout the content of your website.

Bhavesh.

Tuesday, April 29, 2008

PR Update 30th April 08

Hi All,

This morning I noticed PR fluctuations across all my websites. The index pages of all my websites went up by 1, while all the inner pages lost their PR. So this PR update seems to affect index pages only.

Has anyone else noticed the PR update?

Bhavesh Goswami.

Wednesday, April 23, 2008

Tips while moving to a new domain

Hi All,

Moving an entire site from an old domain to a new domain name is a crucial task from an SEO point of view. It is always difficult for a webmaster, but following these steps will help keep your website from being hurt in natural search results.

1. Once you have finished moving all the content to the new domain, set up a redirect from the old domain to the new one. You are moving your website permanently, so make sure you use a 301 Redirect (Permanent Redirect); see the .htaccess sketch after this list.

2. Once this is done, check that it works properly. Create page-by-page 301 redirects so search engines and users land on the exact corresponding page.

3. Add your new domain to Google Webmaster Tools, verify it, and then upload a sitemap so Google finds the new domain and crawls every page.

4. Regularly check the crawl report in Google Webmaster Tools to see whether Google is reporting any 404 pages on your old domain; if you find one, create a 301 redirect for it.

5. Once everything is done with Google and the other search engines, check the backlinks of your old domain, contact the webmasters to let them know your website has moved to a new domain, and ask them to update the link URL.

6. Keep control of your old domain for at least 180 days.
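
A minimal .htaccess sketch for step 1, assuming Apache with mod_rewrite enabled (old-domain.com and new-domain.com are placeholders):

RewriteEngine On
# Match the old domain, with or without www
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
# Send each old URL to the same path on the new domain, permanently
RewriteRule ^(.*)$ http://www.new-domain.com/$1 [R=301,L]

Because $1 carries the original path along, this also gives you the page-by-page mapping from step 2.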

Bhavesh Goswami.

Wednesday, April 16, 2008

Create easily accessible and crawlable website

It is crucial for a webmaster to make a site easily accessible and crawlable in order to get top rankings in search engines. To improve your website's ranking, build a site that is easy to navigate and easy to access.

To make your content easy to crawl, make sure every page is hyperlinked; this way search engines can find the deep pages of your website. That is not all: you also need to use plain HTML hyperlinks, and avoid JavaScript or Flash for content, since search engines generally do not crawl JavaScript and Flash content. Use a simple HTML tag to create a hyperlink, like

<a href="http://seofuture.blogspot.com/">Seo Blog</a>

Avoid using JavaScript for hyperlinks:

<a href="#" onclick="javascript:void(...)">Anchor Text</a>

Generally, search engines read a page's HTML source from top to bottom. If possible, place important content near the top of the page so crawlers encounter it as the page loads, and also include your main keywords in the content at the end of the page.

Once you implement the changes above, your website's natural ranking should improve.

Post your comment if you have any question.

Friday, March 28, 2008

Google Robots.txt Generator – New Feature Introduced


Google has announced a new robots.txt generation tool in Google Webmaster Tools. This robots.txt generator gives you an easy, interactive way to create a robots.txt file. To use it, log in to Google Webmaster Tools, go to the Tools section, and click Generate robots.txt. The tool is simple: just enter the URLs or directories that you don't want crawled by search engines.
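
The generated file is plain text; blocking two directories for all crawlers, for example, produces something along these lines (a hypothetical sample, not the tool's exact output):

User-agent: *
Disallow: /cgi-bin/
Disallow: /private/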

Isn’t it simple?

Once you have finished creating your robots.txt, don't forget to analyze it with the robots.txt analyzer; that tool is also available in Google Webmaster Tools under the Tools section.

Regards,
Bhavesh Goswami.

Sunday, March 2, 2008

PR Update - 29th Feb Sat 2008

Hi All,

On Saturday morning when I came to my office I noticed some changes in the PR of my websites. If anyone else has noticed a PR update, let me know, and include your website's URL so I can check it.

Regards, :-)
Bhavesh Goswami.

Wednesday, February 13, 2008

MSN Announced: New Improved Live Search Engine Crawler

On 12th Feb, MSN officially announced a new, improved crawler that is better at caching and at indexing more pages of a website. Many webmasters have complained about the MSN crawler indexing too few pages, but with this improved crawler you should see more pages indexed by MSN. Maybe this improvement will help webmasters make their websites more visible in the MSN search engine.

HTTP Compression: This feature compresses web pages and reduces the time needed to download data from the server to the crawler.

Conditional GET: This feature asks the server whether the page has changed since the last crawl. The MSN crawler includes an "If-Modified-Since" header carrying the time of its last download in the GET request. If the page has not changed since then, the server responds with a 304 HTTP response.

To reflect these changes they upgraded their user agent to msnbot/1.1.

Saturday, January 12, 2008

PR Update Has Taken Place

Hi All,

This morning when I visited my websites I found that they had all received Google PageRank.

Google is also now showing more backlinks to all my websites.

This is great and very quick.

Regards,
Bhavesh Goswami.

Friday, January 11, 2008

How to remove URL from Google and other search engines?

There are many ways to remove pages from Google's cache:

.htaccess
Do not link to the page
Robots.txt
Nofollow attribute
Noindex meta tag
Google URL removal tool

.htaccess: Using an .htaccess file you can redirect one URL to another. You can also use .htaccess to password-protect a subdirectory, and Google and other search engines cannot guess the password. There are ready-made tools that generate the password-protection directives for you. This is a very handy way to keep URLs out of Google and other search engines.
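
A minimal sketch of those password-protection directives, assuming Apache and a password file already created with the htpasswd utility (all paths and names here are hypothetical):

AuthType Basic
AuthName "Private area"
# Password file generated with: htpasswd -c /home/user/.htpasswd username
AuthUserFile /home/user/.htpasswd
Require valid-user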

Do not link to the page: Simply not linking to a page you don't want search engines to find is a weak approach. It is not reliable because if someone else links to that page, search engines will still discover it through the other website's link.

Robots.txt: Here you can list the pages, files, or subdirectories that you don't want search engines to crawl. Google provides a robots.txt feature in Webmaster Tools to check whether Google is allowed to fetch a given URL.
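
For example, to keep all crawlers out of a private subdirectory and away from a single page (hypothetical paths):

User-agent: *
Disallow: /private/
Disallow: /signup.htm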

Nofollow attribute: A nofollow link is another weak approach to keep search engines away. Suppose you have pages that you don't want crawled, such as your sign-up page or contact-us page; you can simply add nofollow to the links pointing to them. But if someone else links to those pages, search engines will find that link, crawl the page, and show it in their index. So this is not a reliable way to remove pages from search engines either.
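
The attribute goes on the link itself; for example, with a hypothetical sign-up URL:

<a href="/signup.htm" rel="nofollow">Sign up</a>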

NoIndex: The noindex meta tag is a simple and effective way to remove pages from search engines. Put a noindex meta tag on any page that you don't want search engines to list. Even if other websites link to that page, a search engine that sees the noindex tag will drop the page from its results and will not keep it in its cache.
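
The tag goes in the page's <head> section:

<meta name="robots" content="noindex">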

Google URL Removal Tool: Google introduced this tool about 5 years ago as a separate Google service, but now you can find this handy tool in Google Webmaster Tools. With it you can remove a page that Google has cached but that is now useless or unnecessary for your website. Make that page return a 404 and then submit it to the Google URL removal tool.

Regards,
Bhavesh Goswami.