Saturday, October 30, 2010

Robots.txt File : SEO


Robots.txt files are often mentioned as being an important foundation of a search friendly web site. To site owners and small businesses who are new to search marketing, the robots.txt file can sound daunting. In reality, it's one of the fastest, simplest ways to make your site just a little more search engine friendly.
(SEG Bootcamp articles are no-frills content designed to bring small business owners up to speed on the concepts and techniques needed to market their businesses online.)
What is Robots.txt?
Robots.txt is a simple text file that sits on the server with your web site. It's basically your web site's way of giving instructions to search engines about how they should index your web site.
Search Engines tend to look for the robots.txt file when they first visit a site. They can visit and index your site whether you have a robots.txt file or not; having one simply helps them along the way.
All of the major search engines read and follow the instructions in a robots.txt file. That means it's a pretty effective way to keep content out of the search indexes.
A word of warning. While some sites will tell you to use robots.txt to block premium content you don't want people to see, this isn't a good idea. While most search engines will respect your robots.txt file and ignore the content you want to have blocked, a far safer option is to hide that premium content behind a login. Requiring a username and password to access the content you want hidden from the public will do a much more effective job of keeping both search engines and people out.
What Does Robots.txt Look Like?
The average robots.txt file is one of the simplest pieces of code you'll ever write or edit.
If you want to have a robots.txt file for the engines to visit, but don't want to give them any special instructions, simply open up a text editor and type in the following:
User-Agent: *
Disallow:

The "User-Agent" part specifies which search engines you are giving the directions to. Using the asterisk means you are giving directions to ALL search engines.
The "disallow" part specifies what content you don't want the search engines to index. If you don't want to block the search engines from any area of your web site, you simply leave this area blank.
For most small web sites, those two simple lines are all you really need.
If your web site is a little bit larger, or you have a lot of folders on your server, you may want to use the robots.txt file to give some instructions about which content to avoid.
A good example of this would be a site that has printer-friendly versions of all of their content housed in a folder called "print-ready." There's no reason for the search engines to index both forms of the content, so it's a good idea to go ahead and block the engines from indexing the printer-friendly versions.
In this case, you'd leave the "user-agent" section alone, but would add the print-ready folder to the "disallow" line. That robots.txt file would look like this:
User-Agent: *
Disallow: /print-ready/

It's important to note the forward slashes before and after the folder name. The search engines will tack that folder on to the end of the domain name they are visiting.
That means the /print-ready/ folder is found at www.yourdomain.com/print-ready/. If it's actually found at www.yourdomain.com/css/print-ready/ you'll need to format your robots.txt this way:
User-Agent: *
Disallow: /css/print-ready/

You can also edit the "user-agent" line to refer to specific search engines. To do this, you'll need to look up the name of a search engine's robot. (For instance, Google's robot is called "googlebot" and Yahoo's is called "slurp.")
If you want to set up your robots.txt file to give instructions ONLY to Google, you would format it like this:
User-Agent: googlebot
Disallow: /css/print-ready/

How do I Put Robots.txt on my Site?
Once you've written your robots.txt file to reflect the directions you want to give the search engines, you simply save the text file as "robots.txt" and upload it to the root folder of your web site.
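Before uploading, you may want to double-check that your rules do what you expect. The short script below is only an illustrative sketch using Python's standard robotparser module; the domain and paths are placeholders, not real addresses.

# Sketch: check robots.txt rules with Python's built-in parser (placeholder URLs).
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("http://www.yourdomain.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# True means the rules allow that crawler to fetch the URL.
print(parser.can_fetch("*", "http://www.yourdomain.com/index.html"))
print(parser.can_fetch("googlebot", "http://www.yourdomain.com/css/print-ready/page.html"))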

Basic Optimization Techniques

Believe it or not, basic SEO is all about common sense and simplicity. The purpose of search engine optimization is to make a website as search engine friendly as possible. It's really not that difficult. SEO 101 doesn't require specialized knowledge of algorithms, programming or taxonomy but it does require a basic understanding of how search engines work.

For the purposes of brevity this piece starts with a few assumptions. The first assumption is that a single, small business site is being worked on. The second is that the site in question is written in fairly standard mark-up such as HTML, perhaps generated by PHP. The last assumption is that some form of keyword research has already taken place and the webmaster is confident in the selection of keyword targets.

There are two aspects of search engines to consider before jumping in. The first is how spiders work. The second is how search engines figure out what pages relate to which keywords and phrases.

In the simplest terms, search engines collect data about a website by sending an electronic spider to visit the site and copy its content, which is stored in the search engine's database. Generally known as 'bots', these spiders are designed to follow links from one page to the next. As they copy and assimilate content from one page, they record links and send other bots to make copies of content on those linked pages. This process continues ad infinitum. By sending out spiders and collecting information 24/7, the major search engines have built databases that measure in the tens of billions of pages.
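To make that crawl loop concrete, here is a toy spider. It is only a minimal sketch, not how any real search engine is built: it fetches a page, harvests the links it finds, and queues them to visit next. The starting URL and page limit are placeholders.

# Toy crawler sketch: fetch pages, harvest links, follow them breadth-first.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    queue, seen = deque([start_url]), {start_url}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip pages that cannot be fetched
        collector = LinkCollector()
        collector.feed(html)
        for href in collector.links:
            absolute = urljoin(url, href)
            if absolute not in seen:
                seen.add(absolute)       # record the link...
                queue.append(absolute)   # ...and send a "bot" to it later
    return seen

# Example (placeholder domain):
# print(crawl("http://www.example.com/"))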

Knowing the spiders and how they read information on a site is the technical end of basic SEO. Spiders are designed to read site content the way you and I read a newspaper. Starting in the top left-hand corner, a spider reads site content line by line from left to right. If columns are used (as they are on most sites), spiders will follow the left-hand column to its conclusion before moving to the central and right-hand columns. If a spider encounters a link it can follow, it will record that link and send another bot to copy and record the data found on the page the link leads to. The spider will proceed through the site until it records everything it can possibly find there.

As spiders follow links and record everything in their paths, one can safely assume that if a link to a site exists, a spider will find that site. There is no need to manually or electronically submit your site to the major search engines: the search spiders are perfectly capable of finding it on their own, provided a link to your site exists somewhere on the web. Search engines have an uncanny ability to judge the topic or theme of the pages they examine, and they use that ability to judge the topical relationship of pages that are linked together. The most valuable incoming links come from sites that share topical themes.

Once a search spider finds your site, helping it get around is the first priority. One of the most important basic SEO tips is to provide clear paths for spiders to follow from point A to point Z in your website. This is easily accomplished by providing easy-to-follow text links to the most important pages of the site in the navigation menu or simply at the bottom of each page. One of these text links should lead to a text-based sitemap, which lists and links to every page in the site. The sitemap can be the most basic page in the site, as its purpose is more to direct spiders than to help lost site visitors, though designers should keep visitors in mind when creating it. Google also accepts more advanced, XML-based sitemaps, which you can read about in its Webmaster Help Center.
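For reference, a Google-style XML sitemap is just a list of URLs wrapped in a simple schema defined at sitemaps.org. The entry below is a minimal illustration with a placeholder domain; the full set of options is described in Google's Webmaster Help Center.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yourdomain.com/</loc>
    <lastmod>2010-10-30</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.yourdomain.com/services.html</loc>
  </url>
</urlset>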

There will be cases where allowing spiders free access to every page on a site is not desirable. Therefore you'll need to know how to tell spiders that some site content is off limits and should not be added to their databases; that is exactly what the "robots.txt" file covered earlier is for.

Offering spiders access to the areas of the site you want them to access is half the battle. The other half is found in the site content. Search engines are supposed to provide their users with lists of pages that relate to the search terms people enter in the search box. A search engine needs to determine which of billions of pages are relevant to a small number of specific words, which means it needs to know that your site relates to those words.

To begin with, there are a few elements a search engine looks at when examining a page. After the URL of a page, a search spider records the page title. It also examines the description meta tag. Both of these elements are found in the "head" section of the source code.

Titles should be written using the strongest keyword targets as the foundation. Some titles are written using two or three basic two-keyword phrases. A key to writing a good title is to remember that human readers will see the title as the reference link on the search engine results page. Don't overload your title with keyword phrases. Concentrate on the strongest keywords that best describe the topic of the page content.

The description meta tag is also fairly important. Search engines tend to use it to gather information on the topic or theme of the page. A well written description is phrased in two or three complete sentences with the strongest keyword phrases woven into each sentence. As with the title tag, some search engines will display the description on the search results pages, generally using it in whole or in part to provide the text that appears under the reference link. 
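As an illustration, here is roughly what the title and description might look like in the head section of a page about blue construction widgets (the wording and file name are invented, reusing the example later in this article):

<head>
  <title>Blue Construction Widgets | Smith and Co.</title>
  <meta name="description" content="Smith and Co. manufactures blue construction widgets trusted by leading builders and contractors. Browse our range of heavy-duty widgets for construction projects.">
</head>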

Due to abuse by webmasters, such as stuffing it with irrelevant terms, search engines place minor (if any) weight on the keywords meta tag. As such, it is not necessary to spend a lot of time worrying about the keywords tag.

After reading information found in the "head" section of the source code, spiders continue on to examine site content. It is wise to remember that spiders read the same way we do, left to right and following columns.

Good content is the most important aspect of search engine optimization. The easiest and most basic SEO rule is that search engine spiders can be relied upon to read basic body text 100% of the time. By providing a search engine spider with basic text content, you offer the engines information in the easiest format for them to read. While some search engines can strip text and link content from Flash files, nothing beats basic body text when it comes to providing information to the spiders. You can almost always find a way to work basic body text into a site without compromising the designer's intended look, feel and functionality.

The content itself should be thematically focused. In other words, keep it simple. Some sites cover multiple topics on each page, which is confusing for spiders. The basic SEO rule here is that if you need to express more than one topic, you need more pages. Fortunately, creating new pages with unique topic-focused content is one of the most basic SEO techniques, making a site simpler for both live users and electronic spiders.

When writing page content, try to use the strongest keyword targets early in the copy. For example, a site selling "Blue Widgets" might use the following as a lead-sentence; 

"Blue Widgets by Smith and Co. are the strongest construction widgets available and are trusted by leading builders and contractors."

The primary target is obviously construction applications for the blue widget. By placing the keyword phrases "blue widgets" and "construction widgets" alongside other keywords such as the singular words "strongest", "trusted", "builders" and "contractors", the sentence is crafted to help the search engine see a relationship between these words. Subsequent sentences would also have keywords and phrases woven into them. One thing to keep in mind when writing page copy is that unnecessary repetition of keywords (keyword stuffing) is often considered spam by search engines. Another thing to remember is that, ultimately, the written copy is meant to be read by human eyes as well as search spiders. Read your copy out loud. Does it make sense and sound natural? If not, you've overdone the use of keyword phrases and need to make adjustments.

Another important element a spider examines when reading the site (and later relating the content to user queries), is the anchor text used in internal links. Using relevant keyword phrases in the anchor text is a basic SEO technique aimed at solidifying the search engine's perception of the relationship between pages and the words used in the link. For example... we also have a popular series of articles on the basics of SEO written by Stoney deGeyter. Linking the term "basics of SEO" is an example of using keyword phrases in the anchor text. Terms such as "SEO 101" or "SEO for beginners" could also have been used. 
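In the markup itself, keyword-rich anchor text is nothing more than the visible text of an ordinary link; the URL below is made up for illustration:

<a href="/articles/seo-basics/">basics of SEO</a>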

Remember, the foundation of successfully optimizing your site is simplicity. The goal is to make a site easy to find, easy to follow, and easy to read for search spiders and live-visitors, with well written topical content and relevant incoming links. While basic SEO can be time consuming in the early stages, the results are worth the effort and set the stage for more advanced future work. 

SIMPLE SEO PROCESS


The Google optimization process is not rigidly structured, but it can be broken down into the following steps:
1) Analysis of the competition.
A competitive analysis helps you learn what strategies the companies competing with yours for the same segment are using. Through this research you discover their keywords, strengths and weaknesses. That does not mean copying everything they do, but you can learn a great deal by watching them. Knowing your competitors shows you where you can compete and how you can improve.
2) Survey of keywords.
A detailed study identifies the keywords your target audience actually uses. Getting the keywords right is an essential step in the success of your "virtual branch office", since they are what carry visitors from the search engines to your website.
3) Development of the site's pages.
You must improve the site's pages, or create them if you are starting from scratch. This step optimizes the HTML coding (headings, meta tags, etc.), the "site architecture" (the internal structure of links between the pages that make up your site) and the graphical interface of the website (content and design). Remember that a website with a great design that does not provide the content visitors need is a sterile site.
4) Incorporation of keywords.
Once you choose the most effective keywords, place them on your website. They should appear on the most important pages (the home page and services page) and in the right balance. Excessive repetition of the same keywords can cause some search engines to erase the site from their listings.
5) Review and link building.
Acquiring quality, related links pointing to your website accounts for 95% of the success in reaching high rankings in the search engines. While having a large number of links pointing to your pages also helps, what really matters is the quality of those links and how closely the linking sites relate to yours.

BLACK HAT SEO


In the realm of SEO, black hat is the unethical approach and should be avoided at all costs. Some webmasters think black hat SEO will bump their rankings quickly and keep them sitting on top of Google. That's not the case. At some point they will be caught, bumped right out of the search engines, and permanently banned.
Avoid shortcuts, and steer well clear of any company that promises you a #1 search engine ranking overnight for a ridiculously low price (that practically screams "scam"!). Any practice that deliberately deceives the search engines should be avoided. Black hat only hurts you in the long term.
Black hat SEO includes the following unethical practices:
  • Keyword stuffing - cramming so many keywords into your content/website that it no longer makes sense
  • Hidden text - for example, putting white text against a white background
  • Cloaking - visitors see one version of the website, but the content is presented in a different way to search engine spiders
  • Doorway pages - "fake" pages with no real content, created only to trick spiders and to rank for chosen keywords
  • Cookie stuffing - cookies are strings of text saved in web browsers (here dropped by third-party sites). The catch: if a consumer visits the target website while the cookie is still valid (say, within 60 days) and makes a purchase, the affiliate fraudulently receives a commission on the sale
  • Sneaky redirects - creating a full page of unique content and then, once it ranks highly, redirecting it to another page on your site (i.e. your shopping cart page)
Ranking #1 on Google
You can get to the top of the Google rankings. First, you need to figure out how much you know about SEO and investigate the competition in your market. When you do reach the top of Google, great things start happening for you and your company! You get busy and have MORE than enough business and customers.
Check out the following white hat tips:
  • Spend at least 30 minutes a day working on your SEO - this daily investment pays off for your site and company!
  • You do not have to know everything about SEO - there is a lot of conflicting information out there. Investigate the various sites, resources, etc. and choose what works best for your site and company!
  • Outsource SEO work to a professional if you feel you do not know enough about SEO - even so, it helps to know the basics. A good SEO expert will guide you in the right direction, so investigate reputable companies and agencies and make sure you hire a real SEO pro!

WHITE HAT SEO ?


WHITE HAT (ETHICAL) SEO

White hat SEO is the name given to an "ethical stance" that rejects abusing the medium. Do the right thing: optimize your text (put the title and description where they belong, and only target searches for which the page has real, consistent substance). Implement accessibility standards (for example, so that a search engine can understand an image, add an alt="text" attribute with a short description). Create a link structure: the pages of a site need to link among themselves in a sensible way.
Black hat is the opposite of white hat and is considered unethical SEO. It may rely on low-quality or undesirable links, such as pornographic or pharmacy sites. Those who practice black hat SEO "inflate" PageRank by placing a link on a page with a high PageRank, but if that link is removed, the page's PageRank falls at the next Google update.
A black hat SEO (spammer) does the following: Cloaking (a spam page hidden behind an apparently normal one; search engines, or visitors browsing without JavaScript, see the hidden content, which is usually plain text stuffed with h1 tags, bold and lots of links that would never appear on the real page. This is the most widely used technique among spammers and is counterproductive because it hides text that should be highlighting the page for users). Buying links (Google treats paid backlinks as a violation of its guidelines).
Setting up farms (automatically generated or copied garbage content, pushed into the search engine indexes and scattered across different servers, domains and content structures). Keyword abuse (a spammer stuffs a title with all kinds of search words to capture the greatest possible number of searches, with no coherence). Duplicate content (taking one text and placing it on multiple pages to get more pages indexed). Hidden text (hiding links and keywords on a website using hidden divs, unreadably small fonts, or white text on a white background).
The biggest problem with black hat SEO is the consequences it can bring for the owner of the page. The most serious is complete removal from Google's results; this usually happens when Google detects the malicious SEO and penalizes the page in that way.
Another important term to know is gray hat, which refers to practices that fall somewhere in between, using methods from both sides as seems most beneficial for the page.

WHITE HAT SEO TIPS


When it comes to ethical SEO, white hat is the approach to take, however aggressively you pursue it. Just think about "value" when it comes to white hat SEO. White hat means the ethical search engine techniques endorsed by major players such as Google.
With aggressive white hat SEO you may not get there as quickly, but you do get there - every time! White hat SEO is every element you add to your sites, blogs, etc. that adds value and credibility:
  • Write GREAT, unique content
  • Add innovative video and audio to your sites
  • Build a consistent, worthwhile community
  • Make sure your site visitors see it as a source of value-packed information (keep them coming back for more!)

SEO CHECK LIST


Here are 100 tips to get a top rank in all the search engines. It is advised to act on one step daily to reach your goal.

1ST TIP:

Decide the keywords you are going to target on your website: use a keyword suggestion tool and also check other sites related to your site's theme. This will help you determine the keywords. Write down the keywords for each page. Use these keywords according to the nature of the content, and use no more than 15 keywords on a page. Check the keyword density with a keyword density analyzer. Your first day should be spent working only on keywords.

2ND TIP:

Set your goal and determine a keyword phrase you want to promote for top engine ranking. Now write this phrase at least three times exactly, and at least two times in a close variation - e.g. "search engine ranking tips" 3 times and "search engine ranking" 2 times. If you are new to SEO, you should choose a keyword phrase with medium or low search volume, because high-volume phrases have a lot of competition and it will take you time to rank for them.
Now use your phrase "search engine ranking tips" as the anchor text for at least 30% of your backlinks, and use three to five other supporting phrases for the remaining 70 percent. Once you have 17 to 20 top rankings in the search engines, select some additional keyword titles. You should keep a record of your backlink titles in Google Webmaster Tools. According to Google:
  • First Tip: To keep your keyword phrase at the top of an engine, get one backlink with your title every 17 days.
  • Second Tip: Treat backlinks without a title as equivalent, in search engine optimization effectiveness, to one with a title.

3RD TIP:

You should get 6 to 17 quality backlinks relevant to your site, placed on different websites. Every week, place 5 new backlinks.

4TH TIP:

Make some free hosted pages and blogs on other website URLs. I recommend you own at least 17 sites on different servers. Keep in mind that your sites must be updated regularly to get a top search engine ranking. Web sites gain search engine credibility as they get older. So it is best to own many websites and concentrate first on getting 5 backlinks from the strongest sources. You should study your Google Analytics stats to analyze your work. If you can, try to maintain at least 50 websites by hiring extra webmasters. You can create many of them for free, including blogs, spaces, forums, subdomains and groups.

SOME KEYWORD TIPS : SEO



Let me tell you guys that how I use my keywords:
  1. Best Place For Keywords
    The best places for keywords on your pages include the top of the page, the headline, the sub-headline, the title, and the first and last paragraphs. Search engine spiders are not as wise as you; they search for the exact keyword. Therefore, if you really want to boost your traffic, use specific keywords in the exact phrase. Try to use the keyword in the first paragraph, in the first line or sentence.
  2. Keywords And META Tags
    Using keywords in META tags is always a good technique to enhance the site and its keyword traffic. It doesn't matter which tool you use to design your pages; after you have finished your design, save your work, open the HTML coding and check it (see the sample markup after this list).
    Title: The site's title is what shows at the top of the browser window. Set the site title to a very short description and include one keyword.
    Description: Now do the same kind of thing for the keywords and description. For the keywords, type some appropriate keywords; for the site description, write a short sentence - this is the text that will be shown when your site appears in a visitor's search results. The description can be around 25 words and should include one or two keywords. It is better to use a different site title, description and keywords for each page.
  3. Keyword And ALT Tag
    If you add an image or picture, you can include an ALT tag containing a short description with a keyword. The search engine robot reads your ALT tag much like a META tag.
  4. Put Keyword in Domain Name And Page Title
    Try to find a domain name containing a keyword, such as seotips or seo_tips. This earns you a few extra points, much like including keywords in your page names.
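Pulling the tips above together, the head and image markup for a page might look something like this (the titles, descriptions and file names are invented for the example):

<head>
  <title>SEO Tips for Small Business Websites</title>
  <meta name="description" content="Practical SEO tips to help small business websites earn targeted search engine traffic.">
  <meta name="keywords" content="seo tips, keyword research, search engine ranking">
</head>
...
<img src="seo-tips-chart.gif" alt="Chart of SEO tips for improving search engine ranking">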

WHAT ARE KEYWORDS? : SEO


Keywords are the specific words that people type into a search engine. When you look for information on the web, you open a search engine and enter some keywords describing what you are looking for. At the other end, the search engine checks its database for information related to your keywords and returns a list of pages matching the words submitted. The words used to search for anything are called keywords or search terms. If you have used search terms before, you will know that even a single keyword returns a broad list of results. To make your search better, you need to use a keyword along with other words, or a phrase. This gives more relevant search results. You can get keyword suggestions from various online tools.

IMPORTANCE OF KEYWORDS


Keywords play an important role in increasing traffic to your website. Keywords used on your website, matched against what people type into search engines, bring targeted traffic to your web pages. Keywords make it easy for people who need your information or products to reach you. Think of keywords as the contact numbers for your business, and of search engines like Google and Yahoo as the telephone directory: the directory lists your contact details and address in the form of keywords.

WHICH KEYWORDS ARE THE BEST FOR MY WEBSITE?


Now, how will you decide which keywords are the best and most relevant for your web pages? It is most important to use well-paying keywords and good keywords related to your website. For a moment, consider yourself a customer who goes to a shoe shop and asks for a particular shoe. What questions would you ask about the design, size, brand and price you want? What words or phrases would you use? Those will be your keywords, used appropriately within a phrase. In the same way, use keywords in phrases relevant to your product or web content.
List your keywords and phrases, or get some help from the AdWords keyword tool, a keyword selector tool or another keyword tool, and use a keyword analyzer to find well-paying keywords related to your website. Now use your keywords in your web content within appropriate phrases and wait for the results. Remember that a single popular keyword has a lot of competition for top ranking in the search results.

ARE YOU USING POPULAR KEYWORDS?


Do not expect that using proper keywords and submitting your site to the search engines will, by itself, ensure a flood of traffic to your website. SEO is not a part-time job; it requires time and concentration as well as SEO skill. Firstly, search engines can take several weeks to process your submission. Secondly, some search engines consider many other factors, such as site popularity and traffic, when ranking your site. Site popularity is determined by the backlinks to your site. Therefore, to enhance your site's popularity, you need to establish reciprocal linking techniques.
Another important factor most people forget when placing keywords: people may not enter the keyword or phrase into the search engine exactly the way you did. So using keywords the way people naturally phrase them is very necessary.

It is advised that you use a keyword search tool or the Google keyword tool to find good AdWords, PPC and AdSense keywords relevant to your pages. Don't forget that your purpose is to get targeted traffic; the more accurate and appropriate the keywords, the more chances your site will be viewed. https://adwords.google.com/select/KeywordToolExternal can help you find good keywords for your site.

KEYWORDS OPTIMIZATION : SEO


FORMULA OF KEYWORD DENSITY IN SEO


An integral part of any search engine optimization (SEO) campaign is researching and choosing keywords for your website. These are the words or terms for which you are trying to rank in the search engines. Do you sell PEZ dispensers? Then you may want to target "PEZ", "PEZ dispensers", "candy" or the most popular items, such as "Star Wars PEZ". Once you choose these keywords, you need to work them into the content and tags of your website so that it is considered relevant by the search engines when they rank your site. This includes the copy on your pages as well as the page titles, heading tags and links. It is not an easy process, but it is necessary for SEO.
When you insert your keywords throughout your on-page content, it is vital to understand the density at which the keywords appear on a page. "Keyword density" refers specifically to how often a keyword is used in your on-page copy. Search engines compute the keyword density percentage, and the more often they find a search term in the content, the more they will consider your page relevant and the higher they will rank it. However, there is a fine line between an optimal keyword density and over-stuffing your content with keywords and terms.
Please note: just having the right keyword density in your SEO campaign does not mean you will automatically rank higher. The newer Google algorithms favor genuinely related terms, not keyword density alone. Focus on the quality of your content rather than only on keyword density. If you exceed the optimal keyword density, you can face an over-optimization penalty.
There is no "perfect" keyword density percentage in SEO, but Volacci strives to keep density at or below 5 percent. The percentage is relative to the length of a page's content, so it is recommended that you know the formula for calculating it on each page. The formula for keyword density is quite simple.
  1. Count how many words you have on your page.
  2. Count how many times you have used your keyword
  3. Apply this formula

    Keyword density = (keyword count * 100) / (total word count)
For example, let's say you have a 500-word article about PEZ dispensers, and you are optimizing for "PEZ":
  • Total word count: 500
  • Uses of the keyword "PEZ": 12

    Density of "PEZ" = (12 * 100) / 500
    The keyword density for "PEZ" on the page is 2.4%
In SEO campaigns, optimizing the keywords in your on-page content is very important for relevance in the search engines. However, if you focus too much on keyword density, you can lose sight of other important elements of a well-rounded approach to optimizing your site. Use the keyword density formula to make sure you are applying your keywords in quality content and links, so that you get the most out of your keyword research. If you abuse your keywords, Google will catch you, and keyword stuffers are not spared.

WHAT IS ADSENSE : SEO


AdSense is a Google service through which a webmaster inserts text-based ads, drawn from Google's AdWords program, into a website and earns money each time a visitor clicks on an ad.

HOW DOES ADSENSE WORK?

The operation is very simple. The webmaster inserts a snippet of JavaScript code that calls a Google server. This server analyzes the page where the code is inserted and, depending on its content, generates code that displays a series of ads from companies related to the theme of the page.
In this way the clickthrough rate (the percentage of visitors who click on the ads) is higher, since readers are more interested in the topic, and so the earnings are greater for the webmaster.
Moreover, the ad formats are highly customizable, in both color and size. You can even prevent ads from your competitors' web sites from appearing in the ad space.
What do I need to participate in the AdSense service? You only need to be the administrator of a website and submit a request to the Google team. That said, Google has some very clear rules about the quality of websites: you may not insert AdSense on pages that have pop-ups or dialers, for example.
Once your site has been accepted, you can start inserting advertising within minutes. However, the ads may not be visible for a few hours, since the Google ad server has to crawl your site first.

HOW MUCH MONEY WILL I MAKE?

Google does not publish a fixed amount of money per click; it varies with each advertiser, and it is Google that decides the amount. However, you can see at any moment how much money you are earning.
Google sends a check to your home at the end of each month, provided you have earned more than $100. In any case, at the end of each year it sends you a check for whatever amount you have accumulated.

CAN YOU INSERT ADSENSE ON PAGES IN ANOTHER LANGUAGE?:


Of course. Google has announced the launch of the AdSense program in Spanish, Arabic and other languages.

HOW DO I KNOW WHICH TYPE OF ADS TO INSERT ON MY WEBSITE?:


Enter the URL of your website and see the ads that result.

GOOGLE OPTIMIZATION : SEO


Here are 10 tips to consider when optimizing your site before submitting it to Google. Following these tips can help Google index your website better.
  1. IF YOUR WEBSITE HAS A WELCOME SCREEN,

    make sure you have a text link that lets visitors into the site. It is common to see sites with a very striking welcome screen, full of Flash effects, but no other way in. It is advisable to have a text link that gives access to the site "the traditional way", because Google cannot read Flash pages and therefore cannot reach the rest of the site.
  2. MAKE SURE NOT TO HAVE BROKEN LINKS.

    This sounds pretty obvious, but it is impressive how many errors the Google engine encounters daily due to broken links. So check all the internal links on your site.
  3. CHECK THE TITLE TAGS

    The title of each page is very important to Google, so you should check that the TITLE tag is relevant to the content of the page in question. This does not mean putting in a title of more than 20 words, but rather one in keeping with the content and easy to read by search engines and surfers.
  4. CHECK THE META TAGS

    Rumors that Google takes no interest in META tags are not entirely accurate. Google uses these tags to describe a website when there is too much code to read. So enter some valid META tags, such as KEYWORDS and DESCRIPTION, for the keywords and site description respectively.
  5. CHECK ALT TAGS

    ALT tags are probably the tags least used by webmasters. We add these tags to describe the images they accompany. They are not a ranking factor, but they are a plus for Google.
  6. CHECK YOUR FRAMES

    A frame is a separate box in which a web page can be loaded. If you use frames, Google may not be able to index your site 100%. I personally recommend not using frames, but if you decide to use them, read up on how to make them crawlable first.
  7. HAVE YOU GOT DYNAMIC PAGES?

    It is well known that the web has evolved greatly in recent years, and more and more pages are based on dynamic scripting languages (PHP, ASP, etc.). But Google appears to limit the number of dynamic pages it indexes, so we could include some static pages... where the dynamism is not necessary.
  8. REGULARLY UPDATE

    This is a very important aspect to consider, as Google indexes pages more quickly when they are updated with some regularity. You may notice that the number of pages indexed by the search engine increases if you update daily, but it may stagnate or decrease if you do not provide new content. I recommend putting a revisit META option in the header to tell Google how often it should return to reindex.
  9. ROBOTS.TXT

    This file can be very helpful if we use it correctly. With robots.txt you can control which search engines crawl your website and restrict access to certain URLs that you do not want indexed (login pages, file folders, etc.).
  10. "CAHE CACHE OR NOT?

    Google maintains a cache of some pages to have a faster access to them. Some webmasters prefer not to be cached, or Google cachee our pages all we have to do is place the following META tag between the heads:
  • META NAME = "ROBOTS" CONTENT = "NOARCHIVE"
  • With that prevent robots from caching and archiving our pages.

SEO TERMS


Log File
This is a file created by a web server or proxy server which contains all information about the activity on that server.
PPC (Pay Per Click)
Search engines where advertisers pay to appear in the results of a search, paying each time someone clicks on their link. The major PPC search engines today are Google AdWords, Overture and Espotting.
Crawler / Spider / Robot
A program designed to crawl the web by following links between pages. It is the usual means used by the major search engines to find the pages that later become part of their databases.
Click Through Rate (CTR)
The click-through rate is the number of times a link is followed by surfers, divided by the number of times the link is shown (each showing is called an impression).
Content: 
Information available on one page including images, texts and any other information provided regardless of its format.
Dynamic Content:
Pages generated, usually, from information in a database in response to requests made by the visitor's browser. This kind of content usually has the character "?" within the URL.
IP Address 
Internet Protocol Address: identifies a computer connected to the Internet.
Directories:
Databases compiled by humans. Web pages are indexed and filed in directories or subdirectories, catalogued at the editors' discretion. The largest and most important are Yahoo and the Open Directory Project.
Doorway Page:
A page designed and optimized for one particular search engine and keyword. Using multiple pages of this type allows the same content to be positioned separately in different search engines. This strategy is heavily penalized by the major search engines.
Deep Links
Links to pages that are several levels below the domain root.
Cross Links:
Linking multiple websites to one another in order to improve search engine positioning. If detected by the search engines, the domains involved can be penalized.
Filters:
Specialized software used by search engines to detect fraudulent or penalized practices. If a search engine's filter detects one of these practices, it applies a penalty directly to the website.
Frames:
Most search engines can handle web sites built with frames, although they still have more difficulty crawling a site with frames than one without.
HTML:
Hyper Text Markup Language is the language used to write static pages for the World Wide Web and to specify links to pages and objects.
JavaScript:
A well-known scripting language with broad support in browsers and web development tools; with this language you can write scripts that run in the page.
Keywords Or Keyword:
Words found on web pages that describe the product, service or information the site presents; they are also placed in the "keywords" meta tag. Many search engines now ignore the "keywords" meta tag because of the misuse that has been made of it, so it is important to check the standards used by each search engine and directory.
Link Popularity:
A measure of the importance of a website based on the number of external links pointing to it. It is one of the most important factors the major search engines use to sort the results presented to surfers. Link popularity defines how important a website is: search engines assume that if many sites link to yours, its content must be of high quality. Conclusion: the higher the link popularity, the higher the position in Google. Also important: a link from a website with very high link popularity passes more value to your site than one from a low-popularity site.
Site Map:
The availability of a site map has two advantages: it is much easier for visitors to find their way around the site, and makes it much easier for a search engine indexes your site.
Meta Search:
Websites specializing in querying several search engines simultaneously and presenting the information in a comprehensible, orderly way.
Meta Tags:
HTML tags that contain keywords for which your website should be found. Meta tags are very important when positioning a web site, because search engines rank pages partly according to the meta tags they find. The most used are the title, description and keywords.
Search Engines
A search engine is a system that crawls and indexes websites and lets users find what is on those sites through keywords. OneStat lets you know from which search engines visitors are coming to your site.
Amount Of Content:
The more text (content) available on a website, the more information a search engine can collect from it. This has a positive effect on how the website is judged.
Domain Name:
This is the text name corresponding to an IP number from a computer connected to the Internet.

Indexed Pages:
The total number of a site's pages that have been visited and indexed by Google. From this number you can judge to what extent a site is indexed by a search engine.
WebSite Optimization:
A thorough analysis of the HTML code, tags, keywords, web statistics, etc. - that is, of the design, structure and contents that make up a web page - in order to bring it to the top of the search engines, or into the first results of the most popular ones.
Website Optimization For Search Engines:
The process of analyzing and modifying web pages so that they rank in the highest positions within the major search engines. The analysis is comprehensive because it covers title tags and other tags, code and web design.
Page Rank:
Google PageRank is a numerical value assigned by Google to each page present in its database. The value is calculated by Google using special algorithms based on a qualitative and quantitative evaluation of the external links pointing to each page.
Entry Pages:
Indicates the number of times a page is the first one seen in the path of clicks a visitor makes on your site. Typically the home page should have a high number here, if not the top of the list.
Exit Pages
Indicates the number of times a page is the last one seen in the path of clicks a visitor makes on your site. Typically the home page should have a high number in this list, unless you have a lot of dynamic content on your site that users view only once.
Search Keywords / Phrases:
The words users enter into search engines in order to reach a website. OneStat lets you know which search words your visitors are using to find your site. As the list of keywords grows, you can spot keywords that are important for finding your site but are being missed, and add them to the META KEYWORD tag of each page to improve your site's performance in the search engines.
Penalization:
A punishment imposed on a particular page by a search engine as a result of using positioning tactics contrary to that engine's editorial standards. Such punishment often results in a loss of positions and sometimes in the disappearance of the website, in whole or in part, from that engine.
PFI (Pay for Inclusion):
Some search engines and directories charge a fixed amount to consider and review the inclusion of a specific page in their database. This payment does not in any case guarantee a particular position for the reviewed website.
Platform:
The operating system (Windows XP, Windows 98, MacOS, Linux, etc.).
Flash Portal: 
Flash can conflict with a search engine marketing campaign. Search engines look for text on a webpage, and some sites present search engines with designs that contain no text to index. When a search engine encounters a Flash-only introduction during indexing, the site can be indexed only if the engine can get around that introduction (e.g. via a link that says 'skip intro').
Web Positioning in Search Engines:
Web positioning is the use of certain techniques to seek high positions for certain searches in the search engines.
Relevance:
The affinity between a page included in a listing of search results and the subject or information sought by the surfer.
Server:
A computer that hosts information available to users (called clients) on the Internet or other network.
Referring Site: 
The URL of a website that has a link that serves as a reference for visitors to come to a site.
Distinct Page Titles:
Each page on a website has its own subject, so it is very important that each page's title reflects what its content contains. A search engine can judge a page by its title.
Top 10 In Search Engines:
A page listed in the top 10 search engine results for a specific keyword or phrase.
First Time Visitors: 
The number of visitors accessing your site for the first time. A first-time visitor is identified by the absence of a cookie.
Returning Visitors:
The number of visitors who visited the site in a previous period and have come back. Whether a visitor is a returning one is determined through a cookie. Returning visitors are counted only once in the period, even if they enter the site multiple times.
Unique Visitors:
The number of visitors accessing the website over a specific period of time from a particular IP address, each counted only once. A visitor can only be either a first-time visitor or a returning visitor; unique visitors are counted only once during the period even if they access the website several times.
Page Views:
Each time a page is downloaded by the user. In terms of site traffic, only HTML pages, dynamic pages and forms count as page views; accesses to images, audio, video or advertisements do not.
Average Pages Per Visitor:
The number of pages each visitor views on average.
3-Way Link Exchange:
A link exchange agreement between three sites: site A links to site B, site B links to site C, and site C links back to site A.
Algorithm: 
An algorithm is the formula a search engine uses to rank and categorize sites.
Anchor text:
The visible text of a hyperlink.
Backlinks: 
Backlinks are links from other websites to your website or web page. In the world of SEO, the more backlinks you have, the higher the page-rank of your site.
Black Hat SEO:
An unethical SEO method that tries to rank a site using tricks such as hidden text (for example, letters the same color as the background), etc.
BLS:
Short for backlinks.
Cache: 
A storage area in a search engine's database where copies of web pages are stored.

Cgi-bin: 
cgi-bin is the name of the folder on a web server that contains Common Gateway Interface (CGI) binaries and scripts.
Cloaking:
A black-hat technique that delivers customized content to a search engine spider while hiding that code or information from the site's visitors.
DoFollow:
Ordinary incoming links that do not carry the "nofollow" attribute.
Firefox:
A web browser developed by Mozilla that is free to download and offers an alternative to Microsoft Internet Explorer.
Folksonomy:
Collaborative classification (tagging) managed by the users of the social network themselves. Examples include del.icio.us, "technorati.com" and "flickr.com".

Friday, October 29, 2010

What is Client-Server Architecture?

Businesses of various sizes have various computer needs. Larger businesses necessarily need to use more computers than smaller businesses do. Large businesses routinely have large computer setups, such as mainframes and networks. A network for a large business commonly has a client-server architecture, also known as a two-tier architecture. No matter what it is called, this type of architecture is a division of labor for the computing functions required by a large business.


Under the structure of the client-server architecture, a business's computer network will have a server computer, which functions as the "brains" of the organization, and a group of client computers, which are commonly called workstations. The server part of the client-server architecture will be a large-capacity computer, perhaps even a mainframe, with a large amount of data and functionality stored on it. The client portions of the client-server architecture are smaller computers that employees use to perform their computer-based responsibilities.
Servers commonly contain data files and applications that can be accessed across the network, by workstations or employee computers. An employee who wants to access company-wide data files, for instance, would use his or her client computer to access the data files on the server. Other employees may use a common-access application by accessing the server through their client computers.
This type of server is called an application server. It takes full advantage of the client-server architecture by using the server as a storage device for applications and requiring the clients to log in to the server in order to use those applications. Examples of this kind of application are numerous; among the most popular are word processors, spreadsheets, and graphic design programs. In each case, the use of the applications illustrates the client-server architecture.
The server is not just for storage, however. Many networks have a client-server architecture in which the server acts as a processing power source as well. In this scenario, the client computers are virtually "plugged in" to the server and gain their processing power from it. In this way, a client computer can simulate the greater processing power of a server without having the requisite processor stored within its framework. Here, the client-server architecture describes a virtual sort of power plant.
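As a concrete illustration of that division of labor, here is a bare-bones client-server exchange using plain TCP sockets. It is a teaching sketch only (the localhost address and port are arbitrary), not how a production application server would be built.

# Minimal client-server sketch: the server answers a request sent by a client workstation.
import socket
import threading

HOST, PORT = "127.0.0.1", 9090  # placeholder address and port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def handle_one_client():
    conn, _ = server.accept()                      # wait for one workstation to connect
    request = conn.recv(1024).decode()
    conn.sendall(("server processed: " + request).encode())
    conn.close()
    server.close()

threading.Thread(target=handle_one_client).start()

# The "client" (workstation) side: ask the server to do the heavy lifting.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))
client.sendall(b"fetch the company data file")
print(client.recv(1024).decode())
client.close()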

What is MVC?


MVC, or model view controller, is a technique used in software. Its fundamental purpose is to build a distinction between the way the software handles data, and the way the software interacts with the user. This distinction means that the processes can be handled, developed and checked separately, which can be more efficient.
The process is based on the concept that, at the simplest level, all software carries out the same three-step function. First a user inputs data, then software processes the data, and finally the software outputs the results as a new set of data. A very basic example of this is a user typing “2+2=” into a calculator, the calculator working out the answer, and then the calculator displaying “4.”

In the MVC system, the way the computer processes the data is known as the model. The output of the results is known as the view. The input of data by the user is known as the controller. It’s important to remember that the view and the controller are the sections of the program which control the input and the output. The terms don’t usually refer to physical objects such as a keyboard or monitor.
The purpose of using MVC is to make it simpler to isolate different elements of a software process. By using the system, a program is effectively divided into three parts: the data processing, the input process and the output process. This means that changes to one part of the program can be made more smoothly without having to also rewrite the other parts of the program.
The model view controller system is widely regarded to have been pioneered in a programming language titled Smalltalk. Created in the 1970s at Xerox, Smalltalk was partially designed to teach people about the object model of computing. Put simply, that involves breaking down a computing task into separate parts and building the program around the way those parts interact. Smalltalk was also an example of dynamic programming, in which a program can be revised even while it is operating.
The MVC system is often used in web-based software such as that used in dynamic, or interactive, websites. In these situations, the view is the code, such as HTML, which is generated by the software after processing a query. For example, on a search engine, the search query box would be the controller and the results page the view.
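To make the three roles concrete, here is a toy version of the calculator example. The class and method names are invented for illustration; real MVC frameworks are far more elaborate.

# Toy MVC sketch based on the "2+2=" calculator example above.
class CalculatorModel:
    """Model: processes the data (does the arithmetic)."""
    def add(self, a, b):
        return a + b

class CalculatorView:
    """View: formats and outputs the result."""
    def display(self, result):
        print(result)

class CalculatorController:
    """Controller: accepts the user's input and coordinates model and view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
    def handle_input(self, expression):
        left, right = expression.rstrip("=").split("+")
        total = self.model.add(int(left), int(right))
        self.view.display(total)

controller = CalculatorController(CalculatorModel(), CalculatorView())
controller.handle_input("2+2=")   # the view prints 4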

Model-View-Controller


Flexibility in large component-based systems raises questions about how to organize a project for easy development and maintenance while protecting your data and reputation, especially from new developers and unwitting users. The answer lies in using the Model-View-Controller (MVC) architecture. An architecture such as MVC is a design pattern that describes a recurring problem and its solution, where the solution is never exactly the same for every recurrence.
To use the Model-View-Controller (MVC) paradigm effectively you must understand the division of labor within the MVC triad. You must also understand how the three parts of the triad communicate with each other and with other active views and controllers; the sharing of a single mouse, keyboard and display screen among several applications demands communication and cooperation. To make the best use of the MVC paradigm you also need to learn about the available subclasses of View and Controller, which provide ready-made starting points for your applications.
In the MVC design pattern, application flow is mediated by a central controller. The controller delegates requests to an appropriate handler. The controller is the means by which the user interacts with the web application, and it is responsible for the input to the model. A pure GUI controller accepts input from the user and instructs the model and viewport to perform actions based on that input. If invalid input is sent to the controller from the view, the model informs the controller so it can direct the view to report the error and ask the user to try again.
A web application controller can be thought of as a specialised view, since it has a visual aspect. In a web application it would actually be one or more HTML forms, and therefore the model can also dictate what the controller should display as input. The controller produces the HTML that allows the user to enter a query into the web application, and it adds the necessary parameterisation of the individual form elements so that the servlet can observe the input. This is different from a GUI - actually back-to-front - where the controller waits for and acts on event-driven input from the mouse or graphics tablet.
The controller adapts the request to the model. The model represents, or encapsulates, an application's business logic or state. It captures not only the state of a process or system, but also how the system works. It notifies any observer when any of the data has changed. The model would execute the database query for example.
Control is then usually forwarded back through the controller to the appropriate view. The view is responsible for the output of the model. A pure GUI view attaches to a model and renders its contents to the display surface. In addition, when the model changes, the viewport automatically redraws the affected part of the image to reflect those changes. A web application view just transforms the state of the model into readable HTML. The forwarding can be implemented by a lookup in a mapping in either a database or a file. This provides a loose coupling between the model and the view, which can make an application much easier to write and maintain.


By dividing the web application into a Model, View, and Controller we can, therefore, separate the presentation from the business logic. If the MVC architecture is designed purely, then a Model can have multiple views and controllers. Note also that the model does not necessarily have to be a Java Servlet; in fact a single Java Servlet can offer multiple models. The Java Servlet is where you would place security login, user authentication and database connection pooling, for example. After all, these have nothing to do with the business logic of the web application or with the presentation.