Wednesday, October 31, 2012

How Pay Per Click Works

What is Pay Per Click?

Pay-per-click marketing is an advertising method that allows you to buy search engine placement by bidding on keywords or phrases. There are two different types of PPC marketing.

In the first, you pay a fee for an actual SERP ranking and, in some cases, an additional per-click fee, meaning that the more you pay, the higher in the returned results your page will rank.

The second type is more along true advertising lines. This type of PPC marketing involves bidding on keywords or phrases that appear in, or are associated with, text advertisements. Google is probably the most notable provider of this service. Google’s AdWords service is an excellent example of how PPC advertisements work.

PPC advertisements are those advertisements that you see at the top and on the sides of search pages.

Putting pay-per-click to work:

Now you can begin to look at the different keywords on which you might bid. Before you do, however, you need to look at a few more things. One of the top mistakes made with PPC programs is that users don’t take the time to clarify what it is they hope to gain from using a PPC service. It’s not enough for your PPC program just to have a goal of increasing your ROI (return on investment). You need something more quantifiable than just the desire to increase profit. How much would you like to increase your profit? How many visitors will it take to reach the desired increase?

Let’s say that right now each visit to your site is worth $.50, using our simplified example, and your average monthly profit is $5,000. That means that your site receives 10,000 visits per month. Now you need to decide how much you’d like to increase your profit. For this example, let’s say that you want to increase it to $7,500. To do that, if each visitor is worth $.50, you would need to increase the number of visits to your site to 15,000 per month. So, the goal for your PPC program should be “To increase profit $2,500 by driving an additional 5,000 visits per month.” This gives you a concrete, quantifiable measurement by which you can track your PPC campaigns.
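
The same arithmetic, written out as a quick sanity check. This is a minimal sketch in JavaScript, using only the hypothetical dollar figures from the example above:

// Hypothetical figures from the simplified example above.
var valuePerVisit = 0.50;   // profit per visit, in dollars
var currentProfit = 5000;   // current monthly profit
var targetProfit  = 7500;   // desired monthly profit

var currentVisits = currentProfit / valuePerVisit;  // 10,000 visits per month
var targetVisits  = targetProfit / valuePerVisit;   // 15,000 visits per month
var extraVisits   = targetVisits - currentVisits;   // 5,000 additional visits needed

Stating the goal in visits rather than dollars alone is what makes the campaign measurable.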

Once you know what you want to spend, and what your goals are, you can begin to look at the different types of PPC programs that might work for you. Although keywords are the main PPC element associated with PPC marketing, there are other types of PPC programs to consider as well.

Source: Search Engine Optimization Bible, Wiley

Thursday, October 11, 2012

How to do Competitive Analysis in SEO

Competitive Analysis:

Competitive analysis is a step you should take in the very beginning of your SEO efforts. It should be right at the top of your to-do list, along with keyword analysis and tagging your web site. In fact, you should probably do a competitive analysis even before you begin tagging your site.

But did you know that your competitive analysis doesn’t end there? Like analyzing your web statistics, conversions, and other elements of your web site, your competitive analysis should be ongoing. Your competitors will change. They’ll learn how to rank better in the search engines. They may even change their customer approach just enough to always stay ahead of you. They’ll keep you guessing, and the only way to figure out what they’re doing that you’re not is to spend the time it takes to analyze their sites.

As you’re going through this analysis process, the first thing to keep in mind is that you’re not checking out only your direct competitors. You need to look at those competitors who are ahead of you in search rankings, even if their offerings are different from yours.

Plan to spend a few hours a week on this analysis. You should look at all the sites that are ahead of you, but especially those sites that rank in the top five to ten positions in the SERPs. Look for the same indications that you examined during your original competitive analysis. These include:

Site rankings: Where in the SERPs is the site ranked? Make note, especially, of the top three to five sites.

Page saturation: How many of the competition’s pages are indexed? Not every page on a site will be indexed, but if your competition has more or fewer pages ranked, there may be a factor you haven’t taken into consideration about how to include or exclude your site pages.
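
For example, entering Google’s standard site: operator with a competitor’s domain (the domain here is invented):

site:www.competitor-example.com

returns the pages Google has indexed for that site, and the reported result count gives a rough measure of its page saturation.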

Page titles: Are page titles consistent? And what keywords do they contain, if any at all? How your competition uses titles can give you an indication of what you’re doing right or wrong with your own.

Meta data: What meta data is your competition including? How is it worded? And how does it differ from your own? Remember that you can access the source code of a web site by selecting Source from the View menu of your web browser.
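
What you’ll typically find in a competitor’s <head> section looks something like the following. This example is invented; the site name and wording are hypothetical:

<head>
<title>Example Widgets | Handmade Blue Widgets</title>
<meta name="description" content="Handmade blue widgets, shipped worldwide from our workshop." />
<meta name="keywords" content="blue widgets, handmade widgets, widget store" />
</head>

Compare the wording and keyword choices here against your own pages.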

Site design: How is the competition’s web site designed? Site architecture and the technology that is used to design and present the site are factors in how your site ranks. Learn what the competition is doing and how that differs from what you’re doing.

A robots.txt file: The robots.txt file is accessible to you, and looking at it could give you some valuable insight into how your competition values and works with search engines.
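
For example, a competitor’s file, reachable at their domain followed by /robots.txt, might look like this (the entries are invented):

User-agent: *
Disallow: /checkout/
Disallow: /search-results/

Sitemap: http://www.competitor-example.com/sitemap.xml

Which directories they block from crawlers, and whether they point crawlers to an XML sitemap, both hint at how deliberately they manage the search engines.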

Content quality and quantity: How much quality content is included on your competitor’s site, and is it all original, or is it re-used from some other source? If a site is ahead of you in search rankings, its content is probably performing better than yours. Analyze it and find out why.

Link quality and quantity: Your competitors’ linking strategies could hold a clue about why they rank well. Look at the link structure. If they’re using legitimate linking strategies, what are they? If they’re not, don’t try to follow suit. Their actions will catch up with them soon enough.

Source: Search Engine Optimization Bible, Wiley

Thursday, September 13, 2012

SEO and Programming Languages

One aspect of web-site design you might not think of when planning your SEO strategy is the programming language used in developing the site. Programming languages all behave a little differently. For example, HTML uses one set of protocols to accomplish the visuals you see when you open a web page, whereas PHP uses a completely different set of protocols. And when most people think of web-site programming, they think in terms of HTML.

But the truth is that many other languages also are used for coding web pages. And those languages may require differing SEO strategies.

JavaScript:

JavaScript is a programming language that allows web designers to create dynamic content. However, it’s also not necessarily SEO-friendly. In fact, JavaScript often completely halts a crawler from indexing a web site, and when that happens the result is lower search engine rankings or complete exclusion from ranking.

To overcome this, many web designers externalize any JavaScript that’s included on the web site. Externalizing the JavaScript creates a situation where it is actually run from an external location, such as a file on your web server. To externalize your JavaScript:

1. Copy the script’s contents (the code between, but not including, the opening and closing <script> tags) and paste it into a Notepad file.
2. Save the Notepad file as filename.js.
3. Upload the file to your web server.
4. Create a reference on your web page to the external JavaScript code. The reference should be placed where the JavaScript will appear and might look like this:

<script language="JavaScript" type="text/javascript" src="filename.js"></script>
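
The external file itself contains only the script’s code, with no <script> tags. A minimal, purely hypothetical filename.js might look like this:

// filename.js - an externalized script; note that no <script> tags appear here.
// Assumes the page contains an element with id="greeting".
function showGreeting() {
    document.getElementById("greeting").innerHTML = "Welcome back!";
}
window.onload = showGreeting;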

This is just one of the solutions you can use to prevent JavaScript from becoming a problem for your SEO efforts. There are many others, and depending on your needs you should explore some of those.

Flash:

Flash is another of those technologies that some users absolutely hate. That’s because Flash, though very cool, is resource intensive. It causes pages to load slower, and users often get stuck on an opening Flash page and can’t move forward until the Flash has finished executing. If the user is in a hurry, it’s a frustrating thing to deal with.

Flash is also a nightmare when it comes to SEO. A Flash page can stop a web crawler in its tracks, and once stopped, the crawler won’t resume indexing the site. Instead, it will simply move on to the next web site on its list.

The easiest way to overcome Flash problems is simply not to use it. But despite the difficulties with search rankings, some organizations need to use Flash. If yours is one of them, the Flash can be embedded in HTML, with a test for the visitor’s ability to display Flash added before the Flash is executed. However, there’s some debate over whether or not this is an “acceptable” SEO practice, so before you implement this type of strategy in an effort to improve your SEO effectiveness, take the time to research the method.
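
One common pattern from that era is sketched below: embed the Flash movie with plain HTML alternate content, which browsers that cannot display Flash (and search crawlers) see instead. The movie.swf file and the fallback text are hypothetical:

<object type="application/x-shockwave-flash" data="movie.swf" width="550" height="400">
  <param name="movie" value="movie.swf" />
  <!-- Alternate content: displayed when Flash cannot run, and readable by crawlers -->
  <p>Spring catalog: handmade widgets in twelve colors, with full descriptions.</p>
</object>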

Dynamic ASP:

Most of the sites you’ll encounter on the Web are static web pages. These sites don’t change beyond the regular updates by a webmaster. On the other hand, dynamic web pages are created on the fly, according to preferences that users specify in a form or menu. These sites can be built with a variety of programming technologies, including dynamic ASP. The problem with such sites is that they don’t technically exist until the user creates them. Because a web crawler can’t make the selections that “build” these pages, most dynamic web pages aren’t indexed in search engines. There are ways around this, however. Dynamic URLs can be converted to static URLs with the right coding. It’s also possible to use paid inclusion services to index dynamic pages down to a predefined number of levels (or number of selections, if you’re considering the site from the user’s point of view).
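
As an illustrative sketch of the dynamic-to-static idea: a server-side rewrite rule can let crawlers request a static-looking URL while a dynamic ASP page does the work behind it. The fragment below uses the URL Rewrite module available for IIS; the rule name, URL pattern, and page names are hypothetical:

<!-- web.config fragment: serves /products/42 from product.asp?id=42 -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ProductPages">
        <match url="^products/([0-9]+)$" />
        <action type="Rewrite" url="product.asp?id={R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

A crawler can then discover and index /products/42 like any static page, even though the content is still generated on the fly.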

Dynamic ASP, like many of the other languages used to create web sites, carries with it a unique set of characteristics. But that doesn’t mean SEO is impossible for those pages. It does mean that the approach used for the SEO of static pages needs to be modified. It’s an easy enough task, and a quick search of the Internet will almost always provide the programming code you need to achieve SEO.
 
PHP:

Search engine crawlers, being the preprogrammed applications that they are, can only index so much. PHP is another of those programming languages that falls outside the boundaries of normal web-site coding, and search engine crawlers see PHP as another obstacle if it’s not properly executed.

Properly executed means that PHP needs to be used with search engines in mind. For example, PHP naturally stops or slows search engine crawlers. But with some attention and a solid understanding of PHP and SEO, it’s possible to code pages that work, even in PHP.

One thing that works well with PHP is designing the code to look like HTML. It requires an experienced code jockey, but it can be done. And once the code has been disguised, the PHP site can be crawled and indexed so that it’s returned in search results.
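
As a hedged illustration of that idea (the page, variable, and wording are all invented): the PHP logic runs on the server, and what the crawler receives is indistinguishable from a static HTML page:

<?php
// product.php - crawlers never see this logic, only the HTML it emits.
$title = "Handmade Blue Widgets";
?>
<html>
<head><title><?php echo $title; ?></title></head>
<body>
<h1><?php echo $title; ?></h1>
<p>Plain, crawlable HTML describing the product.</p>
</body>
</html>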

Source: Search Engine Optimization Bible, Wiley

Thursday, August 23, 2012

What is Search Algorithm?

Search Algorithm

In very general terms, a search algorithm is a problem-solving procedure that takes a problem, evaluates a number of possible answers, and then returns the solution to that problem. A search algorithm for a search engine takes the problem (the word or phrase being searched for), sifts through a database that contains cataloged keywords and the URLs those words are related to, and then returns pages that contain the word or phrase that was searched for, either in the body of the page or in a URL that points to the page.
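
A drastically simplified sketch of that keyword-to-URL lookup, written in JavaScript; the data and the function name are purely illustrative:

// A tiny hypothetical index: keywords mapped to the URLs cataloged for them.
var index = {
    "seo":      ["http://site-a.example.com/", "http://site-b.example.com/tips"],
    "keywords": ["http://site-b.example.com/tips", "http://site-c.example.com/guide"]
};

// Return the pages cataloged for a search term, or an empty list.
function search(term) {
    return index[term.toLowerCase()] || [];
}

search("SEO");  // ["http://site-a.example.com/", "http://site-b.example.com/tips"]

A real engine must also rank the matches it finds, which is where the algorithms below come in.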

There are several classifications of search algorithms, and each search engine uses algorithms that are slightly different. That’s why a search for one word or phrase will yield different results from different search engines. Some of the most common types of search algorithms include the following:

  • List Search: 
A list search algorithm searches through specified data looking for a single key, examining the data in a very linear, list-style method. The result of a list search is usually a single element, which means that searching through billions of web sites this way could be very time-consuming, though it would yield a small, precise result set. (A minimal sketch of a list search appears after this list.)
  • Tree Search: 
Envision a tree in your mind. Now, examine that tree either from the roots out or from the leaves in. This is how a tree search algorithm works. The algorithm searches a data set from the broadest to the most narrow, or from the most narrow to the broadest. Data sets are like trees; a single piece of data can branch to many other pieces of data, and this is very much how the Web is set up. Tree searches, then, are more useful when conducting searches on the Web, although they are not the only searches that can be successful.
  • SQL Search:
One of the difficulties with a tree search is that it’s conducted in a hierarchical manner, meaning it’s conducted from one point to another, according to the ranking of the data being searched. A SQL (pronounced See-Quel) search allows data to be searched in a non-hierarchical manner, which means that data can be searched from any subset of data.

  • Informed Search: 
An informed search algorithm looks for a specific answer to a specific problem in a tree-like data set. The informed search, despite its name, is not always the best choice for web searches because of the general nature of the answers being sought. Instead, informed search is better used for specific queries in specific data sets.
  • Adversarial Search: 
An adversarial search algorithm looks for all possible solutions to a problem, much like finding all the possible solutions in a game. This algorithm is difficult to use with web searches, because the number of possible solutions to a word or phrase search is nearly infinite on the Web.
  • Constraint Satisfaction Search:  
When you think of searching the Web for a word or phrase, the constraint satisfaction search algorithm is most likely to satisfy your desire to find something. In this type of search algorithm, the solution is discovered by meeting a set of constraints, and the data set can be searched in a variety of different ways that do not have to be linear. Constraint satisfaction searches can be very useful for searching the Web.
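
As promised above, here is a minimal sketch of a list (linear) search in JavaScript; the data is invented:

// Scan the data from start to end until the single key is found.
function listSearch(items, key) {
    for (var i = 0; i < items.length; i++) {
        if (items[i] === key) {
            return i;   // position of the first match
        }
    }
    return -1;          // key not present
}

listSearch(["alpha", "beta", "gamma"], "beta");  // returns 1

That start-to-end scan is exactly why a pure list search over billions of pages would be too slow on its own.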

Classification of Search Engines

Types of Search Engines

There are many search engines in the Internet world, but users give preference to only some of them, because all search engines are not created equal and the algorithm of each search engine differs from the others. Based on the traffic they generate and the relevance of their searches, search engines are classified into three types:
  • Primary Search Engine
  • Secondary Search Engine
  • Targeted Search Engine

Primary Search Engine:

A primary search engine is the type you think of most often when search engines come to mind. Some index most or all sites on the Web. For example, Yahoo!, Google, and MSN are primary (also called major) search engines.

Primary search engines will generate the majority of the traffic to your web site, and as such will be the primary focus of your SEO efforts. Each primary search engine differs slightly from the others, and most are more than just search engines. Additional features such as e-mail, mapping, news, and different types of entertainment applications are also available from most of the primary search engine companies. These elements were added long after search was established, as a way to draw more and more people to the search engine.

Secondary Search Engine: 

Secondary search engines are targeted at smaller, more specific audiences, although the search engine’s content itself is still general. They don’t generate as much traffic as the primary search engines, but they’re useful for regional and more narrowly focused searches. Secondary search engines, just like the primary ones, will vary in the way they rank search results. Some will rely more heavily upon keywords, whereas others will rely on reciprocal links. Still others might rely on criteria such as meta tags or some proprietary criteria.

Secondary search engines should be included in any SEO plan. Though these search engines might not generate as much traffic as the primary ones, they still generate valuable traffic that should not be overlooked. Many people use secondary search engines out of loyalty to that specific search engine.

Targeted Search Engine:

Targeted search engines - sometimes called topical search engines - are the most specific of them all. These search engines are very narrowly focused, usually on a single topic, such as medicine, a branch of science, travel, or sports.

When considering targeted search engines for SEO purposes, keep in mind that many of these search engines are much more narrowly focused than primary or secondary search engines.

Source: Search Engine Optimization Bible, Wiley

Monday, August 13, 2012

Basics of Search Engine

Search Engine and its Basics

Web search engines, or simply search engines, are designed to find information on the World Wide Web. In the modern world, more than half the world’s population uses the Internet daily to find information and things online. In its earliest days, though, the Internet was a collection of FTP (File Transfer Protocol) sites that users could access to download or upload files.

Unlike web directories, which are maintained manually by human editors, search engines maintain near real-time information by running automated web crawlers. These agents, which save web pages to a related database, are called crawlers, spiders, or robots. Their work is to crawl web pages and store information about each page in the database by means of indexing. So when the user types something into the search box, the engine shows the relevant results fetched from that database.

The first real search engine, in the form that we know search engines today, didn’t come into being until 1993. Developed by Matthew Gray, it was called Wandex. Wandex was the first program to both index and search the index of pages on the Web, and the first to crawl the Web; it later became the basis for all search crawlers. From there, search engines took on a life of their own. From 1993 to 1998, the major search engines that you’re probably familiar with today were created:

Excite - 1993
Yahoo! - 1994
WebCrawler - 1994
Lycos - 1994
Infoseek - 1995
AltaVista - 1995
Inktomi - 1996
Ask Jeeves - 1997
Google - 1997
MSN Search - 1998

Source: Search Engine Optimization Bible, Wiley

Friday, August 10, 2012

On Page and Off Page Optimization Technique

On-page and Off -Page Optimization

Certain activities must be carried out for a website to perform well in the search engines. SEO techniques are broadly divided into two categories: on-page optimization and off-page optimization.

On-page optimization concentrates on work within the website’s own pages, which is immediately reflected on the site. Off-page optimization, by contrast, is entirely promotion based: the website is promoted on various other sites so that it gets listed and ranked in the search engines.

Below is a list of on-page and off-page optimization techniques.

On-Page Optimization Techniques:
  1. URL Optimization
  2. Content Optimization
  3. Footer links optimization
  4. Determining the Structure of website
  5. Adding Meta tags (Meta Title, Meta Keywords, and Meta Description)
  6. Keywords research and analysis
  7. Competitor analysis
  8. Anchor text optimization
  9. Internal and External Link Structuring 
  10. Sitemap Generation (see the sample sitemap after this list)
  11. Robots.txt file
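
As referenced in item 10 above, a minimal XML sitemap in the standard sitemaps.org format looks like this (the URL and dates are invented):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2012-08-10</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>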

Off-Page Optimization Techniques:

  1. Directory Submission
  2. Link building
  3. Article Creation and Submission
  4. Forum posting
  5. Social Bookmarking
  6. Profile link creation
  7. RSS Feed Generation
  8. Classified Submission
  9. Guest Posting
  10. Press Release
  11. Blog creation, posting and commenting 
  12. Review submission
  13. Document Sharing
  14. CSS Submission

Above is the list of on-page and off-page SEO techniques, which help a website gain traffic.