Search Engine Optimization (SEO) for ASP.NET Developers


I recently put together a presentation for a developer conference about SEO for ASP.NET Developers. I was a little surprised at how little content there was when I researched this topic. There was a lot of great content about SEO, but only a handful of articles for developers. I’ve decided to take my talk and convert it into a series of blog posts about SEO for ASP.NET.

As it turns out, about 50% of it actually requires a technical understanding of ASP.NET – a lot of SEO goodness can be achieved without knowing a thing about the technology your site is built on.

A Primer

Search Engine Optimization is the process of increasing your natural (unpaid) search rank on search engines like Google. A lot of people and companies spend thousands of dollars on AdWords (and other ad options) to ensure that their content is shown when people search on specific keywords for products or services; SEO is about earning that visibility without paying for placement.

Search Engine Optimization is difficult for developers for a number of reasons, but probably the number one reason is that the technology wasn’t specifically designed to address the challenges of SEO. For example, ASP.NET 1.0 was built and released before SEO became something people concerned themselves with.

The first recommendation we give customers is to plan early for SEO, much as we advise planning early for performance and scale. Like performance and scale, SEO requires some up-front decisions about how your site’s information architecture will be laid out. One suggestion I read put it this way: “while most people plan for Firefox and Internet Explorer they tend to forget about the 3rd major browser: search engines.” This makes a lot more sense once you realize that all the compelling functionality you can enable with JavaScript libraries, Flash, and now Silverlight may be moot if search engines can’t access the content.

The good news is that there are plenty of great recommendations and tools to help you with this, as well as some strategies for dealing with sites that can be difficult to index. A little planning can go a long way.

Tools of the Trade

While there are several tools I’d recommend you add to your developer’s tool belt, there are three that I’ve found especially helpful for SEO purposes:

Firebug
Firebug is an add-on for Firefox and is incredibly handy for a number of tasks. One of those is getting a quick sense of what your pages render to the browser and how much content you are sending back.

In the screenshot I’ve opened the Graffiticms.com site and am examining the size of the various elements that are downloaded when the page is requested.

Google Webmaster Tools
Google provides a number of great tools to help you with SEO, one of which is a suite called Google Webmaster Tools.

Once you set up your site so that Google Webmaster Tools can inspect it, you will get back a lot of data about crawl analysis, various site statistics, and ranking information, as well as some suggestions for improving your site’s indexability.

 

Fiddler
Fiddler is a wonderful tool that lets you monitor the traffic your web requests send out and what the server actually sends back.

When running, Fiddler sets itself up as a proxy in Internet Explorer and traps all incoming and outgoing HTTP requests. This provides insight into the various HTTP headers being sent back and forth, as well as the ability to view the raw HTML that the server is generating.

 

Tip #1 – Beware of Duplicate Content

If you think that creating multiple pages on your site with the same content – or publishing the exact same content multiple times through other sites – is a good idea, you would be mistaken. While search engines such as Google don’t necessarily consider this gaming the system (although it has certainly been done for that purpose), duplicate content means the same material can be found in multiple places, which dilutes its uniqueness and decreases your natural search rank.

How many domains are you supporting?

An example of this that most people probably aren’t aware of involves their own domain: www.example.com and example.com. While most of us would likely view these two domains as identical, search engines view them as two separate sites. So if you create content and search engines can access it both with and without the ‘www’, you are decreasing its uniqueness and essentially duplicating content.

In Community Server, starting in version 2.0, we built in functionality that automatically forces a site to run on a single domain. By default we always try to strip ‘www’ off the front of the URL, to help customers get better SEO for their sites. As it turns out, this is also one of the most common questions in our support forums: “Why is Community Server removing www…” For the record, this is completely configurable, and you can set Community Server to leave the www in place on the URL.

The good news is that it’s easy to resolve the issue of multiple domains serving the same content, and IIS has a built-in way to handle it. For example, if your primary site is example.com and you want to make sure that all requests to www.example.com are redirected to example.com (stripping the www):

Create a Permanent Redirect in IIS

1. Open IIS and create a new website, e.g. “Example.com www redirect”, that serves requests for the same IP address as your primary site, but only for the host header www.example.com.

2. Next, configure this new website to perform a permanent (301) redirect to example.com.

Now, when requests are made to www.example.com, the server will issue a permanent redirect to the same URL with the ‘www’ removed.
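If you can’t change the IIS configuration (on shared hosting, for example), you can get the same behavior from ASP.NET itself. Below is a minimal sketch of a canonical-host redirect in Global.asax; it is an alternative to the IIS approach above, and the host name “example.com” is just a placeholder for your own domain:

    using System;
    using System.Web;

    // Global.asax.cs -- a minimal sketch of forcing a site onto a single domain.
    // The canonical host name below is a placeholder; substitute your own.
    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            const string canonicalHost = "example.com";
            Uri url = Request.Url;

            // Only act when the request arrived on the 'www' variant of the host.
            if (url.Host.Equals("www." + canonicalHost, StringComparison.OrdinalIgnoreCase))
            {
                // Rebuild the URL on the canonical host, preserving path and query string.
                string location = url.Scheme + "://" + canonicalHost + url.PathAndQuery;

                // Send an HTTP 301 so search engines consolidate rank on one domain.
                Response.StatusCode = 301;
                Response.StatusDescription = "Moved Permanently";
                Response.AddHeader("Location", location);
                Response.End();
            }
        }
    }

The important detail is the 301 status code: a permanent redirect tells search engines to treat the two hosts as one, whereas a temporary (302) redirect does not carry the same weight.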

What about RSS?

Here is something you may not hear that often: beware of RSS syndication. RSS is a wonderful technology for enabling information sharing. People typically share content through RSS either by publishing their full content or by publishing excerpts.

With an excerpt you are only sharing a portion or summary of the main content, and you typically require people to click through to the website for the full article. Sites like cnn.com typically syndicate excerpts of their main stories.

Excerpts work, but readers – at least more savvy web users – don’t like them as much. It usually means they have to leave their RSS reader to read the content on the website.
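If you do decide to syndicate excerpts, the mechanics are straightforward: trim each post down to a short plain-text summary before it goes into the feed. Here is a minimal sketch; the FeedHelper class, the BuildExcerpt method, and the 300-character cutoff are illustrative assumptions rather than part of any particular blog engine:

    using System;
    using System.Text.RegularExpressions;

    // A minimal sketch of building an RSS excerpt from a post's HTML body.
    // The names and the length cutoff are illustrative only.
    public static class FeedHelper
    {
        public static string BuildExcerpt(string html, int maxLength)
        {
            // Strip HTML tags so the excerpt reads as plain text in feed readers.
            string text = Regex.Replace(html, "<[^>]+>", string.Empty).Trim();

            if (text.Length <= maxLength)
                return text;

            // Cut at the last word boundary before the limit and add an ellipsis
            // so readers know they need to click through for the full post.
            int cut = text.LastIndexOf(' ', maxLength);
            return text.Substring(0, cut > 0 ? cut : maxLength) + "...";
        }
    }

The result (for example, FeedHelper.BuildExcerpt(post.Body, 300), where post.Body is a hypothetical property) would then go into each feed item’s description element instead of the full body.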

On the other hand, most bloggers, and even some news sites, publish their full content and don’t use excerpts. For example, both weblogs.asp.net and blogs.msdn.com are configured to publish full content in their RSS feeds. The problem is that SPLOGs take advantage of this.

A SPLOG is nothing more than an automated blog that republishes content by subscribing to another site’s RSS feed. The owner of the SPLOG then sets up Google AdSense, or another ad monetization option, and uses the content created by the original site to help drive up their own natural search rank. The goal is that people will find the SPLOG and click on the ads.

We see this all the time with content created on weblogs.asp.net. While I’m not advocating using excerpts, you do need to be aware that by publishing a full RSS feed you may be publishing your content (such as the content I’m writing for this blog post) in more places than you realize!

Next Tip: How content is linked really does matter
