Tuesday, June 20, 2006

Upgraded my laptop

.. and got it working

I have a laptop so I can bring it to clients to show some stuff, and so I can work at locations other than my home office. As with all computers I have owned, after some time the system can no longer hold all the stuff you want to take with you, and it becomes slower and slower with intensive use.

As the machine was still good enough for its purpose, it was time to give it a little boost. I wanted to add some memory (I was working with a lousy 512 MB anyway, which is not very much), and my hard disk was filling up with all sorts of disk images, projects and other stuff, so I needed more storage as well.

Just for some extra info: I own an Acer Aspire 6003. I am quite happy with it.

Adding memory

Adding some memory shouldn't be too hard. On the bottom of the machine is a panel that hides the memory. With a screwdriver it is simple to remove the screw and open the panel. Tada! There is a DIMM module. I take it out with care and look at it with interest. It says 256 MB, and I have 512 MB of total memory. I look inside the opening, but no, I cannot find another DIMM. Hmm, this is strange. The manual says that there should be two memory modules. There are some more panels: one leads to the wireless card, and the others are for the battery and the media drive. I look again in the manual and all over the Acer website, but I cannot find out where the other DIMM might be. So, I decide to just replace this DIMM with a bigger one.

Carefully I make a photocopy of both sides of the DIMM so I have all the information necessary to purchase the correct one later. And indeed this came in handy, as there are gazillions of memory modules to choose from. I buy a 1 GB DIMM, pop that in, and my laptop all of a sudden has 1.25 GB of RAM. It was completely plug and play. Unbelievable that things do work like that. Starting up my system is now a lot smoother and faster since there is no swapping going on anymore.

If anyone can tell me where the other DIMM is hiding in my laptop then I might just replace that one as well.

Update

Thanks to one of my readers, Amar, I am now aware that the other DIMM is hiding under the keyboard. Instructions on how to get to that DIMM are available on Amar's blog. But beware of all the hearts ;-)

Replacing the hard disk

I had already figured out that adding memory would be much less of a hassle than replacing the hard disk. I had one big requirement for this part of the job: I did not want to be busy reinstalling the OS (Windows XP) and all the applications and then forever patching everything. I wanted to copy the hard disk to the new drive and replace the old disk with the newer one.

I had the following plan:

  1. buy an external hard disk casing for a 2.5" drive with a USB interface
  2. buy a 2.5" hard disk
  3. put these together and hook it up to my laptop
  4. copy my hard disk to the new drive
  5. swap the drives and smile, smile, smile

I combined the first three steps by finding a cheap complete external 120 GB 2.5" drive with USB cable. This would double the storage of my laptop which seemed enough for now.

So I hooked up the drive, grabbed Partition Magic, created some room on the new drive and then started copying the partitions across. I created a new partition in the remaining free space. This takes some time, so in the meantime I watched some World Cup football.

With all copying finished I switched everything off, screwed open the external casing (voiding my warranty, but what the heck) and took out the new drive. I then swapped it with my laptop's drive and switched the laptop on.

Aaaargh!

Windows started protesting that hal.dll was corrupted or missing. Googling the Internet and several newsgroups, I found this to be a common problem for people who try to do what I did. I tried several of the fruitless solutions and then took a new approach.

One workable solution appeared to be making a copy of the existing drive without making any changes to partition sizes or adding partitions in the remaining unused disk space.

I downloaded the Casper XP trial edition from Future Systems. It gives me 30 days to try it out, which should be more than enough to tackle this problem. I put my old drive back into the laptop and told Casper XP to copy my drive to the new drive. It warned me that all information on the target would be lost and then told me it would take more than 10 hours to complete. A World Cup match hardly ever lasts that long, so I decided to just let it run through the night.

The next morning it was finished, and thanks to some clever handling of empty space it took just two and a half hours. I swapped the drives and my system booted like a breeze! Absolutely amazing!

Partition Magic came in handy to stretch the partitions to cover the entire drive, and in another half hour I had my system completely ready.

Conclusion

Even though there was a problem with copying the hard drive, in the end it was quite a simple job. You need to know the right approach to copying your Windows XP installation onto a new hard drive, and you need the right tools. Looking back, I might not have needed Casper XP; I could have used Partition Magic (with the right strategy) or Norton Ghost for that matter.

A job is always easy once you know how to do it.

Tuesday, June 13, 2006

A new Standard in screen resolution?

.. smaller or bigger?

A sequel to this article is available, in which I discuss the current standard screen width you could design for, based on data from the first months of 2007.

In a post today, Tony Patton over on TechRepublic discusses Determining a standard screen resolution for your application. He lists some points to keep in mind when designing a website.

  • There is a general trend towards bigger monitors with higher resolutions. Multiple screens are close to becoming a common household item as well.
  • Never ever use your own screen resolution as the standard.
  • Nor the monitors of your colleagues. Both will probably be more state of the art than the old CRT or cheap flat screen of Joe Average.
  • Screen resolution does not equal the viewable area or the browser window size. Most users will not have their browsers maximized on screen.
  • It is good to look at visitor statistics, but be sure to sample the right audience. Taking the numbers from a site largely visited by developers will not do when the site you are designing is aimed at pensioners.
  • Also keep accessibility issues in mind. Disabled users may use screen readers or low-resolution monitors with odd metrics. You should test, test, test.
  • You can use JavaScript to establish the viewport size and serve the client a fitting design (see the sketch below this list).
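
That last point deserves a caveat: screen.width and screen.height give you the monitor resolution, not the size of the browser window. A minimal sketch of reading the actual viewport width instead (the helper function is my own, but the properties are standard):

<script type="text/javascript">
// Width of the browser viewport, not the screen (hypothetical helper)
function getViewportWidth() {
    if (window.innerWidth) {
        return window.innerWidth;                      // most browsers
    }
    if (document.documentElement && document.documentElement.clientWidth) {
        return document.documentElement.clientWidth;   // IE 6 in standards mode
    }
    return document.body.clientWidth;                  // older IE fallback
}
</script>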

All in all, not a shocking article, but a good summary of things to keep in mind.

Monitors are becoming smaller, not larger!

In a comment, TechRepublic member Wombat Ron adds the increasing use of handhelds and smartphones, which have much smaller screens. He predicts that the 640*480 resolution could come back into the charts. So, it is difficult to say whether we are moving towards bigger or smaller screens.

Separating content and styling

I think Ron is right, although I think these devices could be handled by using the media type 'handheld' to serve special styling through a dedicated CSS file. This is where separation comes back in its full power. With a good design of the structure of your site (the HTML) you keep it lean, and you can then just as easily serve it to handhelds and adjust the styling so that it fits on a small screen.
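
A minimal sketch of that idea, with placeholder file names: the same HTML goes to every visitor, and the 'handheld' stylesheet takes over on small devices.

<link rel="stylesheet" type="text/css" media="screen" href="screen.css" />
<link rel="stylesheet" type="text/css" media="handheld" href="handheld.css" />

In handheld.css you would then drop the multi-column layout and background images and serve one narrow column.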

What is wrong with the article?

One thing that struck me as quite odd was the example code in the article. This code looks at the screen resolution to determine what to present to the visitor. See the code below.

<SCRIPT language="JavaScript">
<!--
if ((screen.width >= 1024) && (screen.height >= 768)) {
window.location="high_resolution_version.html";
} else {
window.location="low_resolution_version.html";
}
//-->
</SCRIPT>

Here the principle of separation is missed completely. The user is redirected to another page, while I think it would be more than enough to serve another CSS file.
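
A sketch of the alternative I have in mind (the link id and file names are invented for the example): keep the visitor on the same page and only swap the stylesheet.

<link id="mainstyle" rel="stylesheet" type="text/css" href="low_resolution.css" />
<script type="text/javascript">
// Same structure for everyone, only the styling differs
if ((screen.width >= 1024) && (screen.height >= 768)) {
    document.getElementById("mainstyle").href = "high_resolution.css";
}
</script>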

Separation of structure and styling gives you the versatile option of changing the style while leaving the structure intact. On our new website we have a 'normal' version and an 'impaired' version. The only difference is that different stylesheets are delivered to the visitor; the HTML stays the same. (Mind you: this alone does not make the site accessible, there are still plenty of issues that we will tackle soon.)

Right now, I am working on changing the layout of the site so that it stretches from an 800-pixel-wide design to a 1024-pixel-wide design. This is all relatively painless, as I only need to make changes to a couple of CSS files and stretch a few background images.

Afterburner

What is really odd in the 'resolution business' is that apparently designs are made for 1024*768 or 800*600. I hardly ever design for the height of a web page. The width is a valid parameter, but the height of the page is usually determined by the content, and as a designer you have hardly any influence on that unless it is your own site.

I usually design for a certain screen width. Also I have a preference for fluid or liquid designs.

Tuesday, June 06, 2006

Google Sitemaps: overview of benefits

.. looking back with pleasure

Sometime in September 2005 Google released the Google Sitemaps product, to help website owners all over the world inform Google of the pages on their websites. Being a website owner, designer and developer, I immediately jumped on the bandwagon and tried it out. I started an experiment to test its effectiveness and see if it really did what I hoped it would do: make my new content appear in the Google index more quickly than before.

My experiment

In a couple of posts ( 1, 2 & 3 ) I outlined the experiment I did with Google Sitemaps on my site.

What was the setup?

I created a sitemap of my website. In this XML file all the interlinked pages appear. I added two pages to my website:

  1. a page (A) that I added to my sitemap.xml but that was never linked from any of the other pages on my website
  2. another page (B) that I did not add to my sitemap.xml, but it was linked to from page A.

Both pages had legitimate content and also linked back to other pages of my website. Actually, both were meant to be part of my website; I merely omitted putting links to these new pages on the other pages. They are in essence orphans, but could also be seen as landing pages.

The experiment was to see if the Google crawler would visit these two pages: the first directly from the sitemap.xml, and the second by crawling the links on the first page.
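
For reference, a stripped-down sitemap.xml looks something like this (placeholder URLs, and using the namespace of the now-standardized protocol); page B is deliberately not listed, as it is only linked from page A:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/index.html</loc>
    <lastmod>2006-06-01</lastmod>
  </url>
  <url>
    <!-- secret page A: in the sitemap, but not linked from any page -->
    <loc>http://www.example.com/page-a.html</loc>
  </url>
</urlset>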

So what happened?

After having submitted the sitemap.xml I was filled with joy when I discovered that Google scanned the file within 24 hours. Moreover it appeared to come by twice a day. That did not mean that the pages appeared in the Google search results immediately, but still it gave me hope that they would after the next crawl.

So, I kept checking my website logs to see if any of the secret pages (A or B) would pop up in my requested pages list. And checked again and checked again.

Now for the real purpose of Sitemaps: getting your pages crawled. The crawling itself can take some time before it happens. The predominant belief is that a fresh crawl of the Web is done every couple of weeks or once a month.

So, after uploading my sitemap I expected it could take up to a month before my hidden pages were discovered. But after a couple of days the first secret page A was crawled. Bang! The sitemap.xml really showed its purpose. The file was indeed used by the Google crawler to crawl pages.

The second page did not show up immediately, so apparently the links on page A had not yet been followed; they were probably queued for a later crawl. Eventually page B did show up as well, so that part of the process also worked.

The experiment was successful and has convinced me that Google Sitemaps really add something to a website.

Added value

A bonus of the Google Sitemaps system is that a webmaster can also view reports on the latest crawl results. This is enabled by placing an HTML file with a name provided by Google on your site, to verify that you manage it. Google then provides you with a list of failed pages: pages that no longer exist or that return some other error.

These statistics have been extended over the last few months and now also show the most-used search strings and the search queries with the most clicks. There is more, like a robots.txt checker and error reports. All very valuable for a website owner. Read more about these handy sitemap statistics.

Conclusion of the experiment

By adding a Google Sitemaps file to your website (and keeping it up to date!) you can ensure that new pages are crawled at the next scheduled crawl. If you have a popular CMS there is probably already a plugin available to create and maintain a sitemap. I can recommend it to any site owner. It is worth the effort. For me it is a little bit of work because I still use static html pages on my site.

More goodies

Some time after the experiment more goodies were added. What extras are hidden in the Google Sitemaps console?

  • Crawl statistics: Pages successfully crawled, pages blocked by robots.txt, pages that generated HTTP errors or were unreachable.
  • The PageRank distribution within your site.
  • Various indexing stats (pages indexed, etc.)

If you, for a completely obscure reason, do not wish to create a Google Sitemap, you can rest assured. You do not need a Google Sitemap to be able to use this functionality. All you have to do is create an empty HTML file and upload it to your site to verify that you own the site, and then you can start leveraging the benefits. For free.

Google Sitemaps is turning into an absolutely awesome troubleshooting tool for all webmasters.

Experimentation is fun, but real life implementation is better

This interesting experiment shows that Google Sitemaps does what it is supposed to do: getting pages noticed by the Google crawlers.

As I said, we launched our new website on Tuesday 30 May and implemented Google Sitemaps with it. I had registered the sitemap with Google a couple of days before the launch, so Google was already trying to download the then still non-existent sitemap. On launch day, around 6 AM our time, the sitemap was downloaded by Google; that was the first possible moment, less than two hours after the site went live.

Success #1: new pages are already in the index

Two days later we already saw many pages in the Google index. When I searched Google for the first 100 of our pages changed in the last three months, I found that only 12 of these still pointed to our old site. My guess was that those were there only because we had some problems setting up the rewrite rules for the old .asp files. That was not entirely correct, as a week later we still see old pages, but different ones.

The Google index now returns new pages only days after going live. Without the Google Sitemap this would have taken much, much longer.

Success #2: we have a check on what goes wrong

Thanks to the statistics that we receive from Google we can now track the old, no longer existing pages that are being crawled. We can see errors popping up and if necessary we can further tweak our web application or web server to gracefully handle these.

Conclusion

Implementing Google Sitemaps has already given us huge advantages in the first week after going live. By having the Google Sitemaps ready at go-live we have been able to have Google find our new pages and content.

I would strongly recommend implementing Google Sitemaps in any web project so the pages will appear in the index as quickly as possible. Even though we had to implement the Google Sitemaps ourselves within our site framework, it was not the greatest of the challenges we faced.

Many of the (open source) content management systems nowadays provide Google Sitemaps out of the box or through a plugin. So, there is hardly any reason not to implement them.

Update (4th September 2008)

The Google Sitemaps protocol has in the meantime been adopted by more search engines and is now also an integral part of robots.txt. So, it's nowadays even more important to use the protocol.

Better still: It's now an open standard.
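
For completeness: referencing your sitemap from robots.txt takes a single line (placeholder domain):

Sitemap: http://www.example.com/sitemap.xml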

Thursday, June 01, 2006

Google Sitemaps rule!

.. yoohoo!

Yep, we are definitely doing things right over here!

As I said, we launched our new website on Tuesday and implemented Google Sitemaps in it. We registered the sitemap, and on Tuesday morning around 6 o'clock our time it was downloaded by Google.

Now, on Thursday, some two days later, we already see many pages in the Google index. When I search Google for the first 100 of our pages changed in the last three months, I find that only 12 of these still point to our old site. My guess is that these are there only because we had some problems setting up the rewrite rules for the old .asp files.

This rocks!

Separation is the way to go

.. in modern web design

Maybe I should apologize again for the 'Building websites is fun' post I did yesterday. I was maybe cheering too hard for a goal I scored as part of a great team. But once again, I could not have done it without the rest of the team.

And it is just such a good feeling that things that you have put together prove to be working in real life. I guess that's why I became a developer in the first place: creating things. That's probably also why I keep up this blog.

I love it when a plan comes together.
Col. John "Hannibal" Smith (George Peppard), The A-Team

What's all the fuss about?

So, what's all this fuss about then? What did I contribute to the project that works like it should? Why cheer for just doing the job I was hired for? Shouldn't things always work when I create them?

Uhm? Yeah, well, right. OK, yes, sorry again. I just gave the impression, again, that successful development work is something to cheer about loudly and long just because it so rarely happens. Apologies to all colleagues around the globe.

OK, now stop apologizing and tell the handful of readers what it was that I did.

What did I do?

In the process of building a new website for an insurance company, I was assigned to make sure that the XHTML and CSS coming out of the site were as optimal as possible. More or less self-assigned, as I thought I had gathered some knowledge in that area. The team was already underway with several proofs of concept for early phases of the site framework, and I joined in time to take up my task as King of XHTML and CSS. And happily so.

Just before last summer the graphic designs came to us, delivered mainly as Photoshop files and some early versions of cut-up HTML. I then started working on tweaking these to be lighter in markup by stripping out unnecessary bits and making sure everything showed as it should in the two major browsers of that moment: Internet Explorer 6 and Firefox. I also checked whether things were reasonable in Opera and on a Mac, but only on an occasional basis.

Plan of attack

The old website was a highly successful site (in its market) developed some years ago, and since then many tweaks and enhancements had been added. The site ran in frames and the layout was largely table-based. Good old-fashioned, old-school web design. In essence nothing wrong with that: it produced very good revenue.

For the new website we opted to become more up to date. We moved from classic ASP to ASP.NET, and on the client side we opted for XHTML. As configuration was valued over old-fashioned programming, we did not go with standard .aspx files (in fact, we have only one .aspx file). A complete framework was built that has several types of containers for the different types of pages. In the CMS one can pick the right container (template) for the page, and the information will then be displayed correctly. There are quite a few web services running in the background that glue everything together.

My role in this was taking the different page designs and translating them into XHTML and CSS, which the other developers could then implement in the containers and components of the framework. My most important focus was separating the structure of the page from the styling. The third aspect of a web page, behavior, was already being handled by external JavaScript files.

Using the principle of separation of structure (semantics), style and behavior, we were able to keep the site very lightweight for our visitors. The users don't care about all the clever stuff we do in the background on our servers and load balancers; they just want to get the information quickly.

Structure & semantics

As far as was possible within the framework and the design, the HTML was stripped to the minimum so that it was still semantically correct and had enough structure to allow the styling to make up the design. This was all implemented in various containers and components that show different types of content and controls. That gave a few restrictions and added a bit of extra fat, but the overhead was small and made for easier development of the server-side framework.
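
To give an idea of what "lean" means here, a content container could be as simple as the following sketch (the class names, paths and content are invented for the example); everything visual comes from the stylesheets:

<div class="container productinfo">
  <h2>Car insurance</h2>
  <p>A short introduction to the product ...</p>
  <ul class="productlinks">
    <li><a href="/car/premium">Calculate your premium</a></li>
    <li><a href="/car/conditions">Policy conditions</a></li>
  </ul>
</div>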

Styling

At first we had one huge CSS file. Later that was cut up into several files, as not all containers use all the styling. We added a bit of intelligence to the containers and let them load extra stylesheets when needed.

We also added a print stylesheet that simply hides all unnecessary stuff like menus and adds the URL after each link through the CSS content property. This proved to be quite a simple approach and gave satisfying results really quickly.
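
A minimal sketch of that print stylesheet (the selectors are invented for the example):

/* print.css - linked with media="print" */
#menu, #sidebar, #search {
    display: none;                    /* hide navigation and other screen-only chrome */
}
#content a:after {
    content: " (" attr(href) ")";     /* print the URL after each link */
}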

"Bad vision" styling

Then there was a little more work. The site also needed to be usable by people with some kind of visual disability. We added a button for "bad vision" that links in other stylesheets. The "normal" stylesheets had pixel-based fonts, which, as you know, do not resize very well in Internet Explorer, and the background images and measurements were not relative to the font size either. Resizing the font in your browser would break the page layout pretty quickly.

So everything needed to be em-based. Then the layout was changed from fixed width to fluid, so the browser window would be used to the full at any resolution, and not just the 800 by 600 the site was originally designed for.
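
In CSS terms the change boiled down to something like this (the selector and numbers are illustrative, not our actual values):

/* before: pixel-based and fixed width */
body     { font-size: 11px; }
#wrapper { width: 760px; }

/* after: em-based and fluid */
body     { font-size: 70%; }          /* the user can resize this, even in Internet Explorer */
#wrapper { width: 90%; }              /* stretches with the browser window */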

We also dropped a lot of the background images to make the site easier to read, and added a "skip links" link. (This "skip links" link is hidden in the normal-vision site, which, the more I think about it, was maybe not the best decision.)

This special impaired version of the stylesheet did involve some work, but it was done without any changes to the html. The only change is that containers load an extra set of alternative stylesheets.

Benefits

The benefits of separating structure, style and behavior became really apparent during the development of the different styles: it was very easy to update and also very flexible. I also did some research on bandwidth and found savings of between 10 and 20%, taking into account the images that are part of the content. During the research I found that these images are nowhere near optimized for the web, so if we could improve on that, I guess the savings could go up to around 30%.

Conclusion

By separating the different aspects of the web pages we have reduced the time spent making changes and will also save on bandwidth usage. So overall it is the right thing to do, not because it is hot or trendy, but because it saves both time and money.

Go ahead and have a look at the new website.