Compression of web content
Compressing web content before publishing a website makes it more responsive. It reduces page size and bandwidth costs, improves the user experience and makes visitors happier. Before we compress, it's worth removing unnecessary elements from the page. Here I'll focus on some common tools for achieving better compression.
First, we need to figure out which resources weigh the most in order to get a sense of the kilobyte distribution. With the Firefox plug-in Web Developer Toolbar we can use "View document size" to get a file size summary of documents, images, objects, scripts and style sheets. Their total size is calculated and shown, which lets us judge whether we have added the most value with the fewest kilobytes possible; there is often room for improvement.
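A quick way to get a similar kilobyte distribution outside the browser is to total file sizes by type in a local copy of the site. Here is a minimal Python sketch; the `./site` path is a hypothetical export directory, so point it at your own copy:

```python
import os
from collections import defaultdict

def size_summary(root):
    """Walk a local copy of the site and total file sizes by extension.

    Returns a dict mapping extension (e.g. ".css") to total bytes.
    """
    totals = defaultdict(int)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            totals[ext] += os.path.getsize(os.path.join(dirpath, name))
    return dict(totals)

# Example: print the heaviest resource types first.
# for ext, total in sorted(size_summary("./site").items(),
#                          key=lambda item: item[1], reverse=True):
#     print(f"{ext:8} {total / 1024:8.1f} KB")
```

Sorting the result by size descending immediately shows where the kilobytes go, which is usually images.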
Broadband connections enabled the inclusion of heavy images on websites. But such images don't always enhance our message, especially when they are used to replace textual elements or serve as meaningless decoration. Their size is an order of magnitude bigger than that of code, and they are harder to maintain. Designers often overuse images or don't compress them at all, creating bloated sites simply because visitors "can afford to open them now". This waste of resources, bandwidth and time will continue to undermine the reputation of the web as a medium. Users have limited attention spans, so they won't see everything on a page, no matter how many images we render for them. This means that potentially half of the heavy resources in a session are never seen. Visitors on mobile devices may wait a long time for a site's images to load unless those images have been specifically optimized for such use.
Unless you provide progressive image enhancement, more and bigger images don't necessarily mean a better user experience. Yahoo released the image compression tool Smush.it as an attempt to improve website performance, and I highly recommend it.
If you use Photoshop, you can compress images with the "Save for Web..." dialog: save those with few colors as PNG with a limited color count (for best results, step through values between 1 and 256 colors), and those with many colors as JPEG at a suitable compression level. The difference is that PNG is a lossless format, while JPEG isn't.
Flash already produces compressed SWF files, so I won't discuss it here.
Compressing the HTML markup allows other resources to start loading sooner, which reduces the response time and improves the user's first impression. A user who checks multiple pages of a website can notice the improvement. Google decided to omit the lengthy DOCTYPE to avoid sending unnecessary bytes with every request; multiplied across thousands of requests, this results in considerable bandwidth savings. There are many tools to compress HTML.
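As a sketch of what such tools do, here is a naive HTML minifier in Python, combined with the gzip compression that servers typically add on the wire. The regexes are deliberately simplistic (they don't respect pre blocks or inline scripts), so treat this as an illustration, not a production minifier:

```python
import gzip
import re

def minify_html(html):
    """Naive HTML minifier: strips comments and collapses runs of
    whitespace between tags. A sketch only; it ignores pre blocks
    and inline JavaScript, so test it on your own markup first.
    """
    html = re.sub(r"<!--.*?-->", "", html, flags=re.S)  # drop comments
    html = re.sub(r">\s+<", "><", html)                 # whitespace between tags
    return re.sub(r"\s{2,}", " ", html).strip()         # collapse other runs

page = """
<html>
  <!-- navigation -->
  <body>
    <h1>Hello</h1>
  </body>
</html>
"""

small = minify_html(page)
# Minification and gzip stack: compare the raw, minified and
# gzip-compressed sizes of the same markup.
print(len(page), len(small), len(gzip.compress(small.encode())))
```

On real pages the two techniques complement each other: minification removes bytes the browser never needed, while gzip shrinks what remains during transfer.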
CSS is used on almost every website. Its syntax consists of selectors, properties and values. A design should use as few selectors as possible, because they slow down page rendering. Eliminating redundant properties requires careful attention to the cascade. One possible approach is to define the selectors in the same order as their corresponding elements appear in the markup; this can prevent the addition of unnecessary code. Many of the HTML compression tools just mentioned also offer CSS compression, and there are plenty of similar tools online, so test them and see which one suits you best.
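In the same spirit, a bare-bones CSS minifier can be sketched in a few lines of Python. Real compressors also shorten color values and merge duplicate rules; this sketch only strips comments and needless whitespace:

```python
import re

def minify_css(css):
    """Bare-bones CSS minifier: removes comments and whitespace.

    A sketch for illustration; it assumes plain CSS without strings
    containing braces or semicolons.
    """
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # strip /* ... */ comments
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # tighten around punctuation
    css = re.sub(r";}", "}", css)                    # last semicolon is optional
    return re.sub(r"\s+", " ", css).strip()          # collapse remaining runs

# Example:
# minify_css("h1 {\n  color: red;\n  margin: 0;\n}")
# yields "h1{color:red;margin:0}"
```

The savings per rule are small, but over a large stylesheet they add up, and gzip on top compresses the repetitive property names very well.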
I haven't used any PHP compressors yet. Maybe you have some ideas here.
We need to consider all these factors when we want to achieve good compression of our web content. It's wrong to avoid talking about file size just because many people have a fast internet connection; many others are still not so fortunate.