The growth of information has led to complete chaos in the file systems of our PCs. Everywhere we look, we see files with increasingly "innovative" names that scream for our attention. With so many of them, it's easy to lose sight of how they are related. The directory that was created as a way to organize them now needs to be organized itself. It can have subdirectories that have even more subdirectories. All of them look very similar, and since they don't have an extension, they don't have a specific icon that could distinguish them uniquely (at least in Windows 7). Without some kind of search, we are increasingly incapable of finding the files we need. Even pressing the first letter of the expected file name isn't enough to lead us where we want, since far too many files now start with that letter. It is widely accepted now for software packages to contain hundreds of files and directories. Few people seem to worry that this complexity must be handled by the same end user they are trying to serve. Sometimes they don't even worry that their packages pull in yet more packages without saying so; on the contrary, it is perceived as a positive sign that they are following the latest technology trends and practices. All these human-generated files overload the file system, making OS search even slower. And if an operation is merely two times slower for a machine, we should expect it to be disproportionately slower for the average human being. In fact, we may not even know where to begin.
The increased complexity needs to be managed somehow. If we have half a million files, we'll be defeated by the mere thought of going through them to figure out which ones we still need and which can safely be deleted. Our only excuse for not doing so is the low price and general availability of terabyte-sized hard drives. Their convenience has allowed us to become sloppy with storage maintenance. As a result, a lot of space is wasted, and we touch the big and growing structure of our file system ever more rarely, being less certain of what effect a change could have. This is why proper naming of files and directories is important for our ability to find them. But if our hard drives were of limited size, there would be no room to accommodate every possible file we encounter (we are essentially collectors). Then we would need not only to claim that we appreciate every byte, but to prove it in action. When constraints change, previously acceptable things can become completely intolerable. Cheap storage motivates the creation of more complex products and websites that are slower to load and require even faster machines to execute. If we remember that every additional file is one more thing for a system to load, making it slower still, we might reevaluate which files we create and when. A client's browser can make hundreds of HTTP requests to compose a website from all the available files, and no matter how fast a single request is, their compound effect will be much slower. Imagine what would happen if the browser didn't allow six simultaneous connections: sites that rely on this feature would need minutes to load. In a single-threaded application we don't have to worry about race conditions, but as soon as we introduce multithreading, things can become completely unpredictable in a communication-intensive system.
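The compound effect of request latency can be sketched with a back-of-the-envelope calculation. A minimal sketch, assuming a fixed per-request latency and that requests are spread evenly over a pool of simultaneous connections (the numbers, 120 requests at 100 ms each, are made up for illustration):

```python
import math

def load_time_ms(num_requests, latency_ms, max_connections):
    """Rough lower bound on total load time when requests must be
    serialized over a limited pool of simultaneous connections."""
    rounds = math.ceil(num_requests / max_connections)
    return rounds * latency_ms

# 120 requests at 100 ms each:
print(load_time_ms(120, 100, 6))  # 6 connections: 20 rounds, 2000 ms
print(load_time_ms(120, 100, 1))  # 1 connection: 120 rounds, 12000 ms
```

Real browsers pipeline, cache and reuse connections, so actual numbers differ, but the shape of the effect holds: fewer files means fewer rounds.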
We need only think about files in terms of the communication between them to realize that less can be more.
Sometimes it makes sense to keep our disk as empty as possible, especially when Internet content can fill it before we even notice. The least we can do is delete files as soon as we know they are no longer needed. Otherwise they'll stay on the system, and once we lose track of them, they'll be hard to act upon. Having more files means a greater likelihood of doing something wrong with them (hitting "Enter" on the wrong file, deleting or overwriting a file) or simply overlooking the ones we need. Being unable to easily locate these files slows down our work too, despite the availability of all modern methodologies. Having to work on too many files simultaneously, constantly juggling between them, can be slow and demotivating.
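Finding candidates for deletion can itself be automated to a degree. A small sketch, assuming that a file untouched for a year is worth reviewing (the one-year threshold is an arbitrary choice, and modification time is only a hint, so the list should be reviewed by hand rather than deleted blindly):

```python
import os
import time

def stale_files(root, days=365):
    """Return paths of files under root whose modification time is
    older than the given number of days. A review list, not a kill list."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale
```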
With fewer files, the antivirus needs less processor time to scan them. Program execution, disk defragmentation, search, and backup are all faster. In the event of a disk failure, we lose less. We need to remember that no file structure lasts forever, even if it is visibly present on screen. This is why occasionally backing up our most valuable data can keep us sane when a disk fails (which is surprisingly common). It takes only a single incident to convince us that we shouldn't assume anything. An external hard drive can help us store this data. But sometimes, when the drive isn't an SSD, copying lots of small files to it can be very slow. At least this is what I experienced during my last backup, which took many hours, with me thinking that something was wrong with my disk. Some time earlier I had copied a directory that had since changed only slightly, but the changes would still have been hard to track manually. So I decided to delete what I had stored previously and save the full contents of the directory in its present state again. This, too, took a lot of time with the copy function in Windows 7. At the end I still had data to store, but didn't want to go the same route again. I thought that some kind of backup program could probably do better, but didn't know which one. Then I remembered that a program I was using for FTP upload could also synchronize files and directories on multiple drives, and I decided to test whether this could work. Because Total Commander was already started, it didn't automatically recognize the new drive attached to the PC. So I restarted the program and rescanned all directories. This was slow, but it worked, and the external disk was now recognized. I could then see the contents of the directories on the local disk on the left and those on the external disk on the right. I went into the directories I wanted to synchronize and compared their contents to get a list of only those files that had changed since the last time.
This initial process was a bit slow too, because thousands of files and subdirectories had to be compared one by one, but it still took only around ten minutes, even with a slow external drive. Once done, I copied only the changes and not all the files as I had done previously, essentially "doing the smallest thing that could possibly work." What surprised me then was how fast the copying was. The entire backup procedure took around three hours, but if I had done the same through Windows alone, hours could easily have turned into days. Something else that made an impression on me was how transparently the program worked during the synchronization and copy operations. Initially it had three windows: the main window with the left and right panels, the synchronize window, and the window with the copy progress. But when I minimized the last one, an icon appeared in the notification area of the taskbar, showing a number on a white background that gradually increased, with the background filling in blue from left to right as the copy progressed. There was no button on the taskbar. The only thing I saw was a small feedback box, with all the details of the internal operation completely hidden from me. A clear example of the invisible design to which we all strive. It seems that a program needs only to fill the small box that will make us happy, and not necessarily show us beautiful interfaces that don't work.
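The compare-then-copy-only-the-changes idea behind such a synchronization can be sketched in a few lines. This is not how Total Commander actually implements it, just a rough approximation that compares files by size and modification time; real sync tools also handle deletions, attributes, conflicts, and errors:

```python
import os
import shutil

def changed_files(src, dst):
    """Walk src and yield relative paths of files that are missing
    from dst or that differ in size or are newer by mtime."""
    for root, _dirs, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            rel = os.path.relpath(s, src)
            d = os.path.join(dst, rel)
            if (not os.path.exists(d)
                    or os.path.getsize(s) != os.path.getsize(d)
                    or int(os.path.getmtime(s)) > int(os.path.getmtime(d))):
                yield rel

def sync(src, dst):
    """Copy only the changed files, creating directories as needed.
    copy2 preserves timestamps, so unchanged files are skipped next run."""
    for rel in changed_files(src, dst):
        target = os.path.join(dst, rel)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(os.path.join(src, rel), target)
```

The speedup comes from the same place as in the story: the expensive full copy is replaced by a cheap metadata scan, and only the small set of actual changes touches the slow drive.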
It was strange to realize that something could be in front of me for so long without my being able to see it. As this case proved, doing things differently can be beneficial, even if only for the learning. At least I feel that chaos has a new drive now.