It is interesting to observe how programs behave as a function of the input size. Pascal's triangle, for instance, can grow quite large depending on the power n in (a+b)^n. This makes it suitable for testing simple algorithms and observing their running times. We can try to estimate how large n must be before computing a solution becomes infeasible. Believing that n = 1000 will exhaust the available system resources is one thing; testing whether this is truly the case is another. What if we could produce a larger triangle in less time?
First, we need to make sure that the algorithm produces correct output. We may have a triangle that looks great, but if it isn't a Pascal triangle, it doesn't count. Optimizing individual routines is worthwhile only after we know that we have a valid result; optimizing the wrong code merely lets us produce errors more efficiently. Ideally, having an example with the correct data on paper allows us to analyze it and explore a variety of ways to model it. Before starting on a problem, it is often a good idea to learn as much as possible about it. In this case, Wikipedia has a great animation that reveals how each line of the triangle is formed from the previous one.
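The simplest working program can be sketched as follows: each row is built from the previous one, and the result is checked against known values before we worry about speed. The function names here are illustrative, not taken from any particular codebase.

```javascript
// Build one row of Pascal's triangle from the previous row:
// every interior element is the sum of the two elements above it.
function pascalRow(prev) {
  const row = [1];
  for (let i = 1; i < prev.length; i++) row.push(prev[i - 1] + prev[i]);
  row.push(1);
  return row;
}

// Return rows 0 through n of the triangle.
function pascalTriangle(n) {
  const rows = [[1]];
  for (let i = 1; i <= n; i++) rows.push(pascalRow(rows[i - 1]));
  return rows;
}

console.log(pascalTriangle(4)[4]); // [ 1, 4, 6, 4, 1 ]
```

A quick sanity check is that row k sums to 2^k and that a small row matches the textbook values; only once that holds does optimization make sense.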
After we have the simplest working program, we notice a potential problem that might not be obvious at first: the time required to compute the numbers seems to grow more slowly than the number of bytes needed to store them. Here is the size of the data produced for different values of n:
- n = 30: 7kB
- n = 100: 78kB
- n = 200: 577kB
- n = 300: 1.9MB
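Measurements like these can be reproduced with a small sketch that serializes the triangle and counts bytes. One detail worth noting: binomial coefficients overflow JavaScript's safe integer range well before n = 100, so BigInt is needed for honest numbers. The exact byte counts depend on the serialization format, so the figures above need not match this sketch precisely.

```javascript
// Same row-by-row construction as before, but with BigInt so the
// coefficients stay exact for large n.
function pascalTriangleBig(n) {
  let row = [1n];
  const rows = [row];
  for (let i = 1; i <= n; i++) {
    const next = [1n];
    for (let j = 1; j < row.length; j++) next.push(row[j - 1] + row[j]);
    next.push(1n);
    rows.push(next);
    row = next;
  }
  return rows;
}

// Serialize the triangle as space-separated lines and count the
// UTF-8 bytes, as a rough proxy for the payload a browser would load.
function serializedBytes(n) {
  const text = pascalTriangleBig(n)
    .map(r => r.join(' '))
    .join('\n');
  return new TextEncoder().encode(text).length;
}

console.log(serializedBytes(30)); // rough payload size for n = 30
```

Running this for increasing n makes the super-linear growth of the payload visible long before the computation itself feels slow.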
The last takes more than two seconds to load, which on the web is probably the threshold beyond which a visitor would prefer to look elsewhere. In exchange for loading 1.9MB of data, they would get numbers that are hard to distinguish, with little context and nothing particularly practical (raising a sum to the power of 30 isn't something to try on paper). Why bother at all, then? The answer: to illustrate the power of numeric algorithms.
Looking at the triangle, we see that level x has exactly x items and that the numbers on level x + 1 are created from x - 1 individual sums. We also notice that the triangle is symmetrical with respect to a vertical axis passing through the first element and cutting all levels below it in half. Each number that this axis crosses is twice the number above it to the left on the previous level. This means that we could compute only half of the numbers, say those on the left side, and then mirror them to obtain the rest. The question is whether this is practical when we know that producing so much data could quickly bring the browser to a halt regardless of computational efficiency. If the answer is no, we could settle for a slightly less efficient algorithm if it is easier to implement.
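The symmetry idea can be sketched as computing only the left half of a row and mirroring it. This sketch uses the multiplicative identity C(n, k) = C(n, k-1) * (n - k + 1) / k rather than row-by-row addition; both approaches are compatible with the halving trick, and the names here are our own.

```javascript
// Compute the left half of row n (inclusive of the middle element),
// using the multiplicative formula with BigInt for exactness.
function halfRow(n) {
  const half = [1n];
  const N = BigInt(n);
  for (let k = 1n; k <= N / 2n; k++) {
    half.push((half[half.length - 1] * (N - k + 1n)) / k);
  }
  return half;
}

// Mirror the left half to obtain the full row. For even n the row
// has a unique middle element, so it must not be duplicated.
function fullRow(n) {
  const left = halfRow(n);
  const right = left.slice(0, n % 2 === 0 ? -1 : left.length).reverse();
  return left.concat(right);
}

console.log(fullRow(5).map(String)); // [ '1', '5', '10', '10', '5', '1' ]
```

This roughly halves both the arithmetic and the intermediate storage per row, though as the text notes, the serialized output is just as large either way.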
A triangle has to look like one, which means that besides the computational effort to generate the numbers, there is an interesting presentational challenge too. The first time we generate the numbers and center each line, we see that they hardly form a triangle; the shape is more like a triangle whose sides were carved by ovals. We could try changing the font, but that may not help much. Instead, the lines at the top and the bottom have to stay intact while those in the middle expand, which we can achieve with word spacing. Forming the triangle around the first and last lines serves as a visual cue for how much to expand each line. A nice side effect visible in this example is the rate at which lower levels gradually become more dense with data. As the number of digits grows, computing bigger triangles gets much slower. Although we have reached only n = 300 rather than n = 1000, we have gained a better feel for the constraints that resource limitations impose on our work. We have also developed a new appreciation for every single free byte.
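The word-spacing adjustment can be sketched with a simple width model. Assuming a monospace font where each character occupies `charWidth` pixels, every line is stretched to the width of the widest (last) line by distributing the slack across its gaps. The pixel model and the names here are illustrative only; a proportional font would need measured widths instead.

```javascript
// For each line of numbers, compute how much extra word spacing
// (in px, under the monospace assumption) is needed so that all
// lines span the same width as the widest one.
function lineSpacing(lines, charWidth) {
  const width = nums => nums.join(' ').length * charWidth;
  const target = Math.max(...lines.map(width));
  return lines.map(nums => {
    const gaps = nums.length - 1;
    const extra = gaps > 0 ? (target - width(nums)) / gaps : 0;
    return { text: nums.join(' '), wordSpacing: extra };
  });
}
```

In a browser one would then apply the result per line, e.g. `el.style.wordSpacing = extra + 'px'`; the widest line receives zero extra spacing and anchors the triangle's sides.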
This suggests that even the simplest problems offer a variety of intriguing perspectives to choose, combine, and develop. For instance, we could apply similar arithmetic in many other situations, provided we are able to recognize them. Overall, this has been a useful exercise.