
On electronic wastefulness

Many things are horribly wrong in this world.

People are becoming more and more aware of this. Environmental and economic problems have strengthened the criticism of consumer culture, monetary power and political systems, and all kinds of countercultural movements are thriving. At the same time, however, more and more people depend on digital technology, which gets produced, bought, used and discarded in greater quantities than ever, causing an ever bigger impact on the world in the form of waste and pollution.

Because of this, I have decided to finally summarize my thoughts on how digital technology reflects the malfunctions of our civilization. I became a hobbyist programmer as a schoolkid in the mid-eighties, and fifteen years later I became a professional software developer. Despite all this baggage, I'm going to attempt to keep my words simple enough for ordinary people to understand. Those who want to be convinced by citations and technical argumentation will get them at some later time.

Counter-explosion

For over fifty years, the progress of digital technology has followed the so-called Moore's law, which predicts that the number of transistors that fit on a microchip doubles roughly every two years. This means that it is possible to produce digital devices that are of the same physical size but have ever more memory, ever more processing speed and ever greater overall capabilities.

Moore's law itself is not evil, as it also means that it is possible to perform the same functions with ever less energy and raw material. However, people are people and behave like people: whenever it becomes possible to do something more easily and with less consumption, they start doing more of it. This phenomenon is called the "rebound effect", after a medical term of the same name. It can be seen in many kinds of things: cars that consume less fuel make people drive more, and food with fewer calories makes weight-losers eat more. The worst case is when the actual savings become negative: a thing that is supposed to reduce consumption actually increases it instead.
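
To see how the savings can turn negative, here is a toy calculation of my own (the numbers are invented for illustration, not taken from any study): if each act of consumption becomes 30 % cheaper in resources but the amount of consumption grows by 60 %, the total goes up, not down.

    /* Toy illustration of the rebound effect turning negative.
     * All numbers are invented for the sake of the example. */
    #include <stdio.h>

    int main(void)
    {
        double cost_per_use = 1.0;    /* resource cost of one use, before */
        double uses         = 100.0;  /* number of uses, before */

        double new_cost_per_use = cost_per_use * 0.7;  /* 30 % more efficient */
        double new_uses         = uses * 1.6;          /* ...but used 60 % more */

        printf("total before: %.1f\n", cost_per_use * uses);          /* 100.0 */
        printf("total after:  %.1f\n", new_cost_per_use * new_uses);  /* 112.0 */
        return 0;
    }

The efficiency gain is real, but the behavioural change more than eats it up; that is the "negative savings" case described above.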

In information technology, the most prominent form of rebound effect is the bloating of software, which takes place at the same explosive rate as the improvement of hardware. This phenomenon is called Wirth's law. If we took a time machine ride back to 1990 and told the people there that desktop computers would become a thousand times faster over the next twenty years -- doubling every two years amounts to ten doublings, or roughly a thousandfold -- they would surely assume that almost anything would happen instantaneously with them. If we then corrected them by saying that software programs still take time to start up in the 2010s and that it is sometimes painful to tolerate their slowness and unresponsiveness, they wouldn't believe it. How is it even possible to write programs so poorly that they don't run smoothly on a futuristic, thousand times more powerful computer? This fact would become even harder to believe if we told them that it also applies to things like word processors, which are used for more or less exactly the same things as before.

One reason for the unnecessary largeness, slowness and complexity of software is the dominant economic ideal of indefinite growth, which makes us believe that bigger things are always better and that it is better to sell customers more than they need. Another reason is that rapid cycles of hardware upgrade make software developers indifferent: even if an application program is mindlessly slow and resource-consuming on the latest hardware, no one will notice a couple of years later when the hardware is a couple of times faster. Nearly any excuse is valid for bloat. If it is possible to shorten software development cycles even slightly by stacking all kinds of abstraction frameworks and poorly implemented scripting languages on top of one another, it will be done.

The bloat phenomenon annoys people more and more in their daily life, as all kinds of electric appliances, starting from the simplest flashlight, contain increasingly complex digital technology, which drowns the user in uncontrollable masses of functionality and strange software bugs. The digitalization of television, for example, brought a whole bunch of computer-style immaturity to the TV-watching experience. I've even seen an electric kitchen stove that wouldn't heat up until the user had first set the integrated digital clock. Diverse functionality itself is not evil, but if the mere existence of extra features disrupts the use of the basic ones, something is totally wrong.

Even though many things in our world tend to swell and complexify, it is difficult to find a physical-world counterpart to software bloat, as the amount of matter and living space on our planet does not increase exponentially. It is not possible to double the size of one's apartment every two years in order to fit in more useless stuff. It is not possible to increase the complexity of official paperwork indefinitely, as it would require ever more food and accommodation for the expanding army of bureaucrats. In the physical world, it is sometimes necessary to evaluate what is essential and how to compress the whole in order to fit more in. No such necessity exists in the digital world, however; there, it is possible to constantly inhale and never exhale.

Disposability

The prevailing belief system of today's world equates well-being with material abundance. The more production and consumption there is, the more well-being there is, and that's it. Even though politicians in rich countries no longer want to confess this belief so openly, they still use concepts such as "gross national product", "economic growth" and "standard of living", which are based on the idealization of boundless abundance.

As it is the holy responsibility of all areas of production to grow indefinitely, it is important to increase consumption regardless of whether it is sensible or not. If it is not possible to increase consumption in natural ways, planned obsolescence comes to the rescue. Some decades ago, people bought washing machines and television sets to last the following twenty years, but today's consumers have the "privilege" of buying at least four of each during the same timespan, as the lifespans of these products have been deliberately shortened.

The scheduled breaking of electric appliances is now easier than ever, as most of them have an integrated microprocessor running a program of some kind. It is technically possible, for example, to hide a timer in this program, causing the device to either "break" or start misbehaving shortly after the warranty is over. This kind of sabotage may benefit the sales of smaller and cheaper devices, but it is not necessary in the more complex ones; in their case, bloated poor-quality software serves the same purpose.
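
As a purely hypothetical sketch of how small such a sabotage mechanism could be (the names, numbers and the whole design here are my own invention, not taken from any real product), consider:

    /* Hypothetical "warranty timer" hidden in device firmware.
     * Everything here is illustrative; no real product is described. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define WARRANTY_HOURS (24u * 365u)   /* one year of power-on time */

    static uint32_t poweron_hours;        /* assume the firmware persists this
                                             counter in non-volatile memory */

    /* Called once per hour from the firmware's timekeeping routine. */
    void tick_hour(void)
    {
        poweron_hours++;
    }

    /* Consulted by the main control loop: shortly after the warranty
     * period has passed, the device starts producing "faults". */
    bool should_misbehave(void)
    {
        return poweron_hours > WARRANTY_HOURS + 100u;
    }

    int main(void)
    {
        /* Simulate two years of use to show when the "faults" begin. */
        for (int h = 0; h < 2 * 24 * 365; h++) {
            tick_hour();
            if (should_misbehave()) {
                printf("device starts misbehaving at hour %u\n",
                       (unsigned)poweron_hours);
                return 0;
            }
        }
        return 0;
    }

The point of the sketch is how little code such sabotage takes: a few lines like these, buried in a large firmware image, would be practically invisible to the customer.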

Computers get upgraded especially when the software somehow becomes intolerably slow or even impossible to run. This change can take place even if the computer is used for exactly the same things as before. Bloat makes new versions of familiar software more resource-consuming, and when familiar websites are redesigned, they tend to bloat up as well. In addition, some operating systems tend to slow down "automatically", but this, fortunately, is something the user can fix.

The experience of slowness, in its most annoying form, is caused by overlong response times. The response time is the time between the user's action and the indication that the action has been registered. Whenever the user moves the mouse, the cursor on the screen must immediately match the movement. Whenever the user presses a letter key on the keyboard, the same letter must appear on the screen immediately. Whenever the user clicks a button on the screen, the graphic of the button must change immediately. According to usability research, the response time must be less than 1/10 of a second, or the system feels laggy. When the response takes more than a second, the user's blood pressure is already rising. After ten seconds, the user is convinced that "the whole piece of junk has locked up".

Slow response times are usually regarded as an indicator that the device is slow and that it is necessary to buy a new one. This is a misconception, however. Slow response times indicate nothing but an indifferent attitude to software design. Every computing device that has become available during the last thirty years is completely capable of delivering the response within 1/10 of a second in every possible situation. Despite this fact, the software of the 2010s is still usually designed in such a way that the response is provided only once the program has finished all the supposedly more urgent tasks. What is supposed to be more important than serving the user? In the mainframe era, there were quite a few such things, but in today's personal computing, this should never be the case. Fixing the response time problems would permanently make technology more comfortable to use, as well as help the users tolerate the actual slowness. The industry, however, is strangely indifferent to these problems. Response times are, from its point of view, something that "gets fixed" automatically, at least for a short while and in some areas, with each hardware upgrade.
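
To make the design principle concrete, here is a minimal sketch of my own (not from the original text): the program acknowledges the user's action immediately and only then starts the slow work, on a separate thread, so that input handling never blocks. Compile with -pthread.

    /* Respond first, work later: a sketch of keeping response times
     * short regardless of how slow the actual task is. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-in for a slow operation, e.g. loading or recomputing data. */
    static void *heavy_work(void *arg)
    {
        (void)arg;
        sleep(3);                          /* pretend this takes seconds */
        printf("...heavy work finished.\n");
        return NULL;
    }

    int main(void)
    {
        int ch;
        printf("Press Enter to trigger an action (Ctrl-D quits).\n");
        while ((ch = getchar()) != EOF) {
            if (ch != '\n')
                continue;
            /* 1. Acknowledge within a fraction of a second, always. */
            printf("Action registered, working on it...\n");
            fflush(stdout);
            /* 2. Only then start the slow part, off the input path. */
            pthread_t worker;
            pthread_create(&worker, NULL, heavy_work, NULL);
            pthread_detach(&worker);
        }
        return 0;
    }

The heavy task here is simulated with a three-second sleep, yet the acknowledgement appears within milliseconds -- which is exactly the point: the response time is a design decision, not a hardware property.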

Response time problems are just one example of how the industry considers it more important to invent new features than to fix the problems that irritate the basic user. A product that has too few problems may make consumers so satisfied that they don't feel like buying the next, slightly "better" model, which replaces old problems with new ones. Companies that want to ensure their growth prefer to do everything multiple times in slightly substandard ways instead of seeking any kind of perfection. Satisfaction is the worst enemy of unnecessary growth.

Is new hardware any better?

I'm sure that most readers have at least heard about the problems caused by the rat race of upgrading and overproduction. The landfills of rich countries are full of perfectly functioning items that interest no one. Having anything repaired seems stupid, as it is nearly always easier and cheaper to just buy new stuff. Selling used items is difficult, as most people won't accept them even for free. Production eats up more and more natural resources despite all the efforts to "green up" the production lines and to recycle ever more raw material.

The role of software in the overproduction cycle of digital technology, however, is not so widely understood. Software is the soul of every microprocessor-based device, and it defines most of what it is like to use the device and how much of its potential can be used. Bad software can make even good hardware useless, whereas ingenious software can make even a humble device do things its original designer could never have imagined. It is possible to both lengthen and shorten product lifetimes via software.

New hardware is often marketed with new features that are not actually features of the hardware but of the software it runs. Most of the features of so-called "smartphones", for example, are completely software-based. It would be perfectly possible to rewrite the software of an old and humble cellphone to give it a bunch of features that would effectively turn it into a "smartphone". Of course, software cannot do the outright impossible; there is no software trick that makes a camera-less phone take photos. Nevertheless, the general rule is that hardware is much more capable than its default software. The more the hardware advances, the greater the contrast between the capabilities of the software and the potential of the hardware.

If we consider the various tasks for which personal computers are used nowadays, we notice that only a small minority of them actually demands much from the hardware. Of course, bad software may make some tasks feel more demanding than they actually are, but that's another issue. For instance, most of the new online services, from Facebook to Youtube and Spotify, could very well be implemented to run on the PCs of the late 1990s -- indeed, more smoothly than the existing versions run on today's PCs. Likewise, with better operating systems and other software, we could make the same old hardware feel faster and more comfortable to use than today's hardware. From this we can conclude that the computing power of the 2000s is neither useful, necessary nor pleasing for most users. Unless we count the pseudo-benefit that it makes bad and slow software easier to tolerate, of course.

Let us now imagine that the last ten years of personal computing went a little differently -- that most of the computers sold to the great masses had been "People's Computers" with a fixed hardware setup, meaning that hardware performance had remained constant for those ten years. The 2011 of this alternate universe would probably be somewhat similar to our 2011, and some things could even be better. All the familiar software programs and online services would be there; they would just have been implemented more wisely. Using the computers would have become faster and more comfortable over the years, but due to the improvement of software, not hardware. Ordinary people would never need to think about "hardware requirements", as the fixedness of the hardware would ensure that all software, services and peripherals work. New computers would probably be lighter and more energy-efficient, as the lack of competition in performance would have moved the competition to these areas. These are not just fringe utopian ideas; anyone can reach similar conclusions by studying the history of home computing, where several computer and console models have remained constant for ten years or more.

Of course it is easy to come up with tasks that demand more processing power than what was available to common people ten years ago, or even today. A typical late-1990s desktop PC, for example, plays ordinary DVD-quality movies perfectly but may have major problems with the HD resolutions that are fashionable in the early 2010s. Similarly, by increasing the numbers, it is possible to come up with imaginary resolutions that are out of the reach of even the most expensive special-purpose equipment available today. For many people, this is exactly what technological progress means -- an increase in numerical measures, the possibility to do the same old things on ever greater scales. When a consumer replaces an old TV with a new one, he or she gets a period of novelty vibes from the more magnificent picture quality. After a couple of years, the consumer can buy another TV and get the novelty vibes once again. If we had access to unlimited natural resources, it would be possible to go on with this vanity cycle indefinitely, yet without improving anyone's quality of life to any considerable extent.

Most of the technological progress facilitated by the personal computing resources of the 2000s has been quantitative -- doing the same old stuff that became possible in the 1990s, but with bigger numbers: editing movies and pictures that have ever more pixels, running around in 3D video game worlds that have ever more triangles. It is difficult to even imagine a computational task relevant to an ordinary person that would require the number-crunching power of a 2000s home computer by its nature alone, without any quantitative exaggeration. This could very well be regarded as an indicator that we already have enough processing power for a while. The software and user culture are lagging so far behind the hardware improvements that it would be better to concentrate on them instead and leave the hardware in the background.

Helplessness

In addition to the senseless abundance of material items, today's people are also disturbed by a senseless abundance of information. Information includes not only the ever-expanding flood of video, audio and text coming from the various media, but also the structural information incorporated in material and immaterial things. The expansion of this structural information manifests as the increasing complexity of everything: consumer items, societal systems, cultural phenomena. Those who want to understand the tools they use and the things that affect their lives must absorb ever greater amounts of structural information about them. Many people have already given up on understanding and just try to get along.

Many frown upon people who can't boil an egg or drive a nail into a wall without a special-purpose egg-boiler or nailgun, or who are not even interested in how the groceries come to the store or the electricity to the wall socket. However, the expanding flood of information and the complexification of everything may eventually result in a world where neo-helplessness and poor common knowledge are the normal condition. In computing, complexification has already gone so far that even many experts no longer dare to understand how the technology works, but prefer to guess and to try things at random.

Someone who wants to master a tool must build a mental model of its operation. If the tool is a very simple one, such as a hammer, the mental model builds up nearly automatically after a very short study. If someone who uses a hammer accidentally hits their finger with it, they will probably blame themselves instead of the hammer, as the functionality of a hammer can be understood perfectly even by someone who is not so capable at using it. However, when a computer program behaves against the user's will, the user will probably blame the technology instead of themselves. In situations like this, the user's mental model of how the program works does not match its actual functionality.

The more bloated a software program is, the more effort the user needs in order to build an adequate mental model of it. Some programs are even marketing-minded enough to impose their new and glorious features on the user, which doesn't help at all in forming the mental model. Besides, most users don't have the slightest interest in extensive exploration, but rather use a simple map and learn to tolerate the uncertainty caused by its crudeness. When we also consider that programs may change their functionality quite a lot between versions, even enthusiasts will turn cynical and frustrated when their precious mental maps become obsolete.

Many software programs try to fix the complexity problem by increasing the complexity instead of decreasing it. This mostly manifests as "intelligence". An "intelligent" program monitors the user, guesses their intents and possibly suggests various courses of action based on them. For example, a word processor may offer help in writing a letter, or a file manager may suggest things to do with a newly inserted memory stick. Users are offered all kinds of controlled ready-made functionality and "wizards" even for tasks they would surely prefer to do by themselves, at least if they had a chance to learn the normal basic functionality. If the user is forced to use specialized features before learning the basic ones, he or she will be totally helpless in situations where a special-purpose feature for the particular task does not exist -- just like someone who can use egg-boilers and nailguns but not kettles or hammers.

Technology exists to make things easier to do and to facilitate otherwise impossible tasks. However, if a technological appliance becomes so complex that using it is more like random guessing than goal-oriented control, we can say that the appliance no longer serves its purpose and that the technology has taken over the user. For this reason, it is increasingly important to keep things simple and controllable. Simplicity, of course, does not mean mere superficial pseudo-simplicity that hides the internal complexity, but the avoidance of complexity on all levels. The user cannot be in full control without having some kind of an idea of what the tool is doing at any given time.

In software, it may be useful to restructure the complexity so that there is a simple core program, from which any additional complexity is functionally separated until the user deliberately activates it. This would make programs feel reliable and controllable even with simple mental maps. An image-processing program, for example, could resemble a simple paint program at its core, and its functionality could be learned perfectly after a very short testing period. All kinds of auxiliary functions, automations and other specialties could easily be found when needed, and the user could extend the core with them depending on their particular needs. Still, their existence would never disturb those users who don't need them. Regardless of the level of the user, the mental map would always match how the program actually works, and the program would therefore never surprise the user by acting against his or her expectations.
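
As a minimal sketch of my own of what such a structure could look like (the program and its tools are invented for illustration): the core works on its own, and an extra feature takes part in the program's behaviour only after the user deliberately activates it.

    /* A "simple core" with functionally separated extras: dormant
     * features do not affect the program until explicitly activated. */
    #include <stdio.h>
    #include <string.h>

    typedef void (*tool_fn)(void);

    struct tool { const char *name; tool_fn run; int active; };

    /* Core functionality: always present, trivially learnable. */
    static void draw(void)       { puts("drawing a line"); }

    /* An auxiliary feature, dormant until the user asks for it. */
    static void smart_fill(void) { puts("running smart fill"); }

    static struct tool tools[] = {
        { "draw",       draw,       1 },   /* core: active by default */
        { "smart-fill", smart_fill, 0 },   /* extra: off by default   */
    };

    /* The user deliberately switches an extra feature on. */
    static void activate(const char *name)
    {
        for (size_t i = 0; i < sizeof tools / sizeof tools[0]; i++)
            if (strcmp(tools[i].name, name) == 0)
                tools[i].active = 1;
    }

    int main(void)
    {
        tools[0].run();              /* the core just works */
        activate("smart-fill");      /* extra complexity only on request */
        for (size_t i = 0; i < sizeof tools / sizeof tools[0]; i++)
            if (tools[i].active)
                tools[i].run();
        return 0;
    }

The essential point is that the dormant extras are invisible until asked for, so the user's simple mental map of the core remains valid at every level of expertise.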

Software is rarely built like this, however. There is not much interest in the market for movements that make technology genuinely more approachable and comprehensible. Consumer masses who feel helpless in regard to technology are, after all, easier to control than masses of people who know what they are doing (or at least think so). It is much more beneficial for the industry to feed the helplessness by drowning people in trivialities, distancing them from the basics, and perhaps even subjecting them to the power of an all-guessing artificially intelligent assistant algorithm.

Changing the world

I have now discussed all kinds of issues, for most of which I have blamed bad software, and for whose badness I have mostly blamed the economic system that idealizes growth and material abundance. But is it possible to do something about these issues? If most of the problems are indeed software-related, then couldn't they be resolved by producing better software, perhaps even outside of the commercial framework if necessary?

When calling for a counter-force to commercial software development, the free and open-source software (FOSS) movement is the one most commonly mentioned. FOSS has mostly been produced as volunteer work without monetary income, but as the results of the work can be freely duplicated and used as the basis of new work, it has managed to make a much greater impact than volunteer work usually does. The greatest impact has been among technology professionals and hobbyists, but even laypeople may recognize names such as Linux, Firefox and OpenOffice (the latter two of which, however, originated as proprietary software).

FOSS is not bound to the requirements of the market. Even in cases where it is developed by corporations, people operating outside the commercial framework can contribute to it and base new projects on it. FOSS therefore has, in theory, the full potential to be independent of all the misanthropic design choices caused by the market. In practice, however, FOSS suffers from most of these problems just as much as proprietary software does, and it even has a whole bunch of extra problems of its own. The reasons can be found in the history of the movement. Since the beginning, the FOSS movement has mostly concentrated on cloning existing software without spending much energy on questioning the dominant design principles. The philosophers of the movement tend to be more concerned with legal and political issues than technical ones: "How can we maximize our legal rights?" instead of "How should we design our software so that it benefits the whole of humanity instead of just the expert class?"

I am convinced that FOSS would be able to give the world much more than it already has if it could form a stronger contrast between itself and the growth-centric industry. In order to strengthen the contrast, we need a powerful manifesto. This manifesto would need to profoundly denounce all the disturbances to technological progress caused by the growth ideology, and it would need to state the principles on which software design should be based in order to benefit human beings and nature in the best possible way. Of course, this manifesto wouldn't exist exclusively for reinventing the wheel, but also for re-evaluating existing technology and redirecting its progress towards the better.

But what can ordinary people do? Even a superficial awareness of the causes of the problems is better than nothing. One can easily learn to recognize many types of problems, such as those related to response times. One can also learn to blame the right thing instead of superficially crying that "the computer is slow" or "the computer is misbehaving". Changes in language are also a nice way of spreading awareness. If people in general learned to blame software instead of hardware, they would probably also learn to demand software-based solutions to their problems instead of needlessly purchasing new hardware.

When hardware purchases are justifiable, those concerned about the environment will prefer second-hand hardware over new, as long as it is powerful enough for the given purposes. It is a common misconception to assume that new hardware always consumes less power than old -- actually, the trend has more often been exactly the opposite. During the ten years from the mid-1990s to the mid-2000s, for example, the power consumption of a typical desktop PC (excluding the monitor) increased tenfold, as the industry was more zealous about increasing processing power than about improving energy efficiency. The power consumption curves of video game consoles have been even steeper. Of course, there are many examples of positive development as well: CRT screens, for example, are worth replacing with similarly sized LCD screens, and laptops typically consume less power than comparable desktop PCs.

There is a strong market push towards discontinuing all kinds of service and repair activity. Especially in the case of cellphones and other small gadgets, "service" more and more often means that the gadget is sent to the manufacturer, which dismantles it for raw materials and sends a new gadget to the customer. For this reason, it may be reasonable to consider how amenable a piece of hardware is to do-it-yourself repair when choosing it. As all forms of DIY culture seem to be waning due to a lack of interest, it is worthwhile to support them in all possible ways, in order to ensure that there will still be someone in the future who can repair something.

Of course, we all hope that the world changes in such a way that the human- and nature-friendly ways of doing things become the most beneficial ones even in "the reality of numbers and charts". Such a change will probably take longer than a few decades, however, regardless of the volume of the political quarrel. It may therefore not be wise to wait indefinitely for the system to change, as it is already possible to participate in practical countercultural activity today -- even in things related to digital technology.
