The text is based on an article I have submitted for the Finnish Society for Aesthetics Call for Papers ("Environmental change and aesthetics"). It has not been published (yet). The length of the article is about 78000 characters. This is a translation of the Finnish original.
For decades, the development of information technology has been characterized by a very strong growth orientation, which is now coming to an end with the fading of Moore's Law and environmental change. Academic research in computing has only recently begun to wake up to the fact that there are limits to growth, and that a more fundamental paradigm shift is required to achieve sustainable computing; mere technical tinkering is not enough.
Growth-centricity has also dictated the development of digital esthetics, which will thus need to change as well. I suggest that the guidelines for this change should be sought in subcultures of computer art whose esthetic ideals are very different from the Maximalism and Virtualism of the mainstream – the self-serving glorification of the big and plentiful, and the hiding of the technical basis of things. I have chosen demo art, chip music, pixel graphics and glitch art as examples of these subcultures. The ideals of "digitality" are also being challenged by post-digitality, so I will also look at these subcultures through this concept.
I will conclude with reflections on the possible impacts of environmental change on digital esthetics and computing more generally, and on the ways in which computer art subcultures could play a pioneering role in these developments.
Since its early decades, the development of information technology has been dominated by the ideal of infinite growth. This growth has been given exceptional scope by Moore's Law, a phenomenon described by Gordon Moore (1965). According to Moore, the transistor density of microcircuits doubles every two years or so.
The increase in transistor density has allowed computing to become smaller, cheaper and more ubiquitous, which has also made digital esthetics ubiquitous. Obvious everyday examples include computer and mobile phone interfaces, website designs and video game graphics. But digital esthetics also goes deeper than the surface, such as the preferred resolutions and storage formats for digital photos and video content, or the type of software code that is considered elegant and appropriate.
This growth and ubiquity have also led to an increasing share of human societies' energy consumption and environmental impact being caused by the use and production of digital technologies. According to a report by the International Energy Agency, data networks and data centres accounted for about 1% of global electricity consumption in 2019, and this share is growing (IEA 2020). The material life cycles of equipment also have their own growing problems, from the extraction of "blood minerals" to the problems of electronic waste. Rich countries have been able to ignore these problems, as their effects are mostly felt in the Global South.
The growth ideal is everywhere in IT. In digital esthetics, it manifests itself in particular as Maximalism, which glorifies a maximally data-intensive representation of reality – for example, the highest possible resolution of video content and the photorealism ideal of game graphics. This means, among other things, ever larger file sizes, ever larger data traffic volumes and ever greater use of computing resources. At the same time, increasing numbers of users are spending more and more time on increasingly data-intensive applications. Streaming video content will account for an increasing share of the rapidly growing data traffic – the IEA (2020) estimates that this share will reach 87% in 2022.
Closely related to Maximalism is a phenomenon I call Virtualism: the growing complexity of information technology over the decades has required that its "dirty details" be hidden in "black boxes". Thus, both the users and the developers of the technology have been able to turn a blind eye to its material basis.
Virtualism has enabled information technology to be seen as a "clean technology" that solves environmental problems rather than causing them. In particular, digitalization has been imagined as enabling the decoupling of economic growth from the material economy. Perhaps largely because of this, the unsustainability problems of ICT have not been taken very seriously: Sustainable ICT only emerged as a field of research in the first decade of the 21st century, and its mainstream tends to explain the problems as exclusively technical (Fors 2019, 14). Planetary limits and the need for paradigm shifts are only recognized on the fringes of the research field, for example in the Computing within Limits workshops.
So far, Moore's Law has been able to prevent the exponential quantitative growth from manifesting as exponential growth in energy consumption. This has contributed to the belief among researchers that sustainability problems are technologically solvable and that a more profound shift is unnecessary. For example, the energy efficiency of data networks has grown at about the same exponential rate as the volume of data traffic, so the energy consumption of the networks has remained fairly constant (IEA 2020). However, this "balance of terror" may not last much longer: the end of Moore's Law has been forecast for around 2025 (Theis & Wong 2017), and this can be expected to lead to a sharp increase in energy consumption unless the growth of data flows is halted. Emerging computing technologies are unlikely to provide a panacea – for example, quantum computers are powerful for applications such as cryptography, but are not suitable for conventional data stream processing.
At the same time, we are facing a global condition where energy consumption and waste production need to be drastically reduced. This means that in IT, too, it is not enough simply to curb or stop the growth in data volumes, but to move from growth to contraction. This means a direct reversal of both the technological relationship and the esthetic ideals that have been ingrained in IT over the decades. This is, in my opinion, the biggest change that IT and digital esthetics will have to undergo as a result of environmental change.
I am a demoscene artist. Demo art emerged in the mid-1980s with the rise of home computers as part of a wider subculture of young computer enthusiasts. Its subcultural nature allowed it to develop its own particular esthetic basis, which is in many ways antithetical to the mainstream of computing. The value system of the demoscene emphasizes, among other things, the display of skill and ingenuity in very modest technical settings, such as the use of old computers and extremely small file sizes. These technical settings are also an essential determinant of the demoscene esthetics; I therefore consider it obvious that environmental change, when manifesting as technological change, will also redefine digital esthetics, and that the demoscene and some other subcultures of computer art can lead the way in this process.
In CS research, the nature of limits-aware computing has mostly been speculated on in the context of Computing within Limits. However, Computing within Limits has so far hardly addressed the esthetic dimensions of its questions, while the subcultures of computer art have generally not given much thought to their relationship with environmental change.
In the article, I will first describe in more detail what I mean by Maximalism and Virtualism. I will then bring these into discussion with Computing within Limits, the subcultures of computer art and the concept of post-digitality. Finally, I will present my own proposals for the future of digital esthetics in a dramatically changing world.
In this article, Maximalism refers to the mindset that quantitative growth is unequivocally a good and desirable thing, whether in terms of the size of systems or the amount of resources they consume. It is not related to the artistic or political concepts of maximalism, but is more akin to the idealization of infinite economic growth.
In the esthetics of information technology, Maximalism manifests itself in particular as the estheticization of size and abundance. For example, images contain more and more pixels, colors and details, which require more and more storage and computing capacity. Whenever a technical invention enables more to be done with less, it is used to accelerate quantitative growth – to do more with more. This phenomenon is called the Jevons paradox.
I also associate Maximalism with the notion that technological progress is inextricably linked to quantitative growth and is not possible without it – in the same way that mainstream economic thought imagines GDP to be directly linked to welfare. Because of this belief, the search for new computing breakthroughs has a disproportionate bias toward increased memory capacity and computing speed.
Deep learning language models such as OpenAI's GPT neural networks serve as a recent example of this. In 2018, the largest models had just over a hundred million parameters (each parameter is a numerical value that expresses, for example, the strength of the coupling between two "neurons" in a neural network). Two years later, the GPT model had ballooned to 175 billion parameters (GPT-3), and at the time of writing, the largest language models already have over one and a half trillion parameters (Google Switch-C) (Wiggers 2021). The amount of computation involved in deep learning neural network research has doubled every few months – in 2018, this computation was three hundred thousand times as much as in 2012 (Schwartz et al. 2019).
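The growth rate reported by Schwartz et al. implies a doubling time of roughly four months. A small back-of-the-envelope check (my own illustration; the numbers, but not the calculation, come from the cited source):

```python
import math

# Schwartz et al. (2019): the compute used in deep-learning research
# grew roughly 300,000-fold between 2012 and 2018. The implied
# doubling time T satisfies 2 ** (years / T) = growth_factor.
growth_factor = 300_000
years = 6.0
doubling_time_months = 12 * years / math.log2(growth_factor)
print(f"doubling time: {doubling_time_months:.1f} months")  # about 4 months
```

For comparison, Moore's Law doubles transistor density only about every 24 months, so this growth in computation has far outpaced what hardware efficiency gains alone could absorb.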
Another example of Maximalism is cryptocurrencies such as Bitcoin, whose Proof-of-Work value-creation system is based on a maximal quantity of useless computation – in effect, maximal waste of energy. Although cryptocurrencies are not yet very widely used, their "mining" already accounted for around 0.3% of global electricity consumption in 2019 (IEA 2020). Bitcoin's link to environmental destruction is sometimes very direct – for example, there is a gas-burning power plant in New York bought by Bitcoin enthusiasts to power their energy-intensive mining calculations (Hill 2021).
Unabashed Maximalism, although it has gone out of fashion in many other areas of technology and economics, still dominates IT. This is largely because the resources maximized in computing, such as transistors, have ever smaller physical manifestations thanks to Moore's Law. In the "bit world", it is therefore possible to hold attitudes towards growth that, transferred to the material world, would be roughly equivalent to the limitless and explosive expansion of human civilization into outer space.
In everyday computing, the Maximalist esthetic is often associated with the pursuit of the most accurate, sharp and realistic image possible, for example in games or video content. This is a relatively unimaginative goal – everyone knows what reality looks like – and at the same time an impossible one: the sharp-eyed will learn to distinguish even the most faithful imitations from the real thing. This makes it convenient for marketing: it's easy to make "previous generation" photorealism look outdated by drawing the viewer's attention to its flaws. As each new generation uses more hardware resources, it also serves as a way to sell new microchips and devices to replace old ones. Increasing numerical values are also used in marketing – "1080" sounds ridiculously old-fashioned as a screen resolution compared to "4K", even if the buyer doesn't even notice the difference.
The history of video games is usually told as a kind of technological march of progress and obsolescence, with successive generations of increasingly powerful hardware bringing the gaming esthetic closer and closer to photorealism. Even texts that aim to shake up perceptions of video games, such as Juho Kuorikoski's Pelitaiteen manifesti (2018, 33-52), advocate this point of view.
The Maximalism of the games industry is also reflected in changes in language: in the 1980s, the word "gaming machine" (pelikone) was often a pejorative term. A machine that was not suitable for serious professional use because of its lack of features was a mere gaming machine – games were the least of the uses to which any small microcomputer could be put. In the 21st century, on the other hand, a "gaming machine" is expensive, powerful and in need of constant upgrades to stay that way. Similarly, one might consider the term good graphics, which used to refer to, for example, skilfully drawn and esthetically pleasing pixel images and well-programmed graphics routines, but in the 21st century has come to refer more to the technical graphics capabilities of the hardware and how well the game supports them.
There have been attempts to take steps towards photorealism even in areas where it is poorly suited, such as computer user interfaces. For example, in the early 2000s, Apple's user interface design guidelines (2001, 191-192) advised favouring a "photo-illustrative" style for icons that "approximates the realism of a photograph". Since then, Apple has softened its recommendation and started to emphasize simplicity and clarity.
Quantitative growth is often accompanied by an increase in complexity. In order to build and operate increasingly complex systems, there must be ways to manage complexity – for example, when managing large numbers of people, it is practical to blur their individuality and think of them as uniform 'pawns'. In IT, a typical approach is to hide complexity behind layers of abstraction, making the whole appear simpler and "cleaner". The Internet, for example, appears to the average user as a set of web pages with no physical location, rather than as a set of interconnected server machines located around the world – the web pages are a layer of abstraction that obscures the network itself. The abstraction may also give a misleading impression of what it represents: for example, a virtual server looks like a physical server machine to programs and users, even though it is only a program that imitates one.
Abstraction can be seen everywhere in society, just like Maximalism, but IT has provided exceptionally good conditions for it to grow. According to the Church-Turing thesis, a foundational principle of computer science, all "universal" models of computation are equivalent to each other: in theory, any computer can perform any task of any other computer, given enough time and memory. A powerful machine can thus stack up a huge number of layers of abstraction, each of which disguises the layers underneath as something else.
In this article, Virtualism refers to the ideal where the hiding and disguising of systems' inner workings is self-serving and extreme. In the name of user-friendliness, ordinary users in particular are shielded from all "unnecessary" details: error messages often no longer tell us anything other than that something went wrong somewhere. The users of a smartphone application may find it impossible to know which of their files are stored on the device itself and which are on an external server. Even the physical nature of the phone is Virtualist – its monolithic nature and lack of moving parts keep its internal structure as hidden from the user as possible.
Virtualism is a word of my own choosing for a phenomenon for which I have not found a good existing name, but which was already criticized by the media theorist Friedrich Kittler (1992) in the early 1990s. For Kittler, the ultimate purpose of the layers of abstraction in computer science is to reinforce the notion that computer programs have an existence independent of the physical machines that run them. Kittler's views have inspired media scholars to pay particular attention to the materiality of media and thus, among other things, to their environmental impact – while marketing discourse still cherishes Virtualist illusions about the immateriality and cleanness of digitality.
Maximalism and Virtualism support each other. When, for example, the number of pixels and bandwidth of a video image are maximized, its technical nature is obscured – it is no longer possible to tell that the image is made up of pixels, or that the structure of the image is being simplified during transmission. According to digital artist Rosa Menkman (2011, 13-14), the ideal of perfect, mirror-like, noise-free data transmission is a kind of holy grail that can be approached indefinitely but never reached. Glitch art, practiced by Menkman, critiques this ideal by highlighting the glitches and errors, thus also revealing hidden layers of abstraction.
Google image searches for the words digital and virtual bring up similar imagery, suggesting that the stereotypical ideas of "digital" and "virtual" are quite close: bluish patterns and figures made up of streaks and spots of light, floating immaterially in the air. The background is usually blue, and if the image shows a physical environment, it is artificial, sterile and often also blue.
The esthetics of Virtualism are also reflected in IT metaphors. Borning, Friedman and Gruen (2018) mention the word cloud, which conjures up images of something light and intangible (likewise floating against a blue background), when in reality it refers to large, energy-hungry data centres that rent out storage and computing capacity, commoditized in such a way that customers do not have to care about the physicality of the servers at all.
One difference in the results of the image searches is that the search term "virtual" brings up virtual reality headsets that replace a person's normal visual perception of their environment with a computer-generated world. A kind of opposite of virtual reality is augmented reality, where the sensory reality is not hidden but augmented by computer-generated elements. For example, there might be an AR application that identifies plants and mushrooms that the user sees in the wild, thus enhancing their experience of nature. Augmented reality can thus be "Antivirtualist", connecting the user more strongly to reality rather than alienating them from it. Coles and Pasquier (2016) have found that digital eco-art, implemented using portable media hardware, could create profound experiences of the oneness of humanity and nature in some subjects – even those who had initially doubted the whole idea.
Although Coles and Pasquier talk about the interconnectedness of human, technology and nature, they do not comment on the environmental impact of the devices they use in their project. This is, in my view, a very common blind spot created by Virtualism. This blind spot should be countered by an Antivirtualism that exposes how the material lifespans of technology relate to the natural reality.
Sustainable ICT emerged as a field of research in the first decade of the 2000s, when the environmental impact of information technology became a major concern. Prior to this, ICT had generally been seen as "clean technology", solving rather than causing environmental problems, and playing an important role in making society more sustainable. In particular, in its "immateriality" it was seen as enabling the decoupling of economic growth and material consumption.
However, there are major problems in the study of sustainable computing, discussed in Per Fors' PhD thesis (2019, 13-19). Sustainability is often reduced to a set of technological goals (e.g. reducing energy consumption and toxic chemicals), whose impact is subsequently cancelled out by rebound effects (such as the afore-mentioned Jevons paradox). At the same time, the social and societal dimensions of technology are oversimplified, for example by assuming that people are independent and rational "homo economicus" actors. In this way, the mainstream of Sustainable ICT tends to reinforce the current trajectory rather than support the transition to a more sustainable way of life.
However, a fringe has emerged where the ecological, material, energetic and social limits of computing are taken seriously. This fringe has met annually at the Computing within Limits workshops since 2015. Daniel Pargman (2015), one of the founders of the event, notes that the mainstream of sustainable computing research had treated concepts such as peak oil with disdain or outright denialism, and a break from the mainstream was therefore necessary.
One of the founders of Computing within Limits is Bill Tomlinson, who launched the concept of Collapse Informatics in 2013. The purpose of Collapse Informatics is to explore and plan in times of abundance what computing could be like in future times of scarcity (Tomlinson et al. 2013). Indeed, many of the papers published in the early Limits workshops represent Collapse Informatics, considering scenarios in which the current IT infrastructure or semiconductor industry no longer exists, or in which the availability of IT resources slowly dwindles (Jang et al. 2017; Penzenstadler et al. 2015).
Eriksson and Pargman (2018) have proposed the use of counterfactual history – in this case, mapping the possible developments of information technology in a "Coalworld" where only half the actual amount of oil would have been available. According to Eriksson and Pargman, alternative histories are particularly welcome because it is difficult for many people to imagine an information technology in which the explosive growth made possible by Moore's Law is not an integral part.
While the early articles were often characterized by bleak pessimism, later articles have framed the outlook and proposals for development in a positive light. For example, it is better to justify the moderation of digital entertainment on the grounds of well-being and meaningful time use than by appealing to the finiteness of the planet (Widdicks & Pargman 2019). In the introductory text of the website of the 2021 event, Computing within Limits was already calling for "submissions that move us closer towards computing systems that support diverse human and non-human lifeforms within thriving biospheres".
New paradigms of computational thinking are the subject of many articles - for example, benign computing seeks alternatives to abstraction-oriented ways of designing information systems (Raghavan 2015). Inspiration has also been sought from more general sustainability thinking and sustainable design trends – for example, Dominique Hes and Chrisna du Plessis' (2015) regenerative sustainability has given rise to the idea of regenerative computing (Mann et al. 2018). In 2021, both the event organizers and many participants drew inspiration from Arturo Escobar's (2017) book Designs for the Pluriverse, which takes a critical and decolonial perspective on design.
The borrowing of ideas from alternative thinkers is by no means a new phenomenon in computing – for example, Ivan Illich's Tools for Conviviality (1975) has inspired an approach called convivial computing (Fischer & Lemke 1987), which emphasizes the possibilities for users to shape the system to their own needs. Even earlier, the same book had inspired Lee Felsenstein, the designer of the Osborne 1, the first successful portable computer (Levy 1984, 179-181).
Marloes De Valk (2021), who has mapped out concepts of sustainable computing, has observed that radical approaches are currently being explored and practised at many different margins, using different names, and are only just discovering each other. The names used include small technology, salvage computing, appropriate technology and low-tech, and many practitioners also feel a connection to the solarpunk movement. Also included in De Valk's selection is the concept of permacomputing, which I launched myself, envisioning a different kind of computing based on ideas from permaculture (Heikkilä 2020).
References to esthetics are still quite rare in this fragmented field, and very few articles on Computing within Limits, for example, touch on digital esthetics in any way. Some practitioners have had to consider it to some extent, however – for example, the builders of local solar-powered web servers: to ensure that the available energy suffices, websites need to be as light as possible. The solar server version of Low Tech Magazine favors images with minimal color (Abbing 2021), while the Solar Protocol project changes the layout of content depending on the amount of energy the server has available (De Valk 2021).
Tomlinson et al (2013) suggest that Collapse Informatics should explore the ways in which information technology is already being used by groups who have voluntarily marginalized themselves from the norms of the consumerist world. Survivalists and freegans are given as examples of such groups. Personally, I would prefer to also include less voluntarily marginalized groups, especially ones from the Global South.
In considering what digital esthetics looks like in a world where there is less room for Maximalism and Virtualism, we can explore, in the spirit of Collapse Informatics, the subcultures of computer art where different ideals prevail – those in which, for example, old equipment or even electronic waste plays a central role, and in which the technological relationship and esthetic ideals are very different from the mainstream. It can also be seen as an advantage if the subculture is relatively independent of the tools, settings and phenomena of the industrial mainstream culture.
In what follows, I will examine art forms and subcultures such as demo art, chip music, pixel graphics and glitch art. Since some of these have links with the early computer hacker subculture, I will start by discussing it. I will also reflect on the concept of "Post-digital" and related phenomena that emerged in the 21st century.
There has been computing-related subcultural activity since the 1950s. As an early example, Steven Levy (1984, 22-27) mentions the enthusiast community of students at the Massachusetts Institute of Technology centered around their TX-0 computer. This community already saw that computers could be used to create art and beauty; this was quite counter-cultural at a time when computers were mostly approached with cold utilitarianism. Levy identifies this idea as one of the seven principles of hacker ethics (1984, 35-36).
The hacker esthetic of the early decades was characterized by esoteric incomprehensibility: hackers valued, for example, the kind of elegance or ingenuity of code that could not be justified from a utilitarian point of view and was incomprehensible to outsiders. When Peter Samson, one of the TX-0 hackers, created a musical program for the machine that played Bach melodies in monophonic beeps through a speaker connected to the machine's accumulator register, most people saw only that the multi-million-dollar machine was being used like a cheap toy piano. The hacker community, on the other hand, saw an interesting hack to make the computer do something it was not intended to do (Levy 1984, 27-28).
The Jargon File (Raymond 2004), which documents hacker culture and its concepts and which began to be compiled in 1975, names hack value as a characteristic of hacks, but does not even bother to define it – to Raymond, it is something that cannot be explained, and which often appears to outsiders as useless tinkering. However, the dictionary even gives a formula for the hack value of display hacks that generate graphics on a display screen: multiply the beauty of the images produced by the program by its algorithmic ingenuity and divide the result by the length of the program. But again, it is not clear what the beauty itself consists of. Hack value defined in this way nevertheless has a clear connection with the elegance that a mathematician sees in theorems that are short and concise yet profound.
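Written out as a formula – my own informal rendering, as the Jargon File states it only in words – the display-hack rule of thumb reads:

```latex
\text{hack value} \approx
  \frac{\text{beauty of output} \times \text{algorithmic ingenuity}}
       {\text{length of program}}
```

Note how the denominator rewards density: the same output produced by a shorter program scores higher, much like the mathematician's preference for proofs that are short but profound.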
Psychologist Sherry Turkle (1984, 201-204), who has studied computer culture since the 1970s, describes hacker esthetic as "anti-sensual": what a thing looks or sounds like on the surface can be completely ignored, because only its inner elegance and ingenuity matter. One hacker Turkle met listened to classical music very analytically, looking for "recursive phenomena" in it, using Douglas Hofstadter's book Gödel, Escher, Bach as a guide, but ignored the fact that the faulty stereo equipment was distorting the sound in a downright painful way. Turkle also tells the story of a musician who ended up in hacker circles and found it difficult to find a common language with hackers: although music was close to many hackers, they spoke of it in terms of its structural rather than sensual beauty.
Today, hacker culture has taken many forms – there are hackers focused on security and data breaches, electronics hackers who gather at hacklabs, and software developers who gather at hackathons. The most direct continuity of tradition, however, is probably represented by open source culture, whose esthetics are still described by anthropologist Gabriella Coleman (2013, 97-101) with a rather "anti-sensual" emphasis: for example, programming language and code may have their own beauty, and hacking as an activity may have its own esthetics, but Coleman does not seem to refer even once in her book to the more superficial layer that is more understandable to outsiders.
A major hacker project of the 1970s was the personal computer, which became the dominant form of computing in later decades. Hackers had already noticed that direct human-computer interaction could be a consciousness-expanding experience – something that appealed to the counterculture of the time: in 1972, Stewart Brand, founder of Whole Earth Catalog, considered the arrival of computers to the public "perhaps the best news since psychedelics" (Gere 2002, 125). The 1980s saw the birth of the Free Software movement and the GNU project, which was complemented in the 1990s by the Linux kernel. Today, a significant proportion of software in use is open-source.
Hacker culture has traditionally been a subculture of computing students, researchers and professionals. As key hacker projects have become mainstream, there has also been a merging of values. In California's Silicon Valley, the mixing of hackers, hippies and capitalists has given rise to a so-called Californian Ideology (Barbrook & Cameron 2007), which takes the ideas of endless economic growth and virtualization to ridiculous extremes (including the already mentioned utopia of explosive expansion into outer space). Over the years, Linux operating systems have ballooned in size in much the same way as their commercial rivals, and the traditional hack value of admiring smallness and elegance survives only in individual parts of the ballooning whole. While there are minimalist hacker projects, they often suffer from the self-serving obscurity and esotericism of the original hacker culture, making them impenetrable to non-hackers.
Demo art was born in the second half of the 1980s, by which time the dreams of the previous generation of hackers about personal computers had already been realized. In the rich Western Bloc countries, there was already a computer "in almost every home" – at least in those with school-age male children. There were several models of affordable "home computers" on sale, all incompatible with each other, so games and other software could only be exchanged between owners of the same model. In Finland, the other Nordic countries, Germany and some other countries, the most popular machine was the American Commodore 64.
By this time, home computers were already being used to make commercial art – games – so early demo art defined its esthetics in relation to them: young enthusiasts wanted to create visually striking and good-sounding things that were at least as good as those in the games. They also wanted to impress the less tech- and programming-savvy users – good demos were the ones that even the gamers wanted to share with each other. Programming elegance was meaningless unless it manifested in some outwardly noticeable way, such as an unprecedented graphical trick or the small size of a program. In the early days, demos were linked to breaking copy protection in games, and dense code was needed for the crack-intros that were added to the beginning of games as "graffiti" of the pirate groups.
Demo artists form groups, which in the early days were mostly local circles of friends. Initially, physical "snailmail" was used for keeping in contact with groups in other localities and for distributing demos, but it was superseded in the 1990s by modems and the Internet. This also made it easier for groups to function regardless of their members' locations. However, face-to-face meetings are still an important part of the subculture. A few demo events grew in size and began to attract young people interested in computing in general. In the 1990s, such major events included The Party in Denmark, The Gathering in Norway and Assembly in Finland. Assembly is still an internationally significant demo event, even though it is nowadays mainly seen as a gaming and e-"sports" event.
Competition between groups to find new technical tricks motivates the demomakers to explore the devices they use at ever deeper levels to circumvent their limitations. This can be seen as the root cause of why the demoscene's values and relationship with technology began to diverge from mainstream computer culture in the 1990s. Demos were strongly linked to a particular type of machine and its technical limits, which made it difficult to judge demos made for newer and more powerful machines: the standards were entirely different.
A particular characteristic of the hardware relationship of the demoscene is that there is no obsolescence. Although new machines are always sooner or later accepted (Reunanen and Silvast 2009), they never replace the classic home computers such as the Commodore 64 or Amiga, which have continued to be supported by major demo events decade after decade. Discontinued devices have been kept in working order by hobbyists and, where necessary, by developing replacement components. In the former Eastern Bloc countries, a large proportion of hobby computers in the 1980s were built by hobbyists themselves, so the local demo artists also came to have a stronger relationship to the material basis of the technology they were using (Elfimov 2008).
Demos are also made on equipment that is unique or otherwise exceptional – for example, home-built circuit boards or unusual "found objects". One demo, for instance, used a decommissioned timetable display of the Helsinki metro as its display device. Daniel Botz (2011, 387-390), who has studied the esthetics of the demoscene, has observed that for the demoscene artist, computers and other devices are not tools but material from which the work is shaped. Although demos are often considered art, the centrality of technology and material has also led to approaching demos as a craft (Hansen et al. 2014).
However, the vast majority of demos have always been made for "ordinary everyday computers", which since around the mid-1990s has mostly meant IBM PC compatibles running Microsoft operating systems. The problem with these, however, is that new, more powerful components come onto the market faster than their potential can be explored. In the early 1990s, the problem of constantly shifting technical boundaries began to be compensated for by a new type of competition, the intro compo, which limited the file size of the work to, for example, forty kilobytes or four kilobytes. The file size constitutes a clear and permanent limit, whose expressive potential can be explored and pushed year after year, decade after decade.
In the 21st century, the importance of intros has increased, with a shift towards ever smaller works. According to a popular saying, "4k is the new 64k", and even smaller sizes such as 1024 or 256 bytes have become established alongside 4096 bytes (Reunanen 2013). Lovebyte, an online demo event held for the first time in 2021, focused only on sizes of 256 bytes and below, and to the surprise of the organizers, hundreds of works were submitted. At the same time, mainstream computing has seen software bloat – for example, many popular video games are now over 100 gigabytes in size (Ridley 2020). If you compare a hundred gigabytes to the weight of a 200-tonne blue whale, 64 kilobytes would be roughly equivalent to an ermine (128 grams) and 256 bytes to a large bumblebee (0.5 grams). Similarly, one could compare the entire collection of books in a large library to a single short story or poem.
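The arithmetic behind the whale comparison can be checked in a few lines. The sketch below assumes decimal gigabytes (10⁹ bytes) and the 200-tonne whale mentioned above:

```python
# Scale check for the whale comparison: map bytes to grams so that
# 100 gigabytes corresponds to a 200-tonne blue whale.
BLUE_WHALE_G = 200 * 1000 * 1000             # 200 tonnes in grams
HUNDRED_GB = 100 * 10**9                     # decimal gigabytes
GRAMS_PER_BYTE = BLUE_WHALE_G / HUNDRED_GB   # 0.002 g per byte

def to_grams(n_bytes: int) -> float:
    """Weight of a file on the blue-whale scale."""
    return n_bytes * GRAMS_PER_BYTE

print(to_grams(64 * 1024))  # 64 kilobytes -> ~131 g, an ermine
print(to_grams(256))        # 256 bytes    -> ~0.5 g, a large bumblebee
```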
Demos have often been compared to music videos, as most of them have graphic events that are precisely synchronized with the events of the audio track. However, demos are not videos – sequences of pre-recorded images – but executable computer programs that produce the same performance in real time at each run.
There are many different technical and esthetic approaches to demo art, but one that is particularly typical is what I have called Optimalism (Heikkilä 2010). Optimalism often has the same goals as Maximalism – for example, the realism of 3D graphics – but they are achieved within strict minimalist constraints: getting an old machine to do things it is supposedly underpowered to do, or cramming massive landscapes into a ridiculously small space. An example of the latter is Íñigo "IQ" Quílez's four-kilobyte PC demo Elevated (YouTube) from 2009.
Optimalism can take many forms, and often involves fooling the viewer. For example, blurring and lighting techniques can be used to hide flaws and fool the viewer into thinking there are more pixels and detail in the image. This approach is very visible in the Amiga demo Lapsuus by Maturefurk (YouTube).
Some techniques may only work in special cases – for example, 3D objects that can only be rotated around a specific axis. Many of the techniques have parallels in the tricks of non-digital cinema: shadow and lighting techniques, for example, were used in film noir, where they often stemmed from limited budgets but became an esthetic cornerstone of the genre.
Both the characteristics of low-tech devices and the artificial size limits are called limitations in the demoscene. I have considered this to be a somewhat one-sided term, as the constraints are also enablers and sources of creative inspiration for the demo artist – in the same way as they are for, say, OuLiPo writers: they may even help to discover ideas that would otherwise go unnoticed.
While most demos do not really communicate anything, the medium itself often carries a strong message – a demonstration that new technical and artistic discoveries do not require an ever-increasing framework. If new tricks can be extracted from computers that are several decades old, why does the mainstream think that technological progress requires constant hardware upgrades? For me, as a demo artist, another important aspect of the choice of medium and extremely small files is the critique and ridicule of the prevailing Maximalist culture.
The success of the Internet since the mid-1990s has created space for a wide range of computer art. Around the turn of the millennium, Internet art communities specializing in chip music and pixel art appeared. These are types of art also practiced by the demoscene, but the new venues took a somewhat different approach.
"Chip music" as a term originates from the demoscene and refers to the sound chips of old computers and game consoles, which synthesize their sound from simple "bleepy" waveforms. The term encompasses both music made for devices using such a chip, such as the Commodore 64 or the NES, and music that sounds like it but is made with other technologies. The term arose from the need to distinguish this music from the "bigger" computer music of machines such as the Amiga, which uses digitized audio samples of traditional musical instruments.
Chip musician Anders Carlsson (2010) identifies the micromusic.net online community, which also acts as a record label, as the originator of the "chipscene" of the 2000s. The music released through this community is characterized by an appreciation of the "bleepiness" of the sound and a punk-inspired playfulness. This is in stark contrast to the chip music of the demoscene, which tends to move as far away as possible from the "bleepiness", seeking the richest possible expression within a limited framework.
Carlsson distinguishes two opposing approaches in "hacker esthetics" – transgression and immersion. What both have in common is that the medium guides creativity (as in hacker culture in general), but they differ in how the characteristics of the medium are treated. To summarize, transgression focuses on the atypical – what the tool can do when taken to its limits – while immersion focuses on the typical – what is most characteristic of the medium as it is. Broadly speaking, the demoscene can be seen as favoring transgression, while the "new scenes" favor immersion.
Many chip musicians value authenticity – the use of real hardware and sound chips rather than software imitations. This is particularly important at gigs, where the equipment is prominently displayed. A very popular instrument among gigging chip musicians is the Game Boy handheld console, originally released by Nintendo in 1989, which usually runs the LSDj music program developed within the community itself.
Although modern computer monitors represent all graphics as pixels, pixel art refers specifically to graphics whose pixels are deliberately placed in specific locations by the artist (Silber 2016, xv). This is in contrast to, for example, 2D or 3D vector graphics, or digital image editing software such as Photoshop, which generate the pixel patterns automatically based on the user's choices. Pixel art generally restricts the number of pixels and colors in order to keep the artist's workload reasonable. If the art is made on a low-end device, such constraints come with the device – for example, someone drawing graphics on a Commodore 64 has to commit to its built-in 16-color palette.
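To illustrate what committing to a fixed palette means in practice, the following Python sketch snaps an arbitrary RGB color to the nearest entry of a 16-color palette. The RGB triples below are only one common approximation of the Commodore 64 colors: the machine's analogue video output has no canonical RGB values, so any such table is a convention.

```python
# Snapping an RGB color to the nearest entry of a fixed 16-color
# palette -- the constraint a Commodore 64 artist commits to.
# These RGB values are an approximation, not canonical hardware data.
C64_PALETTE = [
    (0x00, 0x00, 0x00), (0xFF, 0xFF, 0xFF), (0x68, 0x37, 0x2B), (0x70, 0xA4, 0xB2),
    (0x6F, 0x3D, 0x86), (0x58, 0x8D, 0x43), (0x35, 0x28, 0x79), (0xB8, 0xC7, 0x6F),
    (0x6F, 0x4F, 0x25), (0x43, 0x39, 0x00), (0x9A, 0x67, 0x59), (0x44, 0x44, 0x44),
    (0x6C, 0x6C, 0x6C), (0x9A, 0xD2, 0x84), (0x6C, 0x5E, 0xB5), (0x95, 0x95, 0x95),
]

def nearest(color, palette=C64_PALETTE):
    """Return the palette entry closest to `color` in plain RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

print(nearest((250, 250, 250)))  # near-white snaps to (255, 255, 255)
```

A real pixel artist, of course, does this mapping by eye and often exploits the "wrong" nearest color deliberately; the code only shows the mechanical core of the constraint.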
The difference between transgression and immersion seen among chip musicians is also evident in pixel art: in the demoscene, for example, it is typical to smooth out pixel boundaries with intermediate colors (antialiasing), whereas the artists of the 21st-century pixel forums are more likely to display the pixel clusters sharply. This may also be due to the fact that the pixels on modern LCD screens are always clear and sharp, and cannot blend together as they did on the CRT screens of analogue TV sets, on which the demoscene tradition is based.
Demoscene-type Optimalism in pixel art: Frog, Landscape and a Lot of Clouds by Oys (Commodore 64 multicolor-mode picture, 2006).
21st-century pixel art style: Valenberg's background graphics for the game VirtuaVerse.
Pixel art is an extremely basic form of digital art – the kind that any rudimentary drawing program or digital image-producing device can do – but it is also the form of digital art that is least connected to digital technology. Since pixels are primarily placed by hand, pixel art can be just as easily drawn on, say, grid paper, or created using traditional craft techniques such as cross-stitching. This also allows the history of pixel art to be extended well into the past – as well as into a possible future in which information technology is no longer used.
Pixel graphics have grown in popularity during the 21st century, particularly in low-budget indie video games – largely because they can achieve a satisfactory result with less work than, say, high-end 3D graphics. Daniel Silber (2016) also cites the timelessness of the style, and the iconic and memorable nature of its simplicity. Of course, for many gamers, pixel art is also nostalgic, although Silber points out that it is also a sensible and modern choice: pixel graphics are more accessible to creators, and the file sizes are much smaller than in the world of 3D graphics.
While pixel art highlights the basic elements of the digital image that people do not want to see, glitch art focuses on the malfunctions of software and hardware that are even more undesirable. Glitches are deliberately created, for example, by short-circuiting digital devices (circuit bending) or by messing up the contents of files (data bending).
The devices used in circuit bending are often electronic sound toys based on microchips, but not intended to be reprogrammable by the end user. Such devices are also of interest to chip musicians, for example, but unlike the traditional hacker approach, circuit bending relies more on random experimentation than on understanding the technical workings of the device. In any case, in musical use, the aim is to produce sounds that are within the capabilities of the equipment but which the end user is not supposed to be able to access.
An early example of digital glitch art is Jamie Fenton and Raul Zaritsky's 1978 media art work Digital TV Dinner, in which the image and sound of a malfunctioning video game console are recorded onto a video cassette (Betancourt 2017). However, the word glitch was first associated with art only in the context of the glitch music style of the 1990s (Cascone 2000), and the first visual artist to call their work glitch art was Ant Scott in 2001-2005 (Menkman 2011, 34).
Discarded electronics are of particular interest to circuit benders. Artist Garnet Hertz and media researcher Jussi Parikka (2012) use the name "zombie media" for media that has been returned to use after its "death". Media archeology is media research focusing particularly on old and "dead" media, and Hertz and Parikka suggest that circuit bending and other hardware hacking that creates zombie media can help media archeology uncover potentials of media that were not revealed during their lifetime. It can also shed light on how electronic waste can be used to create new kinds of devices for new purposes. The works of glitch and media artist Benjamin Gaulon often highlight the life cycles and reuse of electronics.
For glitch researcher Rosa Menkman (2010), an essential feature of glitches is that they reveal things about the medium that someone wants to keep hidden. For example, defacing the binary content of a JPEG image file creates glitches with grid-like boundaries, revealing the grid-based nature of the JPEG storage format. Menkman describes the glitches typical of different formats with technical descriptions in A Vernacular of File Formats (2010).
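The defacing described above requires no special tools. The sketch below is a generic data-bending routine in Python; the filenames are hypothetical, and the only assumption is that the first few hundred bytes – which in a JPEG hold the markers and tables needed for decoding – are left intact so that the file still opens:

```python
import random

def databend(data: bytes, n_glitches: int = 20, skip: int = 512) -> bytes:
    """Corrupt n_glitches distinct bytes, skipping the first `skip` bytes
    so that the file header survives and the image still decodes."""
    out = bytearray(data)
    for pos in random.sample(range(skip, len(out)), n_glitches):
        out[pos] ^= random.randrange(1, 256)  # XOR with a nonzero byte
    return bytes(out)

# Usage (hypothetical filenames):
# with open("photo.jpg", "rb") as f:
#     glitched = databend(f.read())
# with open("photo_glitched.jpg", "wb") as f:
#     f.write(glitched)
```

Because JPEG encodes the image block by block, a single flipped byte typically disturbs everything from that block onward – which is exactly where the grid-like boundaries Menkman describes come from.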
Compression artefacts, which can appear in an over-compressed JPEG image or digital video, for example, are akin to glitches – they are also unwanted, and often reveal similar things about formats. Compression artefacts are estheticized, for example, by the Small File Media Festival, which has as its leitmotif the criticism of the environmental impact of streaming large video files. The works presented at the 2020 and 2021 events had a maximum file size of 5 megabytes, and as the majority of the works were in standard video formats, they are usually full of artefacts when compressed to this size.
JPEG glitches:
JPEG compression artefacts:
While demo artists, chip musicians and pixel graphic artists typically strive for the total control of their media, glitch art is more about throwing oneself at the mercy of chance. On the other hand, even the demoscene has room for elements found through chance or programming errors; especially algorithms that produce as much as possible with as little as possible are often difficult to control. Working on extremely short programs can sometimes approach circuit bending in its randomness – for example, very short bytebeat programs that produce music-like sounds have been "discovered" largely as a result of random experimentation (Heikkilä 2014).
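Bytebeat condenses this randomness to the extreme: a one-line arithmetic formula, evaluated once per sample with only the low eight bits kept, can produce surprisingly music-like structures. As a minimal sketch, here is one of the early one-liner formulas, t*(42&t>>10), rendered in Python:

```python
# A bytebeat one-liner: each sample is a function of the sample
# counter t alone, truncated to 8 bits (unsigned 8-bit audio, 8 kHz).
def sample(t: int) -> int:
    return (t * (42 & (t >> 10))) & 255  # keep the low 8 bits

# 30 seconds of audio as raw bytes:
audio = bytes(sample(t) for t in range(8000 * 30))

# To listen, write the bytes to a file and play them as raw audio, e.g.:
# with open("out.raw", "wb") as f:
#     f.write(audio)
# with SoX:  play -t raw -r 8000 -e unsigned -b 8 -c 1 out.raw
```

Such formulas are rarely designed top-down; they are found by mutating operators and constants and listening to what comes out – which is what brings the practice close to circuit bending.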
Post-digital refers to things that can be seen as a reaction against pervasive digitality and the sterile "flawlessness" of digital media. A clear example is the subcultural renaissance of analogue techniques such as film photography and compact cassettes. Often, in the spirit of glitch esthetics, "lower-quality" technologies such as Lomo film cameras are preferred, bringing out the intrinsic qualities of the film medium.
On the other hand, even things that are very visibly digital may be called "post-digital" – glitch art or pixel graphics, for example. For Cascone (2000), who coined the term "post-digital", the word actually meant something that comes after the "digital revolution" advocated by digital utopian Nicholas Negroponte in 1995. Cascone used the term to refer to glitch music produced with computer software in the late 1990s.
Florian Cramer (2015), a software artist and art researcher, finds the concept of "post-digital" confusing and misleading, yet useful. Cramer clarifies the concept by stating that the "digital" in it is generally not a technical term, but refers to market-driven, stereotypical images of "digital" – such as the ones (and zeros) brought up by the Google image search I discussed earlier. The prefix "post" can also be interpreted in two ways – so, "post-digital" may refer to "the later form of digital" as well as "what comes after digital".
Cramer has also found that young art students who have lived their whole lives in a digitally infused world are often fascinated by non-digital media. Perhaps the same exoticism is now seen in analogue technology as was seen in digital by the youth of the 1980s – a reversal of preferences in just one generation. At the same time, however, Cramer points out that the analogue is not intended to replace the digital: there are places for both in life and art – in much the same way that there is room in the demoscene for both old and new platforms.
The term "post-digital" is so broad that it can easily refer to any digital art that is or will be practiced outside the Maximalist-Virtualist mainstream. I will henceforth use it for this purpose, although I will put it in quotation marks because of how misleading it is.
One of the things that seems to unite "post-digital" phenomena is that they are easily perceived as nostalgic, even if for the creators themselves they are something quite different. The nostalgia narrative has often been frowned upon by demo artists, chip musicians and pixel artists, who, instead of nostalgia, may emphasize, for example, the manageability and tangible simplicity of their media compared to mainstream digitality. Arguments against the nostalgia narrative include the popularity of pixel graphics in advertising aimed at young people and the use of chip sounds in mainstream popular music.
The popularity of the nostalgia narrative is certainly due to the popularity of nostalgia-based "retro" phenomena, but perhaps also due to a naïve notion of progress, in which the new is always better than the old in every rational way, and the preference for the old or reasonable can therefore be satisfactorily explained only by feelings. If rational arguments for "post-digital", based on the limits of the planet and environmental change, become more widespread at some point, we may not only hear about the "longing for the past", but also about the "hairshirting" that all voluntary reduction of consumption is often perceived to be.
It is definitely possible to imagine a future where environmental change has no impact on digital esthetics at all. In this case, for example, one has to assume that technological society will be able to continue to function as it does today, to the extent that escaping reality into digital fantasy worlds remains possible and there is no "degradation" of technology and telecommunications: as environmental change deepens, the role of digital media as an unchanging escapist safe haven would become more and more consolidated. However, I do not find this in any way a likely scenario, nor is it interesting for the purposes of this article.
My starting assumption is that environmental change will inevitably lead to changes in technological society that cannot be virtualized out of sight, even in the "bit world". These changes would initially arise, in particular, from a voluntary or forced reduction in energy consumption. I also assume that the transition to a leaner economy will not be painless, but will sooner or later lead to increasing disruption or outright collapse of electricity, telecommunications and logistics infrastructures and the like. More generally, I also associate environmental change with the loss of predictability and controllability of natural systems. All this will also affect digital technology for as long as it is in use, and thus digital esthetics.
In what follows, I speculate on the impact of environmental change on digital esthetics and, to some extent, on digital technology in general. I draw partly on ideas from the Computing within Limits articles, but I have also taken the liberty of throwing in more imaginative ideas, especially in speculations about the more distant future.
I am convinced that Maximalism is receding from digital esthetics and IT in general for several distinct reasons. As Moore's Law fades, new hardware designs are less and less able to increase computing capacity without increasing energy consumption. Growing awareness of the environmental impact of IT will push maximization-oriented markets towards the margins, as more and more people settle for what is good enough for them. Moreover, people's growing frustration with pervasive digitality and its entrenched esthetics may increase the popularity of "post-digitality" – as has been the case with the art students Cramer has observed – thus making Maximalism less fashionable. Concrete problems such as access to energy and raw materials may also encourage producers to favor moderation and simplicity over maximization.
There is room for significant reductions in consumption in, for example, data traffic, an increasing proportion of which now consists of video streaming for entertainment purposes. In practice, this would mean, for example, that entertainment services would no longer follow the "cornucopian paradigm", in which consumers are encouraged to spend as much time as possible with as lavish data flows as possible. There would be a greater emphasis on the meaningfulness and quality of time, and services might also start to set usage limits for their users. Films of limited duration would grow in popularity compared to binge-watchable, never-ending TV series. As telecom costs rise and devices become scarcer, viewing could also become more communal and celebratory, with more viewers invited to the same screen.
The pressure to keep data volumes affordable would probably lead not only to a reduction in the amount of time spent watching, but also to a preference for more compact storage, which would have a range of visible effects. As coarser image resolutions and more visible compression artefacts would have to be "tolerated", their positive estheticization might also increase – in the same way as they have been estheticized at the Small File Media Festival. On the other hand, one could also start to favor the type of imagery that compresses better and thus generates fewer artefacts – for example, animated cartoons are typically more compression-friendly than video shot on camera. Shots might be longer and move more slowly, with less sharp detail and more shadows and silhouettes to reduce the amount of detail to be compressed. These features would also fit naturally into a world where, with less energy consumption, there would be less artificial light and less haste.
Moderating data volumes would also likely lead to a preference for, and the development of, video formats whose peculiarities can be estheticized more easily – for example, a codec might prefer smooth blurring of the image to pixelation. Formats could also become more specialized and distinctive – for example, cartoon animations could be produced using Flash-type animation techniques, which were popular on the Internet before the dominance of generic video formats. Content production would then be more closely linked to the file format, and viewers would become familiar with the characteristics of the different formats.
If network capacity starts to decline at some point, it can be expected that large files will be transferred on physical media rather than over the network, and that the network will be used mostly for messaging and small file sharing. This is the case in Cuba, where a weekly file package, El Paquete Semanal, illegally collected by private individuals, is the main way to obtain foreign TV series and movies to watch (Dye et al. 2018). Such a development would divide digital esthetics into different "worlds": online, concise formats and minimalist approaches would be favored, while hand-to-hand-shared memory sticks would still have some room for Maximalism.
A significant and rapidly growing proportion of today's consumer-level Internet traffic is that related to digital gaming. In 2019, this accounted for around 8%, mostly consisting of copying games over the Internet to gaming machines (Sandvine 2019). The main reason for this is probably the huge file sizes of PC and console games, but I believe that most of this is "air" – wasteful, Maximalist graphical data that could be significantly reduced, for example by demoscene techniques, without affecting the audiovisual appearance of the games in any way.
On the other hand, the vast majority of video gaming already takes place on mobile devices, which people own for other purposes anyway. These games are also considerably smaller than the Maximalist ones. In a scarcer world, there would be ever less room for energy-hungry gaming-only hardware and its associated rivalry culture, and more and more games would be targeted at devices that people already have, and which would not be "upgraded" for frivolous reasons. At the same time, games would become lighter in their technical requirements, and less photorealism-oriented.
Economical moderation would mean that digitality would no longer be so pervasive. There might be fewer devices and more sharing, so time spent on them would be more valuable, more thoughtful and perhaps more communal. Reducing screen time would certainly benefit public health in many places.
As "post-digital" is often defined as a reaction to the pervasiveness of digital, perhaps we should start talking instead about "post-post-digital", for example, though preferably under a less monstrous name. Many features of "post-digital" would certainly be included in it, but now for reasons of resource economics rather than escapism or counter-culturalism.
If "analog life" returns to the status it had a few decades ago, digital technology may perhaps regain some of the glamor it still had in the 1990s. However, this glamor would be quite different from that of the last millennium, as there would be no market frenzy for constant hardware replacement, but the new technological visions would rather be based on doing new things with the hardware that is already in use. However, there could be an ironic nostalgic relationship with the hype of the past decades, resembling retrofuturism, which circulates old visions of the future, or vaporwave, which circulates the marketing imagery of 1990s consumer electronics.
With the demise of planned obsolescence and the vanity-driven consumerist rivalry, we can expect an era of "planned longevity", where hardware is kept in service for as long as possible, regardless of its age or "performance". The mere awareness of the material and energetic sacrifices that have been made to produce these devices would lead to a respect for them as more than just rapidly replaceable consumer goods.
The supply networks of the current electronics industry are highly vulnerable to disruption, as evidenced, for example, by the component shortages of the early 2020s. I expect disruptions to increase in the future due to factors such as the dwindling availability of extractable raw materials and the political instabilities caused by the environmental crisis. Production may become intermittent, or there may be a shift to simpler and cruder processes with less need for the more specialized raw materials. In the most extreme scenario, production would cease completely and no new components would be produced.
However, even in this scenario, information technology would not necessarily disappear very quickly. It has been estimated that the information technology in the world today could be kept in working order for several generations, as long as social institutions are in place to maintain it (Jang et al. 2017). I like to imagine such institutions as monasteries of some sort – after all, Western European monasteries preserved and developed not only spiritual culture but also, for example, water mill technology (Lucas 2006).
Components that have been in use for several decades will sooner or later start to break down. At some point, a component may become glitchy in a way that still allows it to be used for something, with some adjustments (in art, for example). This can give rise to a kind of reverse glitch esthetics that tries to either circumvent or exploit the glitches instead of celebrating them. This is similar to what one might have to do with ecosystems disrupted by environmental change – nature "glitches", but with creative design one can still make an area of land produce food, for example.
An environment that is bizarre, incomprehensible and uncontrollable would hardly be treated as a mere resource any more. The new approach would perhaps be closer to pre-industrial worldviews, where gods and spirits influenced various natural phenomena, and humans had to nurture their relationship with them in order to retain their success at farming and hunting. This change in worldview might also make the attitudes towards inanimate objects more animistic. In Japanese Shinto, something like this happened centuries ago, which has enabled the Japanese, among other things, to relate to robots in a more natural way (Kitano 2007). A kind of animistic attitude towards information technology is also represented by the Metaphorism of video game researcher Ian Bogost (2012, 61-84), which speculatively but realistically tries to envision the internal experience of an object – for example, how the sensor of a camera "sees" the world.
In a world where everyday material resources are closer and production chains are more tangible, it can be assumed that people would demand similar tangibility from the technology they use. This demand would be increased by the need to repair equipment and solve problems locally. There would no longer be a desire to hide the internals of systems, but rather to make the systems reveal them in accessible and transparent ways. Virtualism that hides everything "irrelevant" would come to feel ever more oppressive and alienating; so, instead of shiny and sterile monoliths, there would be brutal observability, moving parts and an abundance of informative detail.
The philosopher of technology Albert Borgmann (1984, 196-210) suggests "focal things and practices", such as running or the culture of the table, for preventing technological alienation. "Focal" refers to something that acts as a kind of focus (as in optics) on the world around it. Ideally, Borgmann believes, all of life should be focal, but today's "technology paradigm" makes it impossible, so islands of focality have to be deliberately created.
However, Borgmann does not believe that, for example, information technology (216-217) or motorcycle maintenance (160-161) can serve as a basis for focality, because industrial technology is so strongly linked to the alienating technological paradigm. But could the world change so that this technological paradigm no longer dominates? New paradigms might be able to enable a more rooted kind of digital technology.
The philosopher of technology Yuk Hui (2017) uses the term cosmotechnics for the unity of cosmic order, moral order and human technical activity. According to Hui, modern technology has broken this unity and should now be renewed by creating new cosmotechnics suitable for the Anthropocene era. Since cosmotechnics is inextricably linked to a specific culture, each culture should create its own cosmotechnics.
The digital technology and new digital esthetics brought about by new cosmotechnics might be the diametrical opposite of what they are today. Instead of globally uniform standards and a sterile pseudo-neutrality, they would perhaps be more craft-like and local in color. Even if the devices were still more complex than, say, a spoon carved from the wood of the local forest, they would compensate by expressing their inner essence in an understandable way.
Today, it is easy to see things in the "non-digital world" as more tangible and understandable than things in the "digital world". As the environment becomes more unpredictable and uncontrollable, and the "focality" of artificial devices improves, the roles may be reversed. Digitality could then serve as a kind of pocket of intelligibility, helping to bring clarity to the complexity of a bizarre environment.
As environmental change deepens, there is a particular urgency to move from insatiable consumption to moderation. This also applies to digital technology, which is still often imagined as a "clean technology". Much of the consumption of digital technology is entertainment-driven, and this entertainment is itself based on Maximalist esthetics. However, sustainable computing research does not yet seem to have discovered digital esthetics at all, and even environmental activist projects with a more minimalist digital esthetic – such as the Small File Media Festival, or the sustainable internet magazine Branch – still seem to be taking their first faltering steps.
I believe that non-Maximalist subcultures of computer art can help bring about the necessary change. They offer concrete examples of different esthetic approaches, some of which have several decades of tradition behind them yet still look for new ways of doing things. Demos and pixel art have often tried to appeal to the masses, even if they require some acquired taste. In any case, they are closer to a popular culture that appeals to ordinary people than, say, much of the academic art world or activism-based alternative culture projects.
For a digital subculture, the demoscene is quite resilient, because it does not depend on, for example, digital telecommunications – in its early years, all it required was some kind of image-and-sound-capable computer hardware and people with time to spend on it. If conditions become scarcer, the demoscene might even flourish in an unprecedented way – after all, its practices are already based on scarcity. However, I would prefer not to tie sustainable digital esthetics projects to the demoscene or any other subculture, as each has its own kinds of narrowness.
I think it would be more fruitful if the existing art forms, subcultures and practices gave birth to as diverse a range of cultural strands as possible, each with its own approach to "post-digital" or even "post-post-digital" digital esthetics. In particular, these strands could be created by those whose lives are already tangibly affected by environmental change, or who have made lifestyle choices in preparation for a more difficult future.
Abbing, Roel Roscam 2021. ‘This is a solar-powered website, which means it sometimes goes offline’: a design inquiry into degrowth and ICT. LIMITS ’21: Workshop on Computing within Limits. https://computingwithinlimits.org/2021/papers/limits21-abbing.pdf (31.10.2021).
Apple 2001. Aqua Human Interface Guidelines. Cupertino: Apple Computer, Inc.
Barbrook, Richard & Cameron, Andy 2007. The Californian Ideology. Imaginary Future. http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2/
Betancourt, Michael 2017. The Invention of Glitch Video: Digital TV Dinner (1978). Millennium Film Journal 65, 54-63.
Bogost, Ian 2012. Alien Phenomenology, or What It's Like to Be a Thing. Minneapolis: University of Minnesota Press.
Borgmann, Albert 1984. Technology and the Character of Contemporary Life. Chicago: University of Chicago Press.
Borning, Alan & Friedman, Batya & Gruen, Deric 2018. What Pushes Back from Considering Materiality in IT? Workshop on Computing within Limits, LIMITS 2018.
Botz, Daniel 2011. Kunst, Code und Maschine: Die Ästhetik der Computer-Demoszene. Bielefeld: transcript Verlag.
Carlsson, Anders 2010. Power Users and Retro Puppets: A Critical Study of the Methods and Motivations in Chipmusic. Lund: Lund University.
Cascone, Kim 2000. The Aesthetics of Failure: “Post-Digital” Tendencies in Contemporary Computer Music. Computer Music Journal 24(4), 12–18. DOI 10.1162/014892600559489
Coleman, E. Gabriella 2013. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press.
Coles, Laura Lee & Pasquier, Philippe 2015. Digital eco-art: transformative possibilities. Digital Creativity 26(1), 3-15. DOI 10.1080/14626268.2015.998683
Cramer, Florian 2015. What Is 'Post-digital'? Postdigital Aesthetics: Art, Computation and Design, 12-26. Hampshire: Palgrave Macmillan.
De Valk, Marloes 2021. A pluriverse of local worlds: a review of Computing within Limits related terminology and practices. LIMITS ’21: Workshop on Computing within Limits. https://computingwithinlimits.org/2021/papers/limits21-devalk.pdf (31.10.2021).
Dye, Michaelanne et al. 2018. El Paquete Semanal: The Week's Internet in Havana. CHI 2018. DOI 10.1145/3173574.3174213
Elfimov, Konstantin 2008. Brief History of Russian Speccy Demoscene and the story of Inward. Mustekala – kulttuurilehti 6/08. http://mustekala.info/teemanumerot/demoscene-paris-art-scene-6-08/brief-history-of-russian-speccy-demoscene-and-the-story-of-inward/ (31.10.2021).
Eriksson, Elina & Pargman, Daniel 2018. Meeting the future in the past – using counterfactual history to imagine computing futures. Workshop on Computing within Limits, LIMITS 2018. DOI 10.1145/3232617.3232621
Escobar, Arturo 2017. Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Durham: Duke University Press.
Fischer, Gerhard & Lemke, Andreas C. 1987. Constrained Design Processes: Steps Towards Convivial Computing. Boulder: Colorado University at Boulder Department of Computer Science.
Fors, Per 2019. Problematizing Sustainable ICT. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1769. Uppsala: Acta Universitatis Upsaliensis.
Gere, Charlie 2002. Digital Culture. London: Reaktion Books.
Heikkilä, Ville-Matias 2010. Defining Computationally Minimal Art (Or, taking the "8" out of "8-bit"). Viznut.fi. http://viznut.fi/texts-en/computationally_minimal_art.pdf (31.10.2021).
Heikkilä, Ville-Matias 2014. Käsittämättömät koodirivit musiikkina: bytebeat ja demoskenen tekninen kokeellisuus. WiderScreen 1-2/2014. http://widerscreen.fi/numerot/2014-1-2/kasittamattomat-koodirivit-musiikkina-by (31.10.2021).
Heikkilä, Ville-Matias 2020. Permacomputing. Viznut.fi. http://viznut.fi/texts-en/permacomputing.html (31.10.2021).
Hertz, Garnet & Parikka, Jussi 2012. Zombie Media: Circuit Bending Media Archaeology into an Art Method. LEONARDO 45(5), 424-430.
Hes, Dominique & du Plessis, Chrisna 2015. Designing for Hope: pathways to regenerative sustainability. New York: Routledge.
Hill, Michael 2021. Bitcoin-mining power plant raises ire of environmentalists. ABC News. https://abcnews.go.com/US/wireStory/bitcoin-mining-power-plant-raises-ire-environmentalists-80618790 (6.11.2021).
Hui, Yuk 2017. On Cosmotechnics: For a Renewed Relation between Technology and Nature in the Anthropocene. Techné: Research in Philosophy and Technology 21(2–3), 319-341. DOI 10.5840/techne201711876
IEA 2020. Data centres and data transmission networks. Tracking Buildings 2020. Paris: IEA. https://www.iea.org/reports/data-centres-and-data-transmission-networks (31.10.2021).
Illich, Ivan 1973/1975. Tools for Conviviality. Glasgow: William Collins Sons & Co Ltd.
Jang, Esther et al. 2017. Unplanned Obsolescence: Hardware and Software After Collapse. Third Workshop on Computing within Limits. https://computingwithinlimits.org/2017/papers/limits17-jang.pdf (31.10.2021). DOI 10.1145/3080556.3080566
Kitano, Naho 2007. Animism, Rinri, Modernization; the Base of Japanese Robotics. ICRA'07: 2007 IEEE International Conference on Robotics and Automation.
Kittler, Friedrich 1992/1996. There is No Software. Electronic Culture: Technology and Visual Representation, 331-337. New York: Aperture Foundation.
Kuorikoski, Juho 2018. Pelitaiteen manifesti. Helsinki: Gaudeamus.
Levy, Steven 1984/2010. Hackers: Heroes of the Computer Revolution. Sebastopol: O'Reilly.
Lucas, Adam R. 2006. The role of the monasteries in the development of medieval milling. Wind & Water in the Middle Ages: Fluid Technologies from Antiquity to the Renaissance, 89-128. Tempe: University of Arizona Press.
Mann, Samuel et al. 2018. Regenerative Computing: De-limiting hope. LIMITS ’18: Computing within LIMITS. DOI 10.1145/3232617.3232618
Menkman, Rosa 2010. A Vernacular of File Formats: A Guide to Databend Compression Design. Academia.edu. https://www.academia.edu/3847005/Menkman_Rosa_2010_A_Vernacular_of_File_Formats (31.10.2021).
Menkman, Rosa 2011. The Glitch Moment(um). Amsterdam: Institute of Network Cultures.
Moore, Gordon E. 1965. Cramming more components onto integrated circuits. Electronics 38(8), 33–35. DOI 10.1109/N-SSC.2006.4785860
Hansen et al. 2014. Crafting Code at the Demo-scene. Proceedings of the 2014 Conference on Designing Interactive Systems – DIS '14. DOI 10.1145/2598510.2598526
Pargman, Daniel 2015. On the Limits of Limits. LIMITS 2015: First Workshop on Computing within Limits. https://computingwithinlimits.org/2015/papers/limits2015-pargman.pdf (31.10.2021).
Penzenstadler, Birgit ym. 2015. Collapse (& Other Futures) Software Engineering. LIMITS 2015: First Workshop on Computing within Limits. https://computingwithinlimits.org/2015/papers/limits2015-penzenstadler.pdf (31.10.2021).
Raghavan, Barath 2015. Abstraction, Indirection, and Sevareid’s Law: Towards Benign Computing. LIMITS 2015: First Workshop on Computing within Limits. https://computingwithinlimits.org/2015/papers/limits2015-raghavan.pdf (31.10.2021).
Raymond, Eric S. [ed.] 2004. The Jargon File, version 4.4.8. http://catb.org/jargon/ (31.10.2021).
Reunanen, Markku & Silvast, Antti 2009. Demoscene Platforms: A Case Study on the Adoption of Home Computers. History of Nordic Computing 2: Second IFIP WG 9.7 Conference, HiNC 2, Turku, Finland, August 21-23, 2007, Revised Selected Papers, 289-301. Berlin: Springer-Verlag. DOI 10.1007/978-3-642-03757-3_30
Reunanen, Markku 2013. Neljän kilotavun taide. WiderScreen 2-3/2013. http://widerscreen.fi/numerot/2013-2-3/neljan-kilotavun-taide/ (31.10.2021).
Ridley, Jacob 2020. The biggest games by install size, real mighty storage hogs. PC Gamer. https://www.pcgamer.com/biggest-game-install-sizes/ (31.10.2021).
Sandvine 2019. The Global Internet Phenomena Report 2019. Sandvine. https://www.sandvine.com/global-internet-phenomena-report-2019 (31.10.2021).
Schwartz, Roy et al. 2019. Green AI. arXiv.org. https://arxiv.org/abs/1907.10597 (31.10.2021).
Silber, Daniel 2016. Pixel Art for Game Developers. Boca Raton: CRC Press.
Theis, Thomas N. & Wong, H.-S. Philip 2017. The End of Moore's Law: A New Beginning for Information Technology. Computing in Science & Engineering 19(2), 41-50. DOI 10.1109/MCSE.2017.29
Tomlinson, Bill et al. 2013. Collapse Informatics and Practice: Theory, Method, and Design. ACM Transactions on Computer-Human Interaction 20(4), Article 24. DOI 10.1145/2493431
Turkle, Sherry 1984/2005. The Second Self: Computers and the Human Spirit. Cambridge: MIT Press.
Widdicks, Kelly & Pargman, Daniel 2019. Breaking the Cornucopian Paradigm: Towards Moderate Internet Use in Everyday Life. Fifth Workshop on Computing within Limits (LIMITS ’19). DOI 10.1145/3338103.3338105
Wiggers, Kyle 2021. Google trained a trillion-parameter AI language model. VentureBeat. https://venturebeat.com/2021/01/12/google-trained-a-trillion-parameter-ai-language-model/ (31.10.2021).