Procedural generation attracts a lot of hype nowadays, and it also happens to be one of the aspects of the demoscene that appeals to a wider audience. Indeed, the techniques for size-limited demos are advancing all the time: today's 64Ks contain the same kind of content as yesterday's full-size demos, and today's 4Ks are comparable to yesterday's 64Ks.
But how far can this trend possibly go? Is there an absolute theoretical limit on what can fit in 4096 binary octets of program code?
Just for a thought experiment, here's my recipe for the Ultimate 4K Demo:
Step #0: Imagine a platform with infinite performance and unlimited memory. We can assume this because we're only concerned with the expressive power of the 4096 bytes themselves, not with what a real machine would be capable of.
Step #1: Simulate the laws of physics. Either the real ones or something else, as long as the simulator is able to evolve interesting structures such as stars, planets, landscapes, forms of life, cultures etc. Scientists have come up with relatively simple basic rules for our own universe, so they may very well fit in 4K. Besides, since we have an infinite amount of memory and performance, we can even omit all the optimizations that would otherwise bloat the code.
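The claim that simple rules can evolve complex structure is easy to illustrate with a toy universe. The sketch below (purely illustrative, not demo-ready code) implements Conway's Game of Life, whose complete "laws of physics" fit in a handful of lines yet produce gliders, oscillators, and even universal computers from suitable seeds:

```python
from collections import Counter

# Conway's Game of Life: a universe whose complete rule set fits in a
# few lines, yet which evolves complex moving structures from simple
# initial patterns.
def step(alive):
    """Advance one generation; `alive` is a set of live (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A glider: five cells that travel diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
world = glider
for _ in range(4):
    world = step(world)
# After four generations the glider has moved one cell down-right.
```

Five cells and three lines of rules are enough for structures that propagate, collide, and compute; the step from here to "stars, planets, lifeforms" is one of scale, not of code size.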
Step #2: Put in a camera. If we have photons or similar wave/particle objects in our simulation, we already have a free raytracer: just grab the state of the photons passing a rectangular area and draw pixels accordingly. We can also record sound from fixed points by measuring the changes in material density.
Step #3: Initialize the simulator with some random data and check that it works and the results look nice. Because of the infinite performance of our machine, even a billion years' worth of precalc is instantaneous, so we can experiment with several possible random seeds and pick the best result.
Step #4: Find some nice spots for the camera. As long as we have chosen the rules and initial parameters wisely, it shouldn't be too difficult to find nice-looking lifeforms and such. We can even attach the camera to a moving object, so even complex camera runs won't waste valuable space.
So, there really is no theoretical limit on what can fit in a 4K demo. Or is there? We still need to look at the problem from the opposite direction.
Assume that we are looking for something very specific, like a piece of music that sounds exactly like "Acidjazzed Evening".
For finding this kind of stuff, we'll probably need to write an intelligent bruteforcer that goes through millions of universes in order to find the specific thing we are looking for. If the probability of "Acidjazzed Evening" existing in a given universe is, say, 1:10^18 (one to a million million million), we'll need to extend the seed by about 60 bits. This really isn't too much: a very small fraction of the total bytecount of a 4K demo, and definitely much less than what it would take to store the song by conventional means.
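The seed-size arithmetic can be checked directly: distinguishing one outcome among N equally likely ones takes about log2(N) bits, so a 1-in-10^18 event costs roughly 60 extra seed bits.

```python
import math

# Extra seed bits needed to pick out one outcome among N equally
# likely universes: ceil(log2(N)).
def seed_bits(improbability):
    return math.ceil(math.log2(improbability))

print(seed_bits(10**18))      # 60 bits for a 1-in-10^18 event
print(seed_bits(10**18) / 8)  # 7.5 bytes -- a sliver of a 4K budget
```

This also shows why the demands in the next paragraph blow up so fast: every extra factor of a thousand in improbability costs only about 10 more bits, but truly exact duplicates of complex real-world objects sit at improbabilities whose logarithms alone exceed 4096 bytes.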
However, the more specific we get with our demands, the more bits we need in the seed because of the dramatically increasing improbability. I don't know what the probability for the planet Earth is, or the probability for the latest blockbuster movie, but I'm sure that if we require a lot of exact duplicates of specific things familiar to us, 4096 bytes won't be enough anymore.
There are basically two different approaches to procedural content: "constructivism" (construct a structure and store the required steps) and "explorationism" (explore the search space and pick a seed that generates something passable). There's already some use of the latter approach in 4K demos, and I expect it to become increasingly common as size-limited demos need more and more content density.
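The explorationist approach can be sketched in miniature. In this toy version (the generator, the fitness function, and the search range are all placeholder assumptions, not anything used in a real demo), we search seeds of a deterministic generator and keep only the winning seed; the content itself is regenerated at run time:

```python
import random

# Toy "explorationism": rather than storing content, search the seed
# space for the seed whose generated output scores best under some
# aesthetic fitness function, then ship only that seed.
def generate(seed, length=16):
    """The 'universe': a deterministic note sequence from one seed."""
    rng = random.Random(seed)
    return [rng.randrange(12) for _ in range(length)]  # 12 semitones

def fitness(notes):
    """Placeholder aesthetic: reward small melodic steps."""
    return -sum(abs(a - b) for a, b in zip(notes, notes[1:]))

# The bruteforcer: scan a slice of the seed space, keep the best seed.
best_seed = max(range(10_000), key=lambda s: fitness(generate(s)))
print(best_seed, generate(best_seed))
```

The demo would store only `best_seed` (a couple of bytes) plus the generator itself; the trade-off, as argued above, is that the pickier the fitness function, the more of the seed space the bruteforcer must cover, and the more bits the winning seed effectively costs.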