How do 64k demos work
It's all within one executable. For a size-limited demo, the size of the executable and all of its resources needs to stay below the limit; if you add resources exceeding that size, it is no longer a 64k demo. There are some differences in which runtime libraries you are allowed to use, but usually those are already part of the OS.
And 64k is already one of the larger size limits: there are other popular limits such as 4kB. The topic is specifically about "64k demos", so it is not just about programs whose main executable happens to be small; the whole production has to fit within the limit. The audio, for example, could be stored in some MIDI-like format where the different notes are stored rather than the recorded waveform.
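To make that idea concrete, here is a minimal sketch (not any particular demo's actual synth): a handful of note events are stored in the executable and rendered to a sample buffer with a throwaway sine instrument at startup. The structure names and the three-note score are purely illustrative.

```cpp
// Sketch of "MIDI-like" music storage: note events instead of waveform data.
#include <cmath>
#include <vector>

struct Note { float startSec; float durationSec; int midiPitch; };

std::vector<float> renderSong(const std::vector<Note>& notes, int sampleRate, float lengthSec)
{
    std::vector<float> buffer(static_cast<size_t>(sampleRate * lengthSec), 0.0f);
    for (const Note& n : notes)
    {
        float freq  = 440.0f * std::pow(2.0f, (n.midiPitch - 69) / 12.0f);
        int   begin = static_cast<int>(n.startSec * sampleRate);
        int   end   = static_cast<int>((n.startSec + n.durationSec) * sampleRate);
        for (int i = begin; i < end && i < static_cast<int>(buffer.size()); ++i)
        {
            float t = float(i - begin) / sampleRate;
            // Decaying sine as a stand-in instrument; a real 64k uses a proper softsynth.
            buffer[i] += 0.2f * std::sin(6.2831853f * freq * t) * std::exp(-3.0f * t);
        }
    }
    return buffer;   // a few bytes of note data expand into seconds of audio
}

// Example score: three notes take a handful of bytes rather than kilobytes of PCM.
const std::vector<Note> kSong = { {0.0f, 0.5f, 60}, {0.5f, 0.5f, 64}, {1.0f, 1.0f, 67} };
```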
Textures can be generated procedurally; it is possible to create very different textures from Perlin noise alone. 3D models probably have some compact geometric description using formulas, and the detail is added with techniques similar to procedural textures.
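As an illustration, here is a hedged sketch of one such generator: a few octaves of value noise (a cheap stand-in for Perlin noise) summed into a grayscale image. The function names and constants are made up; real 64k texture tools chain many more operators (distortion, blur, color mapping) on top of the noise to get very different looking materials.

```cpp
// Sketch of a procedural texture: fractal value noise into a grayscale buffer.
#include <cmath>
#include <vector>

// Integer hash -> pseudo-random value in [0, 1], sampled at lattice points.
static float hash2(int x, int y)
{
    unsigned n = static_cast<unsigned>(x) * 374761393u + static_cast<unsigned>(y) * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return (n & 0xFFFFFF) / 16777215.0f;
}

// Bilinearly interpolated lattice noise with a smoothstep fade.
static float valueNoise(float x, float y)
{
    int   xi = static_cast<int>(std::floor(x)), yi = static_cast<int>(std::floor(y));
    float fx = x - xi, fy = y - yi;
    fx = fx * fx * (3.0f - 2.0f * fx);
    fy = fy * fy * (3.0f - 2.0f * fy);
    float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    return a + (b - a) * fx + ((c - a) + (a - b + d - c) * fx) * fy;
}

// Sum a few octaves (fractal Brownian motion) into a width x height texture.
std::vector<float> makeNoiseTexture(int width, int height)
{
    std::vector<float> texture(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float value = 0.0f, amplitude = 0.5f, frequency = 4.0f;
            for (int octave = 0; octave < 5; ++octave)
            {
                value += amplitude * valueNoise(frequency * x / width, frequency * y / height);
                amplitude *= 0.5f;
                frequency *= 2.0f;
            }
            texture[y * width + x] = value;
        }
    return texture;
}
```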
I think this is a good link to open a window on this world: llg.

Keeping data directly in the code avoids the need for a separate data format and the related serialization code, but this solution can also get pretty messy. The key to small executables is scrapping the default standard library and compressing the compiled binary.
The binaries are compressed with kkrunchy, which is a tool made for exactly this purpose. Floating point code caused some headaches by producing calls to standard library functions that no longer exist once the standard library has been scrapped.
This is not a problem because you can set the floating point truncation mode at the start of your program with a snippet courtesy of Peter Schoffhauzer. You can read more about these things at benshoof. Similarly, it was possible to make a wrapper that calls our own pow implementation instead of the standard library's. If you are suffering through the same issues, other people's notes on the subject might prove helpful.
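The exact snippet is not reproduced here, but the idea is to flip the x87 rounding-control bits to "truncate" once at startup, so that float-to-int casts no longer need a CRT helper. A sketch along those lines, assuming a 32-bit MSVC build where inline assembly is available:

```cpp
// Sketch only: set the x87 rounding mode to "truncate" so float-to-int casts
// don't call into the (removed) CRT. Works the same way as the widely
// circulated musicdsp.org snippet; 32-bit MSVC only (__asm is unavailable on x64).
static short g_controlWord;
static short g_controlWord2;

inline void SetFloatingPointRoundingToTruncate()
{
    __asm
    {
        fstcw g_controlWord             // store the current FPU control word
        mov   ax, g_controlWord
        or    ax, 0x0C00                // set both rounding-control bits -> truncate
        mov   g_controlWord2, ax
        fldcw g_controlWord2            // load the modified control word
    }
}
```

The intro also has to fetch its OpenGL entry points itself. A minimal sketch of that, assuming Windows and wglGetProcAddress (the function list and table below are illustrative, not the production's actual loader):

```cpp
// Sketch only: load just the OpenGL functions the production calls, by name.
// Assumes Windows, an active GL context, and linking against opengl32.
#include <windows.h>

static const char* const kGlFunctionNames[] = {
    "glCreateShaderProgramv",
    "glUseProgram",
    "glUniform4fv",
    // ...only the functions that are actually used in the intro
};

static void* g_glFunctions[sizeof(kGlFunctionNames) / sizeof(kGlFunctionNames[0])];

void LoadGlFunctions()
{
    for (unsigned i = 0; i < sizeof(kGlFunctionNames) / sizeof(kGlFunctionNames[0]); ++i)
        g_glFunctions[i] = reinterpret_cast<void*>(wglGetProcAddress(kGlFunctionNames[i]));
}
```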
Note that we only load the function pointers for OpenGL functions that are actually used in the production. It might be a good idea to automate this. The functions need to be queried with string identifiers that get stored in the executable, so loading as few functions as possible saves space.

On the rendering side, we had the raymarcher output a depth buffer value so we could intersect signed distance fields with rasterized geometry and also apply post-processing effects. Early on our plan was to have a unified lighting pipeline for both raymarched and rasterized shapes.
An early experiment with shadow mapping. Note how both the towers and the wires cast a shadow on the raymarched terrain and also intersect correctly. Rendering huge terrains with shadow maps is super hard to get right because of the wildly varying screen-to-shadow-map-texel ratio and other accuracy problems.
Also, raymarching the same scene from multiple points of view is slow, so we just decided to scrap the whole unified lighting thing. This proved to be a huge pain later when we were trying to match the lighting of the rasterized wires and the raymarched scene geometry.
The terrain is raymarched value noise with analytic derivatives. If you want to learn more, you can read iq's old article on this technique or play around with his awesome rainforest scene on ShaderToy. The landscape heightmap became much more realistic after msqrt implemented exponentially distributed noise. The landscape effect is very slow because we do brute force shadows and reflections. The shadows use a soft shadow hack in which the penumbra size is determined by the closest distance encountered during shadow ray traversal.
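Here is a sketch of that soft shadow hack applied to a heightfield terrain. The height function and all constants are stand-ins rather than the actual terrain noise; the point is only the "keep the smallest clearance-to-distance ratio seen along the shadow ray" idea.

```cpp
// Sketch: heightfield soft shadows where the penumbra comes from the closest
// approach to the terrain encountered while marching toward the light.
#include <algorithm>
#include <cmath>

// Hypothetical terrain height; a couple of sine octaves stand in for value noise.
float terrainHeight(float x, float z)
{
    return 2.0f * std::sin(0.1f * x) * std::cos(0.13f * z)
         + 0.5f * std::sin(0.7f * x + 0.3f * z);
}

// March from a surface point toward the light; the smallest ratio of clearance
// above the terrain to distance traveled controls how soft the shadow is.
float softShadow(float px, float py, float pz,
                 float lx, float ly, float lz,   // normalized direction to the light
                 float softness)
{
    float shadow = 1.0f;
    float t = 0.5f;                               // small offset against self-shadowing
    for (int i = 0; i < 64; ++i)
    {
        float x = px + lx * t, y = py + ly * t, z = pz + lz * t;
        float clearance = y - terrainHeight(x, z);
        if (clearance < 0.001f)
            return 0.0f;                          // ray hit the terrain: full shadow
        shadow = std::min(shadow, softness * clearance / t);
        t += std::max(clearance, 0.1f);           // step roughly by the clearance
    }
    return std::clamp(shadow, 0.0f, 1.0f);
}
```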
They look pretty nice in action. We also tried using bisection tracing to speed it up, but it produced too many artifacts to be useful. Landscape rendering with fixed point iteration enhancement (left) and with regular raymarching (right).
Note the nasty ripple artifacts in the picture on the right. The sky is built using pretty much the same techniques as described by iq in his behind elevated slides: just some simple functions of the ray direction vector. Our post-processing effects really make it come together even though the underlying geometry is pretty simple. An ugly distance field with some repeated blocks.
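For a sense of what "simple functions of the ray direction vector" can mean for a sky, here is a hedged sketch: a vertical gradient plus a sun glow term. The colors, sun direction and exponent are made up for illustration, not taken from the demo.

```cpp
// Sketch: sky color as a function of the normalized view ray only.
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

Color skyColor(float dx, float dy, float dz)
{
    // Vertical gradient between a horizon color and a zenith color.
    float t = std::clamp(dy, 0.0f, 1.0f);
    Color horizon = { 0.80f, 0.85f, 0.95f };
    Color zenith  = { 0.15f, 0.35f, 0.70f };
    Color c = { horizon.r + (zenith.r - horizon.r) * t,
                horizon.g + (zenith.g - horizon.g) * t,
                horizon.b + (zenith.b - horizon.b) * t };

    // Sun glow: a high power of the cosine between the ray and the sun direction.
    float sunX = 0.0f, sunY = 0.5f, sunZ = 0.866f;   // arbitrary normalized sun direction
    float cosA = std::max(dx * sunX + dy * sunY + dz * sunZ, 0.0f);
    float glow = std::pow(cosA, 256.0f);
    c.r += glow; c.g += glow * 0.9f; c.b += glow * 0.7f;
    return c;
}
```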
A fake back-scattering effect based on normal and height brightens the tips of the waves, visible in the image above as small turquoise patches. During the launch scene, a few additional effects are added, like this rain drop shader.

We also wanted the light of the submersible to be visible in the water itself. Maybe a translucent billboard with a beautiful shader could work? One day, we started experimenting with naive ray marching through a medium.
We observed with delight that even in an early crude rendering test, and despite coder colors and the lack of a decent phase function, the volumetric lighting was immediately convincing. At that point, that initial billboard idea disappeared, never to be heard of ever again.
As we added the phase function and played with it, it started to feel like the real deal. This opened a lot of possibilities from a cinematography point of view.
But then there was the question of performance. Light shafts give this scene a look inspired by the film Blade Runner. In the end we settled on a simpler technique close to the one used in Killzone Shadow Fall (there is a video presentation of it), with a few variations. The effect is done in one full screen shader at half resolution.
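The exact steps of that shader are not reproduced here, but the general shape of such a pass looks something like the sketch below: a fixed number of steps along the view ray up to the scene depth, accumulating in-scattering from a light while attenuating it through the medium. This is a generic single-scattering march under assumed constants, not the actual Immersion or Killzone implementation.

```cpp
// Sketch: per-pixel volumetric scattering, as it might run in a half-resolution pass.
#include <cmath>

float volumetricScattering(const float rayOrigin[3], const float rayDir[3],
                           float sceneDepth, const float lightPos[3],
                           float lightIntensity, float extinction)
{
    const int   kSteps  = 32;
    const float stepLen = sceneDepth / kSteps;
    float accumulated = 0.0f;

    for (int i = 0; i < kSteps; ++i)
    {
        float t  = (i + 0.5f) * stepLen;
        float px = rayOrigin[0] + rayDir[0] * t;
        float py = rayOrigin[1] + rayDir[1] * t;
        float pz = rayOrigin[2] + rayDir[2] * t;

        float lx = lightPos[0] - px, ly = lightPos[1] - py, lz = lightPos[2] - pz;
        float dLight = std::sqrt(lx * lx + ly * ly + lz * lz);

        float reachingLight = lightIntensity / (1.0f + dLight * dLight)   // falloff
                            * std::exp(-extinction * dLight);             // light -> point
        float toCamera = std::exp(-extinction * t);                       // point -> camera

        accumulated += reachingLight * toCamera * stepLen;
    }
    return accumulated;   // added to the pixel after upsampling the half-res buffer
}
```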
An immediately recognizable aspect of an underwater image is absorption. As objects get distant, they become less and less visible, their colors fading into the background, until they disappear completely. Similarly, the volume affected by light sources is reduced as light is quickly absorbed by the water medium.
This effect has great potential for cinematography, and modelling it is simple. It is done in two steps in the shader. A first step applies a simple absorption function to the light intensity when accumulating the lights affecting an object, thereby modifying the light color and intensity by the time it reaches surfaces.
A second step applies the same absorption function to the final color of the object itself, thus modifying the perceived color depending on the distance from the camera.
Test of light absorption in the water medium. Notice how color is affected by the distance from the camera and the distance from the light sources.
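A sketch of those two absorption steps, using a simple Beer-Lambert style falloff exp(-absorption × distance). The per-channel coefficients and the water color are placeholders, not the values used in the demo.

```cpp
// Sketch: distance-based absorption applied twice, once on the way from the
// light to the surface, once on the way from the surface to the camera.
#include <cmath>

struct Color { float r, g, b; };

// Step 1: attenuate a light's contribution by the distance it travels through
// the water before reaching the surface being shaded.
Color absorbLight(Color light, float distanceToLight, Color absorption)
{
    return { light.r * std::exp(-absorption.r * distanceToLight),
             light.g * std::exp(-absorption.g * distanceToLight),
             light.b * std::exp(-absorption.b * distanceToLight) };
}

// Step 2: attenuate the lit surface color by the distance to the camera,
// fading toward the ambient water color as the object gets further away.
Color absorbToCamera(Color lit, float distanceToCamera,
                     Color absorption, Color waterColor)
{
    float tr = std::exp(-absorption.r * distanceToCamera);
    float tg = std::exp(-absorption.g * distanceToCamera);
    float tb = std::exp(-absorption.b * distanceToCamera);
    return { lit.r * tr + waterColor.r * (1.0f - tr),
             lit.g * tg + waterColor.g * (1.0f - tg),
             lit.b * tb + waterColor.b * (1.0f - tb) };
}
```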
When reviewing the typical features of underwater scenery, plants were sitting among the top elements on the wish list, but their implementation seemed risky. Organic elements like that can be difficult to get right, and getting them wrong could break immersion. They would need to have a believable shape, be well integrated in their environment, and they might even require some subsurface scattering shading model. One day though, we felt inspired to experiment.
Starting from a cube, scaling it, and putting a random number of copies on a spiral around an imaginary trunk: from far enough away it could pass as a long plant with many small branches. After adding a lot of noise to deform the model, it was already starting to look half decent. However, as we tried adding those plants to a scene, we realized the performance tanked rapidly with the number of objects.
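Here is a rough sketch of that construction, assuming a hypothetical makePlant helper that only outputs cube transforms; the noise deformation and the actual mesh generation are out of scope here and only hinted at in a comment.

```cpp
// Sketch: scaled cubes placed on a spiral around an imaginary trunk.
#include <cmath>
#include <cstdlib>
#include <vector>

struct Branch { float x, y, z; float scale; float angle; };

std::vector<Branch> makePlant(int seed, float height)
{
    std::srand(seed);
    std::vector<Branch> branches;
    int count = 40 + std::rand() % 40;            // random number of branches
    for (int i = 0; i < count; ++i)
    {
        float t      = float(i) / count;          // 0 at the base, 1 at the tip
        float angle  = t * 25.0f;                 // wind around the trunk
        float radius = 0.25f * (1.0f - t);        // spiral tightens toward the tip
        Branch b;
        b.x = radius * std::cos(angle);
        b.z = radius * std::sin(angle);
        b.y = t * height;
        b.scale = 0.12f * (1.0f - 0.8f * t);      // branches shrink toward the tip
        b.angle = angle;
        // A real version would also displace each cube with a noise function
        // to break the regularity, as described in the text.
        branches.push_back(b);
    }
    return branches;
}
```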
That poor performance limited far too much the number of plants we could place for the image to look convincing. It turns out our new, unoptimized engine was already hitting a first bottleneck. So we implemented a crude ad hoc frustum culling at the last minute (in the final version a proper culling is used), allowing the dense bushes visible in the demo.
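The demo's actual culling is not documented here, but a crude frustum test can be as simple as checking a bounding sphere per plant against the six frustum planes. A sketch, with the plane extraction from the camera matrices omitted:

```cpp
// Sketch: cull a bounding sphere that lies entirely behind any frustum plane.
// Planes are stored as ax + by + cz + d = 0 with normals pointing inside.
struct Plane  { float a, b, c, d; };
struct Sphere { float x, y, z, radius; };

bool isVisible(const Sphere& s, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i)
    {
        float dist = frustum[i].a * s.x + frustum[i].b * s.y
                   + frustum[i].c * s.z + frustum[i].d;
        if (dist < -s.radius)
            return false;   // completely outside this plane: cull
    }
    return true;            // inside or intersecting all planes: draw it
}
```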
With appropriate density and sizes (patches with a normal distribution), and the details taken care of by the dim lighting, it was starting to look interesting. Experimenting more, we tried to animate the plants: a noise function to modulate the intensity of an imaginary underwater stream, an inverse exponential function to make the plants bend, and a sine so their tips would swirl in the stream. Doodling some more, we stumbled upon the money shot: the submersible casting a light through the bushes, drawing shadow patterns on the seafloor as it passed off camera.
The vegetation casting shadow patterns on the seafloor. Particles are the final subtle touch. Pay close attention to any real underwater footage and you will notice all sorts of suspended matter. Stop paying attention and it disappears. We tuned particles to be barely noticeable, preventing them from getting in the way. Yet they give a sense of volume filled with a tangible medium, and help sell the look.
The technical side is fairly straightforward: in Immersion, particles are just instanced quads with a translucent material. The rendering order problem due to translucency was simply avoided by setting the position along one axis according to the instance id, so the draw order of the instances matches their depth order.
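A sketch of that ordering trick, assuming the depth axis is simply z and a hypothetical makeParticles helper: because instanced quads are rendered in instance-id order, deriving the depth from the id yields back-to-front drawing without any sorting.

```cpp
// Sketch: per-instance positions where the instance id decides the depth.
#include <vector>

struct ParticleInstance { float x, y, z; };

std::vector<ParticleInstance> makeParticles(int count, float zNear, float zFar)
{
    std::vector<ParticleInstance> particles(count);
    for (int id = 0; id < count; ++id)
    {
        float t = (id + 0.5f) / count;
        particles[id].z = zFar + (zNear - zFar) * t;   // id 0 is the farthest instance
        // x and y could come from a hash of the id; kept at 0 in this sketch.
        particles[id].x = 0.0f;
        particles[id].y = 0.0f;
    }
    return particles;   // drawn with one instanced draw call, back to front
}
```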