Bit Depth Confusion
I do not understand why all maps are generated at 8 bits, regardless of the source bit depth.
For example, if I feed in a 32-bit floating-point normal map, I expect to get a displacement map in 32-bit float. But in ShaderMap I get a quantized integer map even if I save it to a 32-bit EXR file. The typical stepping artifacts make it obvious that the generated data is integer. What's the point of writing integer data into a 32-bit float EXR? Maybe I missed something in the settings?
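To illustrate what I mean by "quantized even though the file is float": here is a minimal sketch (with synthetic data, not ShaderMap output) showing that data rounded to 8-bit precision and then stored as float32 still has at most 256 distinct values, which is exactly what produces the visible stepping:

```python
import numpy as np

# Synthetic example: a smooth float32 field vs. the same field after
# 8-bit quantization that is then re-stored as float32 (as a 32-bit EXR would).
rng = np.random.default_rng(0)
true_float = rng.random((256, 256)).astype(np.float32)

# Round to 256 levels, keep float32 dtype -- the file is "float", the data is not.
quantized = (np.round(true_float * 255) / 255).astype(np.float32)

print(len(np.unique(true_float)))  # many thousands of distinct values
print(len(np.unique(quantized)))   # at most 256 distinct values -> stepping
```

Counting the unique values in a saved EXR the same way is how you can confirm whether the generator actually worked at 8-bit precision internally.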
Is there any way to control the map-generation bit depth (8/16/32-bit int or float)? It would also be nice to have a feature to choose the output range for displacement/normal maps: -1 to 1 for float EXRs and normalized 0 to 1 for integer formats (PNG, TIFF, etc.).