2D to 3D Conversion of the FACE on the SHROUD

BY BERNARDO GALMARINI

ANALYSIS OF THE HEAD
Methodology for adapting the 2D-to-3D conversion process to parameters based on the grayscale data in the image.

INTRODUCTION

In the stereoscopic 2D-to-3D conversion of the Enrie photographs of the Shroud of Turin, I have drawn on my extensive experience with 2D-to-3D conversions of photographs. The idea behind this project was to adapt this kind of sculpting of a photograph to more precise scientific parameters present in the Shroud image. These limits mark the boundaries of artistic subjectivity, transforming the whole process into a more objective one. The Shroud image contains gray-scale information, and there is a correlation between the gray values and the depth values: the image intensity varies inversely with the cloth-to-body distance (if we suppose that the image was produced by some kind of radiation onto a fabric that lies perpendicular to the body and completely “flat”). But in any case, no matter how the image was formed, the perfect correlation between the gray values and the position of the body along the Z axis exists, and that is a fact.
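
As a purely illustrative sketch (not the software actually used in this project), the gray-value-to-depth relation described above can be written in a few lines of Python; the assumption, as stated, is that brighter pixels correspond to a smaller cloth-to-body distance, and the file name and depth range are hypothetical.

    # Illustrative sketch only: map the grayscale intensity of the photograph
    # to a relative depth map, assuming brighter = closer to the cloth.
    import numpy as np
    from PIL import Image

    def depth_from_grayscale(path, near=0.0, far=1.0):
        """Map pixel intensity (0-255) to a relative position along the Z axis."""
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
        # Brighter pixel -> body closer to the cloth -> smaller depth value.
        return far - gray * (far - near)

    # Hypothetical file name, for illustration only.
    # depth = depth_from_grayscale("enrie_face_photograph.png")
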
This information is unique to this photographic image. There is no other photograph of a real-world subject that carries this kind of 3D information.
The 3D image based only on the grayscale information is an approximation of what the Man in the image on the Shroud looked like, because the grayscale information is disturbed by “extraneous information”, e.g. blood contaminants, greases, aging and damage to the linen.
Earlier it could have yielded a true-to-life image, if this same method could have been applied 1500 years ago, when the Shroud was in a better state of conservation, or in its first year of existence, when the gray-scale information was much more faithful; but that, of course, we will never know. So the procedure was to use the gray-scale information as the basis and then to “sculpt” (using my experience in this field) the facial image, but only in the aforementioned areas obscured by “extraneous information”. Knowing that on average the gray-scale represents a human face or body, it is logical to exclude from this study the small areas of interference by “extraneous information”.


METHODOLOGY

The normal method is to “sculpt” the original face until it has the right volume, meaning that it matches, at every point, the gray-scale matrix laid on top of the original after correcting the areas obscured by “extraneous information”. This proved to be impossible across 4 simultaneous conversions (or master camera views from different angles) while judging by the naked eye alone. So instead I attached a “mask”, based on the gray-scale map, to the artistic version after erasing the deformed areas from the latter. Subsequently I corrected and filled in the image-free areas of these “masks” (which appear as “holes” in the 3D image), measuring pixel by pixel and attachment by attachment in the 4 successive conversions. This is comparable to the restoration process used to “mask” insect holes in wood, for example. The point is that the four “masks” used are different, and so are their “holes”, which I filled in such a way that the same patches were used in all of them. And these patches (being the same) had to be transformed according to the virtual camera view and virtual lens I was working with, so that they look right in stereo view. This implies that there could not be any pixel shift or pixel displacement between the patches from one camera view to the next, because the brain detects an error of even one pixel, and that causes what is called “retinal rivalry”: confusion caused by the conflict between the images seen by the two retinas.
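
The requirement that the same patch be reused in every camera view, displaced only by that view's own parallax, can be sketched roughly as follows; this is a toy illustration under assumed values, not the actual conversion software, and the warp and eye offsets are simplifications.

    # Toy sketch: insert one and the same depth patch into the "hole", then
    # render each of the 4 master views from the repaired depth map, so that
    # no stray pixel shift between views can trigger "retinal rivalry".
    import numpy as np

    def render_view(depth, texture, eye_offset, scale=30.0):
        """Shift each pixel horizontally in proportion to its closeness (toy warp)."""
        h, w = depth.shape
        out = np.zeros_like(texture)
        for y in range(h):
            for x in range(w):
                dx = int(round(eye_offset * scale * (1.0 - depth[y, x])))
                if 0 <= x + dx < w:
                    out[y, x + dx] = texture[y, x]
        return out

    def patch_all_views(depth, texture, hole_mask, patch_depth, eye_offsets):
        """Apply the SAME patch to the hole once, then warp per camera view."""
        repaired = np.where(hole_mask, patch_depth, depth)
        return [render_view(repaired, texture, e) for e in eye_offsets]

    # Four master views at illustrative horizontal offsets:
    # views = patch_all_views(depth, gray, mask, patch, [-1.5, -0.5, 0.5, 1.5])
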
See Figures 1 to 6 for the different stages in this process (Figures 4 and 5 form a stereoscopic pair):

Fig. 1

Fig. 2

Fig. 3

Fig. 4

Fig. 5

Fig. 6

THE PHOTOS ARE ANAGLYPHS, SO USE YOUR 3D GLASSES!!!!

Figure 1 shows the FACE anaglyph with the grayscale, both without and with “softening”.
Figure 2 shows the FACE anaglyph without “softening” (= the 3D grayscale).
Figure 3 shows the FACE anaglyph “softened”.
Figure 4 shows the FACE with the mask or matrix in PARALLEL view (eyes are parallel).
Figure 5 shows the FACE with the mask or matrix in CROSS-EYED view.
Figure 6 shows the MASTER-ANAGLYPH.

Note to Figures 4 and 5: in Figure 4 we can see a difference between the “artistic” conversion (the image that contains the red points) and the conversion it should coincide with, namely the version based on the “depth-map”. The artistic version should be adjusted until no separation (along the Z axis) remains between the red points (key points) and the surface of the face. Once this coincidence between the artistic conversion of the face and the “key points” of the scientific conversion (based on the “depth-map”, i.e. the grayscale) has been achieved, we have a conversion with a much more human aspect and with a precision dictated by the objective, scientific parameters. This specific way of doing the 2D-to-3D conversion has at least twice as much depth, and therefore twice as much 3D effect, as the first conversions that we accomplished in 2005.

This translates into two different qualities (a small numeric illustration follows the list):
1) 100% more depth with the same visual angle as before.
2) The same 3D depth over double the visual angle of before (when you move to the side you will notice that the head in the image rotates through more degrees).
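
As a rough numeric illustration of this trade-off, using the standard stereoscopic relation between screen parallax and perceived depth (the viewing distance and eye separation below are assumed, generic values, not measurements from this project):

    # Rough illustration: doubling the screen parallax roughly doubles the
    # perceived depth behind the screen (for small parallax values), or the
    # same depth can instead be spread over double the viewing angle.
    def perceived_depth(parallax_mm, eye_sep_mm=65.0, view_dist_mm=600.0):
        """Apparent distance behind the screen plane for a given screen parallax."""
        return view_dist_mm * parallax_mm / (eye_sep_mm - parallax_mm)

    base = perceived_depth(2.0)      # ~19 mm behind the screen
    doubled = perceived_depth(4.0)   # ~39 mm, roughly twice as deep
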

In the end this particular conversion process is a hybrid that is not only similar to the version of the VP-8 Analyzer, but also looks much more human than the previous conversions.
I thought at first that in this more scientific conversion the hidden information in the Shroud (the 3D information in the gray-scale) would be a nuisance or an obstacle to producing a human representation of the face, and that I would have to struggle against it continuously. Strangely enough, this hidden scientific information in the Shroud became the key and the basis for this work, reducing my artistic work to merely softening the “holes” and deformities (surely caused by the passage of time) and adapting to what this scientific version commands: filling in and normalizing the “holes” or “dead areas” in the hidden information of the linen. For example, the areas without information in the forehead have been corrected by following the surrounding gray-scale that carries coherent information, with a normal human forehead in mind. This process was helped by the fact that the central zone of the forehead and the bony structure of the eye orbits contain very coherent information, which of course was taken as a guideline.
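
The general idea of filling a “dead area” from the surrounding coherent gray-scale can be imitated with off-the-shelf inpainting; the following sketch uses OpenCV's Telea method purely as an analogy, since the repair described above was done by hand and eye, not by this algorithm.

    # Analogy only: fill masked "dead areas" of the grayscale from the
    # surrounding coherent intensities, as an automatic stand-in for the
    # manual repair described in the text.
    import cv2

    def fill_dead_areas(gray_u8, hole_mask_u8, radius=5):
        """Inpaint masked pixels (mask = 255 where data is missing), 8-bit inputs."""
        return cv2.inpaint(gray_u8, hole_mask_u8, radius, cv2.INPAINT_TELEA)

    # repaired = fill_dead_areas(gray_u8, hole_mask_u8)
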

The final work included positioning the 625 virtual camera shots in a virtual lateral camera array, which was to be the basic information that the Holographic Laboratory in Eindhoven, the Netherlands, needed to produce a Master-Hologram.
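
For illustration only, the layout of such a lateral array of 625 virtual camera positions could be sketched as below; the total baseline length is an invented placeholder, as the actual camera spacing used for the hologram is not given here.

    # Sketch only: 625 evenly spaced horizontal camera offsets, centred on zero.
    import numpy as np

    def lateral_camera_array(n_views=625, total_baseline_mm=200.0):
        """Return the horizontal offset of each virtual camera in the array."""
        return np.linspace(-total_baseline_mm / 2.0, total_baseline_mm / 2.0, n_views)

    offsets = lateral_camera_array()
    # Each offset would drive one rendered view of the converted face,
    # e.g. using the earlier toy warp:
    # frames = [render_view(depth, gray, e / 50.0) for e in offsets]
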