Nov 28, 2021

Texture Mapping on a Sphere

I covered simple texture mapping - affine mapping on a 3D rotated plane - in my previous post. Mapping a texture on a sphere is, of course, more difficult. In this post I present two Python solutions, both of which are less than perfect. You may also want to take a look at the RGB Sphere I wrote about earlier, which has more spherical 3D graphics using polygons - it maps a world map on a sphere as well, with light sourcing, although at a rather low resolution.

As usual, the code can be found in my GitHub repository.

Twins - but not identical

The two mapping methods are shown below.



The first one is based on mapping each and every pixel on the sphere to a flat Mercator map of the Earth, while the second actually rotates a large number (ca. 165 000) of 3D dots, each of which has been assigned a color using the same map. In the first, then, the dots are constant but their colors change; in the second, the colors are constant and the dots move...

The main drawback of the first option is that as the dots are constant, the size of the sphere is constant as well. And the problem with the second is that even that large a number of independent dots is not enough to cover every pixel of the sphere, leaving black dots between the colored ones. See the video to understand this better:


Going Forwards


The second option, rotating a lot of 3D dots, actually has rather little to do with texture mapping - it is a relatively simple exercise in 3D rotation once the dots have been set up with their respective colors.

        # generate 3D nodes for a sphere.

        a = np.pi / 180.0  # for scaling degrees to radians
        R = self.image_mercator_R

        # first generate "traditional" 3D nodes with ~even spacing on sphere surface
        c = int(self.point_nr / 4)  # nr of circles of points needed for half a sphere.
        for i in range(c):
            lat = 90.0 / (2 * c) + 90.0 * (i / c)  # latitude north
            rad = np.cos(lat * a)  # radius scale at this latitude
            p = int(self.point_nr * rad)  # nr of points needed at this latitude
            j = np.arange(p)[:, None]
            long = 360.0 / (2 * p) + 360.0 * (j / p)  # longitudes at this latitude; most at the equator, least close to pole(s)
            x = np.cos(long * a) * rad * self.radius
            y = np.ones((p, 1)) * np.sin(lat * a) * self.radius  # y = latitude is constant
            z = np.sin(long * a) * rad * self.radius
            # add nodes both north and south
            self.nodes = np.vstack((self.nodes, np.hstack((x, y, z)), np.hstack((x, -y, z))))

            # pick colors from image
            if self.image_type == 'Mercator':
                # Mercator mode (https://en.wikipedia.org/wiki/Mercator_projection). Crop at 0.9999 - R defines "maximum" latitude (85 deg)
                lat_coord = np.minimum(0.9999, np.ones((p, 1)) * ((1.0 + R * np.log(np.tan(np.pi / 4.0 + 0.4999 * (lat / 90.0) * np.pi / 2.0)))
                                                                  / 2.0)) * self.image_size[1]
            else:
                # Normal mode: picture is simply wrapped
                lat_coord = np.ones((p, 1)) * (lat / (90.0 * 2.0) + 0.5) * self.image_size[1]

            image_coord = (np.vstack((np.hstack((long / 360.0 * self.image_size[0], lat_coord)),
                                      np.hstack((long / 360.0 * self.image_size[0], self.image_size[1] - lat_coord - 1))))).astype(np.int16)
            self.node_colors = np.hstack((self.node_colors, self.image_array[image_coord[:, 0], image_coord[:, 1]]))

Setting up those 165 000 dots (self.point_nr actually depends on the window height and represents the number of dots on the sphere's equator) as evenly as possible on a sphere involves going through latitudes (lat, based on i in the loop for i in range(c)). The number of points p is small at high latitudes and large close to the equator, to get even spacing; with point_nr = 8, for example, there would be c = 2 latitude circles per hemisphere, at 22.5 degrees with p = 7 points and at 67.5 degrees with p = 3 points. lat and long (which are in degrees) are then simply converted to 3D coordinates x, y, and z; and these are used to add the nodes (dots) of both the northern and the southern (where y is negative) hemispheres.

Then, for each node/dot, a color must be assigned. When using a Mercator map, the conversion of latitude to an image y coordinate is slightly complex; the "normal" mode is simpler and works better for any ordinary image. (Like a photo of your cat, for example. If using such a photo, I recommend making an image with two copies of the photo side by side, acting as the eastern and western hemispheres.) Once the image coordinates have been calculated, the node colors are stored so that there is a color defined for each 3D node.
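
For reference, here is the Mercator latitude conversion on its own, as a small standalone function. The default R value is my assumption: 1/pi makes latitudes of about ±85 degrees land at the image edges, which is what self.image_mercator_R is for.

    import numpy as np

    def mercator_y(lat_deg, image_height, R=1.0 / np.pi):
        # standard Mercator is ln(tan(pi/4 + lat/2)); the 0.4999 and 0.9999
        # factors, as in the code above, keep tan() away from its pole
        # singularity and the result inside the image
        lat_rad = (lat_deg / 90.0) * np.pi / 2.0  # degrees to radians
        y_norm = (1.0 + R * np.log(np.tan(np.pi / 4.0 + 0.4999 * lat_rad))) / 2.0
        return np.minimum(0.9999, y_norm) * image_height

    # the equator lands mid-image; high latitudes stretch toward the edge
    print(mercator_y(np.array([0.0, 45.0, 80.0]), 1024))  # roughly [512, 656, 909]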

The program then simply rotates these nodes and uses the colors mapped above to show the ones which are on the visible side (the "front") of the sphere. (See my earlier postings on 3D rotation if interested.)
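
The plotting itself is not shown in this post, but in essence it boils down to something like this sketch. The function and its names are mine, and I am assuming an orthographic projection and the "negative z faces the viewer" convention used in the flat-nodes code later in this post:

    import numpy as np
    import pygame

    def plot_front_dots(screen, nodes, node_colors, matrix, position):
        # rotate all dots, then plot only those on the visible side of the sphere
        rotated = np.matmul(nodes, matrix)
        front = rotated[:, 2] < 0.0                    # visibility test: z toward the viewer
        x = rotated[front, 0].astype(np.int16) + position[0]
        y = rotated[front, 1].astype(np.int16) + position[1]
        rgb_array = pygame.surfarray.pixels2d(screen)
        rgb_array[x, y] = node_colors[front]           # each dot keeps its precomputed color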

And Backwards


The actual texture mapping is based on a fixed circular area, where each pixel is mapped to an image so that the imaginary sphere appears to be rotating in 3D space.

        for y_val in range(-self.radius + 1, 0):
            rad = int(np.sqrt(self.radius ** 2 - y_val ** 2))  # radius (pixels) at this latitude
            x = np.arange(-rad, rad + 1)[:, None]  # x simply covers all pixels on this line (y = constant)
            y = np.ones((rad * 2 + 1, 1)) * y_val  # y = constant on this line
            z = -np.sqrt(self.radius ** 2 - y_val ** 2 - x ** 2)  # z from sphere formula x**2 + y**2 + z**2 = r**2; negative as facing the viewer
            self.nodes_flat = np.vstack((self.nodes_flat, np.hstack((x, y, z)), np.hstack((x, -y - 1, z))))

        # store as integers, as anyway plotting pixels
        self.nodes_flat_x = (self.nodes_flat[:, 0]).astype(np.int16)
        self.nodes_flat_y = (self.nodes_flat[:, 1]).astype(np.int16)
        # precalculate [Mercator] conversion for each (integer) y and the distance between x's. Add precision by using 4 times the number of radius lines
        flat_y_range = np.arange(-self.radius * 4.0, self.radius * 4.0 + 1)
        self.image_flat_x = (self.image_size[0] / 4.0) / np.maximum(2.0, np.sqrt(self.radius ** 2 - (flat_y_range / 4.0) ** 2))
        R = self.image_mercator_R  # Mercator scale factor, as in the forwards code above
        if self.image_type == 'Mercator':
            self.image_flat_y = (np.minimum(0.9999, np.maximum(0.0, ((1.0 + R * np.log(np.tan(np.pi / 4.0 + 0.49999 * (np.arccos(flat_y_range /
                        (4.0 * self.radius)) - np.pi / 2.0)))) / 2.0))) * self.image_size[1]).astype(np.int32)
        else:
            self.image_flat_y = (np.minimum(0.9999, np.arctan2(flat_y_range, np.sqrt((self.radius * 4.0) ** 2 - flat_y_range ** 2)) / np.pi + 0.5)
                                 * self.image_size[1]).astype(np.int32)

        self.rotated_nodes_flat = np.zeros((np.shape(self.nodes_flat)[0], 4), dtype=np.float64)

This time, no longitudes and latitudes, but pixels to start with! That means going through half a circle vertically (y_val) and, on each y_val horizontal line, every x. Then, a z is calculated from these so that they (x, y and z) again represent 3D coordinates on a sphere, but in such a way that when plotted in 2D they will fill a circle, each dot representing one and exactly one pixel. These are stored as nodes_flat.
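
A quick way to convince yourself of that one-dot-per-pixel property is to count the dots the loop generates and compare to the pixel area of the circle; this little standalone check (with a made-up radius) agrees closely:

    import numpy as np

    # sanity check: the flat-nodes loop should produce about one dot
    # per pixel of the circle it is meant to fill
    radius = 100  # a stand-in value; the program derives its radius from the window
    count = 0
    for y_val in range(-radius + 1, 0):
        rad = int(np.sqrt(radius ** 2 - y_val ** 2))  # half-width of this pixel line
        count += 2 * (2 * rad + 1)                    # this line plus its mirror (-y - 1)
    print(count, int(np.pi * radius ** 2))            # dot count vs. circle area in pixels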

Then, image mapping data are pre-calculated and stored, as log, arccos, arctan etc. are rather cumbersome to calculate on every frame. This will help with the mapping later.
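
The pattern is worth showing on its own: evaluate the expensive formula once for every possible (quarter-pixel) y, and then, per frame, replace the transcendental functions with a single integer lookup. A standalone sketch with made-up sizes, using the simpler "normal" mode conversion from the code above:

    import numpy as np

    radius, height = 200, 1024  # stand-in values, not the program's
    flat_y_range = np.arange(-radius * 4.0, radius * 4.0 + 1)

    # evaluate the "normal" mode y conversion once for the whole range
    image_flat_y = (np.minimum(0.9999, np.arctan2(flat_y_range,
                    np.sqrt((radius * 4.0) ** 2 - flat_y_range ** 2)) / np.pi + 0.5)
                    * height).astype(np.int32)

    # per frame, one integer lookup replaces an arctan per pixel
    rotated_y = np.array([-150.3, 0.0, 199.9])  # hypothetical rotated node y values
    print(image_flat_y[(4.0 * rotated_y).astype(np.int16) + 4 * radius])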

Now, normal 3D rotation - as in Going Forwards above - simply takes a stationary object and rotates its nodes around all three axes, resulting in a new set of nodes in 3D space. The Backwards part here means that we already know the end result, i.e. the rotated nodes in 3D space - the nodes_flat above - and instead need to know where they came from. As the nodes_flat are defined so that they neatly fill the circle we are using to plot the sphere, we can nicely map the texture this way.

    def rotate(self):

        # rotate object
        self.angles += self.rotate_speed
        matrix = self.rotate_matrix(self.angles)

        if self.mode == 1:
            # normal 3D rotation
            self.rotated_nodes[:, 0:3] = np.matmul(self.nodes, matrix) * self.size
        else:
            # invert rotation matrix - "backwards rotation" of the required 2D nodes (flat nodes) to "original" to get colors
            matrix_inv = np.linalg.inv(matrix)
            self.rotated_nodes_flat[:, 0:3] = np.matmul(self.nodes_flat, matrix_inv)

        self.measure_time("rotate")

    def rotate_matrix(self, angles):

        # define rotation matrix for given angles
        (sx, sy, sz) = np.sin(angles)
        (cx, cy, cz) = np.cos(angles)

        # build a matrix for X, Y, Z rotation (in that order, see Wikipedia: Euler angles).
        return np.array([[cy * cz               , -cy * sz              , sy      ],
                         [cx * sz + cz * sx * sy, cx * cz - sx * sy * sz, -cy * sx],
                         [sx * sz - cx * cz * sy, cz * sx + cx * sy * sz, cx * cy ]])

The difference in rotation is deceptively small. Instead of using the usual rotation matrix directly, I invert it first using NumPy's linalg.inv. (If your matrix algebra needs refreshing, you can think of multiplying something by the inverse of a matrix in the same way as multiplying by 1/x instead of x - it's a lot like dividing instead of multiplying.)
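
A tiny standalone check shows why this works: rotating forwards and then multiplying by the inverse lands you back exactly where you started. (As a side note, a pure rotation matrix is orthogonal, so its transpose equals its inverse - matrix.T would do the same job as np.linalg.inv(matrix), just cheaper.)

    import numpy as np

    def rotate_matrix(angles):
        # the same X, Y, Z rotation matrix as in the code above
        (sx, sy, sz) = np.sin(angles)
        (cx, cy, cz) = np.cos(angles)
        return np.array([[cy * cz               , -cy * sz              , sy      ],
                         [cx * sz + cz * sx * sy, cx * cz - sx * sy * sz, -cy * sx],
                         [sx * sz - cx * cz * sy, cz * sx + cx * sy * sz, cx * cy ]])

    matrix = rotate_matrix(np.array([0.3, -0.7, 1.1]))
    nodes = np.array([[1.0, 2.0, 3.0], [-4.0, 5.0, -6.0]])  # any 3D points will do

    rotated = np.matmul(nodes, matrix)                      # forwards
    recovered = np.matmul(rotated, np.linalg.inv(matrix))   # backwards
    print(np.allclose(recovered, nodes))                    # True
    print(np.allclose(np.linalg.inv(matrix), matrix.T))     # True: rotation matrices are orthogonal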

Then, when we know where the end result nodes came from, we still need to map them to the image.

        rgb_array = pygame.surfarray.pixels2d(self.screen)
        rgb_array[self.nodes_flat_x + self.position[0], self.nodes_flat_y + self.position[1]] = self.image_array[
            ((np.arctan2(self.rotated_nodes_flat[:, 0], self.rotated_nodes_flat[:, 2]) / (2.0 * np.pi) + 0.5)
             * self.image_size[0]).astype(np.int16),  # x calculated
            # alternative X using a precalculated coordinate - but too complex to help much
            # (self.image_flat_x[(4.0 * self.rotated_nodes_flat[:, 1]).astype(np.int16) + 4 * self.radius] * self.rotated_nodes_flat[:, 0]
            #                    + (np.sign(self.rotated_nodes_flat[:, 2]) + 2) * self.image_size[0] / 4).astype(np.int16),  # x using precalcs
            # y calculated from Mercator - rather complex and hence slow
            # (np.minimum(0.9999, np.maximum(0.0, ((1.0 + R * np.log(np.tan(np.pi / 4 + 0.4999 * (np.arccos(self.rotated_nodes_flat[:, 1] / self.radius)
            #                 - np.pi / 2)))) / 2.0))) * self.image_size[1]).astype(np.int32)  # y calculated
            self.image_flat_y[(4.0 * self.rotated_nodes_flat[:, 1]).astype(np.int16) + 4 * self.radius]  # y using precalcs
           ]

That is actually rather simple, or at least short. I use pygame's surfarray.pixels2d and set the nodes_flat pixels, centered around self.position, to a color found in self.image_array (another surfarray.pixels2d based on the map image). The version used above calculates the image x coordinate using self.rotated_nodes_flat, and picks the y coordinate from the pre-calculated self.image_flat_y data described above.
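
The x coordinate is really the node's longitude: np.arctan2 gives the quadrant-aware angle around the vertical axis, which just needs to be normalized to image coordinates. As a standalone sketch (the function is my own wrapper, not from the program):

    import numpy as np

    def image_x(node_x, node_z, image_width):
        # np.arctan2(x, z) is the longitude in (-pi, pi]; dividing by 2*pi
        # and adding 0.5 normalizes it to [0, 1), as in the code above
        return ((np.arctan2(node_x, node_z) / (2.0 * np.pi) + 0.5) * image_width).astype(np.int16)

    # a node at the back of the sphere (z > 0, x = 0) maps to the middle of the
    # image, and one 90 degrees to its side a quarter of the image width away
    print(image_x(np.array([0.0, 1.0]), np.array([1.0, 0.0]), 2048))  # [1024 1536]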

The commented-out (#) alternative uses pre-calculation for x as well, but it is too complex to really help; and there is also a version which calculates the Mercator y directly, but that is quite a bit slower than the pre-calculated alternative.

And that's all I have to say about that.
