Applying full color image texture to create an interactive earth globe

TLDR: Here is the final result!

EDIT: I ended up doing this process for all the planets. I also added rings to Saturn (the exact same concept as the planets, just with a surface of revolution rather than a sphere). Kept Earth for scale!

The implementation:

For a project I’m working on, I needed a textured globe. This is something I’ve done a million times in MATLAB but I’m trying to move away from MATLAB. I was discouraged initially to see forum posts like this stating things like “Unlike matlab you cannot map a multicolor (rgb) image”. This chart studio post showed how to map 2d black and white (or one hue) images onto surfaces, and reading through it gave me an idea.

I wrote a quick script which maps an RGB image to a grayscale image, but does so such that each range of colors occupies only a specific range of intensities. For my Earth example, I mapped anything blue (ocean-like) to <0.1, anything green to 0.1-0.5, anything tan to 0.5-0.9, and anything white to >0.9. That image looked like this:
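The remapping script itself isn't shown in the post, but the idea can be sketched roughly like this; the hue tests and band edges below are illustrative guesses, not the original code:

```python
import numpy as np

def rgb_to_intensity(img):
    """Pack an RGB image (H, W, 3, floats in [0, 1]) into one intensity
    channel, giving each hue family its own band: blues < 0.1, greens in
    0.1-0.5, tans in 0.5-0.9, whites > 0.9.  The hue tests below are
    illustrative guesses, not the original script."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    out = np.empty(img.shape[:2])
    white = (r > 0.8) & (g > 0.8) & (b > 0.8)   # clouds / ice
    blue = ~white & (b >= r) & (b >= g)         # ocean
    green = ~white & ~blue & (g >= r)           # vegetation
    tan = ~white & ~blue & ~green               # desert / rock
    out[blue] = 0.05 * b[blue]                  # compress all blues near 0
    out[green] = 0.1 + 0.4 * g[green]           # spread greens over 0.1-0.5
    out[tan] = 0.5 + 0.4 * r[tan]               # spread tans over 0.5-0.9
    out[white] = 0.9 + 0.1 * r[white]           # whites near 1
    return out
```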

Obviously, that looks a bit wonky! But then I just wrote a quick colormap like this:

    colorscale = [[0.0, 'rgb(30, 59, 117)'],
                  [0.1, 'rgb(46, 68, 21)'],
                  [0.2, 'rgb(74, 96, 28)'],
                  [0.3, 'rgb(115, 141, 90)'],
                  [0.4, 'rgb(122, 126, 75)'],
                  [0.6, 'rgb(122, 126, 75)'],
                  [0.7, 'rgb(141, 115, 96)'],
                  [0.8, 'rgb(223, 197, 170)'],
                  [0.9, 'rgb(237, 214, 183)'],
                  [1.0, 'rgb(255, 255, 255)']]

And it worked perfectly! As for mapping it onto a sphere, I used this rather simple implementation:

    import numpy as np
    import plotly.graph_objects as go
    from PIL import Image

    def sphere(size, texture):
        N_lat = int(texture.shape[0])
        N_lon = int(texture.shape[1])
        theta = np.linspace(0, 2*np.pi, N_lat)
        phi = np.linspace(0, np.pi, N_lon)
        # Set up coordinates for points on the sphere
        x0 = size * np.outer(np.cos(theta), np.sin(phi))
        y0 = size * np.outer(np.sin(theta), np.sin(phi))
        z0 = size * np.outer(np.ones(N_lat), np.cos(phi))
        return x0, y0, z0

    # Load the intensity-mapped texture as a 2D array
    texture = np.asarray(Image.open('earth.jpg')).T

    radius = 1.0
    x, y, z = sphere(radius, texture)
    surf = go.Surface(x=x, y=y, z=z,
                      surfacecolor=texture,
                      colorscale=colorscale)
    layout = go.Layout(scene=dict(aspectratio=dict(x=1, y=1, z=1)))
    fig = go.Figure(data=[surf], layout=layout)

Anyways, I figured I’d share as I’ve seen other people interested in texturing shapes in plotly and I thought this was kind of a neat trick for achieving that. Obviously you lose some color depth, but for my purposes (I needed 3d shape models of all the planets), this worked great!

Edit 2:

In general, this would work pretty poorly: an 8-bit color image (256 values per RGB channel) can represent 16,777,216 colors, yet our grayscale image can only represent 256 unique intensity values. So we need some kind of subsampling of the full colorspace to make the mapping work.

I believe, however, that this can be acceptable with some tricks. The Earth globe above, for example, while not perfect, is certainly a lot better than you’d get by naively subsampling 16.7M colors down to 256 values. That’s because we didn’t subsample the color space evenly, because we didn’t need to. The blues of the ocean didn’t need much space to store, so we could map all of the “blues” into a small portion of our intensity map. The same goes for high-value colors (near white), which we could condense down as well. Then, by identifying two hues that represented the rest of the image (hues of red and green), we could store those hues in much more detail, allowing us to recover the full image much more accurately.

I believe this could be automated for any image. By looking at the statistics of how various hues appear in the image, we could probably automate a way of identifying which colors can be compressed down and which can’t, then subsample and map them accordingly. Doing it this way, though, would require that such an automation tool also construct an accompanying colormap (in order to “unpack” the intensity texture back into RGB). That shouldn’t be too difficult, but it adds an additional level of complexity.
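One possible shape for that automation, sketched below; everything here, including the frequency-proportional band widths and the function name, is an assumption about how such a tool might work, not the author's method:

```python
import numpy as np

def build_intensity_map(img, n_bins=8):
    """Hypothetical automation of the hue packing: quantize each channel,
    count how often each quantized color occurs, then give frequent colors
    proportionally wider intensity bands.  Returns the intensity image and
    a stepwise Plotly-style colorscale to 'unpack' it back into RGB."""
    levels = np.linspace(0, 1, n_bins)
    # snap every channel value to the nearest quantized level
    q = levels[np.argmin(np.abs(img[..., None] - levels), axis=-1)]
    colors, inverse, counts = np.unique(
        q.reshape(-1, 3), axis=0, return_inverse=True, return_counts=True)
    # band width proportional to pixel count: common colors get more room
    edges = np.concatenate([[0.0], np.cumsum(counts) / counts.sum()])
    centers = 0.5 * (edges[:-1] + edges[1:])
    intensity = centers[inverse.ravel()].reshape(img.shape[:2])
    # stepwise colorscale: each color occupies its whole band
    colorscale = []
    for lo, hi, col in zip(edges[:-1], edges[1:], colors):
        rgb = 'rgb({}, {}, {})'.format(*(255 * col).astype(int))
        colorscale += [[float(lo), rgb], [float(hi), rgb]]
    return intensity, colorscale
```

The intensity image can then be passed as `surfacecolor` together with the returned colorscale.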

I might give that a shot at some point in the next few weeks with some textures. For my project, I ended up hand-tuning the colormaps and constructing the intensity-map textures, since I only needed to do it for each planet and it didn’t take too long (especially since most planets are a single hue to begin with!). But I think this might be a cool thing to look into more in the future!


hi Chris @crgnam
:wave: Welcome and thank you for sharing this awesome trick with the community.

1 Like


Indeed … very cool … especially as you then morphed it into a 3D spherical object.

But my next question(s):
a) Do you have colorplots of the other planets that you can show off (e.g. on Chart Studio)?
b) WHY were you creating these charts … i.e. for what purpose / what audience?

We’re getting into the use of Plotly for astronomy visualizations … so my interest is piqued, as this is not an everyday thing.


I’m a PhD candidate in aerospace engineering. I’m getting close to defending, and I’m working on a python API (named “ceres”) that will serve as an opensource implementation of my dissertation.

I actually started working on ceres while I was still in undergrad, about 6 years ago now… though it didn’t get the official name “ceres” until about 2 years ago. It was originally written as a MATLAB toolbox, but I found that limiting both in functionality and in its ability to be distributed to the public. So I decided a few months ago to start implementing it in python.

I’m getting close (hopefully by the end of the month) to a v0.1 release. As part of that, I really wanted easy tools for visualization (data visualization, maps, orbits, etc.). Obviously I don’t need textured planets, but it only took me a few hours to figure out, and I think it adds a lot of “polish”!

It also has an interface with Blender’s python API for making more proper animations that way. But that’s obviously an offline process.

Oh, and to answer part a of your question… I’m brand new to Plotly (I only started using it last Friday). How would I go about sharing them? (I’m unfamiliar with what Chart Studio is.) As I said above, demos of the planets will be in the documentation, and the API for actually plotting them yourselves will be distributed publicly by the end of the month. I actually just tested last night how to package the textures to be distributed with PyPI. I’m just not quite at the point of officially releasing it.


polish is good :nail_care: … and so is polish :poland:

so yeah … go beyond the “need” … but who doesn’t “want” textured planets, eh?

especially if you figured out how to do it quickly / easily.

I’m interested in the animation angle … blender is one avenue … that would be interesting to see (if you check out our LinkedIn posts you’ll see some Christmas tree “graphs” in 2D and 3D … one guy morphed it into a blender animation).

Brand new to plotly … want to share those images … this could be worth show and tell … I’ll connect w you over on LI and pick up the conversation thread :slight_smile:

1 Like

Very interesting idea!!! <3
If colormapping were performed for surfaces, just like for heatmaps, then we could map an image as a texture onto surfaces.
Three years ago, when px.imshow() didn’t exist yet, I tried to learn from an image, via scikit-learn KMeans, an array z_data and a colorscale, to reproduce the image as a heatmap. The result was unexpectedly good.

Yesterday, after seeing your example, I tried the same method to learn from an RGB image the surfacecolor and the colorscale for a surface.
But the result is far from acceptable.
For comparison:

  • this is the original image:


  • and this one is the corresponding heatmap, plotted for z_data and colorscale returned by the function image2zvals defined in the Jupyter Notebook from my chart studio account (link posted above).
z_data, colorscale = image2zvals(img, n_colors=64, n_training_pixels=5000)

The number of colors in the colorscale may seem big compared to the usual number of colors in a colorscale, but it isn’t. The Julia version, PlotlyJS.jl, passes colorscales of 256 colors for heatmaps, surfaces, etc., similar to matplotlib colormaps.

Now, trying to map the same z_data as a texture (surfacecolor) for a surface, we get a surface with artifacts. It seems that interpolation isn’t performed as it is for heatmaps :frowning:
Maybe @alexcjohnson and/or @archmoj could explain this odd behaviour.

The code for surface:

    import numpy as np
    import plotly.graph_objects as go

    r, c = img1.shape[:2]  # r x c is the image resolution
    x = np.linspace(-np.pi, np.pi, c)
    y = np.linspace(-np.pi, np.pi, r)
    x, y = np.meshgrid(x, y)
    z = 0.5*np.cos(x/2) + 0.2*np.sin(y/4)
    z_data, pl_colorscale = image2zvals(img1, n_colors=64, n_training_pixels=5000)
    fig2 = go.Figure(go.Surface(x=x, y=y, z=z, surfacecolor=z_data,
                                colorscale=pl_colorscale))
    fig2.update_layout(width=650, height=650, font_size=11,
                       scene=dict(xaxis_visible=False, yaxis_visible=False, zaxis_visible=False,
                                  aspectmode="data", camera_eye=dict(x=2.5, y=2.5, z=1.5)))

As we decrease the number of colors (in fact k, with 2^k colors for KMeans), we get a better plot for the surface:

Number of colors:




If this color-interpolation issue were solved then we could map any image onto a surface.


+1 for this! I would absolutely use this. Thank you all.

1 Like

Interesting indeed! We never implemented the zsmooth attribute for surface - the only mode available is the equivalent of zsmooth='best'. In this mode, at every pixel we interpolate the data values smoothly between the four corners of the surface grid cell, then map that interpolated value to the colorscale. That means any grid cell where all four corners have the same value will have a uniform color, as desired, but in any cell whose corners have different values we’ll visit every color on the colorscale between the values at those corners. Given that the colorscales you’ve created here aren’t ordered (and generally cannot be ordered), you end up passing through a lot of colors you didn’t intend.

When you made the heatmap version you explicitly set zsmooth=False, but even zsmooth='fast' would have been fine. In that mode for a heatmap we allow the browser to interpolate, which has the effect of first mapping the corners to colors and then interpolating between them in color space. Anyway, surface traces don’t have this; it would be a nice addition (so please make an issue in the plotly.js repo where we can discuss and prioritize this!) but because it needs to be implemented on the GPU it may be a bit tricky.
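The value-first vs color-first distinction can be demonstrated numerically. This is a toy sketch, not plotly’s actual GPU code, using a deliberately “unordered” colorscale whose two end values both map to blue:

```python
import numpy as np

# A deliberately "unordered" toy colorscale: 0 -> blue, 0.5 -> red, 1 -> blue.
stops = np.array([0.0, 0.5, 1.0])
colors = np.array([[0.0, 0.0, 255.0],    # blue
                   [255.0, 0.0, 0.0],    # red
                   [0.0, 0.0, 255.0]])   # blue again

def map_value(v):
    # piecewise-linear lookup of a value on the colorscale, per channel
    return np.array([np.interp(v, stops, colors[:, ch]) for ch in range(3)])

a, b = 0.0, 1.0  # two adjacent grid corners; both map to blue

# surface ('best'-like) behaviour: interpolate the VALUE, then map it
mid_value_first = map_value((a + b) / 2)             # value 0.5 -> pure red

# heatmap zsmooth='fast' behaviour: map to colors, then interpolate them
mid_color_first = (map_value(a) + map_value(b)) / 2  # stays blue
```

Between two blue corners, interpolating the value first drags the midpoint through red, which is exactly the artifact seen on the surface.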

The only option that occurs to me to do this today is to switch to mesh3d traces: plotly-mock-viewer. The “vertex intensity” variant would have the same problem you encountered, but the other three should all work - “cell intensity” would use a colorscale mapping like you’ve done here, and the other two explicitly provide colors. This is a good deal heavier of a solution than using surface traces, but it comes with correspondingly more flexibility.


Thanks for these details, @alexcjohnson!

1 Like

I succeeded in mapping the image onto a surface, as @alexcjohnson suggested. Namely, I triangulated the surface with a regular triangulation and defined a Mesh3d with intensitymode="cell". It’s a bit tricky to associate a value from z_data with each cell (triangular face) to get the intensity list.
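One way the per-cell association can be done is sketched below; this is a reconstruction under assumed conventions (the function name and the choice of averaging the three corner values are mine), not necessarily the code used:

```python
import numpy as np

def grid_to_mesh(r, c, z_data):
    """Triangulate an r x c grid of vertices into 2*(r-1)*(c-1) triangles
    and assign each triangle an intensity, here the mean of its three
    corner values from z_data.  Returns flat index arrays i, j, k for
    go.Mesh3d(..., intensity=intensity, intensitymode='cell')."""
    idx = np.arange(r * c).reshape(r, c)
    # corner indices of every grid cell
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    # split each cell into two triangles
    i = np.concatenate([tl, tr])
    j = np.concatenate([bl, bl])
    k = np.concatenate([tr, br])
    zf = np.asarray(z_data).ravel()
    intensity = (zf[i] + zf[j] + zf[k]) / 3.0
    return i, j, k, intensity
```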

Colorscale with 32 colors:


Thanks a lot for sharing - that is exactly what I was looking for.
However, when I ran the code on my laptop with an example image from Google, it was lagging too much. The image size I used was small, so I don’t know what the problem was.
Did you face the same issue?


Thank you for sharing this very useful info. Could you explain a little more about how you got the intensity list? I really appreciate your help! Thanks!

Here you can find details on how the intensity is defined and assigned to each cell:

Appreciate! Exactly what I need!