How to plot the contour of a Plotly 3D Mesh?

I have been trying to plot the contour of a 3D mesh, and I was able to get the vertices at the edge of the mesh. However, when I try to connect these vertices with a line, I get really bad results.



The first image shows the set of points representing the vertices. The second image shows my attempt at connecting the edge vertices. The third image shows my mesh with the failed plotted edge vertices. Is there a way to properly connect all the edge vertices with a line to create the contour for my mesh?

The following are the x, y, and z coordinates of the edge vertices:

x = [-26.68821335, -26.60249519, -26.95862198, -26.58558273, -26.84652328,
     -26.04704857, -26.98462296, -27.25349998, -27.24040604, -24.06988525,
     -24.39101601, -24.81681442, -25.26436043, -23.57262802, -25.47171783,
     -25.85359192, -25.89053535, -26.11143112, -26.43889046, -26.29651451,
     -23.00212479, -26.72643661, -26.90960121, -27.4543705, -27.78765678,
     -28.10663223, -28.45610428, -28.89383125, -29.47120667, -29.71392441,
     -30.28079987, -30.37588501, -30.75014877, -23.64927864, -25.55357742,
     -26.20693016, -31.03199387, -22.2843647, -22.96079636, -23.18783379,
     -26.75884247, -31.30900002, -22.18011665, -22.45661926, -22.68852615,
     -31.17599869, -22.21070862, -22.22953606, -27.8441143, -30.98797035,
     -28.36685371, -30.91625023, -28.84769821, -29.03984642, -29.26161957,
     -29.51830292, -30.89692116, -29.97975349, -30.76473808, -30.87370872]

y = [9.04875755, 8.68356895, 8.9154768, 7.77346563, 8.51372337, 7.24554348,
     7.20953274, 7.50647259, 7.76091146, 6.4849844, 6.22659349, 5.91468191,
     5.93531656, 6.96326637, 5.93945169, 5.93444681, 6.35006809, 6.3862195,
     6.35229731, 6.5966897, 7.81059933, 6.67235184, 6.88526058, 6.79734039,
     6.75841141, 7.09598207, 7.07125425, 7.3789506, 7.5661664, 8.21220493,
     8.38313675, 8.97794437, 9.45121193, 14.00763416, 15.21823788, 15.42790127,
     10.15401268, 10.23243141, 13.15625858, 13.79493141, 15.53408337, 10.75207901,
     10.92155552, 12.00337887, 12.60393333, 11.34342861, 11.44986439, 11.19862938,
     15.49152184, 11.92475033, 15.48239803, 12.48376846, 15.46685505, 14.98930645,
     14.54309082, 14.12913227, 13.00738716, 14.29349232, 13.72273445, 13.35398388]

z = [40.09610748, 40.2755394, 39.78066254, 40.46575165, 40.00103378, 41.22445679,
     40.07003403, 39.68055344, 39.6496048, 43.70822525, 43.36251831, 42.89729309,
     42.36333847, 44.2328949, 42.11548233, 41.65794373, 41.55335236, 41.28055573,
     40.88649368, 41.02244949, 44.80235672, 40.48281097, 40.22029877, 39.55779266,
     39.14909744, 38.68617249, 38.25093842, 37.63653183, 36.86626053, 36.41537857,
     35.65213776, 35.38962555, 34.78951263, 42.89670944, 40.04175186, 39.09609985,
     34.24311447, 45.34605408, 43.98243332, 43.55350113, 38.31879425, 33.72212601,
     45.37714386, 44.8492012, 44.44398117, 33.72883606, 45.25617218, 45.2734642,
     36.87064362, 33.80443573, 36.17105103, 33.72898483, 35.52944946, 35.43631363,
     35.28750229, 35.07746506, 33.58849716, 34.4047966, 33.53139496, 33.50752258]

Hi @numam,

Could you provide more information on what you’re attempting? It sounds like you want to take the concave hull of a 2D projection of your 3D mesh. Does that sound correct?

In general, a 3D mesh doesn’t have a single edge, at least not in the sense you seem to want here.

Thanks for your reply, @3d65!
I am not sure if this is the best approach to attain my general goal, but yes, what you described is correct.

I am doing this because I am creating ROIs from hundreds of MRIs based on the activation of certain neurons. I want to display only the contour of each ROI because I am planning on displaying all the ROIs on one brain surface. These ROIs will be located around the same brain area, so displaying them as rings with different colors will allow me to visually check that they are properly created. Does that make sense?

Here is a brain surface with one ROI (the white spot):

Here is just one of the ROIs:

For me, the ROI is a 2D object even though it sits in 3D space, as you can see in the following screenshot taken from the side of the ROI:

The final goal is to get something like this, where each ring represents one ROI:

Hi @numam,

I understand your goal now. Here’s an example of how you can get the contour of this ribbon-like object; you’ll need to pip install scikit-learn and alphashape. In the example below, just add your own X, Y, and Z data into the main call at the bottom of the script, and let me know what you see.

As a short summary, we take the data, transform it into 2D space via PCA, generate the concave/convex hull there, and train and use a simple neural net to transform the boundary back into 3D space accurately. Let me know if you have questions!

import plotly.graph_objs as go
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
import alphashape


def generate_contour_sample(x, y, z, max_iter=3000, alpha=0, n_components=2):
    X = np.array([x, y, z]).T

    # do PCA to make 3D data planar
    pca = PCA(n_components=n_components)
    pca.fit(X)
    X_transformed = pca.transform(X)

    # generate boundary points in PCA space
    alpha_shape = alphashape.alphashape(X_transformed, alpha)
    alpha_shape_points = alpha_shape.boundary.coords
    x_boundary = np.array(alpha_shape_points.xy[0])
    y_boundary = np.array(alpha_shape_points.xy[1])

    # create a regressor for more accurate reconstruction of boundary back into 3D space
    regr = MLPRegressor(random_state=1, max_iter=max_iter).fit(X_transformed, X)
    boundary_recon = regr.predict(np.array([x_boundary, y_boundary]).T)

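    # plot each stage: raw input, 2D PCA projection, boundary in PCA space, and the reconstructed 3D edge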
    fig = go.Figure(
        data=[
            go.Scatter3d(x=x, y=y, z=z, name='Input Data'),
            go.Scatter3d(x=X_transformed[:, 0], y=X_transformed[:, 1], z=50 + np.zeros(len(X_transformed[:, 0])),
                         name='Points in 2-component PCA'),
            go.Scatter3d(x=x_boundary, y=y_boundary, z=75 + np.zeros(len(x_boundary)), name='Edge in PCA space'),
            go.Scatter3d(x=boundary_recon[:, 0], y=boundary_recon[:, 1], z=boundary_recon[:, 2],
                         name='Reconstructed Edge')

        ],
        layout=go.Layout(scene=dict(
            aspectmode='data'
        ))
    )

    fig.show()

    return boundary_recon


if __name__ == '__main__':
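    # synthetic test data: a spiral "ribbon" in 3D, used to demo the contour extraction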
    t = np.linspace(1, 100, 1000)
    x = t * np.cos(t)
    y = t * np.sin(t)
    z = y * x / 250 + np.sin(5 * t) * np.sqrt(t) / 2
    contour_pts = generate_contour_sample(x, y, z)


Freak, Drew! That is awesome!!! Thank you so much for helping me out.

The contour is not perfect yet, but it is extremely close to what I am looking for. I think that with a few tweaks it will be perfect.

Here is a link to the scenes so you can check out the output:
https://byu.box.com/s/ig2mji879u74y71tfd5wfiazgb3ibiud

Here are some screenshots.


In this screenshot you can see that the contour is a bit off, but it’s pretty close to perfect:

When plotted on the inflated brain, we see it is looking good 🙂

I will try to digest what you did and tweak your code a bit to see if I can make it perfect. I uploaded the vertices file to the link posted above in case you feel like playing around with it more. Thank you so much! I owe you one!

Hey @numam,

Here’s an updated version that should be more consistent. Instead of transforming back into 3D space via a neural network, we can simply take the indices of the detected edge points and use those indices to obtain the corresponding points in 3D space.

Let me know how it goes for you! Here’s the code:

import plotly.graph_objs as go
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import alphashape


def generate_contour_sample(x, y, z, alpha=0.0, n_components=2, plot=True):
    # accept lists, numpy arrays, or pandas Series alike
    x, y, z = np.asarray(x), np.asarray(y), np.asarray(z)
    X = np.array([x, y, z]).T

    # do PCA to make 3D data planar
    pca = PCA(n_components=n_components)
    pca.fit(X)
    X_transformed = pca.transform(X)

    # generate boundary points in PCA space
    alpha_shape = alphashape.alphashape(X_transformed, alpha)
    alpha_shape_points = alpha_shape.boundary.coords
    x_boundary = np.array(alpha_shape_points.xy[0])
    y_boundary = np.array(alpha_shape_points.xy[1])
    bound_mat = np.array([x_boundary, y_boundary]).T

    # map each boundary point back to the index of its matching input point
    inds = []
    for coord in bound_mat:
        # indices where the PCA coordinates are (almost) equal to this boundary point
        ind_match = np.where(np.all(np.isclose(coord, X_transformed, atol=1e-3), axis=1))[0]
        if ind_match.size:  # np.where returns a tuple, so check the index array itself
            inds.append(ind_match[0])
    inds = np.array(inds)

    if plot:
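        # same stage-by-stage visualization as the first version, with the edge drawn from the matched 3D points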
        fig = go.Figure(
            data=[
                go.Scatter3d(x=x, y=y, z=z, name='Input Data', mode='markers'),
                go.Scatter3d(x=X_transformed[:, 0], y=X_transformed[:, 1], z=50 + np.zeros(len(X_transformed[:, 0])),
                             name='Points in 2-component PCA'),
                go.Scatter3d(x=x_boundary, y=y_boundary, z=75 + np.zeros(len(x_boundary)), name='Edge in PCA space'),
                go.Scatter3d(x=x[inds], y=y[inds], z=z[inds],
                             name='Reconstructed Edge')

            ],
            layout=go.Layout(
                scene=dict(
                    aspectmode='data',
                ),
                template='plotly_dark'
            )
        )

        fig.show()

    return np.array([x[inds], y[inds], z[inds]]).T


if __name__ == '__main__':
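    # ROIvertices.csv: the ROI edge-vertex coordinates (x, y, z columns) shared at the link above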
    data = pd.read_csv('ROIvertices.csv', header=None)

    contour_pts = generate_contour_sample(data.iloc[:, 0], data.iloc[:, 1], data.iloc[:, 2], alpha=0.7, plot=True)

Hi @3d65,

Your second solution works even better, great work!

A while ago I had to deal with a very similar problem. In my case, I had to find the contour for various point clouds, which differed quite a bit in the distribution of their x, y, z coordinates. In some cases the point cloud defined a flat surface, in other cases a surface with lots of curvature in space. On top of that, I had to find the concave hull, i.e. the parameter alpha != 0.

Do you have experience with a robust approach for concave hull curves, for example the yellow curve below? At some point (alpha=0.09) alphashape does not find a single edge but returns a multi-part geometry.

Sorry for hijacking this thread, but I had to play around with the alpha parameter of your function.

Hey @AIMPED,

This Stack Overflow answer seems like a good option. In summary, we can use the alphashape.optimizealpha() function to determine the alpha value, and multiply it by 0.95 for some padding.

alpha = 0.95 * alphashape.optimizealpha(points)
hull = alphashape.alphashape(points, alpha)
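
And if a given alpha still produces a multi-part geometry (as in your alpha=0.09 case), one way to handle it, sketched here under the assumption that alphashape returns shapely geometries for your 2D point set, is to keep only the largest component and take its exterior ring as the contour:

from shapely.geometry import MultiPolygon

# alphashape can return a MultiPolygon when alpha is too aggressive;
# keep only the largest piece and use its exterior ring as the contour
if isinstance(hull, MultiPolygon):
    hull = max(hull.geoms, key=lambda g: g.area)
x_boundary, y_boundary = hull.exterior.xy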

If you DM me a sample of your data I can try messing around with it!

Drew, this updated version works perfectly! You are the best. This version is also way faster than the previous one.
Here are screenshots of the outcome.



Thanks again!
