Breaking news
On our Dash Docs you’ll see a new button, Ask AI
– your personal chatbot for easier data app development.
Built on LLMs trained on over 2,000 pages of our documentation, this chatbot can answer your technical questions about Dash. Read this blog post to learn how it provides real-time code suggestions, error identification, up-to-date information, guided discovery, personalized solutions, and layout and design help.
AIMPED
October 31, 2023, 5:03pm
That’s a nice feature, I’ll test it in depth in the future for sure! I really like the links to the reference!
Question: How do I change the background grid of Scatter3d plots?
Answer:
To change the background grid of Scatter3d plots, you can use the update_layout() method of the Figure object and specify the scene parameter. Within the scene parameter, you can customize the xaxis, yaxis, and zaxis attributes to modify the background grid color.
Example:
import plotly.graph_objects as go
import numpy as np

N = 50

fig = go.Figure(data=[go.Scatter3d(
    x=30 * np.random.randn(N),
    y=25 * np.random.randn(N),
    z=30 * np.random.randn(N),
    mode='markers',
    marker=dict(
        size=4,
        color='blue',
        opacity=0.8
    )
)])

fig.update_layout(
    scene=dict(
        xaxis=dict(
            backgroundcolor="rgb(200, 200, 230)",
            gridcolor="white",
            showbackground=True,
            zerolinecolor="white"
        ),
        yaxis=dict(
            backgroundcolor="rgb(230, 200, 230)",
            gridcolor="white",
            showbackground=True,
            zerolinecolor="white"
        ),
        zaxis=dict(
            backgroundcolor="rgb(230, 230, 200)",
            gridcolor="white",
            showbackground=True,
            zerolinecolor="white"
        )
    )
)

fig.show()
This is great! What would also be great is an LLM Dash component that could answer end-user questions about the underlying plot data.
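Not a built-in feature, but a minimal sketch of the idea: serialize a figure’s underlying traces into a prompt that an LLM (e.g. via the OpenAI API) could answer questions against. The `build_data_prompt` helper and the trace structure are illustrative assumptions, not part of Dash or Plotly; the actual model call is omitted.

import json

def build_data_prompt(traces, question):
    """Serialize plot traces into a prompt an LLM could answer against.

    `traces` mimics the `data` attribute of a Plotly figure:
    a list of dicts with `name`, `x`, and `y` keys (a simplifying
    assumption -- real figures carry many more fields).
    """
    lines = ["You are answering questions about the data shown in a chart."]
    for trace in traces:
        pairs = list(zip(trace["x"], trace["y"]))
        lines.append(f"Trace {trace['name']!r}: {json.dumps(pairs)}")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

traces = [{"name": "sales", "x": [1, 2, 3], "y": [10, 30, 20]}]
prompt = build_data_prompt(traces, "Which x value has the highest y?")
print(prompt)

In a Dash app, this string would be sent to the model inside a callback (e.g. triggered by a `dcc.Input`), with the response rendered back into the layout.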
Hello @davzup89 ,
There was a competition for this; check out the AI competition.
Please note that using the OpenAI API is not free, and you’ll need to create your own API key, etc.
Botly needs some training on component properties; it was inventing some for AG Grid and seemingly also for DataTable.
Thanks for pointing the competition out, @jinnyzor
Here’s the link: Dash-ChatGPT App Challenge
Some of the submitted apps were also added to the Plotly Explore Apps page.
Wow, super cool! Does Plotly have any plans on integrating with an IDE (PyCharm)?
Am I the only one who thinks this isn’t a great idea? LLMs have many limitations, and while producing constrained structures like code is probably among their better use cases, it’s still essentially fancy autocomplete.
Mozilla had to walk back a similar decision:
opened 08:00PM - 30 Jun 23 UTC
### Summary
MDN's new "ai explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. this is a strange decision for a technical reference.
### URL
https://developer.mozilla.org/en-US/docs/Web/CSS/grid
### Reproduction steps
as soon as i heard about this, i visited the first MDN article in my address bar history (for [the grid property](https://developer.mozilla.org/en-US/docs/Web/CSS/grid)), hit "ai explain" on the first code block encountered (the syntax summary), and received the following information:
> `grid: "a" 100px "b" 1fr;`: This value sets the grid template to have two rows and two columns. The first row has a height of 100 pixels and the second row has a height of 1 fraction unit (1fr). The columns are named "a" and "b".
which is deeply but subtly incorrect — this creates only _one_ column (more would require a slash), and the quoted strings are names of areas, not columns. but it's believable, and it's interwoven with explanations of other property values that _are_ correct. this is especially bad since grid is a complex property with a complex shorthand syntax — exactly the sort of thing someone might want to hit an "explain" button on.
the generated text appears to be unreviewed, unreliable, unaccountable, and even unable to be corrected. at least if the text were baked into a repository, it could be subject to human oversight and pull requests, but as best i can tell it's just in a cache somewhere? it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.
and far from disclaiming that the responses might be confidently wrong, you have called it a ["_trusted_ companion"](https://developer.mozilla.org/en-US/blog/introducing-ai-help/). i don't understand this.
### Expected behavior
i would like MDN to contain correct information
### Actual behavior
MDN has generated a convincing-sounding lie and there is no apparent process for correcting it
### Device
Desktop
### Browser
Firefox
### Browser version
Stable
### Operating system
Linux