Data upload and callback

Hello, I am using Dash for the first time.

The goal is for the user to upload files; the program must then process those files with my functions (coclustering) and display the results and plots.

I don’t know how to process the files that the user uploads (I already have the function that does this).

I am trying to create an interface for coclustering: I already have the coclustering algorithm, but I don’t know how to combine it with Dash.

Any help is appreciated!

Hello,
for the document upload, check dcc.Upload. Basically, you add another function that processes the file after it has been uploaded; this function is called from the callback, and the result of your processing should be the data that gets plotted and displayed for the user.

app.layout = html.Div([
    dcc.Graph(id='MyGraph', animate=True),
    # Upload component
    dcc.Upload(
        id='upload-data',
        children=html.Div([
            'Drag and Drop or ',
            html.A('Select Files')
        ])
    )
])
# function to process your uploaded data
def parse_contents(contents, filename, date):
    # processing
    ...


@app.callback(Output('MyGraph', 'figure'),
              [Input('upload-data', 'contents')],
              [State('upload-data', 'filename'),
               State('upload-data', 'last_modified')])
def update_output(list_of_contents, list_of_names, list_of_dates):
    if list_of_contents is not None:
        # Do the processing using the parse_contents() function
        figure = ...  # build the figure from the processed data
        return figure  # return graph content to be shown

Check here to get an idea of how to link the callback function’s return value with your graph.
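One detail worth knowing when you fill in parse_contents: dcc.Upload delivers `contents` as a base64-encoded data-URI string, so the first step is usually to split off the prefix and decode. A minimal sketch (the file name and sample content are just illustrations):

```python
import base64


def parse_contents(contents, filename):
    """Decode the data-URI string that dcc.Upload puts in `contents`."""
    # `contents` looks like "data:text/plain;base64,MSAyIDM="
    content_type, content_string = contents.split(',', 1)
    decoded = base64.b64decode(content_string)
    # For a plain-text file, turn the bytes back into a string
    return decoded.decode('utf-8')


# Simulate what Dash would send for a small text file:
fake = "data:text/plain;base64," + base64.b64encode(b"1 2 3").decode()
print(parse_contents(fake, "docbyterm.txt"))  # -> 1 2 3
```

From there you can hand the decoded text to whatever parsing your own functions expect.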

Actually, the user uploads 3 files (I have already done this),

then we must use this function to process the uploaded files (I’m stuck here):

import numpy as np
import pandas as pd

def load_data_labels(data_name, verbose=False):

    if data_name == "CLASSIC4":
        # Set data file path
        filename = "C:/Users/Amira/Desktop/CLASSIC4/docbyterm.txt"
        # Open the file and read content
        myfile = open(filename, "rb")
        content = myfile.read().decode()
        myfile.close()
        # Split content to build a matrix
        content = content.split("\n")
        meta = content[0].split(" ")
        doc_term_counts = np.zeros((int(meta[0])-1, int(meta[1])))

        for i in range(1, len(content)-1):
            meta = content[i].split(" ")
            if len(meta) == 3:
                row = int(meta[0])
                if row >= 1553:
                    row -= 1
                doc_term_counts[row-1, int(meta[1])-1] = int(meta[2])

        # Load the true labels
        filename = "C:/Users/Amira/Desktop/CLASSIC4/documents.txt"
        labels_df = pd.read_csv(filename, usecols=[1], delim_whitespace=True, header=None)
        # there is a header
        labels = labels_df.values.flatten()
        labels = labels[:len(labels)-1]
        ## Permutation
        tmp_perm = np.random.RandomState(seed=42).permutation(doc_term_counts.shape[0])
        np.take(doc_term_counts, tmp_perm, axis=0, out=doc_term_counts)
        labels = [labels[i] for i in tmp_perm.tolist()]

        # Load terms
        filename = "C:/Users/Amira/Desktop/CLASSIC4/terms.txt"
        # dicbyterm2
        myfile = open(filename, "rb")
        content = myfile.read().decode()
        myfile.close()
        terms = content.split("\n")

    return doc_term_counts, labels, terms

data, labels, terms = load_data_labels(data_name)
n_clusters = len(np.unique(labels))
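To drive this from Dash instead of hard-coded paths, the file reads inside load_data_labels can be replaced by the decoded upload contents. A sketch of a variant that takes the three decoded strings directly (the function name is hypothetical, and the CLASSIC4-specific row shift is dropped for clarity):

```python
import numpy as np


def load_data_labels_from_strings(docbyterm_txt, documents_txt, terms_txt):
    """Same parsing as load_data_labels, but on in-memory strings
    coming from the three decoded dcc.Upload contents."""
    # Header line gives the matrix dimensions
    lines = docbyterm_txt.split("\n")
    n_docs, n_terms = (int(x) for x in lines[0].split(" ")[:2])
    counts = np.zeros((n_docs, n_terms))
    # Each data line is "row col value" with 1-based indices
    for line in lines[1:]:
        parts = line.split(" ")
        if len(parts) == 3:
            row, col, val = (int(p) for p in parts)
            counts[row - 1, col - 1] = val
    # One label per non-empty line, taken as the last whitespace field
    labels = [l.split()[-1] for l in documents_txt.split("\n") if l.strip()]
    terms = terms_txt.split("\n")
    return counts, labels, terms
```

In the callback, you would call this with the three strings returned by your base64-decoding step, one per uploaded file.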

then we have to run the coclustering and display the graphs and indices.
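Once your coclustering algorithm has assigned a cluster to each document, a common way to display the result is to reorder the rows of the doc-term matrix by cluster so the block structure becomes visible in a heatmap. A numpy-only sketch of the reordering step (the toy matrix and labels below are made up; the real labels come from your algorithm):

```python
import numpy as np


def reorder_by_clusters(matrix, row_labels):
    """Sort rows so documents in the same cluster sit together,
    which makes the co-cluster blocks visible in a heatmap."""
    order = np.argsort(row_labels, kind="stable")
    return matrix[order], order


# Toy 4x3 doc-term matrix with cluster labels [1, 0, 1, 0]
doc_term = np.arange(12).reshape(4, 3)
labels = np.array([1, 0, 1, 0])
reordered, order = reorder_by_clusters(doc_term, labels)
print(order)  # rows 1 and 3 (cluster 0) come first: [1 3 0 2]
```

The reordered matrix can then be returned from the callback as a figure, e.g. `go.Figure(go.Heatmap(z=reordered))` with plotly.graph_objects.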