
I have a load of csv files. I want to create a loop that allows me to do this:

    df_20180731 = pd.read_csv('path/cust_20180731.csv')

for each of about 36 files.

My files are df_20160131, df_20160229, ... df_20181231, etc. Basically the dates are end-of-month dates.

Thanks

os.listdir(path) to make the list, then iterate over it for a list of DataFrames. Commented Dec 4, 2018 at 15:05

3 Answers

import pandas as pd

# include here all the date ids
files = ['20160131', '20160229']

# create a module-level variable df_<id> for each id
_g = globals()

for f in files:
    _g['df_{}'.format(f)] = pd.read_csv('path/cust_{}.csv'.format(f))


print(df_20160131)
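
If you don't want to type all ~36 ids by hand, here is a minimal sketch that generates them, assuming (as the question suggests) there is one file per month-end from 2016-01-31 through 2018-12-31:

import pandas as pd

# Assumption: one file per calendar month-end from Jan 2016 through Dec 2018 (36 ids).
# freq='M' gives month-end dates.
files = pd.date_range('2016-01-31', '2018-12-31', freq='M').strftime('%Y%m%d').tolist()
# ['20160131', '20160229', '20160331', ..., '20181231']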



Alternatively, you could do something like this with glob:

import glob
import pandas as pd

datasets = {}
# match the monthly csv files described in the question
for file in glob.glob('path/cust_*.csv'):
    datasets[file] = pd.read_csv(file)
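
To pull a single month back out of the dictionary, index it by the path that glob returned, e.g. using the example path from the question:

datasets['path/cust_20180731.csv']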


Another option is to list the directory with os and build a dictionary of DataFrames:

import os
import pandas as pd

# get a list of all the files in the directory
path = <path of the directory containing all the files>
files = os.listdir(path)

# iterate over the files and store each DataFrame in a dictionary keyed by file name
dataframe = {file: pd.read_csv(os.path.join(path, file)) for file in files}

# if the directory also contains other files, filter the file names with whatever
# logic you need (extension check etc.), for example:

def logic(fname):
    return fname.endswith('.csv')

dataframe = {file: pd.read_csv(os.path.join(path, file)) for file in files if logic(file)}
# this creates a dictionary of file name : DataFrame
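
To work through all of the files afterwards, you can loop over the dictionary built above, for example:

for fname, df in dataframe.items():
    print(fname, df.shape)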

I hope it helps.

