Sure. You COULD write a bunch of loops and process files one at a time. But sometimes it is more efficient to create one file, especially if you have a bunch of downstream processes that also operate on that data.
In that scenario, it is better to write one loop that combines all the CSVs into one big file. That way, no further loops need to be written downstream.
import glob
import os
import shutil

root_directory = '...'      # populated from configuration
app_directory = '...'       # populated from configuration
source_directory = '...'    # as specified
target_directory = '...'    # as specified
output_file_name = 'YourFileName.csv'

source_path = os.path.join(root_directory, app_directory, source_directory)
target_path = os.path.join(root_directory, app_directory, target_directory, output_file_name)

# Change the file extension if necessary
source_files = glob.glob(os.path.join(source_path, '*.csv'))

with open(target_path, 'wb') as outfile:
    for i, fname in enumerate(source_files):
        with open(fname, 'rb') as infile:
            if i != 0:
                infile.readline()  # throw away the header on all but the first file
            shutil.copyfileobj(infile, outfile)  # block-copy the rest of the file without parsing

# Clean the input directory
for fname in source_files:
    os.remove(fname)
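If you want to sanity-check the header handling, here is a minimal, self-contained sketch of the same pattern. Everything in it (demo_dir, the sample file names a.csv and b.csv, the data) is made up for illustration, not part of your setup:

import glob
import os
import shutil
import tempfile

# Hypothetical demo: build two tiny CSVs, merge them with the same
# skip-the-header pattern as above, and inspect the result.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, 'a.csv'), 'w') as f:
    f.write('id,name\n1,alpha\n')
with open(os.path.join(demo_dir, 'b.csv'), 'w') as f:
    f.write('id,name\n2,beta\n')

# Glob before creating the output file so merged.csv is not picked up.
source_files = sorted(glob.glob(os.path.join(demo_dir, '*.csv')))
merged_path = os.path.join(demo_dir, 'merged.csv')

with open(merged_path, 'wb') as outfile:
    for i, fname in enumerate(source_files):
        with open(fname, 'rb') as infile:
            if i != 0:
                infile.readline()  # skip the repeated header line
            shutil.copyfileobj(infile, outfile)

print(open(merged_path).read())
# id,name
# 1,alpha
# 2,beta

Because copyfileobj copies in chunks without parsing the CSV, this scales well to large files. One caveat: if an input file can end without a trailing newline, the last row of one file will run into the header line of the next, so you would need to handle that case separately.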