Quick draft, untested: use list.files() (aka dir()) to dynamically generate your list of files. This returns a vector; just run along the vector in a for loop. Read the i-th file, then use assign() to place the content into a new variable file_i. That should do the trick for you.
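A minimal sketch of that recipe; the temporary directory, the `ex*.csv` file names, and their contents are invented here just so the example runs end to end:

```r
## Sketch of the answer above: discover files with list.files(), loop over
## them, and assign() each data frame to a name like file_1, file_2, ...
## The toy csv files below exist only for illustration.
dir <- tempdir()
for (i in 1:3) {
  write.csv(data.frame(x = i), file.path(dir, paste0("ex", i, ".csv")),
            row.names = FALSE)
}

filenames <- list.files(dir, pattern = "^ex[0-9]+\\.csv$", full.names = TRUE)
for (i in seq_along(filenames)) {
  assign(paste0("file_", i), read.csv(filenames[i]))
}
```

Each `file_i` is then an ordinary data frame in the calling environment.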
@Dirk Eddelbuettel Thanks, that works. Indeed that is what I tried to do originally, but using paste(...) = read.csv(...). Why doesn't the former work?
– Fred Mar 16 at 1:19
Local scope versus global environment. You could try <<-. – Dirk Eddelbuettel Mar 16 at 1:21
@Dirk Eddelbuettel Many thanks, final question: had I used lapply and dumped everything inside a list, how would I "unpack" it? I ask because lapply is much faster and I dislike loops.
– Fred Mar 16 at 1:25
Prove that lapply is faster at reading N files. Moreover, if you dislike loops, the burden is on you to read up on the *apply family. And again, these days they are not generally faster.
– Dirk Eddelbuettel Mar 16 at 1:31
Yowser, assign and <<-? Has someone hijacked Dirk's account? – mdsumner Mar 16 at 3:09
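On the "unpack" question raised above: one base-R way to spill a named list produced by lapply into the global environment is list2env(). A hedged sketch; the `unp*.csv` files are created here only so the example runs:

```r
## "Unpacking" an lapply result: read the files into one list, name the
## elements after the files, then copy them into the global environment
## with list2env(). The toy files below are invented for illustration.
dir <- tempdir()
for (i in 1:2) {
  write.csv(data.frame(x = i), file.path(dir, paste0("unp", i, ".csv")),
            row.names = FALSE)
}
files <- list.files(dir, pattern = "^unp[0-9]+\\.csv$", full.names = TRUE)

dfs <- lapply(files, read.csv)
names(dfs) <- sub("\\.csv$", "", basename(files))
list2env(dfs, envir = globalenv())   # creates variables unp1 and unp2
```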
Don't. Keep them as a list. It's the way to go.
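A sketch of the list-based workflow this answer recommends (file names and contents invented): with everything in one named list, whole-collection operations are a single lapply/sapply call rather than 100 repetitions.

```r
## Keeping all the data frames in one named list instead of N globals.
## The toy files are created just so the sketch runs end to end.
dir <- tempdir()
for (i in 1:3) {
  write.csv(data.frame(x = seq_len(i)),
            file.path(dir, paste0("lst", i, ".csv")), row.names = FALSE)
}
files <- list.files(dir, pattern = "^lst[0-9]+\\.csv$", full.names = TRUE)

dat <- lapply(files, read.csv)   # one list holding every data frame
names(dat) <- basename(files)
rows <- sapply(dat, nrow)        # e.g. summarise all of them at once
```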
Because anything you are likely to want to do with 100 data frames will be easier to do if they are in a list than if they are 100 data frames named file1 to file100. – Spacedman Mar 16 at 8:16
@hadley @Spacedman I am actually not reading 100 files but 8. And although they have similar names, they are very different in structure, so ill-suited for working with the *apply family of functions.
There is a reason I asked the question I asked. – Fred Mar 16 at 13:13
We can only answer the question that you posed! If your Stack Overflow question isn't the same as your real question, you can't expect to get the best answers.
– hadley Mar 16 at 13:39
@hadley If you want to make a point, the way I would have done it is (1) answer the question like Dirk did ("this is how you do x") and (2) mention that it may not be a good idea to do so. Instead some people just impose the party line, unaware that the reason some of us come to this forum is precisely to ask the not-so-obvious. – Fred Mar 16 at 14:29
@Fred - if you asked me how to commit suicide, I would walk you over to the counselling center and make sure you got help. It's unethical to do anything else.
I will continue to give answers that I think people need, not the answers people want. If you don't like it, downvote me and move on with your life. – hadley Mar 16 at 16:43
Use assign with a character variable containing the desired name of your data frame:

    for (i in 1:100) {
      oname <- paste("file", i, sep = "")
      assign(oname, read.csv(paste(oname, ".txt", sep = "")))
    }
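A companion note: once assign() has created variables whose names are only known as strings, get() and mget() retrieve them the same way. A small sketch with invented names:

```r
## Retrieving assign()-created variables by character name:
## get() returns one object, mget() a named list of several.
assign("file1", data.frame(x = 1))
assign("file2", data.frame(x = 2))

one  <- get("file1")                 # the single data frame back
both <- mget(paste0("file", 1:2))    # a named list of both at once
```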
A simple way to access the elements of a list from the global environment is to attach the list. Note that this actually creates a new environment on the search path and copies the elements of your list into it, so you may want to remove the original list after attaching to prevent having two potentially different copies floating around.
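A short sketch of that attach-a-list approach; the list name and contents are invented for illustration:

```r
## attach() puts a new environment on the search path and copies the
## list's elements into it, so they become visible by bare name.
mylist <- list(a = data.frame(x = 1:3), b = data.frame(y = 4:6))
attach(mylist)    # elements 'a' and 'b' are now on the search path
n <- nrow(a)      # use 'a' directly
detach(mylist)    # drop the attached copy when finished
rm(mylist)        # and/or remove the original, avoiding two diverging copies
```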
Thank you all for replying. For completeness, here is my final answer for loading any number of (tab-)delimited files, in this case with 6 columns of data each, where column 1 is character, column 2 is factor, and the remainder numeric:

    ## Read files named xyz1111.csv, xyz2222.csv, etc.
    filenames <- list.files(pattern = "*csv")
    ## Create list of data frame names without the ".csv" part
    names <- substr(filenames, 1, 7)
    ## Load all files
    for (i in names) {
      filepath <- file.path(paste(i, ".csv", sep = ""))
      assign(i, read.delim(filepath,
                           colClasses = c("character", "factor", rep("numeric", 4)),
                           sep = "\t"))
    }
A couple of things: (1) you don't need to use lapply to generate the data frame names, because substr is already vectorised; just use substr(filenames, 1, 7). And (2) if your data is not actually comma-delimited, you shouldn't use read.csv. The point of that function is to read csv files, not general delimited data.
If your data is tab-delimited, consider read.delim (and you don't need the header=T part either). – Hong Ooi Mar 16 at 2:01
@Hong Ooi Many thanks! Corrected.
The original files are tab-delimited .txt with some weird encoding. If I import those, I get garbage columns named X at the end of the data frame. So I opened the .txt in OpenOffice Calc, saved as .csv, and now they import fine. Somehow Calc did not replace the tab separation when saving as a csv file.
– Fred Mar 16 at 2:17
A couple of very minor points: (1) using single forward slashes as path separators is platform-independent (and seems neater to me, but that's a matter of taste); (2) file.path() could be substituted for your outer paste(); again not a big deal, but slightly more semantic. – Ben Bolker Mar 16 at 2:22
@Ben Bolker Thanks! Corrected. I am new to Stack Overflow. Learning a lot!
– Fred Mar 16 at 2:40.
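To illustrate Ben Bolker's file.path() point (the directory and file name here are invented):

```r
## file.path() joins path components with the path separator, replacing
## the outer paste() while leaving paste() to build the file name itself.
p <- file.path("data", paste("xyz", 1111, ".csv", sep = ""))
# p is "data/xyz1111.csv"
```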