Vince,
My understanding from the Problem Statement in post 6 is that we need to read the contents of each index.txt file because its lines are paths pointing to the next index.txt file; that file, if not empty, in turn holds paths to the next index.txt, and so on. So we have to read each line of an index.txt file and use that path to find the next file.
Here is my proposed algorithm:
A. I need to learn how to read and save files:
1. Open the file c:\index.txt.
2. Read line 1 and save it to file 1.txt.
3. Repeat step 2 for each following line, saving to 2.txt, 3.txt, and so on, until EOF.
Suppose there are 10 lines in c:\index.txt; then after doing steps 1-3 there will be files 1.txt through 10.txt (a rough batch sketch of this step follows below).
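If it helps, here is a minimal batch sketch of section A. It assumes the master list really is c:\index.txt and that the numbered files go into whatever folder the script runs from (both taken from the description above):

@echo off
rem Rough sketch of section A: split c:\index.txt into 1.txt, 2.txt, ...
rem Assumes the master list is c:\index.txt (as in the post).
setlocal EnableDelayedExpansion
set /a n=0
rem for /f reads the file line by line (blank lines are skipped).
for /f "usebackq delims=" %%L in ("c:\index.txt") do (
    set /a n+=1
    rem Write the whole line into the next numbered file.
    >"!n!.txt" echo %%L
)
echo Wrote %n% files.

This is written for a .bat file, so the loop variable is %%L (at the prompt it would be %L). Paths containing special characters such as & would need extra quoting, but for ordinary paths this is enough.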
B. Next I need to learn how to read a line and use it to change the directory:
1. Open file 1.txt.
2. Read line 1 of 1.txt (each of these files will have only one line).
3. Change the directory using this path.
4. Execute steps 1-3 from section A on that directory's index.txt, storing each new file as 11.txt, 12.txt, etc.
C. The steps in sections A and B continue until every index.txt file found is EMPTY (a combined sketch of sections B and C follows below).
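Sections B and C together are just a loop over the numbered files, so here is one rough sketch of both. It assumes each numbered file holds a single folder path and that each such folder may contain its own index.txt; if the stored paths actually point at the index.txt files themselves, the "\index.txt" part would be dropped. Instead of changing the directory, it simply appends \index.txt to the stored path, which has the same effect:

@echo off
rem Rough sketch of sections B and C: follow the stored paths until no
rem non-empty index.txt remains. Assumes the layout described above.
setlocal EnableDelayedExpansion
set /a total=0

rem Seed the numbered files from the top-level c:\index.txt (section A).
for /f "usebackq delims=" %%L in ("c:\index.txt") do (
    set /a total+=1
    >"!total!.txt" echo %%L
)

rem Work through the numbered files one by one.
set /a i=0
:next
set /a i+=1
if %i% gtr %total% goto done
set "dir="
set /p dir=<"%i%.txt"
if defined dir if exist "%dir%\index.txt" (
    rem Split that folder's index.txt into further numbered files.
    for /f "usebackq delims=" %%L in ("%dir%\index.txt") do (
        set /a total+=1
        >"!total!.txt" echo %%L
    )
)
goto next
:done
echo Collected %total% files.

The loop stops by itself once there is nothing left to follow, and the final value of total is also the count you need for the merge in section D.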
D. After completing sections A-C there may be any number of these numbered i.txt files from the above process, all located in the same folder.
Suppose the process yields 100 files, 1.txt through 100.txt, coming from many different directories.
Now we can merge all 100 files into a file named ListFile.txt, which will contain 100 lines in REVERSE order,
as required by the Problem Statement in post 6 (a counting-down merge is sketched below).
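Merging in reverse order is then a single counting-down for /L loop. This sketch assumes 100 files named 1.txt through 100.txt in the current folder, as in the example above; change N to whatever count the walk actually produced:

@echo off
rem Rough sketch of section D: write the numbered files into ListFile.txt
rem in reverse order (last file first). N is assumed to be 100 here.
set /a N=100
>ListFile.txt (
    for /l %%i in (%N%,-1,1) do type "%%i.txt"
)
echo ListFile.txt now contains the lines in reverse order.

As long as each numbered file ends with a newline (echo adds one when the files are created), type concatenates them cleanly, one line per file.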
Note: there should not be a problem with memory overload or memory space, because the .txt files are very small.
Even if there are 1000 directories involved, there should not be a memory issue.
Is all of the above possible using DOS scripting?