WAD TC 15.0.1.58 x64 crashes with a simple dir command

I have a (big) file server (about 2 TB of data) for which I need a list of offline files, using ...

dir \\fileservername\D$\*.* /s /ao /b >> Offline-Files.txt

... but TC disappears without any error message after a certain number of files have been listed in Offline-Files.txt (60528 in this run).

It looks like a bug (a buffer issue?).
Any idea how to get this fixed?
 
Unfortunately, that did not help.
Again 60528 files were identified and then TC disappeared
(it is not even to be found in the task list anymore).
I am afraid we have a bug here.
I'll try the same procedure on another file server and will post the result soon.
 
On another file server it happened even faster: TC disappeared after 1319 files were found.
I am still afraid we have a bug here.
 
I tried the following:

md \dfasdfasdfasdfasdf
md \dfasdfasdfasdfasdf\fasdfasdfasdf
:
md \dfasdfasdfasdfasdf\fasdfasdfasdf\...\fasdfasdfas

where I kept adding more and more subdirectories until TCMD 16.00.40 crashed. I also tried to delete this directory using:

rmdir /s \dfasdfasdfasdfasdf

It also crashed.

I think somewhere on your file server there is a directory path that is too long for TCC to handle, and the "dir" command just stops.

This was on an XP SP3 Pro workstation.
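
If you want to check for that, here is a minimal sketch that lists the directories whose full path exceeds the classic MAX_PATH limit (the 259-character threshold, the C:\ starting point, and the assumption that DO accepts the /S and /A:D switches the same way DIR does are all mine; adjust to your server):

do d in /s /a:d C:\* ( if %@len[%d] GT 259 echo %d )

Any path it prints would be a candidate for the too-long directory.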
 
Is there a limit on directory depth?
FYI:
I have used Ghisler's Total Commander find function (also with the offline attribute),
and Total Commander came back without issues (1,670,072 files found).
The only problem with Total Commander is producing a proper list of the files found.
Still looking for a simple solution.
 
This is WAD, not a bug. (And it's been this way for many years.)

DIR loads all of the files in any given directory into memory before displaying the directory and then processing the subdirectories. Directories with a very large number of files (and with a limited amount of memory for TCC) will exhaust the system heap and/or stack space.

For something like this, you'd be a lot better off using a DO loop (which will be slower, but only processes one file at a time).
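
For example, a minimal sketch of that approach (assuming your TCC version lets DO take the /S and /A:O switches the same way DIR does; substitute your real server name for fileservername):

do f in /s /a:o \\fileservername\D$\* ( echo %f >> Offline-Files.txt )

Appending inside the loop reopens Offline-Files.txt for every match, so it is slower than a single DIR, but only one file name is held in memory at a time.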
 
I tried the following:

md \dfasdfasdfasdfasdf
md \dfasdfasdfasdfasdf\fasdfasdfasdf
:
md \dfasdfasdfasdfasdf\fasdfasdfasdf\...\fasdfasdfas

where I kept adding more and more subdirectories until TCMD 16.00.40 crashed. I also tried to delete this directory using:

rmdir /s \dfasdfasdfasdfasdf

It also crashed.

This is a known Windows API bug.
 
I think somewhere on your file server there is a directory path that is too long for TCC to handle, and the "dir" command just stops.
I really hope TCC doesn't crash because of this. CMD handles it without crashing and just outputs the message "Directory too long" or something like that.
 
Sorry to say, but I thought TC was even better than CMD.
That is why I would label this WAD as NGD (no good design).
I would at least expect TC to report an error instead of disappearing entirely.
Is there a chance to get something else within TC?
I will try the loop you proposed before.
Do you know another command-line based tool that could produce the list for me?
If you like, in a private email.
Thanks
 
Is there a limit on directory depth?
For any specific disk volume, the limit is MAX_PATH (260 characters) for the full path name, including the terminating NUL character. However, this can be circumvented by mapping a new drive letter to a subdirectory of the volume, e.g., with the SUBST or NET USE commands.

MSDN articles also mention that up to 32,767 Unicode characters are possible when the \\?\ prefix and the Unicode (wide) file APIs are used.
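
As a quick sketch of the mapping trick, run on the server itself (the deep path reuses the throw-away directory names from earlier in the thread, and X: is just any free drive letter):

subst X: D:\dfasdfasdfasdfasdf\fasdfasdfasdf
dir X:\ /s /b
subst X: /d

Because everything below the mapped directory is then addressed as X:\..., the prefix that counts toward MAX_PATH is much shorter, so deeper files become reachable.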
 
This is WAD, not a bug. (And it's been this way for many years.)

DIR loads all of the files in any given directory into memory before displaying the directory and then processing the subdirectories. Directories with a very large number of files (and with a limited amount of memory for TCC) will exhaust the system heap and/or stack space.

For something like this, you'd be a lot better off using a DO loop (which will be slower, but only processes one file at a time).
Could this be documented in Limitations?
Could the impending heap / stack overflow be detected before TCC crashes?
 
No exception, out-of-heap condition, stack overflow, crash or sudden death could ever be WAD.
A decent, intelligible error message is the minimal response for any condition to be considered "handled" as designed.
There is no point in calling something a design if the design says "do nothing".
 
Sorry to say, but I thought TC was even better than CMD.
That is why I would label this WAD as NGD (no good design).
I would at least expect TC to report an error instead of disappearing entirely.
Is there a chance to get something else within TC?
I will try the loop you proposed before.

This issue comes up only once every two or three years. I could trap it in TCC, but (1) it would slow down DIR processing significantly (adversely affecting the 99.999999999999999% of DIRs that don't have 60K+ files in a directory); and (2) it wouldn't do anything useful anyway, other than saying "out of memory".
 
Could this be documented in Limitations?
Could the impending heap / stack overflow be detected before TCC crashes?

Sure. Are you willing to accept a significant slowdown in all of your directory accesses (i.e., nearly everything you do in TCC) so TCC can intercept something that you've never encountered, and are extremely unlikely to ever encounter?

If so, add it to the feedback forum. If enough people say they want TCC to be slower, I will implement it.
 
The occasion on which I encountered the issue was typically when using a single PDIR command to accumulate a catalog of a whole primary drive, including CRC, inode and link-count information. Certainly a very, very infrequent operation, not worth slowing down TCC. (I presume PDIR shares the relevant code with DIR.) OTOH, documenting that DIR can conceivably run out of resources when a single directory contains more than some number of files, or whatever the restriction is, both in Limitations and in the DIR topic (and PDIR if applicable), would be a cheap warning.
 
I doubt I will ever run into this, so, honestly, I don't have a strong opinion on it. Yet, if CMD is capable of detecting the problem, and I've never found CMD's dir to be intolerably slow, I would tend to think that TCC could too.

I'm a programmer myself, and obviously you know your product and your code better than I do. But it seems to me that testing whether the pointer returned by malloc() (or whatever memory-allocation call you use) is valid shouldn't add significant overhead, and is generally considered good programming practice. From my own experience, at least, it doesn't seem like it should be a significant cost.

But, again, since I doubt I will run into it, I will let the people who have stronger opinions discuss it in more detail.

The only reason I even opened this thread was that I found the "WAD" tag on a thread entitled "TCC crashes on dir" to be a bizarre WAD.
 
I'm a programmer myself, and obviously you know your product and your code better than I do. But it seems to me that testing whether the pointer returned by malloc() (or whatever memory-allocation call you use) is valid shouldn't add significant overhead, and is generally considered good programming practice. From my own experience, at least, it doesn't seem like it should be a significant cost.

TCC is running out of stack space, not heap space.
 
TCC is running out of stack space, not heap space.
Is __try/__except really that much overhead? I would imagine that entering a __try block takes much less time than the kernel calls needed to actually read a directory's contents.
 
