
WAD TC 15.0.1.58 x64 crashes with a simple dir command

Discussion in 'Support' started by PBoehm, Feb 26, 2014.

  1. PBoehm

    Joined:
    Oct 14, 2008
    Messages:
    6
    Likes Received:
    0
    I have a (big) file server (about 2 TB of data) for which I need a list of offline files using ...

    dir \\fileservername\D$\*.* /s /ao /b >> Offline-Files.txt

    ... but TC disappears without any error message after a certain number of files have been listed in Offline-Files.txt (this run: 60528).

    It looks like a bug (buffer issue??).
    Any idea how to get this fixed?
     
  2. Steve Fabian

    Joined:
    May 20, 2008
    Messages:
    3,520
    Likes Received:
    4
    I would first try the command with the proper syntax:

    dir /s /ao /b \\fileservername\D$\*.* >> Offline-Files.txt

    While most TCC commands accept options in undocumented locations, it is not dependable.
     
  3. PBoehm

    Joined:
    Oct 14, 2008
    Messages:
    6
    Likes Received:
    0
    Unfortunately this did not help.
    Again 60528 files were identified and then TC disappeared
    (it is not even to be found in the task list anymore).
    I am afraid we have a bug here.
    I'll try the same procedure on another file server and will post the result soon.
     
  4. PBoehm

    Joined:
    Oct 14, 2008
    Messages:
    6
    Likes Received:
    0
    On another file server it happened even faster: TC disappeared after 1319 found files.
    I am still afraid we have a bug here.
     
  5. cgunhouse

    Joined:
    Dec 2, 2008
    Messages:
    209
    Likes Received:
    2
    I tried the following:

    md \dfasdfasdfasdfasdf
    md \dfasdfasdfasdfasdf\fasdfasdfasdf
    :
    md \dfasdfasdfasdfasdf\fasdfasdfasdf\...\fasdfasdfas

    where I kept adding more and more directories until TCMD 16.00.40 crashed. I also tried to delete this directory using:

    rmdir /s \dfasdfasdfasdfasdf

    It also crashed.

    I think somewhere on your file server there is a directory path that is too long for TCC to handle, and the "dir" command just stops.

    This was on an XP SP3 Pro workstation.
     
    #5 cgunhouse, Feb 26, 2014
    Last edited: Feb 26, 2014
  6. PBoehm

    Joined:
    Oct 14, 2008
    Messages:
    6
    Likes Received:
    0
    Is there a limitation of the directory depth?
    FYI:
    I used to use Ghisler's Total Commander find functionality (also with the offline attribute),
    and Total Commander came back without issues (1,670,072 files found).
    The only issue with Total Commander is to produce a proper list of the files found.
    Still looking for a simple solution.
     
  7. rconn

    rconn Administrator
    Staff Member

    Joined:
    May 14, 2008
    Messages:
    9,860
    Likes Received:
    83
    This is WAD, not a bug. (And it's been this way for many years.)

    DIR loads all of the files in any given directory into memory before displaying the directory and then processing the subdirectories. Directories with a very large number of files (and with a limited amount of memory for TCC) will exhaust the system heap and/or stack space.

    For something like this, you'd be a lot better off using a DO loop (which will be slower but only process one file at a time).
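
    A minimal sketch of that DO approach (assuming DO accepts the same /S and /A:O attribute selection that DIR does; the server name is the one from the original post):

    do f in /s /a:o \\fileservername\D$\* (echo %f >> Offline-Files.txt)

    Because DO walks the tree one entry at a time instead of buffering each whole directory first, memory use should stay flat, at the cost of per-file overhead.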
     
  8. rconn

    rconn Administrator
    Staff Member

    Joined:
    May 14, 2008
    Messages:
    9,860
    Likes Received:
    83
    This is a known Windows API bug.
     
  9. Rod Savard

    Joined:
    May 26, 2008
    Messages:
    481
    Likes Received:
    3
    I really hope TCC doesn't crash because of this. CMD handles it without crashing and just outputs a message like "Directory too long".
     
  10. PBoehm

    Joined:
    Oct 14, 2008
    Messages:
    6
    Likes Received:
    0
    Sorry to say, but I thought TC was even better than CMD.
    That is why I would re-label WAD as NGD (no good design).
    I would expect TC to at least report an error instead of disappearing entirely.
    Is there a chance to get something else within TC?
    I will try the loop you proposed above.
    Do you know another command-line-based tool that could provide me with the list?
    If you like, in a private email.
    Thanks
     
  11. Steve Fabian

    Joined:
    May 20, 2008
    Messages:
    3,520
    Likes Received:
    4
    For any specific disk volume the limit is MAX_PATH (260 characters) for the full path name, including the terminating NUL character. However, this can be circumvented by mapping a deep subdirectory to a new drive letter, e.g., with the SUBST or NET USE commands, which shortens every path beneath it.

    MSDN articles also mention that paths of up to 32,767 Unicode characters are possible under special circumstances (the \\?\ extended-length path prefix).
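
    As an illustration of the SUBST workaround (the directory names here are placeholders, not paths from this thread):

    subst X: "D:\some\very\deep\parent\directory"
    dir /s /a:o /b X:\ >> Offline-Files.txt
    subst X: /d

    Everything below the mapped directory is then addressed relative to X:\, so the remaining path gets an almost fresh 260-character budget.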
     
  12. Steve Fabian

    Joined:
    May 20, 2008
    Messages:
    3,520
    Likes Received:
    4
    Could this be documented in Limitations?
    Could the impending heap / stack overflow be detected before TCC crashes?
     
  13. djspits

    Joined:
    Apr 13, 2010
    Messages:
    189
    Likes Received:
    2
    No exception, out-of-heap, stack overflow, crash or sudden death could ever be WAD.
    A decent, intelligible error message is the minimal response for any condition to be considered "handled" as designed.
    There is no point calling something a design if the design says "do nothing".
     
  14. rconn

    rconn Administrator
    Staff Member

    Joined:
    May 14, 2008
    Messages:
    9,860
    Likes Received:
    83
    This issue comes up only once every two or three years. I could trap it in TCC, but (1) it would slow down DIR processing significantly (adversely affecting the 99.999999999999999% of DIRs that don't have 60K+ files in a directory); and (2) it wouldn't do anything useful anyway, other than saying "out of memory".
     
  15. rconn

    rconn Administrator
    Staff Member

    Joined:
    May 14, 2008
    Messages:
    9,860
    Likes Received:
    83
    Sure. Are you willing to accept a significant slowdown in all of your directory accesses (i.e., nearly everything you do in TCC) so TCC can intercept something that you've never encountered, and are extremely unlikely to ever encounter?

    If so, add it to the feedback forum. If enough people say they want TCC to be slower, I will implement it.
     
  16. Steve Fabian

    Joined:
    May 20, 2008
    Messages:
    3,520
    Likes Received:
    4
    The occasion I encountered the issue is typically with using a single PDIR command to accumulate a catalog of a whole primary drive, including CRC, inode and link count information. Certainly a very, very infrequent operation, not worth slowing down TCC. (I presume PDIR is sharing relevant code with DIR). OTOH documenting that it is conceivable that DIR runs out of resources when more than some number of files are in a single directory, or whatever the restriction is, both in the Limitations as well as the DIR topic (and PDIR if applicable) would be a cheap warning.
     
  17. Speedie6

    Joined:
    Jun 28, 2008
    Messages:
    41
    Likes Received:
    0
    I doubt I will ever run into this, so honestly I don't have a strong opinion on it. Yet if CMD is capable of detecting the problem, and I've never found CMD's dir to be intolerably slow, I would tend to think that TCC could detect it too.

    I'm a programmer myself, though obviously you know your product and your code better than I do. It just seems to me that when you call malloc() (or whatever memory allocation call you use), a simple test of whether the returned pointer is valid shouldn't add significant overhead, and is generally considered good programming practice. But again, from my own experience, it doesn't seem like it should be a significant cost.

    But, again, since I doubt I will run into it, I will let the people who have stronger opinions discuss it in more detail.

    The only reason I even opened this thread was that I found the "WAD" tag on a thread entitled "TCC crashes on dir" to be a bizarre WAD.
     
  18. rconn

    rconn Administrator
    Staff Member

    Joined:
    May 14, 2008
    Messages:
    9,860
    Likes Received:
    83
    TCC is running out of stack space, not heap space.
     
  19. Patulus

    Joined:
    Feb 1, 2010
    Messages:
    25
    Likes Received:
    0
    Is __try/__except really that much overhead? I would imagine that entering a __try block takes much less time than the kernel calls needed to actually read a directory's contents.
     
