Network share slowness vs CMD?

TCC 18.00.32 x64 on Windows 7 Build 7601 SP1.

I have a network share that has a lot of files in it (42K+).
Using CMD and doing a "dir <path>" starts showing files in five seconds.
Using TCC and doing a "*dir <path>", I killed it after 90 seconds and it still had not started displaying files. IOW, visually nothing had happened.

Is there something extra TCC is doing that would cause it to be that much slower?
 
The answer might be "sorting files"; TCC must read in the entire directory before it sorts entries. (I don't have access to the code, but I suspect TCC slurps the entire directory even if you specify /OU.)
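One way to test the sorting hypothesis (a rough sketch; <path> stands for the share path as above, and it assumes your TCC build accepts u as an "unsorted" sort order for DIR): bracket an unsorted and a sorted listing with TCC's TIMER command, sending the output to NUL so display speed doesn't skew the comparison.
Code:
timer & dir /o:u <path> > nul & timer
timer & dir /o:n <path> > nul & timer

If the unsorted pass is dramatically faster, sorting is the culprit; if both stall equally, the delay is in enumeration rather than sorting.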
 
That's unfortunate, if true.
First, if we specify unsorted (isn't that the default?), it shouldn't need to "slurp" anything.
Second, even if I sort in CMD, i.e. change the CMD command to "dir /od <path>", it starts in 30 seconds.
I decided to let TCC run, so I started an explicit "*dir /ou <path>" and let it run for 5 minutes this time, still no display.
So, at a bare minimum, TCC is an order of magnitude slower at displaying unsorted files than CMD is at displaying sorted ones.
That's not "slurping," that's ingesting one byte at a time.
 
Maybe TCC reads some extra metadata for each file? Checks descriptions or such?

I've also noticed that the Filenames (via F7) dialog is very slow on network shares, even with fewer than a hundred files, say.
 
In all of my tests, TCC is slightly faster than CMD from entering the DIR command to the finish, whether local or network.

However, in the case of a network path the performance is going to be tied to the performance of your network redirector. TCC uses different Windows APIs to query files than CMD, so if your redirector does not implement those APIs (or implements them badly), that will affect TCC and not CMD.

If you select a subset of files (i.e., something like "dir A*.exe") does TCC work as expected? I.e., does it display anything at all for your network path?
 
Yes. And I can do DIRs of other subdirectories on the path at will, it's just this particular subdirectory with a huge volume of files that's the problem.
I did a DIR of a subset that returned 2800 of the files, and it started displaying in 5 seconds or so.
I did one of a nonexistent file string (your A*.exe), and it returned instantly.
I then did one of another subset that returns most of the files (33K of the 42K), and killed it after 4 minutes with no display activity. So it's just returning (or reading?) a large # of files that appears to be the problem.

Whatever TCC is doing appeared to impact CMD, though — during the 4 minute wait above, I tabbed over to CMD and did a plain DIR, and while it started in a few seconds, it only displayed in fits and starts, i.e. maybe a screenful or so at a time, then waited a half-second, then another screenful, then another half-second. When I Ctrl-C'd the TCC command, the CMD then proceeded to blitz through displaying the files as per normal. This wasn't a CPU issue — I had hardly any CPU usage on the machine at all during any of this.

How does one know what redirector one is using, and whether it supports a particular set of APIs?
 
It's unlikely that it's the size of the directory (I can do a DIR on a local directory with > 60K files and it starts displaying in < 2 seconds). More likely there's one or more files that are causing the problem. You can narrow that down with something like "dir [a-m]*", "dir [n-z]*", etc.
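A minimal sketch of that bisection, assuming the problem directory is the current directory; wrapping each half in TCC's TIMER shows which range carries the delay.
Code:
timer & dir [a-m]* > nul & timer
timer & dir [n-z]* > nul & timer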

The redirector in use depends on the network; I have no way of determining that. Is this a (Windows-based) LAN or an exotic network?
 
When the mapped drive is on a VPN, elsewhere on the internet, I see results like vr8ce's. The directory has 20,000 0-byte files with names like these.
Code:
2016-11-09  00:41               0  4294214595.txt
2016-11-09  00:41               0  4294369638.txt
2016-11-09  00:41               0  4294764182.txt
2016-11-09  00:41               0  4294840120.txt

With that directory as the current directory and a simple "DIR", TCC starts outputting after 30 seconds and finishes 18 seconds later. CMD starts outputting immediately and (also) finishes 18 seconds later.

When the directory (as a mapped drive or not) is on the localhost, both start outputting immediately; TCC finishes in 18 seconds, CMD in 9 seconds.

Those seem somewhat contradictory. I can't explain it.

Is there an option that will cause DIR not to sort at all?
 
None of the following suggestions is a solution, but they might help narrow down the problem area:

- What happens if you use ATTRIB? I find ATTRIB to be faster than DIR; I use it to find files quickly, like: attrib /s MyFile.txt (/E for TCC).
- What happens if you write the results to a file instead of the screen? (DIR ... > c:\result.txt)
- Do you have Everything running? (I can imagine it will try to index all those files, maybe even using its own "driver".)
- Do you access the files through a drive mapping, or are you accessing a UNC path?
- Your network providers can be found in the following way (see the sketch after this list):
start ncpa.cpl
Menu > Advanced > Advanced Settings > [Tab] Provider Order
(or in the registry: HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider)
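As a quick non-GUI check (a sketch; ProviderOrder is the usual value name under that key on Windows 7, but verify on your system), you can query the provider order directly:
Code:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order" /v ProviderOrder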
 
I'm at work, so the roles of the computers have been switched. In another test, with a mapped (internet/Windows VPN) drive, and 20,000 files with names like
Code:
2016-11-09  11:31               0  15.4287980113
(name random in 0-99, extension random in 0-DWORD_MAX). I asked both TCC and CMD to do "DIR /O:E", figuring both would have to get all the data and do the same amount of sorting.

TCC starts outputting at 30 sec and finishes at 42 sec.
CMD starts outputting at 8 sec and finishes at 15 sec.

With a network capture of incoming data, I see that TCC gets ~12MB while CMD gets ~5MB. The capture also shows that TCC uses 29 of the 30 seconds mentioned above getting the data and that CMD uses 7 of the 8 seconds mentioned above. The different amounts of data do not account for the different times to get the data. It appears that sorting is not an issue. If I turn off ANSI, outputting is not an issue either.
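For anyone wanting to reproduce that comparison from a single TCC session (a sketch; Z:\bigdir is a hypothetical stand-in for the mapped VPN drive), the CMD side can be run via cmd /c so both listings go through the same console and redirection:
Code:
timer & dir /o:e Z:\bigdir > nul & timer
timer & cmd /c dir /o:e Z:\bigdir > nul & timer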
 
Not sure what your point is here -- do you want me to switch to the same APIs that CMD is using for DIR, and remove all of the additional features in TCC's DIR?

Or do you want to do an "alias dir=*cmd /c dir"?

Nope. It's the old curiosity thing. I just want to understand what's going on.
 
Yes. And I can do DIRs of other subdirectories on the path at will, it's just this particular subdirectory with a huge volume of files that's the problem.
I did a DIR of a subset that returned 2800 of the files, and it started displaying in 5 seconds or so.
I did one of a nonexistent file string (your A*.exe), and it returned instantly.
I then did one of another subset that returns most of the files (33K of the 42K), and killed it after 4 minutes with no display activity. So it's just returning (or reading?) a large # of files that appears to be the problem.

I doubt it's the number of files; I have created directories with > 500K files and haven't had any difficulty with DIR.

Try doing a "for %a in (*) do echo %a". That will display each file as it's retrieved (without any sorting or processing any of the numerous other things that DIR does). See if it makes it all the way through the directory or if it hangs on a particular filename.
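A slight variation on that test (a sketch, assuming your TCC version exposes the _time internal variable): prefix each name with the current time so any per-file or per-batch stalls show up directly in the output.
Code:
for %a in (*) do echo %_time %a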
 
Sorry it took so long to get back, real life intervened.

To answer several of the questions posed:
This is a plain old ordinary Windows 2003 file share.
I am physically on the network (no VPN), and the network itself is a high performance one.
I'm not using a mapped drive, I'm using UNC, e.g. \\servername\dirname.
I do not have Everything running.
It wasn't asked, but there are no description files involved.
The directory I'm querying is not my CWD, so the dir looks like "*dir \\servername\dirname1\dirname2".

For the sake of this discussion, let's say that the files in this directory all start with 302*.
If I do a "*dir 3023*.txt", it takes 30-60 seconds to start, and returns around 3K files.
Once it's finished, though, if I do the same thing again, it starts immediately.
Doing any subset of the original request, e.g. "*dir 30235*.txt", also starts immediately. (Might return 200-300 files.)
Moving to a different set, though, e.g. "dir 3024*.txt" again takes 30-60 seconds to start. (Again around 2-4K files.)
Any subset of either of those two, e.g. 30248* or 30231*, starts immediately.
Moving to a different set (3021*) takes 30-60 seconds to start.
And so on.
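To put numbers on that cold-versus-warm behavior (a sketch reusing the placeholder names above), run the same DIR twice in a row under TIMER; the first pass should show the 30-60 second startup, and the repeat should be near-instant if something is caching the results.
Code:
timer & dir \\servername\dirname1\dirname2\3023*.txt > nul & timer
timer & dir \\servername\dirname1\dirname2\3023*.txt > nul & timer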

Rex, I just saw your "for" request. I'll do it, but based on the above, I don't think it has anything to do with a particular file.
 
The "for" took about 45 seconds to start displaying anything, just like the "dir"s.
It displayed the first several thousand in just a few seconds; they corresponded to the ones I'd been doing a directory of above.
Once it got past any files I'd already done a "dir" of, it goes in spurts of about 475 files; it displays the 475 almost immediately (so fast I had to look at the file names to see how many were being displayed), then pauses for just under two seconds, then displays another 450-500, then pauses for just under two seconds, and so forth.
 
It's still going, and the number of files displayed in each spurt has dropped to under 200. The pauses are still the same, i.e. 1.5-2.0 seconds.

<minute or so passes>
I killed it, then re-started it. It got all the way to where it left off, i.e. probably 20K+ files, almost instantly (maybe 5-10 seconds total). It then resumed its spurts.

It's definitely acting like it's reading/caching something.

<more time passes>
I killed the "for" again and re-started it, this time redirecting it to an output file. It didn't feel any faster (apart from the files it had already processed, which went by quickly), and it took around 4 minutes to finish all 46,790 files.

But once it finished, I then did a "dir" of the entire directory, e.g. "*dir \\servername\dirname1\dirname2", and it started in 15 seconds and finished in another 15-20 seconds.

So it's definitely acting like it's reading/caching something.
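For reference, that sequence can be reproduced roughly like this (a sketch; c:\filelist.txt is just an illustrative output path): walk the directory once with FOR redirected to a file, then time the full DIR against the now-warm cache.
Code:
timer & (for %a in (\\servername\dirname1\dirname2\*) do echo %a) > c:\filelist.txt & timer
timer & dir \\servername\dirname1\dirname2 > nul & timer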
 
The "for" took about 45 seconds to start displaying anything, just like the "dir"s.
It displayed the first several thousand in just a few seconds; they corresponded to the ones I'd been doing a directory of above.
Once it got past any files I'd already done a "dir" of, it goes in spurts of about 475 files; it displays the 475 almost immediately (so fast I had to look at the file names to see how many were being displayed), then pauses for just under two seconds, then displays another 450-500, then pauses for just under two seconds, and so forth.

The FOR command is requesting one file at a time, so the fact that your network takes 45 seconds to display the first file proves that it's not a matter of TCC retrieving and sorting the entire directory. IMO it indicates that either your redirector is seriously wacky or there's something badly misconfigured in your network.

I agree that *something* is reading/caching the directory, but it's definitely not TCC.

Is the target machine running Windows + NTFS?
 
Are you running antivirus software on either your workstation or the remote server? Try disabling both and see what happens.
 
