I have been using the CDD command since the 4DOS days. The implementation is very simple: a flat file listing all the fully qualified directory names. A simple scan of that list in memory could rapidly find the candidates.
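For concreteness, the legacy scheme amounts to something like this sketch (in Java, since the links below point at Java concepts; the file name and the substring-match rule are my own illustrative assumptions, not the actual CDD code):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class FlatFileScan
{
    // hypothetical name for the flat file of fully qualified directory names
    private static final String INDEX_FILE = "dirindex.txt";

    /** Linear scan: every directory whose full name contains the pattern. */
    static List<String> candidates( String pattern ) throws IOException
    {
        List<String> hits = new ArrayList<>();
        String needle = pattern.toLowerCase();
        for ( String dir : Files.readAllLines( Paths.get( INDEX_FILE ) ) )
        {
            // touches every entry on every cdd, which is what starts to hurt
            // once the list grows to hundreds of thousands of lines
            if ( dir.toLowerCase().contains( needle ) )
            {
                hits.add( dir );
            }
        }
        return hits;
    }
}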
However, on my current disk, which is an order of magnitude faster but holds many orders of magnitude more files, CDD is now slow to the point of being annoying.
CDD needs an overhaul. Here is what I suggest.
1. Keep the flat file around for legacy compatibility. It is nice to have such a list kept up to date for manual scanning.
2. Build a TreeMap (see http://mindprod.com/jgloss/treemap.html) keyed by directory segment name (a name used anywhere as a leg of a fully qualified directory name), where each segment maps to the fully qualified directory entries that use it. Use that index to rapidly find all the candidate directories that might match a given wildcard; see the sketch after this list. So, for example, you could do
cdd *\image\*
and you would treat cdd image as if it were cdd *\image*.
3. Just think what sorts of auxiliary data structures might help you do the various searches you do now.
4. To conserve RAM, intern (see http://mindprod.com/jgloss/interned.html) each of the legs so there is only one copy of each segment string in RAM.
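Here is a minimal sketch of what I mean in points 2 and 4, again in Java; the class name, the prefix-match rule, and the choice to index every leg are illustrative assumptions rather than a finished design:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class SegmentIndex
{
    // segment (leg) name -> fully qualified directory names that contain it
    private final TreeMap<String, List<String>> bySegment =
            new TreeMap<>( String.CASE_INSENSITIVE_ORDER );

    /** Index one fully qualified directory name, e.g. C:\photos\image\raw */
    void add( String fullyQualified )
    {
        for ( String leg : fullyQualified.split( "\\\\" ) )
        {
            if ( leg.isEmpty() ) continue;
            // point 4: intern each leg so duplicate names share one String
            leg = leg.intern();
            bySegment.computeIfAbsent( leg, k -> new ArrayList<>() )
                     .add( fullyQualified );
        }
    }

    /** Point 2: treat cdd image like cdd *\image* -- collect every directory
        with a leg that starts with the given text, without rescanning the
        whole flat file. The TreeMap keeps legs sorted, so all matches are
        contiguous starting at tailMap( prefix ). */
    List<String> candidates( String prefix )
    {
        List<String> hits = new ArrayList<>();
        for ( Map.Entry<String, List<String>> e : bySegment.tailMap( prefix ).entrySet() )
        {
            if ( !e.getKey().regionMatches( true, 0, prefix, 0, prefix.length() ) )
            {
                break;
            }
            hits.addAll( e.getValue() );
        }
        return hits;
    }
}

Building the map once whenever the flat file changes, then answering each cdd from the map, would replace the per-command scan of every fully qualified name with a lookup on a single leg.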
I would gladly give up some of the flexibility of the current search scheme for extra speed. You might leave the code for the current scheme intact, for people who think otherwise and would like to configure CDD to work as it does now.