Del /W999 2gbFile.ext / latest build / Win7 x64 / MSE / Everything

When I run the above command, it keeps reducing my free space - I'd guess by about 2 GB for each wipe pass.

Not reproducible here.

A couple of things to note:

1) Anything more than a /W3 is a waste of time; higher values don't do anything to wipe the file "better". (They're only there because of a US Government requirement.)

2) A /W999 on a 2 GB file is going to take a LONG time - probably at least an hour or two, depending on your drive type. How long did you wait?
Thank you Rex.

1) I had read the CHM help and saw that the /W number could be 1 to 999, so I wanted to try the maximum.

2) Several hours. When I finally pressed Ctrl-C, the file had grown tremendously. I would have thought wiping the file would just rewrite it in place, not cause it to grow. And it is an NTFS drive (checked via My Computer, right-click on the drive, select Properties); also, @drivetype[c:] == 3.
Yes, I tried it and had to force the window closed after a few minutes, and I also noticed the file had grown on each cycle. Not something I would have thought of trying on my own, though.
I noticed too that with a 1 GB test file I created (using "fsutil file createnew hugefile.dat 1000000000"), "del /w999 hugefile.dat" seems to write all 0x00s to the first GB of the file, then all 0xFFs to the next GB, and so on. The 0x00s should be written over the file's original contents, then the 0xFFs over the 0x00s just written, etc., but instead each additional pass grows the file in 1 GB increments. The result is that the file is overwritten only once, and every additional "overwrite" is really an append. I can confirm this both by looking at the file with the VIEW command (in hex mode) and by using a disk editing tool.
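The difference between a wipe pass that rewinds and one that appends can be sketched in portable Python (an illustration of the behavior described above, not TCC's actual implementation; file and function names are made up for the example):

```python
import os

# Pass patterns: zeros, ones, zeros - a stand-in for a wipe sequence.
PASSES = (b"\x00", b"\xFF", b"\x00")

def wipe_in_place(path, passes=PASSES):
    """Correct wipe: seek back to offset 0 before each pass, so every
    pass overwrites the same bytes and the file size never changes."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for fill in passes:
            f.seek(0)          # rewind before each pass
            f.write(fill * size)
            f.flush()

def wipe_by_append(path, passes=PASSES):
    """Buggy variant matching the reported symptom: each pass is
    appended instead of rewinding, so the file grows by one
    pass-worth of data every cycle (and eats free space)."""
    size = os.path.getsize(path)
    with open(path, "ab") as f:
        for fill in passes:
            f.write(fill * size)
```

With three passes over a 2 GB file, the append variant would grow the file to 8 GB, which matches the "about 2 GB per wipe number" free-space loss reported at the top of the thread.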

Interesting to note as well: the fsutil command that creates an empty file doesn't overwrite the unused sectors the file is now going to occupy. They still contain their old contents, and information in the MFT indicates that the "initialized size" is zero, even though the allocated size shows the whole file has space allocated on disk (1000000000 bytes rounded up to whole clusters). This is according to my disk editor utility; viewing the file with VIEW in hex mode shows all 0x00s, however. I tried using @fileopen, @filewriteb, and @fileclose to write 0x01 0x02 and 0x03 to the first 3 bytes of the file. VIEW then showed what you'd expect, while the disk editor showed only the first 512 bytes containing 0x01 0x02 and 0x03 followed by 0x00s; the rest of the file still showed the sectors' old contents, and the MFT now showed the "initialized size" as 512 bytes. Using "del /w" makes the "initialized size" equal the allocated size, and all of the old contents are then overwritten with 0x00s, but each additional "overwrite" has been appended to the file.
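The allocated-but-uninitialized state described above has a rough portable analogue (a sketch only: `truncate()` on POSIX creates a sparse file rather than allocating clusters the way "fsutil file createnew" does on NTFS, but the read-back behavior matches what VIEW shows - unwritten bytes read as zeros even though the underlying sectors may still hold old data):

```python
import os

def create_allocated(path, size):
    """Rough analogue of "fsutil file createnew": the file reports the
    requested size immediately, but nothing has been written to it.
    On NTFS, reads beyond the "initialized size" (valid data length)
    return zeros; that's why VIEW shows all 0x00s while a disk editor
    still sees the sectors' old contents."""
    with open(path, "wb") as f:
        f.truncate(size)

def write_first_bytes(path, data):
    """Analogue of the @fileopen/@filewriteb/@fileclose experiment:
    writing the first few bytes initializes only the leading region."""
    with open(path, "r+b") as f:
        f.write(data)
```

Reading the file back after `create_allocated` yields zeros everywhere; after `write_first_bytes(path, b"\x01\x02\x03")` the first three bytes change and the rest still reads as 0x00, mirroring what VIEW showed.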

I think that last paragraph describes NTFS behavior rather than a fault of TCC (just something interesting to note). But as to the first paragraph: only the first "overwrite" actually overwrites the file; each additional "overwrite" appends to it instead.
Anyone try this with the latest build?

TCC 21.00.34 x64