
Problem with FTP copies

Take Command v25.00.20 x64

I have a file that's ~123MB on an FTP server.

If I try to copy it to my local PC using TCC, the transfer stops just short of the end.

[screenshot: TCC COPY output showing the transfer stopping near the end]


The transferred size is 0x740E000, which looks like a suspiciously round hex number.
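For what it's worth, 0x740E000 works out to 121,692,160 bytes, which is exactly 29,710 × 4,096, i.e. a whole number of 4 KiB blocks, so it looks like the transfer is being cut off on a buffer boundary rather than mid-block.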
 
BTW, if I use wget to copy the file, it gets the full file, so it's not an FTP server problem.

[screenshot: wget downloading the complete file]
 
Does the copy work if you use IFTP (perhaps with the /P1 option)?
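(For reference, that suggestion amounts to something like the session below. Credentials, path and filename are placeholders, and the /P1 option is written exactly as suggested above:)

Code:
v:\> iftp /P1 "ftp://user:password@testserver"
v:\> copy ftp:bigfile.dat c:\downloads\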

Nope. Still doesn't work.

[screenshot: the IFTP copy is still truncated]


That does seem to stop at a random point. I tried it again, and the downloaded file was 121,397,248 bytes; again, and it was 121,896,960 bytes. (wget and other FTP client software consistently download everything.)
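(Both of those sizes are also exact multiples of 4,096 bytes: 29,638 × 4,096 and 29,760 × 4,096 respectively, so each failed run still ends on a 4 KiB boundary.)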

Note that 'testserver' is on my local LAN with no firewall between it and my PC (I'm using it for unit testing our own software), so /P1 shouldn't be necessary.
 

Without access to your FTP server there's not much I can do to debug this. Try IFTP /V and see if that shows anything useful. And maybe /R if it's timing out.

It's just a standard vsftpd FTP server running on Ubuntu.

Using IFTP /V shows it seeing the correct size of the file and no errors, but the resulting file is still too small.

See screenshot below.

[screenshot: IFTP /V log showing the correct file size and no errors]


Oddly, I've just set up a test FTP server at Digital Ocean using an *identical* setup to our internal test server (same Linux version, same vsftpd version, same config, etc). That one seems to work fine. The only significant difference I could think of was that our internal test server is on our internal gigabit network, whereas the connection to Digital Ocean is obviously slower.

So, I tried setting bandwidth throttling to 5 Mbps on the VM where the local test server is running, and hey presto, it worked fine (though obviously a lot slower): the whole file downloaded, and the SHA1 checksum matches, so the download is correct.

So, it looks like it may be a timing issue somewhere.
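If anyone wants to repeat the checksum comparison, TCC can compute the hash directly; a minimal sketch, assuming TCC's @SHA1 function and a placeholder path (the server-side reference hash would come from sha1sum on the Ubuntu box):

Code:
v:\> echo %@sha1[c:\downloads\bigfile.dat]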
 
It's OK here on a gigabit LAN. You can try downloading this file if it will help any.

Code:
v:\> timer copy ftp://vefatica.net/120million
Timer 1 on: 12:22:21
ftp://vefatica.net/120million => V:\120million
     1 file copied
Timer 1 off: 12:22:23  Elapsed: 0:00:01.054

v:\> d 12*
2019-10-09  08:19     120,000,000  120million
 

Odd. Here, I can download at 1 Gbps using wget, FTP, FileZilla etc., but if I use TCC, it truncates the file at a random point unless I apply severe bandwidth throttling, in which case it works fine.

(FWIW, I've tried changing our unit tests to download the large file using IPWorks' ftp::download(), and that works fine for us at 1 Gbps, so if TCC is using IPWorks for that as well, something odd is going on...)
 
I can reproduce the issue.

Code:
v:\> option //passiveftp=no

v:\> copy ftp://vefatica.net/120million
ftp://vefatica.net/120million => V:\120million
     1 file copied

v:\> d 12*
2019-10-09  08:19     119,987,983  120million
 
PassiveFtp=No wasn't necessary. It happens sometimes anyway.
Code:
v:\> option PassiveFtp
PassiveFtp=Yes

v:\> do i=1 to 10 ( copy /q ftp://vefatica.net/120million & d 12* )
2019-10-09  08:19     120,000,000  120million
2019-10-09  08:19     120,000,000  120million
2019-10-09  08:19     120,000,000  120million
2019-10-09  08:19     119,994,707  120million
2019-10-09  08:19     119,999,769  120million
2019-10-09  08:19     120,000,000  120million
2019-10-09  08:19     120,000,000  120million
2019-10-09  08:19     120,000,000  120million
2019-10-09  08:19     119,991,310  120million
2019-10-09  08:19     120,000,000  120million
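A variation that flags short downloads automatically instead of eyeballing the directory listing; a sketch, assuming TCC's @FILESIZE function and the known 120,000,000-byte size:

Code:
v:\> do i=1 to 10 ( copy /q ftp://vefatica.net/120million & if %@filesize[120million] NE 120000000 echo pass %i truncated at %@filesize[120million] bytes )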
 
