Problem listing repository files using DIR http(s)://...

Nov 9, 2016
#1
Hi all.

I'm writing BTMs to download new versions of specific software automatically: they read the repository contents directly, then feed the latest release to WGET.
If the target is a real FTP back-end (an ftp: or ftps: URI), there's no problem. I can do the following and easily parse the result:
DIR /B ftp://ftp.symantec.com/public/english_us_canada/antivirus_definitions/symantec_antivirus_corp/*core3sdsv5i64.exe

20171129-002-core3sdsv5i64.exe
20171129-016-core3sdsv5i64.exe
...
20171221-022-core3sdsv5i64.exe

20171222-003-core3sdsv5i64.exe
Works a treat.

But when the URI is HTTP or HTTPS, I get nothing, regardless of whether the URL contains only dirs or dirs plus files:
DIR https://ftp.mozilla.org/pub/firefox/releases/*

Directory of https://ftp.mozilla.org/pub/firefox/releases/*

1 01 1601 0.00 0
0 bytes in 1 file and 0 dirs

Total for: https://ftp.mozilla.org/pub/firefox/releases/*
0 bytes in 1 file and 0 dirs
The same URL entered in my browser gives me a list of almost 1000 lines.

So for now I do TYPE <url>, which downloads the page's HTML source, which I then have to comb through for the latest release.
Pain in the ass!
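Until DIR learns HTTP, that comb-through can at least be automated. Outside TCC, the same idea looks like this in Python (a sketch only; the `latest_release` helper, the href-based parsing, and the sample listing fragment are assumptions for illustration, not the actual markup served by Mozilla or Symantec):

```python
import re

def latest_release(html, pattern):
    """Pull href targets out of an HTML directory listing and return the
    lexically greatest one that matches pattern. Lexical max works when
    names sort chronologically, as the YYYYMMDD-prefixed names above do."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    matches = [h for h in hrefs if re.search(pattern, h)]
    return max(matches) if matches else None

# Hypothetical listing fragment, shaped like the FTP output shown earlier:
sample = '''
<a href="20171221-022-core3sdsv5i64.exe">20171221-022-core3sdsv5i64.exe</a>
<a href="20171222-003-core3sdsv5i64.exe">20171222-003-core3sdsv5i64.exe</a>
'''
print(latest_release(sample, r'core3sdsv5i64\.exe$'))
# -> 20171222-003-core3sdsv5i64.exe
```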

I've tried exotic variations such as "ftp://...:80" and "https://...:21", but found no valid workaround.

Isn't there a way to get the same result with DIR /B http(s)://... as with DIR /B ftp(s)://... ?
I hope so.

Later.
Mark/x13
 
#4
@Joe :

Of course I know I can achieve my goal in PowerShell or some other scripting environment, but I like TCC because it's no-nonsense and versatile.
Besides, this is a JPsoft forum. It seems misplaced to suggest using other software as an answer to my question.

Mark/x13
 
#5
Whatever http(s) shows you, it's definitely not a directory. It's just text displayed however the page's author wanted it displayed.

That said, TCC can help you comb through the page. For example,
Code:
COPY http://site/webpage
TPIPE /input=webpage /simple=16
That will remove the HTML tags. TPIPE has many more features that will allow you to extract and manipulate data from a file.
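For readers outside TCC, the tag-removal step can be approximated in a few lines of Python (a rough sketch of the same idea, not TPIPE's actual algorithm; a regex strip like this is crude and will mangle pathological HTML):

```python
import re
from html import unescape

def strip_tags(html):
    """Crude equivalent of TPIPE /simple=16: drop <...> tags,
    then decode HTML entities such as &amp;."""
    text = re.sub(r'<[^>]*>', '', html)
    return unescape(text)

print(strip_tags('<a href="x.exe">x.exe</a> &amp; friends'))
# -> x.exe & friends
```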
 
Aug 23, 2010
110
2
#9
Keep in mind that "wget -O filename -- URL" is literally equivalent to "wget -O - -- URL > filename", which in turn defeats the most useful feature of wget: server timestamping.
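The timestamping wget does with -N boils down to honoring the server's Last-Modified header. If you ever have to reproduce that by hand, the header-parsing half is straightforward in Python (a sketch; the `server_mtime` helper is hypothetical, and the resulting value would still need to be applied to the file with os.utime()):

```python
import email.utils as eut

def server_mtime(last_modified):
    """Convert an HTTP Last-Modified header value to a POSIX timestamp,
    suitable for os.utime((atime, mtime)) on the downloaded file."""
    return eut.parsedate_to_datetime(last_modified).timestamp()

print(server_mtime("Fri, 29 Dec 2017 10:00:00 GMT"))
```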