Declined: Loop on environment variable names?

Is there a way to loop (DO/FOR) on environment variable names? ... say "DO varname in @ENV". The only mechanisms I can think of are cumbersome. I suppose I could write a plugin to provide a word list of variable names, but it would be better built into DO.
 
How about:

for var in ( path appdata path ) echo %%var = %var

or the DO equivalent....
 
---Quote---
How about:

for var in ( path appdata path ) echo %%var = %var

or the DO equivalent....
---End Quote---
 
I meant loop on **all** variable names. Like this (with the help of an experimental plugin _VARNAMES):

Code:
v:\> do var in /L %_varnames (echo %var)
ALLUSERSPROFILE
APPDATA
CommonProgramFiles
COMPUTERNAME
ComSpec
DebugVariableExclude
**SNIP**


 
>I meant loop on **all** variable names. Like this (with the help of
>an experimental plugin _VARNAMES):

Wouldn't this do what you want:

:: LOOP.BTM
@echo off
set > %temp\SET.DAT
do l in @%temp\SET.DAT
   set var=%@word["=",0,%l]
   set val=%@word["=",1,%l]
   echo %var = %val
enddo
del /q %temp\SET.DAT
quit

Best Regards,

* Klaus Meinhard *
<www.4dos.info>
 
On Sat, 21 Aug 2010 04:26:48 -0400, K_Meinhard
<> wrote:

|---Quote---
|>I meant loop on **all** variable names. Like this (with the help of
|>an experimental plugin _VARNAMES):
|---End Quote---
|Wouldn't this do what you want:

Yes. But it is quite cumbersome.

The idea originated when I wanted to process a list of variables like
this one (of denied SMTP connections at my mail server since
2010-06-01).

Code:
IN=135
RU=56
US=50
UA=45
CN=38
BR=35
TW=29
KR=29
VN=23
GB=20
ES=20
CA=19
RO=18
DE=17
NL=15
CO=14
GR=13
KZ=12
EG=11
CL=10
TH=9
PL=9
ID=9
MA=8
SNIP - complete list available

I'm still toying with plugin solutions. At the moment I have a
"VARNAMES regex" command.

Code:
v:\> do var in /P VARNAMES "^^[A-Z]{2}$" (set /a total+=%[%var])

v:\> echo %total
768

And now I'm imagining an enhancement to DO like this, where /E
indicates looping on environment variable names matching the regex.

Code:
DO var in /E "^^[A-Z]{2}$" (set /a total+=%[%var])
 
Have you tried using the /P switch?

DO var in /P SET (echo %@WORD["=",0,%var])
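A variant of the same idea (an untested sketch; it borrows the %[name] indirection used elsewhere in this thread) would also print each value:

DO var in /P SET (echo %@WORD["=",0,%var] = %[%@WORD["=",0,%var]])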

-Scott

vefatica <> wrote on 08/20/2010 10:43:27 PM:


> Is there a way to loop (DO/FOR) on environment variable names? ...
> say "DO varname in @ENV". The only mechanisms I can think of are
> cumbersome. I suppose I could write a plugin to provide a word list
> of variable names, but it would be better built into DO.
 
On Mon, 23 Aug 2010 19:12:50 -0400, K_Meinhard
<> wrote:

|---Quote---
|>I meant loop on **all** variable names. Like this (with the help of
|>an experimental plugin _VARNAMES):
|---End Quote---
|Wouldn't this do what you want:
|
|:: LOOP.BTM [snip]

Yes it does. But it's rather cumbersome.
 
On Mon, Aug 23, 2010 at 7:51 PM, vefatica <> wrote:

> On Mon, 23 Aug 2010 19:12:50 -0400, K_Meinhard
> <> wrote:
>
> |---Quote---
> |>I meant loop on **all** variable names. Like this (with the help of
> |>an experimental plugin _VARNAMES):
> |---End Quote---
> |Wouldn't this do what you want:
> |
> |:: LOOP.BTM [snip]
>
> Yes it does. But it's rather cumbersome.

Out of curiosity, how often do you need to do it?

I recall seeing requests back in the old 4DOS days to build in
functionality that could either be provided by third-party
applications called from 4DOS, or implemented through 4DOS BTM files.
A lot of them struck me as "This is something else's job. Use the
something else." cases, and so does this.

I spend a lot of time playing in the Unix sandbox, and in that
environment, my reflex is to create a shell script akin to the above,
and not request that the bash maintainers make it a built-in.
Built-ins are called for where there is a significant performance
improvement, but that assumes it's something I'll be doing a *lot*.
(And is why, for example, the Korn shell added things like integer
arithmetic as a built-in, rather than calling eval as an external:
doing math in a script *did* happen a lot, like keeping count in a
loop. The same goes for making echo a built-in, aliased to the
internal print command, instead of calling /bin/echo every time you
wanted to write something to the screen. The performance increase
made it feasible to write entire applications in the shell.)

For something I'll do occasionally, the time needed to create and debug
a sufficiently generalized shell script to do it, and stick it in a
private bin directory for future use, is quite acceptable. A shell
script will run slower than a built-in, but if I'm doing it
infrequently, it's likely the case that I won't care, as the
additional time required won't be enough to really matter.

If you aren't going to do it a lot, and it *can* be done in a BTM, I
see no reason to make it a built-in.

Am I missing something?
_____
Dennis
 
On Mon, 23 Aug 2010 20:42:42 -0400, DMcCunney
<> wrote:

|Out of curiosity, how often do you need to do it?

Recently, once. But if the functionality had been there for the last
20 years or so I probably would have used it a few times.

|I recall seeing requests back in the old 4DOS days to build in
|functionality that could either be provided by third-party
|applications called from 4DOS, or implemented through 4DOS BTM files.
|A lot of them struck me as "This is something else's job. Use the
|something else." cases, and so does this.

TCC is chock-full of functionality which is provided by third party
apps, or which can be accomplished with a script (often with no loss
of speed). A good example is DO's processing each line of the output
of a command. The good old-fashioned way (a temp file) is much
faster:

Code:
v:\> type dodo.btm & dodo.btm
timer /q

do i=1 to 500
        do line in /P set /x
                echo %line > nul
        enddo
enddo

timer
timer /q

do i=1 to 500
        set /x > vars.txt
        do line in @vars.txt
                echo %line > nul
        enddo
        del /q vars.txt
enddo

timer
Timer 1 off: 21:33:39  Elapsed: 0:00:16.16
Timer 1 off: 21:33:47  Elapsed: 0:00:07.70

|I spend a lot of time playing in the Unix sandbox, and in that
|environment, my reflex is to create a shell script akin to the above,
|and not request that the bash maintainers make it a built-in.
|Built-ins are called for where there is a significant performance
|improvement, but that assumes it's something I'll be doing a *lot*.
|(And is why, for example, the Korn shell added things like integer
|arithmetic as a built-in, rather than calling eval as an external:
|doing math in a script *did* happen a lot, like keeping count in a
|loop. The same goes for making echo a built-in, aliased to the
|internal print command, instead of calling /bin/echo every time you
|wanted to write something to the screen. The performance increase
|made it feasible to write entire applications in the shell.)

You're pretty much stuck with that in the UNIX world. How often do
UNIX shells get new features? I was a tcsh junkie 15 (or so) years
ago. I'll bet it hasn't changed much.

I think it's Rex's goal to (within reason) give the users what they
want.

And speed ... I'm obsessed with it a little more than most.
 
---Quote (Originally by vefatica)---
Is there a way to loop (DO/FOR) on environment variable names? ... say "DO varname in @ENV". The only mechanisms I can think of are cumbersome. I suppose I could write a plugin to provide a word list of variable names, but it would be better built into DO.
---End Quote---

If you have the full version, you could use array operations and a for loop. Here are a few lines that might do it.

Code:
setarray x[1024]
set y=%@execarray[x,set]
for /l %f in (0,1,%@dec[%@arrayinfo[x,1]]) do if %@len[%x[%f]] GT 0 echo %@word["=",0,%x[%f]] = %[%@word["=",0,%x[%f]]]
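For readability, here is the same approach unrolled into a DO loop (an untested sketch; it assumes @EXECARRAY and @ARRAYINFO behave as in the one-liner above):

Code:
setarray x[1024]
:: fill x[] with the lines of SET's output
set y=%@execarray[x,set]
:: walk the array, skip empty elements, print each name and value
do f = 0 to %@dec[%@arrayinfo[x,1]]
   if %@len[%x[%f]] GT 0 echo %@word["=",0,%x[%f]] = %[%@word["=",0,%x[%f]]]
enddo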

Tim
 
On Mon, Aug 23, 2010 at 10:14 PM, vefatica <> wrote:

> On Mon, 23 Aug 2010 20:42:42 -0400, DMcCunney
> <> wrote:
>
> |Out of curiosity, how often do you need to do it?
>
> Recently, once. But if the functionality had been there for the last
> 20 years or so I probably would have used it a few times.

"A few" != "many".


> |I recall seeing requests back in the old 4DOS days to build in
> |functionality that could either be provided by third-party
> |applications called from 4DOS, or implemented through 4DOS BTM files. [snip]
>
> TCC is chock-full of functionality which is provided by third party
> apps, or which can be accomplished with a script (often with no loss
> of speed). A good example is DO's processing each line of the output
> of a command. The good old-fashioned way (a temp file) is much
> faster:

<...>

And the good old-fashioned way is with us because MS-DOS didn't
support connecting the standard output of a program directly to the
standard input of another program the way Unix does things. (Since
DOS was single tasking, it couldn't. You had to use a temp file. I
would do that on a RAM disk to speed things further.)


> |I spend a lot of time playing in the Unix sandbox, and in that
> |environment, my reflex is to create a shell script akin to the above,
> |and not request that the bash maintainers make it a built-in. [snip]
>
> You're pretty much stuck with that in the UNIX world. How often do
> UNIX shells get new features? I was a tcsh junkie 15 (or so) years
> ago. I'll bet it hasn't changed much.

The most recent version of tcsh is 6.17.00, dating from 7/10/09. I
don't believe the functionality has changed in years. Most versions
are likely bug fixes or due to ports to other environments. (Like
*nix itself, tcsh tries to be portable, and runs on things that aren't
PCs and under OSes that aren't Unix. There's a Win32 port under
Cygwin, and I believe there's a MinGW version.)

The question is what new features a Unix shell *needs*. The shell has
always been two things: an interactive interpreter that serves as the
user's command line interface to the system, and a scripting language.
And since Unix was portable and ran on lots of architectures, and
since there were differences between Unix implementations (like BSD vs
SysV), there would be things you probably wouldn't *want* to try to
build into the shell because different *nixes did them in different
ways.

I started on the Bourne shell, but happily moved to the Korn shell
when it became fully sh compatible, and could be installed as sh on
most systems. The Korn shell added useful built-ins like print and
let, and improved the shell as an interactive interface by adding
things like command line history, recall, and editing.

I never cared for csh or tcsh because I wasn't thrilled by the script
language. Since most Unix systems used an assortment of Bourne shell
scripts as part of the infrastructure of the system, I saw no point to
learning another scripting language when the stuff I'd be concerned
with as a sysadmin was all Bourne shell.

These days, pretty much everyone is moving to bash. Bash tries to be
one-size-fits-all, and is what happens when you can't decide whether
you want a shell like the Bourne shell or a shell like the C shell,
and you compromise and create one that combines most features of both,
plus a few wrinkles of its own.

But the Unix philosophy has always been one tool for one job, and
using scripts to tie tools together when you needed to do more than
one job as part of your process. So "new features" will be things to
improve it as an interactive interface, like command history, recall,
and editing, or language constructs to improve it as a script
language. When you start wanting to add things that are normally the
province of an external program, instead of calling that external
program from a script, you are arguably missing the point of what the
shell is supposed to be.


> I think it's Rex's goal to (within reason) give the users what they
> want.

The question comes down to what TCC ought to be. From my perspective,
it's a shell, just like the Unix shells. It provides an interactive
command line interface to the OS, and a glue language to tie together
other tools to do more complex processing. At some point, you have to
draw a line, and say "That's not the command processor's job". The
question is where you draw the line.

Rex has been wonderful about giving users what they want, but the
80/20 rule applies. A lot of features people ask for require
significant effort to implement, and one question that has to be asked
is "Is this feature generally useful enough to *all* users to make the
effort worth it?" If I'm Rex, the requested feature is something that
will be used occasionally by a few users, and can be accomplished
already in a batch file using existing facilities, the answer may be
"no" unless the code required is trivial.


> And speed ... I'm obsessed with it a little more than most.

So am I, but I reserve my real desires for things that need it. If a
batch file or shell script I'll use once in a while takes 5 seconds
where a built-in would take a fifth of a second, I won't *care*. And
while the built-in may let me do in one line of code what the script
would require 5 for, the syntax is likely to be complex enough that it
won't be all that much faster to write it and get it right.
_____
Dennis
 
On Mon, 23 Aug 2010 23:02:47 -0400, TimButterfield
<> wrote:

|If you have the full version, you could use array operations and a for loop. Here are a few lines that might do it.
|
|
|Code:
|---------
|setarray x[1024]
|set y=%@execarray[x,set]
|for /l %f in (0,1,%@dec[%@arrayinfo[x,1]]) do if %@len[%x[%f]] GT 0 echo %@word["=",0,%x[%f]] = %[%@word["=",0,%x[%f]]]
|---------

All I want is to loop on the variable **names**. I can get the
**names** (only) into an array thus:

Code:
setarray varnames[512]

set i=0

for /f "delims==" %var in ('set /x') (set varnames[%i]=%var & set /a
i+=1)

Then I'd have to loop on the array elements (0 ... i-2).
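That second loop would be something like this (an untested sketch; %i is the count left by the FOR above, looping 0 ... i-2 as noted):

Code:
do n = 0 to %@eval[%i - 2]
   echo %varnames[%n]
enddo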

Something like this would be a lot easier.

Code:
DO varname in @ENV:

Better yet, throw in a regular expression to match:

Code:
DO varname in /E [regex]
 
On Mon, 23 Aug 2010 23:43:21 -0400, DMcCunney
<> wrote:

|When you start wanting to add things that are normally the
|province of an external program, instead of calling that external
|program from a script, you are arguably missing the point of what the
|shell is supposed to be.

So what do you think of TCC? It's already far beyond your definition
of a shell.
 
On Tue, Aug 24, 2010 at 12:00 AM, vefatica <> wrote:

> On Mon, 23 Aug 2010 23:43:21 -0400, DMcCunney
> <> wrote:
>
> |When you start wanting to add things that are normally the
> |province of an external program, instead of calling that external
> |program from a script, you are arguably missing the point of what the
> |shell is supposed to be.
>
> So what do you think of TCC? It's already far beyond your definition
> of a shell.

So it is. But what gets tricky is what you mean by the shell.

In the MS-DOS days when 4DOS ruled, the default shell was COMMAND.COM.
But MS made it possible to use something else, with the appropriate
SHELL= line in CONFIG.SYS, so you could use, say, 4DOS instead.

I took that a step further, because I used the MKS Toolkit. The
Toolkit was a collection of MS-DOS versions of all the Unix utilities
that made sense in a single-user, single tasking environment,
including a remarkably complete version of the Korn shell, that had
everything except asynchronous sub-processes because DOS couldn't do
that. Installed in fullest Unix compatibility mode, you set the MKS
INIT.EXE program as the shell in CONFIG.SYS.

When you booted the system, and drivers were installed, then INIT.EXE
ran, and printed a Login: prompt. When you entered a userid, INIT
called LOGIN.EXE. LOGIN looked in a Unix compatible /etc/passwd file,
and if it found the ID, it changed to whatever was listed as that ID's
home directory, and ran whatever was listed as that ID's shell. I had
IDs that ran vanilla COMMAND.COM, 4DOS, the MKS Korn shell, and
Desqview. Exit the shell, and control was returned to INIT which
printed Login: again. So I could change the environment I was working
in without rebooting. Just log off and log back on.

When Windows 3.X came along, the "shell" was Program Manager. But you
could change that to something else by changing an entry in the
SYSTEM.INI file. I looked at an assortment of Program Manager
replacements. I still ran the Toolkit, with INIT underneath, so I had
IDs that copied a custom copy of SYSTEM.INI over the master one, and I
could run Win 3.X with whatever shell I preferred by selecting the
appropriate ID.

MS-DOS was still underneath it all for Win9.X, and INIT was still
there, so I played other games with the configuration, and could do
things like run a CLI environment using 4DOS or the Korn shell, run
Desqview, run Windows with Explorer, or run an Explorer replacement
like LiteStep.

Win2K/XP are proper 32 bit OSes, and the old Toolkit was 16 bit, so I
regretfully bid it farewell. These days, I use Cygwin to provide a
*nix environment. As far as NT based Windows versions are concerned,
Windows Explorer is your shell. The command line interface is still
there, but it's a good bet the majority of Windows users never go near
it.

I haven't really explored TCC in detail, so I can't give you a full
answer. Rex once mentioned that 80% of the users used 20% of the
features in his products, but everyone used a *different* 20%. That's
as true for me as anyone else. What I loved about 4DOS back when was
three basic things: built-in command line recall and editing, the
ability to evaluate variables on the command line, and the
enhancements to the batch language that made it possible to write
actual programs in it. Things like a built-in file browser akin to
LIST and extensive help bound to F1 were icing on the cake.

I make roughly the same use of what TCC offers, and probably haven't
done more than scratch the surface.

But Windows is a very different OS than Unix, so a CLI for Windows
will be different. Windows *doesn't* have the "one tool for one job"
philosophy. It doesn't have the rich array of tools to call from and
tie together in scripts that Unix does. I would expect more things to
get built in to TCC, simply because they won't be conveniently
available elsewhere unless you install third party stuff like Cygwin.

My rough rule of thumb in thinking about this is that a lot of
requests for enhancements in any program are to handle particular
special cases. I start to wonder what the special case is an instance
of, and wonder if a more general enhancement can't be made to address
the underlying class of which the special case is an instance.
_____
Dennis
 
---Quote (Originally by DMcCunney)---
These days, pretty much everyone is moving to bash. Bash tries to be one-size-fits-all, and is what happens when you can't decide whether you want a shell like the Bourne shell or a shell like the C shell, and you compromise and create one that combines most features of both, plus a few wrinkles of its own.
---End Quote---
I'd say that's not totally true.
I observe quite the opposite trend: an attempt to make everything strictly sh-compatible (and it's really not hard). Even bash itself is leaning toward closer resemblance to POSIX sh. You keep the built-in performance improvements of, say, bash, but you get as much compatibility as possible out of the box.
And I can't call that a bad thing.
 
On Wed, Aug 25, 2010 at 9:26 PM, AnrDaemon <> wrote:

> ---Quote (Originally by DMcCunney)---
> These days, pretty much everyone is moving to bash. Bash tries to be one-size-fits-all,
> and is what happens when you can't decide whether you want a shell like the Bourne
> shell or a shell like the C shell, and you compromise and create one that combines most
> features of both, plus a few wrinkles of its own.
> ---End Quote---


> I'd say that's not totally true.
> I observe quite the opposite trend: an attempt to make everything strictly sh-compatible
> (and it's really not hard). Even bash itself is leaning toward closer resemblance to POSIX sh.
> You keep the built-in performance improvements of, say, bash, but you get as much
> compatibility as possible out of the box.

When I say "everyone is moving to bash", I'm mostly thinking of Linux,
where every distro I've seen ships with bash as the default shell.
Others are available, and you can get things like pdksh, tcsh, and zsh
if you prefer them. For that matter, I've seen at least one distro
that uses Python in place of Bourne shell scripts for system
configuration.

But if bash is increasingly POSIX sh compatible, I don't think it much
matters. If it runs all the system scripts (which are likely to be
Bourne shell compatible, and not use the added features of bash), it
might as well *be* sh.

(And in fact, it may well use ash, which as far as I can tell is a
subset of bash intended for running scripts, which keeps the script
language but leaves out a lot of the interactive enhancements in the
interests of speed and size. You aren't likely to run ash
interactively, so you don't care.)

Along similar lines, I have vim here. Most Linux systems ship with
vim as vi. I learned vi back when, and for what I do with vim, it
might as well *be* vi. Fortunately, it's compatible enough. I think
there are a few obscure edge cases where what vim does isn't exactly
what vi does, but I've never encountered one. I just do "vi
<filename>" like I always have. The additional features are there if
I want them, but for the most part I don't, and meanwhile, they don't
get in the way.


> And I can't name it as bad thing.

I can't either.

My concern is keeping track of what feature is compatible with what
shell. There *are* systems that don't use bash as the default shell,
and I try to keep scripts portable, and leave out or make conditional
things not guaranteed to work everywhere. The trick is knowing what
those "not guaranteed to work everywhere" features are.


_____
Dennis
 
| But the Unix philosophy has always been one tool for one job, and
| using scripts to tie tools together when you needed to do more than
| one job as part of your process.

Sorry to reply this late (I had read-only internet access), but therein
lies the problem: each tool has its own, unique command line, with different
tools having completely different methods of specifying the same option.
There is not even any consistency in how to specify that most common file
processing case, a single input file and a single output file. Some require
redirection, some require "source destination" order, some require
"destination source" order. Cf. 4DOS/4NT/TCC - there is nearly 100%
consistency between commands on how to specify each available option. Once
you learned the basic syntax, you have it for all commands. And another
point - because it is a single program, its documentation includes all
"tools". In the POSIX world, when you need to do a specific task, it's up to
your memory whether or not you can find the tool that's actually available. If you
don't know its name, you may need to spend many hours browsing "man-pages"
to find it.
--
Steve
 
---Quote (Originally by vefatica)---
I'm still toying with plugin solutions. At the moment I have a "VARNAMES regex" command. ... And now I'm imagining an enhancement to DO like this, where /E indicates looping on environment variable names matching the regex.

DO var in /E "^^[A-Z]{2}$" (set /a total+=%[%var])
---End Quote---

Why not just go the plugin route? For example, I use Charles Dye's ISO8601 plugin every day, mainly for the QCAL command.

Should TCC have a builtin QCAL command?

Loading the ISO8601 plugin, calling the command, and unloading the plugin is fast on my system when calling QCAL.BTM:

Code:
@setlocal
@echo off
if not plugin iso8601 plugin /l c:\utils\iso8601
if isplugin qcal qcal %1 %2 %3 %4 %5
if plugin iso8601 plugin /u iso8601
endlocal
If I want a quick calendar, I have the following alias defined:

Code:
@@ctrl-q=qcal /i /3
I got into a discussion about this many years back when Luchezar Georgiev was updating 4DOS. The question at that time: should 4DOS be modified, making it a larger program, to accommodate whatever the user wanted, or should "Installable Commands" be created to achieve the same thing for those that required the enhancement?

Only Rex can change TCC, and only a few have written plugins to enhance TCC. The plugins that have been written have enhanced TCC for those that required the enhancement. Leaving these enhancements out of TCC allows TCC to be a smaller program, which loads faster, and uses less system resources.

Joe
 
---Quote (Originally by Joe Caverly)---
Why not just go the plugin route? For example, I use Charles Dye's ISO8601 plugin every day, mainly for the QCAL command.

Should TCC have a builtin QCAL command?
---End Quote---

If that's what you want, then I suggest you use the QCAL plugin instead. It only provides the QCAL command and a minimal subset of functions (just enough to parse Holidays.ini.)
 
On Thu, 26 Aug 2010 10:04:59 -0400, Joe Caverly
<> wrote:

|Why not just go the plugin route?

I suppose that's what I'll do since there hasn't been much interest
expressed.

Getting the varnames (even matching a regex) is almost trivial but
there are two things I can't overcome with a plugin.

"DO var IN /P command" is far slower than "DO var IN /L list". And if
a varname contained a space (rare but possible), DO /L would not
process a list (say from _VARNAMES or @VARNAMES[regex]) correctly.
Nearly everywhere else in TCC a quoted string is treated as a single
token; but not for DO /L.

Code:
v:\> for %v in (a "b c" d) echo %v
a
"b c"
d

v:\> do v in /L a "b c" d (echo %v)
a
"b
c"
d
 
On Thu, Aug 26, 2010 at 9:08 AM, Steve Fábián <> wrote:

> | But the Unix philosophy has always been one tool for one job, and
> | using scripts to tie tools together when you needed to do more than
> | one job as part of your process.
>
> Sorry to reply this late (I had read-only internet access), but therein
> lies the problem: each tool has its own, unique command line, with different
> tools having completely different methods of specifying the same option.
> There is not even any consistency in how to specify that most common file
> processing case, a single input file and a single output file. Some require
> redirection, some require "source destination" order, some require
> "destination source" order.

Yes, and it's an occasional source of eye rolling in the *nix world.
It's not surprising, as in the original Unix development, the tools
were created by individual developers scratching an itch, and creating
a tool that did something they needed to do. While you can argue that
they should have, no one was really thinking about compatibility back
then, and how nice it would be if commands had regular syntax. The
Unix devs were building a new OS that would be a better environment
for their work as software developers, and I don't think anyone
dreamed it would be successful enough to escape Bell Labs and make a
good try at taking over the world. Back then, you could probably pick
up the phone or walk down the hall and talk to the guy who wrote a
particular utility if you weren't sure what was going on.

The Gnu versions of the various tools have tried to address that, so
you can be reasonably confident that at least "foo --help" will give
you a usage screen for foo, and that there should be a corresponding
foo man page with more details.


> Cf. 4DOS/4NT/TCC - there is nearly 100%
> consistency between commands on how to specify each available option. Once
> you learned the basic syntax, you have it for all commands.

Yep. It's a strength. The weakness is that it's not portable: if you
are using 4DOS/4NT/TCC you are running a flavor of MS-DOS/Windows. If
you are running something that isn't an Intel architecture PC or an OS
that wasn't made by Microsoft, you must use a different tool set. The
reverse is less true - ports of the *nix tools exist for Windows in
things like Cygwin or Microsoft Services For Unix.


> And another
> point - because it is a single program, its documentation includes all
> "tools". In the POSIX world, when you need to do a specific task, it's up to
> your memory whether or not you can find the tool that's actually available. If you
> don't know its name, you may need to spend many hours browsing "man-pages"
> to find it.

<shrug> If you're in the POSIX world, you learn. I suspect the 80/20
rule applies there, too - 80% of the users use 20% of the commands,
and become familiar with the syntax for the commands they use. If
they find themselves needing to do something not covered by the stuff
they know, they need to find out what does it and how it's invoked
(and may need to install it if it's not part of the default install on
their system.)

If you are working on Windows and want to use a command line, TCC is
an essential tool. But you still face the question of what should be
built into TCC, and what should really be the job of something else
you call from TCC.


_____
Dennis
 
---Quote (Originally by vefatica)---
"DO var IN /P command" is far slower than "DO var IN /L list". And if a varname contained a space (rare but possible), DO /L would not process a list (say from _VARNAMES or @VARNAMES[regex]) correctly. Nearly everywhere else in TCC a quoted string is treated as a single token; but not for DO /L. [snip]
---End Quote---

But an equals sign can never occur in a variable name, right? So use that as your delimiter:

Code:
do var in /t"=" /l a=b c=d ( echo %var )
(No parentheses around the string set in this syntax. No, I don't know why.)
 
On Thu, 26 Aug 2010 10:55:24 -0400, Charles Dye
<> wrote:

|But an equals sign can never occur in a variable name, right? So use that as your delimiter:
|
|
|Code:
|---------
|do var in /t"=" /l a=b c=d ( echo %var )
|---------
|(No parentheses around the string set in this syntax. No, I don't know why.)

Good idea! The result would look odd

Code:
VARNAME WITH SPACE=ALLUSERSPROFILE=APPDATA=CLIENTNAME

but it's not meant to be looked at and should work. We have the
ability to specify delimiters just about everywhere it would be
needed.

Code:
v:\> for /T"=" %v in (VARNAME WITH SPACE=ALLUSERSPROFILE=APPDATA=CLIENTNAME) echo %v

VARNAME WITH SPACE
ALLUSERSPROFILE
APPDATA
CLIENTNAME

v:\> do v in /T"=" /L VARNAME WITH SPACE=ALLUSERSPROFILE=APPDATA=CLIENTNAME (echo %v)

VARNAME WITH SPACE
ALLUSERSPROFILE
APPDATA
CLIENTNAME
 
| good try at taking over the world. Back then, you could probably
| pick up the phone and talk to the guy who wrote a
| particular utility if you weren't sure what was going on.

Yes, about 1986 I was working as a contractor at Bell Labs, and just
picked up the phone to ask a question about ksh - from none other than Dr.
Korn! But even then, Bell Labs people forced to use Unix for such things as
accounting (and not software development) hated its absolutely and
completely user-unfriendly interface.

| ---Quote---
|| Cf. 4DOS/4NT/TCC - there is nearly 100%
|| consistency between commands on how to specify each available
|| option. Once you learned the basic syntax, you have it for all
|| commands.
| ---End Quote---
| Yep. It's a strength. The weakness is that it's not portable: if
| you are using 4DOS/4NT/TCC you are running a flavor of MS-DOS/Windows.
| If you are running something that isn't an Intel architecture PC or an
| OS that wasn't made by Microsoft, You must use a different tool set.
| The reverse is less true - ports of the *nix tools exist for Windows in
| things like Cygwin or Microsoft Services For Unix.

That's because it is not commercially viable to port it. Were it otherwise,
the oft-requested 4NIX would have long been available. However, I did use
4DOS long ago on Solaris, which had a DOS emulator.

| ---Quote---
| And another
|| point - because it is a single program, its documentation includes
|| all "tools". In the POSIX world, when you need to do a specific
|| task, it's up to your memory whether or not you can find the
|| tool that's actually available. If you don't know its name, you may need
|| to spend many hours browsing "man-pages" to find it.
| ---End Quote---
| <shrug> If you're in the POSIX world, you learn. I suspect the
| 80/20 rule applies there, too - 80% of the users use 20% of the commands,
| and become familiar with the syntax for the commands they use. If
| they find themselves needing to do something not covered by the stuff
| they know, they need to find out what does it and how it's invoked
| (and may need to install it if it's not part of the default install
| on their system.)

The issue in the POSIX world is "how do you learn" - what's missing is the
equivalent of the sections of TCC documentation titled "... by category".
All programming language manuals published by their vendors I've ever seen
(and there are many) are organized by category. Only the POSIX world demands
rote learning.
--
Steve
 
---Quote (Originally by Charles Dye)---
(No parentheses around the string set in this syntax. No, I don't know why.)
---End Quote---

Because we shouldn't be putting parentheses around the string list, period. It's incorrect syntax, and the fact that it works without /T is just lagniappe from the parser.
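To make the distinction concrete, a quick sketch of the two forms:

Code:
:: documented syntax - bare string list, parentheses only around the command
do var in /L a b c (echo %var)

:: incorrect, even though the parser happens to tolerate it when /T isn't used
do var in /L (a b c) (echo %var)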
 
On Thu, Aug 26, 2010 at 11:32 AM, Steve Fábián
<> wrote:

> | good try at taking over the world. Back then, you could probably
> | pick up the phone or walk down the hall and talk to the guy who wrote a
> | particular utility if you weren't sure what was going on.
>
> Yes, about 1986 I was working as a contractor at Bell Labs, and just picked
> up the phone to ask a question about ksh - from none other than Dr. Korn!

An old friend who was around back then commented that even if you
weren't part of AT&T/Bell Labs, you *could* get support on Unix - you
simply needed to know who to call at Bell Labs, and have a question or
suggestion they considered interesting.


> But even then, Bell Labs people forced to use Unix for such things as
> accounting (and not software development) hated its absolutely and
> completely user-unfriendly interface.

Sure, and I don't blame them. It wasn't designed for them, it was
designed for programmers. And the terse user interface stemmed from
the fact that the standard interface to the system when it was being
designed was a dial-up hardcopy terminal. The design placed a premium
on getting the most output for the least input. (I recall Ken
Thompson commenting that if he had to do it over, he'd spell creat
with a trailing "e" :-) )

But the rest of the world had similar problems dealing with MS-DOS and
the dreaded C:\ prompt.


> | ---Quote---
> || Cf. 4DOS/4NT/TCC - there is nearly 100% consistency between
> || commands on how to specify each available option. Once you learned
> || the basic syntax, you have it for all commands.
> | ---End Quote---
> | Yep. It's a strength. The weakness is that it's not portable: if you are
> | using 4DOS/4NT/TCC you are running a flavor of MS-DOS/Windows.
> | If you are running something that isn't an Intel architecture PC or an
> | OS that wasn't made by Microsoft, you must use a different tool set.
> | The reverse is less true - ports of the *nix tools exist for Windows in
> | things like Cygwin or Microsoft Services For Unix.
>
> That's because it is not commercially viable to port it. Were it otherwise,
> the oft-requested 4NIX would have long been available. However, I did use
> 4DOS long ago on Solaris, which had a DOS emulator.

I recall requests back when for a 4nix, and it *wouldn't* have been
commercially viable.

Part of it would have been the inherent resistance in getting folks
used to tools that were free and open source to pay for a closed
source tool. The bigger part, I think, is that *nix didn't *need* it.

4DOS became very popular on MS-DOS because COMMAND.COM was brain dead.
*nix shells like ksh and bash *weren't* brain dead, and already had
the sorts of bells and whistles 4DOS brought to the MS-DOS world.

And even if you *did* port it, what would a port look like? The
underlying OS and tool set on *nix are very different. Would a 4nix
implement the tools and command syntax of the native tools, or would
it attempt to provide MS-DOS commands and syntax, and call the
underlying tools to do the work?

I have a product created by AT&T engineers way back that attempted to
provide an MS-DOS like environment under *nix. It was a set of Korn
shell aliases and functions that did the latter, letting you use
MS-DOS commands and converting to the native syntax to do the work.
But it was intended as training wheels for people coming to *nix from
the MS-DOS world to help them learn and become comfortable with the
Unix command set, and there was an implicit assumption that at some
point you would remove the training wheels and do things the *nix way.

A 4nix that did the former would still require the learning curve
involved in knowing what the underlying tools were and how they were
used, unless it attempted to re-implement the underlying tools as part
of the command processor. That would be a lot of work, and I don't
think that would be the way to go. What would happen to a user used
to 4nix who found herself required to work on a system that didn't
have it installed?

I have a couple of different DOS emulators for *nix, and could run
4DOS under them. The question is what I'd do with it. (I do have
4DOS installed on an old notebook that has FreeDOS on a partition as
the default shell when I boot into the DOS environment. I don't try
to use it from the Win2K or Linux installations on the box.)


> | ---Quote---
> | And another
> || point - because it is a single program, its documentation includes
> || all "tools". [snip]
> | ---End Quote---
> | <shrug> If you're in the POSIX world, you learn. [snip]
>
> The issue in the POSIX world is "how do you learn" - what's missing is the
> equivalent of the sections of TCC documentation titled "... by category".
> All programming language manuals published by their vendors I've ever seen
> (and there are many) are organized by category. Only the POSIX world demands
> rote learning.

There are third-party efforts to address that lack, but you're right.
One point I've made a lot to folks is that man pages are references,
not tutorials. They implicitly assume you already know certain
things, or have a guru on site to tell you. If you don't happen to
have a guru on site (like you're a Linux user at home) things get more
complicated.


_____
Dennis
 
---Quote (Originally by Charles Dye)---
If that's what you want, then I suggest you use the QCAL plugin instead. It only provides the QCAL command and a minimal subset of functions (just enough to parse Holidays.ini.)
---End Quote---

I'm still using the @DATE function from your plugin that solved my leap year problem, so I'll stick with the ISO8601 plugin.

Besides, on my XP system, it takes 0:00:00.02 seconds in a .BTM to load your ISO8601 plugin, execute the QCAL command, and finally unload your plugin.

I don't think that speed can be improved on.

Joe
 
Code:
setlocal
set >%@unique[]
set data=%@execstr[*dir /a:-d /h /o:-t /f]
timer on
do r=0 to %@lines[%data]
   echo %@field["=",-1,%@line[%data,%r]]
enddo
timer off
del /q %data
endlocal

Still too much?


> ---Quote---
> >I meant loop on **all** variable names. Like this (with the help of
> >an experimental plugin _VARNAMES):
> ---End Quote---
> Wouldn't this do what you want:
>
> :: LOOP.BTM [snip]
>
> Best Regards,
>
> * Klaus Meinhard *
 
On Thu, 26 Aug 2010 21:37:13 -0400, Kachupp
<> wrote:

|setlocal
|set >%@unique[]
|set data=%@execstr[*dir /a:-d /h /o:-t /f]
|[snip]

You're cheating with the timing by not counting the time to
create/delete the file.

|Still too much?

I really think it should be a one-liner.

Code:
v:\> do v in /T"=" /L %@varnames[] ( echo %v )
ALLUSERSPROFILE
APPDATA
CLIENTNAME
CommonProgramFiles
(snip)

In its latest incarnation, @VARNAMES[[regex]] returns an =-separated
list of variable names [matching regex]. I want a way to reverse the
sense of the regex matching (i.e., process a non-match). That's not
built into regexes themselves. Some syntaxes allow embedded comments,
like Perl's (?#comment) which I could use, but that's not universal. I
could use something as simple as prefixing the regex with '!' to
reverse the match sense. If the user wanted to start the regex with a
real '!' he'd just have to use an extra one. Sound good?
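For illustration, usage under the proposed (purely hypothetical) '!' prefix might look like this:

Code:
:: hypothetical: leading '!' reverses the match; process names NOT matching two capitals
do v in /T"=" /L %@varnames[!^^[A-Z]{2}$] ( echo %v )

:: hypothetical: a regex really starting with '!' doubles it (MYVAR.* is just an example)
do v in /T"=" /L %@varnames[!!MYVAR.*] ( echo %v )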
 
