Re: What does an optimal scientific programming language/environment need? [message #36550 is a reply to message #36516]
Sat, 27 September 2003 19:11
grunes
There are way too many responses to my original post to adequately
address them all. But many of them are rather interesting.
===============
For example:
> and what is the value of 3**3**3 (or, if you prefer, 3^3^3)?
> -- there appears to be no consensus on this one
I assume you are using ** (FORTRAN) or ^ (BASIC) to represent
exponentiation (though ^ is more or less the standard math/logic
symbol for logical AND).
I thought there was consensus on x superscript y superscript z. It
means x raised to the (y raised to the z) power, just like FORTRAN
x**y**z means x**(y**z). The other meaning is already compactly
represented as x superscript yz, at least for positive reals.
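To make the difference concrete, here is a small illustration in Python, whose ** operator happens to be right-associative like FORTRAN's (Python is just a convenient stand-in here, not something from the original discussion):

    # Python's ** associates to the right, like FORTRAN's **
    print(3**3**3)      # 3**(3**3) = 3**27 = 7625597484987
    print((3**3)**3)    # (3**3)**3 = 27**3 = 19683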
I agree that a computer language notation must remove the ambiguities
present in standard mathematical notation.
==============
I agree that signal processing should probably include FFT, and a few
other commonly used transforms.
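As a minimal sketch of the kind of built-in transform support I mean, here is what an FFT round trip looks like in Python with NumPy (the numpy.fft interface is just one example of such a facility, not a proposal for the language itself):

    import numpy as np

    # Forward FFT of a short real-valued signal, then inverse to recover it
    x = np.array([0.0, 1.0, 0.0, -1.0])
    X = np.fft.fft(x)           # complex spectrum
    y = np.fft.ifft(X).real     # round trip back to the original samples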
======================
At one point, I wanted to create my own IDL interpreter/compiler
clone, and spent quite a lot of time figuring out how, but gave up
after researching the complex legal status of "reverse engineering" in
the USA.
In the meantime, I am following up on the suggestion one person made
that "J", perhaps combined with calls to FORTRAN and C for what needs
to be efficient, already answers the major needs. If that works well
enough, fine.