Yet, a failure to cite related parent projects certainly needs to be addressed. Maybe forgivable if it were a first-year student. =3
kscarlet 9 hours ago [-]
And there doesn't seem to be much non-trivial code written under this project; it's just loosely putting together some existing work and adding some READMEs with the same format.
A bit disorienting for someone looking for a statistical computing environment in CL, to say the least. Maybe I'm stupid, but this is nowhere near the (somewhat complete) environment it makes itself look like.
submeta 8 hours ago [-]
Choose the right tool for the right task. I'll go with R and RStudio, or even Python, for data analysis and statistics. Opting for Lisp is like trying to use a swiss knife to cut a tree just because you love your swiss knife.
ofalkaed 5 hours ago [-]
>Opting for Lisp is like trying to use a swiss knife to cut a tree just because you love your swiss knife.
First thing I did when I got my Swiss Army pocket knife was go to the woods by my house and cut down a tree with its little saw. It was a small aspen or poplar, maybe 3" thick, and it took some doing, but it came down. That was my first pocket knife and the first tree I cut down; I believe I was in third grade. I still remember the smell of the freshly cut wood and the damp humus, the feeling of the sap running over my hand; it was one of those shadowless overcast days, early fall before the leaves started turning. I avoided washing my hands all day just to keep the smell of the sap with me. I did love my Swiss knife, and took it with me everywhere I went for years. Thanks for the memories.
anonzzzies 8 hours ago [-]
... which is not a bad reason in some cases.
I, for instance, find Python the most horrible language + ecosystem outside the JS ecosystem (but I like JS the language more, and that's saying something), so I would always opt for Lisp (or pen + paper) over Python. R / RStudio are nice, though.
I don't think the comparison really tracks either; Lisp is quite ergonomic for this type of thing, and if you have been doing it for a while, you'll have your own tooling to work faster and more efficiently in that Lisp. And of course the comparison falls down then, since the swiss knife now has a chainsaw option that is as good as or better than the other options for cutting down trees.
TurboHaskal 8 hours ago [-]
Yeah, I don't get it either. Lisp is perfectly fine for this task, although it probably makes less sense now that Julia is a thing.
Reminder that before Python was used for data science, people used things like BioPerl and PDL, and that didn't stop people from working on pandas and the like.
Also let people have fun.
hatmatrix 7 hours ago [-]
Lispers might not like that it's not a Lisp, but I remember Luke Tierney also making a statement to the effect that the statisticians have spoken and they don't prefer the Lisp syntax.
So Julia is a happy middle ground: MATLAB-like syntax with metaprogramming facilities (i.e., macros and access to ASTs). Its canonical implementation is JIT-compiled, but the community is working on allowing the creation of medium-sized binaries (there has been much effort to reduce this footprint).
eigenspace 5 hours ago [-]
Julia isn't a Lisp, but I think it's the most Lisp-like non-S-expression-based language around these days. The language's creators took the lessons of Lisp very seriously, and it shares a lot of functionality and philosophy with Lisps.
hatmatrix 5 hours ago [-]
Well I think the original author was a fan of Lisp and implemented the first Julia parser in femtolisp, IIRC. (And femtolisp was a lightweight Lisp of his own.)
Joel_Mckay 26 minutes ago [-]
Julia is somewhat different:
1. readability with explicit broadcast operators
2. interoperability with other languages including R and Python
3. performance often exceeding numpy and C/C++ code
4. usability in numerous workflows:
https://www.queryverse.org/
The idea of using Lisp or Prolog in a production environment doesn't sound fun at all. Yet, they do make some types of problems easier to handle. =3
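On point 1, the contrast with NumPy's implicit broadcasting can be made concrete. A minimal Python sketch (the Julia spelling appears only in the comment; the numbers are purely illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])

# NumPy broadcasts the scalar 1 across the array implicitly;
# Julia asks you to spell out the elementwise intent with dots:
#   y = sin.(x) .+ 1
y = np.sin(x) + 1

print(y.shape)  # (3,)
print(y[0])     # 1.0, since sin(0) + 1 == 1
```

Julia's explicit dot syntax makes it visible at the call site which operations are elementwise, which is the readability point being made above.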
nomilk 7 hours ago [-]
It cites the inability to compile to machine code as a reason for preferring Lisp to R and Python.
What are the benefits of being able to compile to machine code? Does it mean you can make standalone binaries (i.e., programs that can run without the language itself, Lisp/R/Python, being installed), or is there some other advantage, e.g. performance?
bheadmaster 7 hours ago [-]
In my view, the biggest advantages of ahead-of-time compilation are lower binary size, higher performance, and binary portability (in the sense of being able to copy the binary and run it on another system with the same architecture and OS, not in the usual sense of being easy to port to a different architecture or OS).
It is IMO not widely enough known that Python itself can be compiled, using the Nuitka [0] compiler. It still runs Python code, so the performance increase is not as extreme as one would get from a rewrite in fully statically typed code, but the AOT-compiled C code is still faster than the interpreter.
[0] https://nuitka.net/
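A minimal sketch of that workflow, assuming Nuitka is installed (the invocations in the comments follow Nuitka's documented CLI; the script itself is ordinary Python and runs unchanged either way):

```python
# fib.py -- ordinary Python; Nuitka compiles this file unchanged.
# Typical invocations (from the Nuitka CLI):
#   python -m nuitka fib.py               # compile to a binary
#   python -m nuitka --standalone fib.py  # bundle the runtime as well

def fib(n: int) -> int:
    """Iterative Fibonacci: the kind of tight loop AOT compilation helps."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    print(fib(30))  # 832040, whether interpreted or compiled
```

The `--standalone` variant is what produces a directory you can copy to a machine without Python installed, which answers the standalone-binary part of the question above.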
Is lower binary size or binary portability really a major concern for statistical computing? In my experience with statistical computing in R, I've never once had a situation where producing a binary was required. As for portability, you can just share the script and the data, right?
disgruntledphd2 5 hours ago [-]
If you want to build data applications, it's extremely helpful. For instance, if you've built some marketing models, making it easier for marketers to work with them will pay off significantly.
hatmatrix 7 hours ago [-]
Both.
There are some optimizations that can be made at compile time that speed up the computations. It also makes the program portable, provided that executables are built for each desired platform.
jinlisp 2 hours ago [-]
I am thinking about developing a version of J in Common Lisp, and this could be a useful library to use with that program.
b0a04gl 6 hours ago [-]
nice to see lisp getting some attention outside the usual circles. if this ships with solid plotting and can handle real datasets without pain, it might actually get used. still feels early. would be good to see a full example that goes from data to chart to binary without touching the REPL. that's where most lisp tools fall short
akashi9 6 hours ago [-]
Interesting and cool idea, but by far the biggest strength of R for statistical computing is the wealth of libraries and documentation out there for the language. Obviously Rome wasn't built in a day, but does Lisp-Stat offer any of these things?
awaymazdacx5 10 hours ago [-]
the lispworks test package typically contains xlib-stat over TCP/UDP transport protocols that should designate BMP strings
fud101 10 hours ago [-]
I loved XLisp-Stat; the book was gorgeous. When I discovered Lisp-Stat, I was using a Windows XP machine in a college lab, and it just worked; I used it as my first Lisp. Such a good piece of software. Not sure about the new package; I'm long past my lisping days now.
Seems to be this company in Singapore: https://opencorporates.com/companies/sg/201923570D
As opposed to the Symbolics company: https://en.wikipedia.org/wiki/Symbolics
https://github.com/Lisp-Stat/lisp-stat/blob/2514dc3004b09942...
And
https://lisp-stat.dev/blog/2021/05/09/statistical-analysis-w...
https://www.youtube.com/watch?v=sV7C6Ezl35A