

Lisp Questions

Name: Anonymous 2011-11-18 19:59

So I have some questions about lisp. I've been learning programming but I hear some conflicting information so I need some things cleared up.

First of all, I keep hearing that the ability to write code others can read is huge, but since lisp is incredibly hard to read, wouldn't that make lisp programmers inherently bad programmers?

Also, can someone post some non-trivial lisp code? Every time I see someone post lisp code it's always something very short and trivial. Is it possible to write anything real with lisp?

Name: Anonymous 2011-11-18 20:05

>>1
Hello, Python fan, glad you asked about "readability".

I am far from an expert at Python, but I have done a couple of semi-serious projects in the language and will try to recall specifically what I didn't like.

- Everything you write will be open source. No FASLs, DLLs or EXEs. A developer may want control over the level of access to prevent exposure of internal implementation, either because it contains proprietary code or because strict interface/implementation decomposition is required. Python third-party library licensing is overly complex: licenses like MIT allow you to create derived works as long as you maintain attribution, while the GNU GPL and other 'viral' licenses don't allow derived works without inheriting the same license. To inherit the benefits of open source culture you also inherit the complexities of licensing hell.
- Installation mentality: Python has inherited the idea that libraries should be installed, so it is in fact designed to work inside unix package management, which carries a fair amount of baggage (library version issues) and reduced portability. Of course it is possible to package libraries with your application, but it's not conventional, and deploying as a desktop app can be hard due to cross-platform issues, language version, etc. Open source projects generally don't care about Windows; most open source developers use Linux because "Windows sucks".
- Probably the biggest practical problem with Python is that there's no well-defined API that doesn't change. This makes life easier for Guido and tough on everybody else. That's the real cause of Python's "version hell".
- The Global Interpreter Lock (GIL) is a significant barrier to concurrency. Due to signaling with a CPU-bound thread, it can cause a slowdown even on a single processor. The reason for employing the GIL in Python is to ease the integration of C/C++ libraries. Additionally, the CPython interpreter code is not thread-safe, so the only way other threads can do useful work is if they are in some C/C++ routine, which must itself be thread-safe.
- Python (like most other scripting languages) does not require variables to be declared, as (let ((x 123)) ...) does in Lisp or int x = 123 in C/C++. This means that Python can't even detect a trivial typo - it will produce a program which will run for hours until it reaches the typo - THEN go boom and you lose all unsaved data. Local and global scopes are unintuitive. Having variables leak after a for-loop can definitely be confusing. Worse, the binding of loop indices can be very confusing; e.g. "for a in list: result.append(lambda: fcn(a))" probably won't do what you think it would. Why the nonlocal/global/auto-local scope nonsense?
- Python's module system offers no protection against mutation. Type time.sleep=4 instead of time.sleep(4) and you just destroyed the system-wide sleep function with a trivial typo. Now consider accidentally assigning some method to time.sleep, and you won't even get a runtime error - just very hard-to-trace behavior. And sleep is only one example; it's just as easy to override ANYTHING.
- Crippled support for functional programming. Python's lambda is limited to a single expression and doesn't allow statements. Python makes a distinction between expressions and statements, and does not automatically return the last expression, thus crippling lambdas even more. Assignments are not expressions. reduce was banished to functools in Python 3.0. No continuations or even tail call optimization: "I don't like reading code that was written by someone trying to use tail recursion." --Guido
- Python's syntax, based on the SETL language and mathematical set theory, is non-uniform and hard to understand and parse, compared to simpler languages like Lisp, Smalltalk, Nial and Factor. Instead of the usual "fold" and "map" functions, Python uses comprehension syntax, which carries an overwhelmingly large collection of underlying linguistic and notational conventions, each with its own variable-binding semantics. Using the interactive CLI and automatically generating Python code are both hard due to the so-called "off-side" indentation rule (aka Forced Indentation of Code), also found in the math-intensive Haskell language. This, in effect, makes Python look like an overengineered toy for math geeks. Good luck discerning [f(z) for y in x for z in gen(y) if pred(z)] from [f(z) if pred(z) for z in gen(y) for y in x]
- Python hides logical connectives in a pile of other symbols: try spotting the "and" in "if y > 0 or new_width > width and new_height > height or x < 0".
- Python indulges messy horizontal code (> 80 chars per line), where in Lisp one would use "let" to break the computation into manageable pieces. Get used to stuff like self.convertId([(name, uidutil.getId(obj)) for name, obj in container.items() if IContainer.isInstance(obj)])
- Quite quirky: triple-quoted strings seem like a syntax decision from a David Lynch movie, and double underscores, like __init__, seem appropriate in C, but not in a language that provides list comprehensions. There are better ways to mark certain features as internal or special than calling them __feature__. self everywhere can make you feel like OO was bolted on, even though it wasn't.
- Python has too many confusing non-orthogonal features: references can't be used as hash keys; expressions in default arguments are calculated when the function is defined, not when it’s called. Why have both dictionaries and objects? Why have both types and duck-typing? Why is there ":" in the syntax if it almost always has a newline after it? The Python language reference devotes a whole sub-chapter to "Emulating container types", "Emulating callable Objects", "Emulating numeric types", "Emulating sequences" etc. -- only because arrays, sequences etc. are "special" in Python. Subtle data types (list and tuple, bytes and bytearray) will make you wonder "Do I need the mutable type here?", while Clojure and Haskell manage to do with only immutable data.
- Python's GC is based on naive reference counting, which is slow, plus a supplementary cycle detector that historically couldn't free cycles involving __del__, meaning you still have to expect subtle memory leaks and can't freely use arbitrary graphs as your data. In effect Python complicates even simple tasks, like keeping a directory tree with symlinks.
- Patterns and anti-patterns are signs of deficiencies inherent in the language.  In Python, concatenating strings in a loop is considered an anti-pattern merely because the popular implementation is incapable of producing good code in such a case. The intractability or impossibility of static analysis in Python makes such optimizations difficult or impossible.
- Problems with arithmetic: no Numerical Tower (rationals only via the fractions module), meaning that in Python 2, 1/2 produces 0 instead of 1/2, leading to subtle and dangerous errors.
- Poor UTF support, and Unicode string handling is somewhat awkward.
- No outstanding feature that makes the language, like the brevity of APL or the macros of Lisp. Python doesn't really give us anything that wasn't there long ago in Lisp and Smalltalk.
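Two of the claims above (loop-variable capture in closures, and a typo clobbering a module-level function) are easy to reproduce at a Python 3 prompt; a minimal sketch, with my own variable names:

```python
import time

# Claim: closures capture the loop *variable*, not its value at append time.
callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])    # every callback sees the final i: [2, 2, 2]

# The idiomatic workaround freezes the value with a default argument.
callbacks = [lambda i=i: i for i in range(3)]
print([f() for f in callbacks])    # [0, 1, 2]

# Claim: a trivial typo silently replaces a module-level function.
time.sleep = 4                     # meant time.sleep(4); no error raised here
try:
    time.sleep(1)                  # blows up only when next used, far from the typo
except TypeError as e:
    print("broken:", e)            # 'int' object is not callable
```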

Name: Anonymous 2011-11-18 20:06

>>2
I don't like Python. Nice kopipe though.

Name: Anonymous 2011-11-18 20:12

>>3
A Haskell user? Then why are you bitching about "readability"? It's a 100% subjective concept.


Lisp                        | Haskell
----------------------------|---------------------------------------------------
$ sbcl                      | $ ghci
* +3                        | Prelude> +3
3                           | <interactive>:1:0: parse error on input `+'
*                           | Prelude>


Lisp                     | Haskell
-------------------------|------------------------------------------------------
* (setf Tree             | data Tree a = Empty | Node (Tree a) a (Tree a)
        '((n 5 (n 5 n))  | tree = Node
          4              |       (Node Empty 5 (Node Empty 5 Empty))
         ((n 5 n) 6 n))) |        4
                         |       (Node (Node Empty 5 Empty) 6 Empty)


Lisp                       | Haskell
---------------------------|----------------------------------------------------
Prefix notation.           | Prelude> let fac 1 = 1; fac n = n * fac n-1
                           | Prelude> fac 4
                           | *** Exception: stack overflow
                           |
                           | Prelude> let f 1 = 1; f n = ((*)(n)((f) ((-) n 1)))
                           | Prelude> ((f) 4)
                           | 24

Name: Anonymous 2011-11-18 20:25

>>4
Nope. I'm a C-bro if you insist on knowing.

Name: Anonymous 2011-11-18 21:42

>>5
C can also become unreadable.

Example: http://advsys.net/ken/buildsrc/kenbuild.zip

Name: Anonymous 2011-11-18 21:57

Not sure if trolling but in case you aren't.

> since lisp is incredibly hard to read
Lisp really isn't hard to read... the Schemes, for example, are super straightforward to learn: read a 50-page report and you'll know all the syntax, all the semantics, the base standard procedures every Scheme will have, the grammar definitions and everything else you need not only to program Scheme but also to implement a Scheme. The function names are long and descriptive, at near-COBOL levels. Functions ending with a question mark are predicates, and functions ending with an exclamation mark are destructive updating procedures (Ruby apparently either borrowed this or came up with it independently). You can define variables wherever you want, as long as it is before any expression in the current body. In some Lisps you can use brackets and curlies as distinctive versions of parentheses, while that well-known JVM Lisp (Clojure) has reserved those for specialized use.
Lisp IDEs auto-indent and some auto-match parentheses (so if you type the close-paren key 3 times after ([( you'll get )])), and the indentation is always consistent from programmer to programmer because they never do the indenting themselves, ever. In Racket's IDE, when an exception occurs it shows a series of red arrows as a visual stack trace, and when you hover over an identifier you get real-time syntax checks with blue arrows that point to the original definition.

> Every time I see someone post lisp code it's always something very short and trivial.
Github is full of lisp code if you know where to look. Most people who write tools for dealing with a lisp (like an editor or whatever) will very likely write it in the same lisp dialect.

> Is it possible to write anything real with lisp
There's a million or so webpages that list the same usual `real' Lisp projects: Viaweb, ITA, Jak & Daxter.


Name: Anonymous 2011-11-18 22:44

Reddit was written in lisp. Hahahaha lispers.

Name: Anonymous 2011-11-18 23:49

>>1

maxima is one:

http://maxima.sourceforge.net/

an example of enterprise mit development from the sixties.

Name: Anonymous 2011-11-19 0:03

I've another Lisp question: is there a way to use http://docs.racket-lang.org/reference/unsafe.html in older Lisps?

Newer versions refuse to install on my PC.

Name: Anonymous 2011-11-19 0:04

For now I found only http://download.plt-scheme.org/doc/103p1/html/mzc/node5.htm

which produces an ELF *.o file, which I think could be dlopened from C++.

Name: Anonymous 2011-11-19 0:15

>>12
A hacker, I believe, could use IDA + a hex editor to crack them, removing the safety checks.

Name: Anonymous 2011-11-19 1:43

I have a lisp question. How could lisp be any more awesome?!

Name: Anonymous 2011-11-19 2:49

>>14

It could stop being shit.

Name: Anonymous 2011-11-19 9:25

Here's a tip: use Chicken Scheme

Although Racket might be friendlier to experienced programmers, Chicken follows the SICP terms more closely, I think. Chicken can even be used with MinGW and compiles to standalone executables pretty easily, and there are tons of libraries available as "eggs", besides supporting plain .scm files.

Name: Anonymous 2011-11-19 9:29

>>16
negative, racket (because of drracket) is the (supreme (scheme)).

also, imho, lisp < scheme.

Name: Anonymous 2011-11-19 9:33

>>17
> lisp < scheme
ttp://dis.4chan.org/read/prog/1296473324/6

Name: Anonymous 2011-11-19 9:34

>>17
(because-of 'drracket)*

(supreme? 'scheme)

Name: Anonymous 2011-11-19 15:10

Lisp is shit.

Name: Anonymous 2011-11-19 22:17

If it's Lisp, it's shit.

Name: Anonymous 2011-11-19 22:24

>>17
> lisp < scheme.
Lisp:

DEFINE-SYMBOL-MACRO


Scheme:

You have to write your own code-walker, implementing half of Common Lisp.

Name: Anonymous 2011-11-19 22:50

> First of all, I keep hearing that the ability to write code others can read is huge, but since lisp is incredibly hard to read, wouldn't that make lisp programmers inherently bad programmers?
Lisp isn't hard to read. Lisp is only subjectively hard to read to someone who hasn't been exposed to it long enough (a week should be more than enough, given that you actually spend it learning the language and using a decent environment/implementation while doing so). Once you know Lisp, it's easy enough to read - often easier than code in more verbose languages. Consider an algorithm that takes some 10k LOC in a C-like (or even more verbose) language versus a Lisp version that takes 1-2k LOC. The Lisp version will take less time to read; if you're unfamiliar it might take more time per line, mostly because more semantically relevant content is packed into the same amount of space. (In a way, the same holds for dense math notation versus prose: denser representations take less space but more effort per symbol - the difference being that Lisp is already a formalized representation.)
> Also, can someone post some non-trivial lisp code? Every time I see someone post lisp code it's always something very short and trivial. Is it possible to write anything real with lisp?
Sure, I write it all the time, but I'm not going to post code from my own projects as that could identify me and this is a pseudonymous messageboard. I would suggest you go to cliki or http://common-lisp.net/projects.shtml or some other site hosting Lisp code and just read, there is a lot of non-trivial code. If you want to see something more specific, you'll need to tell us what kind of code you'd like to see.

Name: Anonymous 2011-11-19 23:06

>>23
Which algorithms take 10k lines to implement in C?

Name: Anonymous 2011-11-19 23:07

>>24
Just take a look at some optimizing compilers, I'm sure you'll find many.

Name: Anonymous 2011-11-19 23:24

>>25
Why wouldn't they just be written in lisp?

Name: Anonymous 2011-11-19 23:34

>>26
Lisp is slower than molasses. Compiling a small project would take hours instead of minutes/seconds. Compiling a large project would take a week instead of overnight.

Name: Anonymous 2011-11-19 23:41

> Lisp is slower than molasses
Molasses compilers have actually improved a great deal, and are now competitive with C.

Name: Anonymous 2011-11-19 23:49

>>27
Depends on implementation and declarations used (speed/space trade-offs). There are many implementations with many different options. For example, SBCL generates reasonably fast code, but the compiler isn't the fastest one around (takes some 3 minutes to build itself, generating a total of 20-40MB of binary code). ClozureCL is much faster, but generates lower-quality code (speed-wise). There are other implementations with different characteristics (benchmark them if you want). I mostly use SBCL and am fairly happy with the code it generates, it's not perfect, but runs quite fast.

Name: Anonymous 2011-11-19 23:54

Am I doing it right? Cuz retards say that switching on event type is "anti-pattern" or something.


(define (test-sdl)
  (SDL_Init '(SDL_INIT_VIDEO SDL_INIT_TIMER))
  (SDL_Resize 640 480)
  (let ([done #f])
    (till done
      (block event-loop
        (while #t
          (let ([e (SDL_GetEvent)])
            (cond [(false? e) (event-loop #f)]
                  [(SDL_ResizeEvent? e) (SDL_Resize (SDL_ResizeEvent-w e)
                                                    (SDL_ResizeEvent-h e))]
                  [(SDL_QuitEvent? e) (set! done #t)]
                  [(SDL_KeyboardEvent? e)
                   (say "kb $(if (eq? (SDL_KeyboardEvent-state e) SDL_RELEASED) 'released 'pressed)")]
                  [(SDL_MouseButtonEvent? e) (say "mice-button")]
                  ;;[(SDL_MouseMotionEvent? e) (say "mice-motion")]
                  ))))
      #| access SDL_Surface-pixels here |#
      (SDL_Flip SDL_Screen)
      )
    (SDL_Quit)
    ))

Name: Anonymous 2011-11-19 23:56

>>30
Also, Scheme should include anaphoric `while` by default, so I could just (while (SDL_GetEvent) (switch (type-of it) ...))

Name: Anonymous 2011-11-20 0:00

>>31
> anaphoric `while`
If you don't like lexical introduction of variables and their scope, why are you using scheme?

Name: Anonymous 2011-11-20 0:02

>>32
Scheme has continuations, a JIT compiler and a more flexible eval. I think Scheme should work better as a scripting language, because I can just `load` my *.scm files without compiling them to fasls.

Name: Anonymous 2011-11-20 0:46

>>33
I think CLISP has an S-expression-based interpreter (instead of just compiling the code wrapped in a lambda and calling it), but then I don't think it's very fast. The only reason I still use it sometimes is its arbitrary-precision floats.

Name: Anonymous 2011-11-20 0:52

>>34
They're called ``rationals''.

Name: Anonymous 2011-11-20 1:08

>>34
SBCL has an interpreter mode too, but the problem is that I have to think about which code must be compiled and which interpreted. Better if the runtime decides.

Name: Anonymous 2011-11-20 1:35

>>35
No, rationals are different. In CL (and in math in general), a rational is just a pair of coprime integers.
Floats are this weird thing which approximately store a rational (masquerading mostly as the output of functions which in math would be reals - but since reals require an infinite amount of information, they may not exist in reality and certainly won't exist in a computation) using a fixed number of bits (32 or 64). However, CL does let you have floats as long as you want, though how long depends on the implementation. CLISP, unlike most other implementations, decided to use gmp to implement this type of float, and you get a special (dynamically scoped) variable which you can bind or set to control the amount of precision (how many digits, and thus the size in bits) a float can have.
To put it simply: rational = pair of coprime integers (fixnum or bignum); float = fixed in size, limited to some decimal precision; in some implementations the precision can be changed by the user, but typically it's 32 or 64 bits (inexact).
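The rational/float split is easy to see in Python, whose fractions module stores exactly the coprime pairs described above (a sketch of the general point, not of CLISP's long floats):

```python
from fractions import Fraction

# A rational is stored exactly, as a reduced numerator/denominator pair.
r = Fraction(1, 3)
print(r + r + r)              # exactly 1, no approximation involved

# A binary float stores the nearest representable value instead.
print(0.1 + 0.2 == 0.3)       # False: each constant is already approximate
print(Fraction(0.1))          # the exact rational a 64-bit float really holds
```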

Name: Anonymous 2011-11-20 1:37

>>37
Yes, rationals are actually sensible if you're going to stick with arbitrary precision.

Name: Anonymous 2011-11-20 1:39

>>38
Except some functions (for example, trig functions) tend to give irrational outputs, thus there's no way you could store them exactly in a rational. Might as well use floats for them and expect the output to have a certain error (which can be estimated if you wanted to estimate it).

Name: Anonymous 2011-11-20 1:51

>>39
Please be kidding.

Name: Anonymous 2011-11-20 1:57

>>40
Some error will exist anyway when dealing with functions that were originally meant to have real outputs. There's no way you're going to store the output of something like the square root of 2 in a rational without some error, no matter what the precision ("infinite" precision would give you the actual number, but it wouldn't be a rational anymore, nor would you be able to compute it in full, as the process would be non-halting).

Name: Anonymous 2011-11-20 2:23

>>40

arbitrary precision rational numbers are shit. If you iteravely multiply them together, they double in size after every multiplication, which makes you algorithm take explonential time and memory.
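The size blow-up is easy to reproduce with exact rationals in Python (a sketch; Lisp ratios behave the same way):

```python
from fractions import Fraction

# Repeatedly squaring an exact, reduced fraction: the digit counts of the
# numerator and denominator roughly double at every step - exponential storage.
x = Fraction(3, 2)
for step in range(1, 7):
    x = x * x
    print(step, len(str(x.numerator)), len(str(x.denominator)))
# after 6 squarings, x == (3/2)**64 == Fraction(3**64, 2**64)
```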

Name: Anonymous 2011-11-20 2:58

>>42
floating point numbers are shit. If you iteravely multiply them together, they double in error after every multiplication, which makes you algorithm take explonentially wrong results.

Name: Anonymous 2011-11-20 5:59

No >>43, YOU are shit. Floating point numbers are designed to work that way and it is up to YOU to understand this.

Name: Anonymous 2011-11-20 13:00

>>44
Yes, I know floating point numbers are designed to be shit.


Name: Anonymous 2011-11-20 17:43

>>43
> explonentially
excellent term

Name: Anonymous 2011-11-20 17:50

>>9
and it was rewritten in Python because it was unmaintainable

Name: Anonymous 2011-11-20 18:23

>>47

It was an accident. I'll make typos where I blend words together, like withe for with the, and I'll blend the beginnings of different words together. I decided not to correct that one.

Name: Anonymous 2011-11-20 19:16

>>49
Accident or not, we should keep it.

Name: Anonymous 2011-11-21 12:15

>>41
Yes, some error will always exist once we use inexact representations. That is not the problem; the problem is how floating point numbers deal with that inexactness. Fast floating point units in hardware are a reasonable compromise in some cases, especially when the errors will not compound unreasonably. (I still think the IEEE standard is bad. But whatever, that's life.) But if you are going to use arbitrary precision floats, then what is the point? Clearly speed is not your concern. In that case you might as well just use rationals directly, especially for common functions which have continued fraction representations. In such computations you can limit your error directly without sacrificing precision, as there is no better rational approximation of that particular real. That's not my opinion, that's a fucking theorem.

tl;dr floats are fucking garbage and you're a tool
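The continued-fraction point can be sketched in Python; `sqrt2_convergents` is my own helper name, using the standard recurrence for the expansion sqrt(2) = [1; 2, 2, 2, ...]:

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """Return the first n convergents of sqrt(2) after the seed 1/1."""
    p0, q0, p1, q1 = 1, 0, 1, 1       # conventional seed values
    out = []
    for _ in range(n):
        # every partial quotient after the first is 2
        p0, q0, p1, q1 = p1, q1, 2 * p1 + p0, 2 * q1 + q0
        out.append(Fraction(p1, q1))
    return out

for c in sqrt2_convergents(5):        # 3/2, 7/5, 17/12, 41/29, 99/70
    err = abs(c * c - 2)              # exact rational error of c^2 vs 2
    print(c, float(err))              # error shrinks at every step
```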

Name: Anonymous 2011-11-22 10:03

ellen page

Name: Anonymous 2011-11-23 1:17

>>42
The implementation detail you complain about is not essential to the problem.

Name: Anonymous 2011-11-23 3:49

>>53

If you want complete precision, and you happen to be squaring a rational over and over, the number of digits in the numerator and denominator will double after every squaring, regardless of how the multiplication is implemented. Dividing out a greatest common divisor doesn't help, assuming the fraction starts out in simplified form. The point is that there are times where arbitrary precision is not possible (or at least not worth the time and memory, which can grow exponentially with the number of squaring operations), and you will need to sacrifice precision after a certain point. This isn't hard to do, but if you are going to do it, you may as well just use fixed point decimal and use enough digits to represent the range of values you'll need.
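The fixed-precision suggestion can be sketched with Python's decimal module, which rounds results back to a chosen number of digits so repeated squaring stays bounded in size (an illustration of the trade-off, not a recommendation):

```python
from decimal import Decimal, getcontext

getcontext().prec = 20            # keep at most 20 significant digits

x = Decimal(3) / Decimal(2)
for _ in range(6):
    x = x * x                     # each result is rounded back to 20 digits
print(x)                          # approximately (3/2)**64, ~1.86e11
print(len(x.as_tuple().digits))   # never exceeds the chosen precision
```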

Name: Anonymous 2011-11-23 9:33

>>54
I am sure I could concoct fantasy worlds where arbitrary precision floats were actually a better solution than arbitrary precision fractions if I thought about it hard enough, but most applications would be wrong to use them.

Name: Anonymous 2011-11-23 10:28

>>54
Yes, but what is in question is using arbitrary precision floats versus arbitrary precision rationals. This should be a no-brainer, even if you're still truncating the floats after some non-hardware-supported level of precision. Floats are not particularly well-behaved, and understanding how errors compound is highly calculation-dependent. Choose one formula and you need three times as many bits just to avoid losing precision from the input value; choose another formula that should be mathematically equivalent (say, manipulated through trig identities), and you don't need the extra precision at all. That kind of surprise is impossible with continued fractions.

Name: Anonymous 2011-11-25 12:48

Lisp is shit.

Name: Anonymous 2011-11-25 14:39

>>56

yeah, I was just thinking of high precision fixed point decimals. It could also be a fraction where the denominator is always a high, constant value.
