Common Lisp is not a functional programming language.
It is a multi-paradigm programming language.
Imperative programming is featured in e.g. the various prog forms, and in mutation through setq and the various abstractions built on top of it, e.g. setf.
Structured imperative programming is featured in the various do forms, e.g. [code]dotimes[/code] and of course do itself.
Unstructured imperative programming is featured in e.g. the go and tagbody forms.
Now, functional programming is featured through the special forms lambda and function, and the many forms built on top of them (apply, funcall, etc.).
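For instance, a quick sketch of the functional style (compose2 is a made-up helper for illustration, not a standard function):

```lisp
;; LAMBDA creates a first-class function; FUNCALL and APPLY invoke one.
;; COMPOSE2 is a hypothetical two-function composition helper.
(defun compose2 (f g)
  "Return a function that applies G, then F."
  (lambda (x) (funcall f (funcall g x))))

(funcall (compose2 #'1+ (lambda (x) (* x x))) 4) ; => 17
(apply #'+ (mapcar #'1+ '(1 2 3)))               ; => 9
```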
Object oriented programming is supported through CLOS (defclass, defgeneric, defmethod). CLOS is also the most advanced object oriented system I know of in a serious language.
Metaprogramming is, of course, what makes Common Lisp (and its predecessors) famous. Metaprogramming is accomplished through defmacro. Unfortunately, ANSI Common Lisp has fewer metaprogramming features than CLTL1 or CLTL2. Luckily, most implementations also support the latter two.
Logic programming is easily accomplished, although ANSI provides no standard interface. A simple implementation of Prolog can be found in Norvig's PAIP book (recommended reading after SICP :)
Common Lisp is also useful for symbolic programming (for example writing symbolic differentiation or integration programs). It is also trivial to implement auto-differentiation in Common Lisp, which is very useful (for example, this was useful in my previous job which involved image processing).
Numerical programming is also very well supported. In particular, the arrays dictionary is very powerful: you can create multiple "views" into the same array, of different sizes and even different rank (displaced arrays).
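For example, a minimal sketch of such a view, built with the standard :displaced-to option to make-array:

```lisp
;; A rank-2 "view" over the same storage as a rank-1 vector.
(defparameter *flat* (make-array 6 :initial-contents '(0 1 2 3 4 5)))
(defparameter *grid* (make-array '(2 3) :displaced-to *flat*))

(aref *grid* 1 0)          ; => 3  (row-major element 3 of *flat*)
(setf (aref *flat* 3) 99)  ; mutate through the flat view...
(aref *grid* 1 0)          ; => 99 (...and the 2x3 view sees it)
```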
Common Lisp is strongly typed. Types can be declared using a simple syntax, and advanced implementations of Common Lisp have type inference. There are libraries for typed collections, e.g. LIL.
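A minimal sketch of the declaration syntax (add-fix is a made-up example function):

```lisp
;; Optional type declarations; a compiler like SBCL can warn at
;; compile time when it can prove a call violates them.
(declaim (ftype (function (fixnum fixnum) fixnum) add-fix))
(defun add-fix (a b)
  (declare (type fixnum a b))
  (the fixnum (+ a b)))

(add-fix 2 40) ; => 42
```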
Common Lisp runs in a live image that you interact with. The development environments for Common Lisp are very advanced, because various Lisps were some of the pioneers in this field (e.g. InterLisp and the work of Warren Teitelman).
Last but not least, some Common Lisp implementations (e.g. SBCL and CCL) are systems programming languages. You can write inline assembler and do manual memory allocation with all the pointer arithmetic you want. This means that even a Common Lisp running on a UNIX, and not a Lisp Machine, can be used to write device drivers and kernel modules.
Common Lisp is the most widely used language of an old tradition of Lisps. Lisp 1.5 -> MACLISP -> ZETALISP -> Common Lisp. Common Lisp also has some InterLisp thrown in, which is from the same tradition, and a teensy bit of Scheme. Common Lisp was not an amalgamation of dialects. It was an evolution of the original Lisp.
There are other evolutions of this tradition, beyond Common Lisp. None are as popular as Common Lisp (for better or worse).
The tradition can be described as any language with the following features:
+ Lexical non-local exits.
+ Unwind protection.
+ A condition system with restarts.
+ Lexical binding.
+ Dynamic binding (thread-local).
+ First class functions with optional, keyword and rest parameters.
+ A meta object protocol
+ Powerful, low level macros (and all this implies e.g. first class symbols).
+ Multiple return values.
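The condition system with restarts, in particular, is easy to show in a few lines; a minimal sketch, assuming nothing beyond ANSI CL (odd-input and halve are made-up names):

```lisp
;; Signal a condition, offer a restart, and let a handler higher up
;; pick the restart without unwinding the stack past HALVE.
(define-condition odd-input (error)
  ((n :initarg :n :reader odd-input-n)))

(defun halve (n)
  (restart-case
      (if (evenp n)
          (/ n 2)
          (error 'odd-input :n n))
    (use-value (v) v)))

(handler-bind ((odd-input (lambda (c)
                            (declare (ignore c))
                            (invoke-restart 'use-value 0))))
  (list (halve 4) (halve 7))) ; => (2 0)
```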
Some languages in this tradition are EULisp, ISLisp, Dylan, Goo and PLOT (afaict).
A note on Scheme: I think Scheme is a Lisp, however it is not in the tradition of the original Lisps (it's not general-purpose, nor multi-paradigm; it is its own branch). However, the larger Scheme implementations (e.g. Racket) are closer to the Lisp tradition.
The Lisp tradition has nothing to do with languages such as Haskell, ML, Miranda or other "Functional" programming languages. These other languages are very restrictive, and are at odds with the Lisp tradition in almost every way. Lisp is not a functional programming language for one, it is an imperative, structured, unstructured, functional, object oriented, meta programming systems language. It doesn't care about being pure.
No language with mandatory static typing can ever be a Lisp. There is a huge difference between run-time and compile-time typing, just as there is between run-time and compile-time code generation. I think these two quotes sum it up best:
From the Common Lisp mailing list in 1981, Richard Stallman said
"But =member= is supposed to work on any type which does or ever will
exist. It should not be necessary to alter the definition of =member=
and 69 other functions, or even recompile them, every time a new type
is added...
The extensible way of thinking says that we want to make it as easy as
possible to get to as many different useful variations as possible. It
is not one program that we want, but options on many programs."
Ray Dillinger said (in an LtU thread)
"I flatly refuse to limit the inputs to analysis to just the text of
the program. I will use whatever information I have available about
the current state of the program as well, including information that
does not become available before runtime, such as inputs...
That view of type theory (static typing) does not encompass the case
in which the source code may change during a program's run, nor the
case where the definition of a type may change during a program's run."
In conclusion, please stop treating Common Lisp as a functional programming language. Calling it one puts off many good programmers who do not want to use a restrictive and annoying language such as Haskell. Common Lisp is more than that.
Name:
Mr. Irrational2013-12-04 20:31
For example, this is a valid Common Lisp program (an implementation of some algorithm from Knuth, copy pasted from the book Practical Common Lisp (which is a great newbie book))
[code]
(defun algorithm-s (n max) ; max is N in Knuth's algorithm
  (let (seen               ; t in Knuth's algorithm
        selected           ; m in Knuth's algorithm
        u                  ; U in Knuth's algorithm
        (records ()))      ; the list where we save the records selected
    (tagbody
     s1
       (setf seen 0)
       (setf selected 0)
     s2
       (setf u (random 1.0))
     s3
       (when (>= (* (- max seen) u) (- n selected)) (go s5))
     s4
       (push seen records)
       (incf selected)
       (incf seen)
       (if (< selected n)
           (go s2)
           (return-from algorithm-s (nreverse records)))
     s5
       (incf seen)
       (go s2))))
[/code]
A C programmer might cringe at the use of nreverse. No fear, here's a simple utility function with efficient use of pointers.
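Something along these lines (a sketch of the pointer surgery nreverse performs; nreverse-sketch is an illustrative name, not the real implementation):

```lisp
;; Reverse a list in place by rewiring each cons's CDR; no new
;; cells are allocated. ROTATEF shifts the three values leftward:
;; (cdr list) <- prev, prev <- list, list <- old (cdr list).
(defun nreverse-sketch (list)
  (let ((prev nil))
    (loop while list
          do (rotatef (cdr list) prev list))
    prev))

(nreverse-sketch (list 1 2 3)) ; => (3 2 1)
```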
>>1
Common Lisp has one big wart - no static typing. It is irremediable, and one should not do any serious work in Common Lisp, as the language is simply too restrictive to allow for proper linguistic expression of types, which is necessary for error checking, polymorphism, performance, extensibility of code, and documentation.
Name:
Anonymous2013-12-07 8:46
The best Lisp nowadays is Racket because it's got excellent CONTRACTS which are WAY BETTER than any static typing.
It's also got CLEANED UP MACROS which are a lot LESS ERROR PRONE.
And a zillion other things which you can see for yourself.
Name:
Anonymous2013-12-07 11:27
You have been visited by le lambda of doom, repost this in 10 threads or lose your SICP!
λ
λ
λ
λ
λ
λ λ
λ λ
λ λ
λ λ
λ λ
λ λ
Name:
Mr. Irrational2013-12-07 17:46
>>21
That is not true.
Common Lisp has optional static typing. Furthermore, Common Lisp implementations often perform type inference.
Also, static typing is not the least bit necessary for error checking, polymorphism, and ESPECIALLY extensibility. In fact, it greatly hinders the last two.
Static typing is good for documentation and compiler hints, however.
Name:
Anonymous2013-12-08 5:32
>>24
If CL cannot check all types statically, then it doesn't have static typing.
And static typing catches all stupid mistakes like mixed up function arguments or missed characters, which is about 95% of all errors.
And polymorphism and extensibility are ONLY possible in a statically typed language as they require constraints to be expressible by the language. Ain't there no way to write something like "f :: (Context a b) => b -> a b -> a b" in Lisp, because it's so restrictive. And without that, you cannot extend your code to work for new types in an efficient and error-checked way.
Name:
Anonymous2013-12-08 5:37
I love it how lispers keep on blabbering about polymorphism and extensibility, when their language is filled with superfluous bullshit like "char>=" and "string>=". If Lisp is so extensible, why couldn't you extend ">=" to work for chars and strings too? Ah, it's because you need static types to achieve that kind of flexibility. Fucking annoying limiting piece of shit that can't even figure out the types of all terms without me having to write tests for fucking everything.
Now, I can (without recompiling anything) add new functions over existing data types.
CL-USER> (defgeneric encounter (predator prey))
#<STANDARD-GENERIC-FUNCTION ENCOUNTER #x18BA144E>
CL-USER> (mapc #'speak *animals*)
I am not a rabbit, I am a horse!
Meow.
Woof.
(#<RABBIT #x18B943DE> #<CAT #x18B104C6> #<DOG #x18B10726>)
So, as you can see, Common Lisp is not at all restrictive. I am not sure why you think otherwise.
Also, notice how I never defined the type of my *animals* list, and this allowed me to fill it with heterogeneous objects. This is usually not so easy to do in (only) statically typed languages.
By definition, static typing restricts your program to only accept types which can be known a-priori. In many cases, this is premature optimization. Dynamic typing imposes no such restriction.
Now, if you want to impose these sorts of restrictions, you can (even on collections; see LIL: https://github.com/fare/lisp-interface-library). However, they are restrictions. They do not increase extensibility, and they do not make your program more polymorphic; rather the opposite. Your program will now require recompilation either to add a function over an existing data type (e.g. adding a method to a class definition in Java or C++) or to add a data type to a function (e.g. pattern matching in Haskell or ML). So your program will be LESS extensible. Your program will be less polymorphic, because it will only be polymorphic on those types that can be ascertained statically, at COMPILE TIME (from ONLY the program text, not from e.g. USER OR OTHER INPUT AT RUN TIME).
If you wish to, you may define a generic >= function quite easily; shadow the standard symbol first, then (defgeneric >= (a b)).
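Sketched out (shadowing first, so we get our own symbol instead of touching cl:>=):

```lisp
;; Make our own >= symbol so we don't redefine CL:>=.
(shadow '>=)

(defgeneric >= (a b)
  (:documentation "Generic >= over numbers, characters and strings."))

(defmethod >= ((a number) (b number)) (cl:>= a b))
(defmethod >= ((a character) (b character)) (char>= a b))
(defmethod >= ((a string) (b string)) (string>= a b))

(>= 3 2)         ; => T
(>= #\b #\a)     ; => T
(>= "foo" "bar") ; true (STRING>= returns a mismatch index as truth)
```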
Name:
Anonymous2013-12-08 7:45
On the subject of types, I once again repeat that Common Lisp is very strongly typed, and that its type system is VERY powerful, e.g.
[code]
CL-USER> (defun equidimensional (array)
           (or (< (array-rank array) 2)
               (apply #'= (array-dimensions array))))
EQUIDIMENSIONAL
[/code]
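And you can promote that predicate into a genuine type with deftype and satisfies, mirroring the example in the standard:

```lisp
;; SQUARE-MATRIX is now usable in TYPEP, TYPECASE and declarations.
;; Relies on the EQUIDIMENSIONAL predicate defined above.
(deftype square-matrix (&optional type size)
  `(and (array ,type (,size ,size))
        (satisfies equidimensional)))

(typep (make-array '(3 3)) '(square-matrix * 3)) ; => T
(typep (make-array '(2 3)) 'square-matrix)       ; => NIL
```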
>>28
You do realize that the words "strongly typed" carry no meaning, right?
Name:
Anonymous2013-12-08 10:09
Common Lisp does not have a static type system and here's proof:
[c]
(defun foo (x)
  (if (> x 0)
      7
      "A string"))

(defun bar (x)
  (+ 1 (foo x)))
[/c]
Function "foo" returns a string or a fixnum on a whim, while function "bar" expects "foo" to return always a fixnum. That's a type error, but the code compiles successfully. No static type system!
Now, suppose these two functions are in two remote modules of a large project. The programmer who is looking at function "bar" has no clue that the function "foo" may sometimes return a string, causing a type error. In order to find that out, he would have to look at the implementation of "foo", which is impractical, as he cannot traverse the whole codebase and check that there are no square pegs shoved into round holes anywhere. So the programmer must write a test. But writing tests is a time-consuming operation if you have to check every trivial type error in every control branch, every ramification of every "if", "when", "cond" etc. in your codebase. What if the programmer's test erroneously always calls "foo" with a positive argument? Then the type error will not be detected by that test and will happen only at runtime.

Whereas with a good static type system, the compiler would be able to traverse the whole code automatically and infer a type error from the definitions no matter how large the project is, how far away the definitions are, and how deeply nested the calls of those functions are. That's one of the benefits of a static type system: the totality of typechecking coverage allows the compiler to detect errors globally while requiring the programmer to make declarations only locally. That's also why "optional typing" is completely impractical: as soon as your typed code starts interfacing with untyped code, the totality of checks evaporates and the reliability of the code drops.
Name:
Anonymous2013-12-08 11:19
Detecting type errors at runtime is like realizing you are eating shit only when you've already swallowed half a turd.
Name:
Anonymous2013-12-08 13:47
(recommended reading after SICP :)
CLOSE YOUR FUCKING PARENTHESES MORON WHY ARE YOU DOING THIS??? DON'T USE FUCKING EMOTICONS AND IF YOU DO THEN THE CLOSING PARENS IS THE MOUTH AND YOUR PARENTHESES ARE NOT PROPERLY CLOSED!!!!!!! YOU MUST CLOSE THE FUCKING PARENS IS IT FUCKING HARD TO UNDERSTAND?
Your proof is not very good. For one, I don't see any type declarations in your Lisp program. Neither for parameters, nor for return types (although a good compiler will infer types from literals).
Try declaring types and then compiling in an implementation with type inference.
If you do not like your compiler's type inference, do your own type checking. You may redefine [code]compile[/code] and [code]eval[/code]. The languages Qi and Shen do their own static type analysis using a Prolog, which I think is a good approach.
Also, it is obviously *NOT* erroneous to call the function with only a positive argument. That is useful and correct functionality. In type terms, if you do [code](defun bar (x) (declare (type (integer 0 *) x)) ...)[/code] then there should not be a compile-time type error.
Furthermore, you should not be calling functions if you do not know what types they return for what inputs, in both statically and dynamically typed languages. If you are using Emacs, then M-. should instantly transport you to a function's definition. You can also use the SLIME inspector, among other tools.
Also, you don't seem to understand tests. Tests aren't meant to be exhaustive. Tests fulfill a dual role of proving useful properties of your program and serving as documentation.
You are correct (though not necessarily, just practically) that mandatory static typing gives the compiler more type information than optional typing does.
Name:
Anonymous2013-12-08 17:19
CMUCL:
[code]
(defun f (x)
  (declare (type (single-float -0.5 0.5) x))
  (/ (sqrt (- 1 (* x x)))))

(describe #'f)
f is a function.
Arguments:
  (x)
Its defined argument types are:
  ((SINGLE-FLOAT -0.5 0.5))
Its result type is:
  (SINGLE-FLOAT 1.0 1.1547005)
[/code]
Common Lisp is not "optionally typed". It's typed all the time.
Name:
Anonymous2013-12-09 12:19
>>36
You are wrong, "foo" does not return a tagged union. And the snippet compiles with SBCL (one of the most popular implementations of Common Lisp) even with explicit type declarations:
[code]
(defun foo (x)
  (declare (type fixnum x))
  (if (> x 0)
      7
      "A string"))
[/code]
and the compiler erroneously overlooks a possible type error:
[code]
(describe 'bar)
MYPACKAGE::BAR
  [symbol]
BAR names a compiled function:
  Lambda-list: (X)
  Derived type: (FUNCTION (FIXNUM) (VALUES (INTEGER 8 8) &OPTIONAL))
[/code]
>>37
It is possible to omit type declarations, which makes it an untyped language. This means that good, fully statically typed code may have to interact with unreliable untyped code from libraries.
>>34
Having to improve a language's type system means that the language is inadequate for professional work.
>it is obviously *NOT* erroneous to call the function with only a positive argument
And yet you or another programmer might call it with a nonpositive argument somewhere else, which will lead to an unexpected type error whose source will be hard to find without a static type checker with total code coverage.
>If you are using Emacs, then M-. should instantly transport you to a functions definition
Having to check all the definitions in a large project is a heavy and unnecessary burden that better be automated so programmers have more time to find nontrivial logic errors instead of checking for trivial type errors.
>Tests aren't meant to be exhaustive. Tests fulfill a dual role of proving useful properties of your program
Inexhaustive tests do not prove anything because they leave room for unpredictable behavior. If mathematics were based on tests instead of rigorous proofs, we would still be busy writing out consecutive prime numbers in an effort to prove that there is only a finite set of them, whereas a short but rigorous proof is sufficient to know there is an infinity of prime numbers - without having to explicitly produce even 10 of them. The more things can be proven rigorously about something, the more reliable it becomes and the fewer tests are necessary to ensure its safety. Compulsory static typing is like that - it allows and forces the programmer to rigorously prove more truths about his program, so that a great many tests become unnecessary and one gets firmer guarantees about the runtime behavior of the program.
Name:
Anonymous2013-12-09 13:07
"foo" does not return a tagged union
I never claimed that~
Name:
Anonymous2013-12-09 13:20
Static typing is VERY helpful for iterative development. Whenever you start playing with a piece of code, the compiler readily shows what other pieces must be changed to make the code into what you currently want it to be. Whereas in a dynamic language your code is a hostile unknown mass: when you change a piece, you can only guess what other pieces still work and which will bite you when you try to run them.