/program/, help me. I'm learning Haskell, and I love it. What can I do to rectify my correct opinion that functional programming is a wonderful and powerful tool?
Name:
Anonymous2012-01-23 7:26
Also, I'm receiving nudes from girls I know on a regular basis, so it's not my social life I'm scared for.
Haskell: Variables must start with lowercase.
Erlang: Variables must start with uppercase.
What the fuck?
Name:
Anonymous2012-01-23 8:59
>>1
A declarative style of programming sounds specious. It seems all you have to do is provide a description of the problem, without having to worry about the details of how it is solved. In practice, though, performance matters and you can't just give any description. You have to pick a specific description based on the imperative program it corresponds to. In the end you have to create both an imperative and a declarative program. The declarative program may look more elegant and concise than the imperative one would have, but it's a leaky abstraction.
Take this example of memoization from haskellwiki
memoized_fib = (map fib [0 ..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n-2) + memoized_fib (n-1)
This works because the list map fib [0 ..] is cached between subsequent calls to the function, and because !! will evaluate the elements of the list up to the one requested (and no others), in the right order.
Because the memoization relies on the imperative program the Haskell compiler generates, if you want to write or understand such code, you must know the imperative program it compiles to.
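To see how fragile that is: the memoization above depends on the partial application sharing a single list across calls. Eta-expand it and the sharing is gone. The sketch below (my renaming, not from the wiki) is declaratively the same program, yet without optimizations it recomputes everything and runs in exponential time; with GHC's full-laziness optimization the list may get floated out again, so even the blow-up depends on the compiler:

```haskell
-- Declaratively "the same" as memoized_fib, but the explicit argument
-- means a fresh list is built on every call, so nothing is cached
-- between the recursive calls (absent compiler let-floating).
slow_fib :: Int -> Integer
slow_fib n = map fib [0 ..] !! n
  where fib 0 = 0
        fib 1 = 1
        fib k = slow_fib (k - 2) + slow_fib (k - 1)
```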
Name:
Anonymous2012-01-23 9:01
- Just like C++, Haskell is very hard to learn and takes years to master. Things like Monads, Functors, Monoids, Higher-Order Types and a myriad of morphisms are hard to understand, especially without a mathematical background. Most programmers probably don't have the ability or will to learn Haskell. Learning Haskell's syntax, libraries and functional programming techniques won't bring you closer to understanding: the true path to understanding Haskell lies through Monoid-Functor-Applicative-Arrow-Monad. And even if you manage to learn Haskell, programming in it still hogs a lot of brain resources, which could have been put to something more useful than showing off about how clever you can be. "Haskell for Kids" even proposes exposing children to Haskell from an early age, meaning Haskell, like mathematics and natural language, will be much harder to grasp at an older age. "Zygohistomorphic prepromorphism: Zygo implements semi-mutual recursion like a zygomorphism. Para gives you access to your result à la paramorphism.", "Haskell is not 'a little different,' and will not 'take a little time.' It is very different and you cannot simply pick it up" -- HaskellWiki
- Poor backward compatibility: Haskellers "don't want to replicate Java, which is outright flawed in order to avoid even the most unlikely of broken code". Meaning, they don't care if a new version of GHC breaks your code. Haskell projects are struggling under the weight of "DLL hell": a typical Haskell package consists of just a few lines of code, so many other projects depend on dozens of different packages, either directly or indirectly. It's near-impossible to embark on a serious Haskell project without spending time fighting dependency version issues.
- Haskell is slow and leaks memory. GHC's inefficient stop-the-world GC does not scale. A good understanding of evaluation order is very important for writing practical programs. People using Haskell often have no idea how evaluation affects efficiency. It is no coincidence that Haskell programmers end up floundering around with space leaks that they do not understand. "The next Haskell will be strict." -- Simon Peyton-Jones
- Haskell's APIs lack higher levels of abstraction, due to the absence of variadic functions, optional arguments and keywords. Macros aren't possible either, due to Haskell's overly complex syntax. API documentation is very lacking: if you want to use regexes, you start at Text.Regex.Posix, seeing that =~ and =~~ are the high-level API, and the hyperlinks for those functions go to Text.Regex.Posix.Wrap, where the main functions are not actually documented at all, so you look at the type signatures, trying to understand them, and they are rather intimidating (class RegexOptions regex compOpt execOpt => RegexMaker regex compOpt execOpt source | regex -> compOpt execOpt, compOpt -> regex execOpt, execOpt -> regex compOpt where). They are using multi-parameter type classes and functional dependencies. The signature really won't give you any clue as to how to actually use this API, which is a science in itself. Haskell is a language where memoization is a PhD-level topic.
- Haskell programming relies on mathematical modeling with the type system (a version of mathematical Set Theory). If one does not use the type system for anything useful, it obviously will be nothing but a burden. Programs are limited by the expressiveness of the language's type system - e.g. heterogeneous data structures aren't possible without reinventing explicit tagging. All that makes Haskell bad for prototyping or anything new, due to the need for a design document with all types up front, which changes often during prototyping. Complex projects are forced to reinvent dynamic typing. For instance, Grempa uses dynamic typing because the semantic action functions are put in an array mapping rule and production numbers (Ints) to functions, and they all have different types and so cannot be put in an ordinary array expecting the same type for each element.
- The IDE options cannot be as good as those of dynamic programming languages, due to the absence of run-time information and access to the running program's state. Haskell's necrophilia forces you to work with "dead" code. Like other static languages, Haskell isn't well known for "reload on the fly" productivity. No eval or self-modifying code. Haskell code can't be changed without recompiling half of the application and restarting the process. GHCi is the best Haskell's interactivity can get, and it still won't allow you to change types during runtime, while the single-assignment semantics prevents redefinition of functions. As Simon Peyton-Jones said, "In the end, any program must manipulate state. A program that has no side effects whatsoever is a kind of black box. All you can tell is that the box gets hotter."
- The compile-time and link-time errors the type system produces are distracting and make it harder to run and test your code, while type-checking isn't a substitute for testing: it is about correspondence to a mathematical model, which has nothing to do with correctness - i.e. two numbers can be integers, but their quotient can still result in division by zero. Even though you may hear strong static-typing advocates say, "When your program type-checks, you'll often find that it just works", this is simply not true for large, intricate programs. Although type-checking may help you find model-related errors, it is not the same as testing.
- Absence of dynamic scope, implicit open recursion, late binding, and duck typing severely limits Haskell, since there are things that can't be done easily without these features: you can't implement dynamic scope in general (and be type-safe) without converting your entire program to use tagged values. So in this respect, Haskell is inferior to dynamic typing languages.
- Haskell makes it easy to write cryptic programs that no one understands, not even yourself a few days later. Rich, baroque syntax, lazy evaluation and a tradition of defining an operator for every function all help obfuscation a lot. As a general rule, Haskell syntax is incredibly impenetrable: who in their right mind thought up operators named .&., <|> and >>=?
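For what it's worth, the three operators named in that last point are ordinary library functions, not built-in syntax; a quick sketch of what each does (the surrounding names are mine, for illustration):

```haskell
import Data.Bits ((.&.))
import Control.Applicative ((<|>))

-- (.&.) is bitwise AND from Data.Bits.
masked :: Int
masked = 12 .&. 10    -- 1100 AND 1010 = 1000, i.e. 8

-- (<|>) tries the left alternative and falls back to the right one.
firstHit :: Maybe Int
firstHit = Nothing <|> Just 3    -- Just 3

-- (>>=) feeds a computation's result into the next step;
-- do-notation desugars to it.
halveEven :: Int -> Maybe Int
halveEven n = if even n then Just (n `div` 2) else Nothing

chained :: Maybe Int
chained = Just 12 >>= halveEven >>= halveEven    -- Just 3
```

Whether that makes them any less cryptic at first sight is, of course, the thread's point.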
Name:
Anonymous2012-01-23 11:33
Haskell is shit. Fuck you faggot.
Name:
Anonymous2012-01-23 11:49
Lisp is shit too.
Name:
Anonymous2012-01-23 12:28
>>7
I like this type of kopipe (copypasta) because it implies whoever made it probably knows enough of the language to disguise the fact that it's an elaborate troll.
Name:
Anonymous2012-01-23 12:52
>>10
Or... crazy idea... it's not a troll? I just skimmed through, but at least the last points were quite spot on. And I like Haskell. But it's rather shitty. I'm sure the author of that text once liked Haskell as well, used it a lot, then got tired of it and wrote about all its flaws. I've had a similar experience with Java. (Though I never really liked the language, I merely tolerated it.)
Absence of dynamic scope, implicit open recursion, late binding, and duck typing severely limits Haskell, since there are things that can't be done easily without these features: you can't implement dynamic scope in general (and be type-safe) without converting your entire program to use tagged values. So in this respect, Haskell is inferior to dynamic typing languages.
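For the record, the standard workaround is exactly what that paragraph predicts: thread the "dynamic" environment through explicitly, e.g. with a Reader, and rebind it with local. A minimal sketch (Config, render and compact are invented for illustration; Control.Monad.Trans.Reader is from the transformers package that ships with GHC):

```haskell
import Control.Monad.Trans.Reader (Reader, runReader, ask, local)

-- A setting we would like to be dynamically scoped.
newtype Config = Config { indentWidth :: Int }

render :: String -> Reader Config String
render s = do
  w <- fmap indentWidth ask
  return (replicate w ' ' ++ s)

-- local rebinds the environment for one sub-computation only:
-- the closest type-safe analogue of a dynamic-scope rebinding.
compact :: String -> Reader Config String
compact s = local (\c -> c { indentWidth = 0 }) (render s)
```

runReader (render "x") (Config 2) gives "  x", while runReader (compact "x") (Config 2) gives "x": the rebinding is visible only inside compact, but every function in between had to be written in the Reader monad, which is the conversion cost the paragraph complains about.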
Name:
Anonymous2012-01-23 13:52
It is definitely a troll. I learnt enough reading last night to know a lot of that was simply incorrect. I have yet to build up a real opinion, but I find Haskell unnecessarily complex, at least in the type system, where Erlang is elegant. But I also picked it up to have all the familiarities of functional programming I've known from Erlang, which is good, and I really do like functional programming. The only real advantages I've seen over Erlang are its speed and lazy evaluation. Speed in Erlang is sacrificed for the VM, which gives it all sorts of benefits. God damn, lazy evaluation is like an end-all optimization.
Name:
Anonymous2012-01-23 14:34
`` i.e. two numbers can be integers, but their quotient can still result into division by zero.''
Lol wut r guards?
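What the reply is gesturing at, sketched (safeDiv is my name, not a library function):

```haskell
-- A guard pattern rules out the zero divisor, and the Maybe in
-- the type then forces every caller to handle the failure case.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)
```

safeDiv 10 2 is Just 5 and safeDiv 10 0 is Nothing. Of course nothing stops you from calling div directly, which is the copypasta's actual point: the types only check what you encode in them.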
Name:
Anonymous2012-01-23 14:38
Go scrub another toilet you mental midget.
http://chrisdone.com/posts/2012-01-22-ghc-upgrade.html
http://blog.johantibell.com/2010/09/slides-from-my-high-performance-haskell.html
Go scrub another toilet you mental midget.
http://okmij.org/ftp/Haskell/types.html#HList
data HNil = HNil
data HCons e l = HCons e l
type e :*: l = HCons e l
e .*. l = HCons e l
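Those HNil/HCons lines are the opening of the HList encoding from the okmij link. A much lighter way to get a heterogeneous list, at the cost of forgetting everything about the elements except one interface, is an existential wrapper; a sketch (Showable and the rest are my names, not from HList):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Each element may have a different type, provided it has a Show
-- instance; the wrapper is precisely the "explicit tagging" the
-- copypasta complains about.
data Showable = forall a. Show a => MkShowable a

heteroList :: [Showable]
heteroList = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]

showAll :: [Showable] -> [String]
showAll xs = [show a | MkShowable a <- xs]
```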
Fuck LISP.
http://code.google.com/p/scion-lib/
Go scrub another toilet you mental midget.
{- Brainfuck commands:
 * \>  Increment the pointer.
 * \<  Decrement the pointer.
 * +   Increment the byte at the pointer.
 * \-  Decrement the byte at the pointer.
 * .   Output the byte at the pointer.
 * ,   Input a byte and store it in the byte at the pointer.
 * [   Jump forward past the matching ] if the byte at the pointer is zero.
 * ]   Jump backward to the matching [ unless the byte at the pointer is zero.
-}
data Command = IncPtr
             | IncPtrBy !Int    -- ^ Increment pointer by set amount
             | DecPtr
             | IncByte
             | IncByteBy !Int   -- ^ Increment by a set amount
             | DecByte
             | OutputByte
             -- | InputByte
             | JmpForward !Int  -- ^ nesting level
             | JmpBackward !Int -- ^ nesting level
             | SetIpTo !Int     -- ^ Sets the instruction ptr to a specific value
             | Halt
             | Ignored
             deriving (Show, Eq)

type Core = IOUArray Int Word8
type InstPtr = Int
type CorePtr = Int

data BF = BF !Core !CorePtr !InstPtr

instance Show BF where
  show (BF _ cp ip) = "BF <core> CorePtr = " ++ show cp ++ " InstPtr = " ++ show ip
doCommand :: Array Int Command -> BF -> IO BF
doCommand cmds bf@(BF _ _ ip) = doCommand' (cmds ! ip) cmds bf
  where
    doCommand' :: Command -> Array Int Command -> BF -> IO BF
    doCommand' Halt _ _ = undefined
    doCommand' Ignored _ (BF c cp ip) = {-# SCC "Ignored" #-} do
      when debug $ putStrLn $ "Ignored " ++ show bf
      return (BF c cp (incIP ip))
    doCommand' IncPtr _ bf@(BF c cp ip) = {-# SCC "IncPtr" #-} do
      when debug $ putStrLn $ "IncPtr " ++ show bf
      return (BF c (incCP cp) (incIP ip))
    doCommand' DecPtr _ bf@(BF c cp ip) = {-# SCC "DecPtr" #-} do
      when debug $ putStrLn $ "DecPtr " ++ show bf
      return (BF c (decCP cp) (incIP ip))
    doCommand' (IncPtrBy n) _ bf@(BF c cp ip) = {-# SCC "IncPtrBy" #-} do
      when debug $ putStrLn $ "IncPtrBy " ++ show n ++ " " ++ show bf
      return (BF c ((cp + n) `mod` coreSize) (incIP ip))
    doCommand' IncByte _ bf = {-# SCC "IncByte" #-} do
      when debug $ putStrLn $ "IncByte " ++ show bf
      updateByte bf (+1)
    doCommand' DecByte _ bf = {-# SCC "DecByte" #-} do
      when debug $ putStrLn $ "DecByte " ++ show bf
      updateByte bf (subtract 1)
    doCommand' (IncByteBy n) _ bf = {-# SCC "IncByteBy" #-} do
      when debug $ putStrLn $ "IncByteBy " ++ show n ++ " " ++ show bf
      updateByte bf (+ fromIntegral n)
    doCommand' OutputByte _ bf@(BF c cp ip) = {-# SCC "OutputByte" #-} do
      when debug $ putStrLn $ "OutputByte " ++ show bf
      c' <- unsafeRead c cp
      putChar (word8ToChr c')
      return (BF c cp (incIP ip))
    {-
    doCommand' InputByte _ bf@(BF c cp ip) = {-# SCC "InputByte" #-} do
      when debug $ putStrLn $ "InputByte " ++ show bf
      c' <- getChar
      let newByte = chrToWord8 c'
      unsafeWrite c cp newByte
      return (BF c cp (incIP ip))
    -}
    doCommand' (JmpForward n) cmds bf@(BF c cp ip) = {-# SCC "JmpForw" #-} do
      c' <- unsafeRead c cp
      case c' of
        0 -> {-# SCC "JmpForward1" #-} do
          when debug $ putStrLn $ "JmpForward1 " ++ show bf
          return (BF c cp newInstPtr)
        _ -> {-# SCC "JmpForward2" #-} do
          when debug $ putStrLn $ "JmpForward2 " ++ show bf
          let newBF = BF c cp (incIP ip)
          when debug $ putStrLn $ "JmpForward3" ++ show newBF
          return newBF
      where
        -- we add one to go one past the next back jump
        newInstPtr = (nextJmp cmds ip (+1) (JmpBackward n)) + 1
    doCommand' (JmpBackward n) cmds bf@(BF c cp ip) = {-# SCC "JmpBack" #-} do
      c' <- unsafeRead c cp
      if c' /= 0
        then do when debug $ putStrLn $ "JmpBackward1 " ++ show bf
                return (BF c cp newInstPtr)
        else do when debug $ putStrLn $ "JmpBackward2 " ++ show bf
                return (BF c cp (incIP ip))
      where
        newInstPtr = nextJmp cmds ip (subtract 1) (JmpForward n)
    doCommand' (SetIpTo i) _ bf@(BF c cp ip) = {-# SCC "SetIPTo" #-} do
      c' <- unsafeRead c cp
      when debug $ putStrLn $ "SetIpTo " ++ show i ++ " "
                             ++ show bf ++ " @" ++ show c'
Name:
Anonymous2012-01-23 15:58
>>25
      -- jmping behaves differently depending on jmp forward vs. backward
      -- we handle that with pos. vs. neg addresses
      -- Note: SetIpTo 0 is always a JmpBackward,
      -- because the first instruction cannot be SetIpTo 0
      if i > 0
        then if c' == 0
               then return $ BF c cp i
               else return $ BF c cp (incIP ip)
        else if c' /= 0
               then return $ BF c cp (-i)
               else return $ BF c cp (incIP ip)

nextJmp :: Array Int Command
        -> InstPtr
        -> (InstPtr -> InstPtr)
        -> Command
        -> InstPtr
nextJmp cmds ip f cmd = if cmds ! ip == cmd
                          then ip
                          else nextJmp cmds (f ip) f cmd
chrToWord8 :: Char -> Word8
chrToWord8 = fromIntegral . ord

-- adding a halt on to the end fixes a bug when called from an irc session
loadProgram prog = optimize (cs ++ [Halt])
  where
    cs = fst $ runState (mapM decode prog) 0
    n  = length cs -- strictness

optimize :: [Command] -> Array Int Command
optimize cmds = listArray (0, (length reduced) - 1) reduced
  where
    reduced = phase3 . phase2 . phase1 $ cmds

-- phase1 removes ignored things
phase1 :: [Command] -> [Command]
phase1 = filter (/= Ignored)
-- in phase2 group inc/dec into special instructions
phase2 :: [Command] -> [Command]
phase2 cs = concat $ map reduce $ groupBy (==) cs
  where
    reduce :: [Command] -> [Command]
    reduce cs
      | all (== IncPtr)  cs = [IncPtrBy (length cs)]
      | all (== DecPtr)  cs = [IncPtrBy (-(length cs))]
      | all (== IncByte) cs = [IncByteBy (length cs)]
      | all (== DecByte) cs = [IncByteBy (-(length cs))]
      | otherwise           = cs

-- now we can turn jumps into changes of the ip
phase3 :: [Command] -> [Command]
phase3 cmds = updates (updates cmds jmpBs) jmpFs
  where
    jmpBs = calcJmpBs (zip [0..] cmds)
    jmpFs = calcJmpFs (zip [0..] cmds)

update :: [a] -> (Int, a) -> [a]
update xs (i, a) = take i xs ++ [a] ++ drop (i+1) xs

updates :: [a] -> [(Int, a)] -> [a]
updates xs []     = xs
updates xs (u:us) = updates (update xs u) us

nested :: Command -> Int
nested (JmpForward n)  = n
nested (JmpBackward n) = n
nested _               = undefined

isJmpB (JmpBackward _) = True
isJmpB _               = False

isJmpF (JmpForward _) = True
isJmpF _              = False

calcJmpBs :: [(Int, Command)] -> [(Int, Command)]
calcJmpBs cmds = catMaybes $ map newCmd (filter (isJmpB . snd) cmds)
  where
    newCmd (i, c) = absJmpB (i, findPrevJmpF (map snd cmds) i (nested c))

calcJmpFs :: [(Int, Command)] -> [(Int, Command)]
calcJmpFs cmds = catMaybes $ map newCmd (filter (isJmpF . snd) cmds)
  where
    newCmd (i, c) = absJmpF (i, findNextJmpB (map snd cmds) i (nested c))

absJmpB :: (Int, Maybe Int) -> Maybe (Int, Command)
absJmpB (_, Nothing) = Nothing
absJmpB (i, Just n)  = Just (i, SetIpTo (-n))

absJmpF (_, Nothing) = Nothing
absJmpF (i, Just n)  = Just (i, SetIpTo (n+1))
findPrevJmpF :: [Command]
             -> Int       -- ^ index to start at
             -> Int       -- ^ nesting level to match
             -> Maybe Int -- ^ index of the previous JmpForward
findPrevJmpF _ i _ | i < 0 = Nothing
findPrevJmpF cmds i n = case cmds !! i of
                          JmpForward l | l == n -> Just i
                          _ -> findPrevJmpF cmds (i-1) n

findNextJmpB :: [Command]
             -> Int       -- ^ index to start at
             -> Int       -- ^ nesting level to match
             -> Maybe Int -- ^ index of the next JmpBackward
findNextJmpB cmds i _ | i >= length cmds = Nothing
findNextJmpB cmds i n = case cmds !! i of
                          JmpBackward l | l == n -> Just i
                          _ -> findNextJmpB cmds (i+1) n
execute :: Array Int Command -> Int -> BF -> IO ()
execute cmds n bf@(BF _ _ ip) =
    if ip >= n || cmds ! ip == Halt
      then halt
      else doCommand cmds bf >>= execute cmds n
  where
    halt = if debug
             then putStrLn "Machine Halted.\n"
             else putStrLn "\n"
>>39 I love how their cleanest solution is a "little restrictive" and go read the fucking paper.
Name:
Anonymous2012-01-24 2:09
>>6
the only interesting response to OP and it gets no responses.
What I will say is that you are mostly wrong, for a couple of reasons. First, your argument depends on the assumption that premature optimization is a good thing. Second, it's just incorrect. Declarative programming ideally ONLY leaks in performance, and with a "sufficiently intelligent compiler", not by much. Otherwise, declarative programming IS the abstractions that you are trying to express, and that's just one small part of what makes it so powerful. I honestly have trouble approaching a project in an imperative language without first going "ok, how do I get the effects of closures, garbage collection, lazy lists, data-as-code, code-as-data, persistent data structures, interned strings, destructuring, etc etc". Most of the time the workarounds for the lack of these features are not worth my time or are inefficient (e.g., for persistent data structures, just doing a naive linear copy) and I think, "OK, I'd honestly rather implement a better language than work in this one."
These general-purpose abstractions REALLY ARE general purpose. There are VERY FEW applications for which they leak, and when they do it is ONLY in performance.
"ok, how do I get the effects of closures, garbage collection, lazy lists, data-as-code, code-as-data, persistent data structures, interned strings, destructuring, etc etc"
Note that none of these amazingly useful things is declarative programming.
praimu GET
Name:
Obama2012-01-24 2:15
Who was my moronic predecessor?
Why DUBya.
Name:
Anonymous2012-01-24 8:22
>>41
Why does the argument depend on when you optimize?
Also I wouldn't even call it optimizing if you merely try to get the same performance an imperative program would have without the programmer even thinking about performance.
As for the "sufficiently smart compiler": Compilers do almost nothing to optimize algorithms.
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

fib n = fib' 0 1 n
  where
    fib' a _ 0 = a
    fib' a b n = fib' b (a+b) (n-1)
Here we have the exact same declarative program twice (also the same as the one in >>6). The first will run in exponential time (not by much??) and the second in linear time. In an imperative language anyone would implement it as a simple iteration without thinking about optimization:
a = 0; b = 1
n.times { a, b = b, a + b }
In Haskell you can't write this, because you don't have assignment and state, so the Haskell programmer has to think of this imperative program first and then has to know enough about the implicit imperative semantics attached to Haskell (which do make sense if you think about how computers work but are completely arbitrary from a declarative perspective) to be able to translate it into exactly the right declarative program.
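For completeness, the Ruby loop does have a direct lazy-list counterpart in Haskell; whether anyone finds it without first thinking through the imperative version is exactly the question this post raises. The standard definition:

```haskell
-- The whole Fibonacci sequence as one self-referential lazy list;
-- each element is computed once and shared, so indexing is linear.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fibIter :: Int -> Integer
fibIter n = fibs !! n
```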