

Endofunctors

Name: Anonymous 2013-07-29 17:30

What is the difference between an endofunctor and a functor?

Name: Anonymous 2013-07-29 17:34

An endofunctor has the same domain and codomain, while a functor in general may have a codomain different from its domain. Every endofunctor is a functor.

Name: Anonymous 2013-07-29 17:40

An endofunctor is a functor that maps a category to itself.

In Haskell, all Functors are endofunctors, because they map Haskell types and functions to new Haskell types and functions. For example, Maybe : Hask -> Hask maps each type a to Maybe a, and each function a -> b to a function Maybe a -> Maybe b. (How this is done is exactly what the definition of fmap tells you.)
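To make the object/arrow mapping concrete, here is a minimal sketch using a throwaway clone of Maybe (named Maybe' purely to avoid clashing with the Prelude's existing instance):

```haskell
-- A clone of Maybe so we can write the Functor instance ourselves.
data Maybe' a = Nothing' | Just' a
  deriving (Show, Eq)

-- The arrow part of the endofunctor: lifts f :: a -> b
-- to a function Maybe' a -> Maybe' b.
instance Functor Maybe' where
  fmap _ Nothing'  = Nothing'
  fmap f (Just' x) = Just' (f x)
```

The object part of the endofunctor is the type constructor Maybe' itself; fmap supplies its action on arrows.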

Name: Anonymous 2013-07-29 17:50

>>1-3
You guys should stop using Jewish mathematics to describe everything.

Name: Anonymous 2013-07-29 17:55

>>4
Yes, they should move to al-Muslimi (Halaal Certified TM) mathematBOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOMMMMMMMMMMMMMMMMMMMMMMMMMtdstktksrskrskrkrkrssssssssss

Name: Anonymous 2013-07-29 20:56

>>4
Describe my anus

Name: Anonymous 2013-07-29 21:47

>>6
Sure.

Let * be a binary operator on the set 8 and Kabbalah Shabbat infinity fuck arabs and palestines.

Name: Anonymous 2013-07-29 22:06

What is the difference between an exofunctor and a functor?

Name: Anonymous 2013-07-29 22:08

What's the difference between disco and funk?

Name: Anonymous 2013-07-29 22:11

>>9

Disco is dance music, and funk is for fuckin'.  They are two points on a time scale.

Name: Anonymous 2013-07-29 22:48

>>5
I'm thrilled you have returned. The balance is finally back.

Name: Anonymous 2013-07-29 23:16

Name: Anonymous 2013-07-29 23:46

>>3
An endofunctor is a functor that maps a category to itself.
So when it's said that monads are simply monoids in the category of endofunctors, it means they're in the category of mappings from a category to the same category?
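Roughly, yes. As a hedged sketch with the list monad: the monoid's multiplication and unit are join (here, concat) and return, with composition of endofunctors playing the role of the monoid's product. The names mu and eta below are just the conventional category-theory symbols, not library functions:

```haskell
-- Multiplication: collapse two layers of the list endofunctor into one (join).
mu :: [[a]] -> [a]
mu = concat

-- Unit: inject a value into one layer (return).
eta :: a -> [a]
eta x = [x]
```

The monoid laws then read as the monad laws: mu . mu = mu . fmap mu (associativity) and mu . eta = mu . fmap eta = id (left and right unit).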

Name: Anonymous 2013-07-29 23:50

>>13
http://en.wikipedia.org/wiki/Casuistry


It seems clear that Nazi Germany did severely persecute what it defined as “Jewish mathematics”. In his book “History of Mathematics: A Supplement” (Springer 2007) Craig Smorynski said: “… the change of mathematical direction … would reach an extreme in the 1930s with the Nazi distinction between good German-Aryan anschauliche (intuitive) mathematics and the awful Jewish tendency toward abstraction and casuistry.”

Name: Anonymous 2013-07-30 1:07

MAP MY CATEGORY OF ANI!

Name: Anonymous 2013-07-30 7:34

>>8

In Haskell it would mean a functor that is not an endofunctor. Those can only be functors that map from a subcategory of Hask to Hask, from Hask to a subcategory of Hask, or from one subcategory of Hask to another.

But a functor itself is a much broader structure than the way it is represented in Haskell.

You have forgetful functors, which forget some of the structure of their domain, like the functor U : Grp -> Set. It is faithful (injective on hom-sets), but not injective on objects: multiple groups can share the same underlying set. The dual notion is the full functor, which is surjective on hom-sets. You have the inclusion functor, which maps a subcategory into its category, and the opposite functor, obtained by swapping the arrows of the mapping; the latter yields contravariant functors: (b -> a) -> f a -> f b (in computations often seen with consumers).
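A hedged sketch of that contravariant shape, using a minimal clone of the Contravariant class (the real one lives in Data.Functor.Contravariant; this copy is purely illustrative) with a predicate as the classic consumer:

```haskell
-- Minimal contravariant-functor class (a clone of Data.Functor.Contravariant).
class Contravariant f where
  contramap :: (b -> a) -> f a -> f b

-- A consumer of values: tests an `a` and answers Bool.
newtype Predicate a = Predicate { getPredicate :: a -> Bool }

-- To test a `b`, first convert it to an `a`, then run the old predicate.
instance Contravariant Predicate where
  contramap g (Predicate p) = Predicate (p . g)
```

For example, contramap length (Predicate even) turns a consumer of Ints into a consumer of strings: precompose the conversion, then consume.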

There are a lot of functors and some are quite interesting, most aren't though.

Name: Anonymous 2013-07-30 8:02

>>14
``Anschaulich'' doesn't mean intuitive.

Name: Anonymous 2013-07-30 8:12

- Like C++, Haskell is huge in size, enormously hard to learn, and takes decades to master (Haskell's extensions list alone exceeds 70 entries http://hackage.haskell.org/packages/archive/hint/0.3.3.6/doc/html/Language-Haskell-Interpreter-Extension.html). Things like Monads, Functors, Monoids, Higher-Order Types and a myriad of morphisms are hard to understand, especially without mathematical background. Most programmers probably don't have the ability or will to learn Haskell. Learning Haskell's syntax, libraries, and functional programming techniques won't bring you closer to understanding: the true path to understanding Haskell lies through Monoid-Functor-Applicative-Arrow-Monad. And even if you manage to learn Haskell, programming it still hogs a lot of brain resources, which could have been put to something more useful than just showing off about how clever you can be. "Haskell for Kids" even proposes exposing children to Haskell from an early age, meaning Haskell, similar to mathematics and natural language, will be much harder to grasp at an older age. "Zygohistomorphic prepromorphism: Zygo implements semi-mutual recursion like a zygomorphism. Para gives you access to your result à la paramorphism.", "Haskell is not 'a little different,' and will not 'take a little time.' It is very different and you cannot simply pick it up" -- HaskellWiki
- Poor backward compatibility: haskellers "don't want to replicate Java, which is outright flawed in order to avoid even the most unlikely of broken code", meaning they don't care if a new version of GHC will break your code. Haskell projects are struggling under the weight of "DLL hell": a typical Haskell package consists of just a few lines of code, thus many other projects depend on dozens of different packages, either directly or indirectly. It's near-impossible to embark on a serious Haskell project without spending time fighting dependency version issues.
- Haskell is slow and leaks memory. GHC's inefficient stop-the-world GC does not scale. Despite being statically typed, Haskell can't deliver bare-metal speed, with typical Haskell code being 30x to 200x slower than C/C++ (http://honza.ca/2012/10/haskell-strings http://www.mail-archive.com/haskell-cafe%40haskell.org/msg25645.html). A good understanding of evaluation order is very important for writing practical programs, yet people using Haskell often have no idea how evaluation affects efficiency. It is no coincidence that Haskell programmers end up floundering around with space and time leaks that they do not understand. Functions take time to complete, and debugging time leaks is much more difficult than debugging type errors. "The next Haskell will be strict." -- Simon Peyton-Jones
- Haskell's API lacks higher levels of abstraction, due to the absence of variadic functions, optional arguments and keywords. Macros aren't possible either, due to the overly complex syntax of Haskell, which impedes metaprogramming and complicates parsing for human and machine alike. API documentation is very lacking: if you want to use regexes, you start at Text.Regex.Posix, seeing that =~ and =~~ are the high-level API, and the hyperlinks for those functions go to Text.Regex.Posix.Wrap, where the main functions are not actually documented at all, so you look at the type signatures, trying to understand them, and they are rather intimidating (class RegexOptions regex compOpt execOpt => RegexMaker regex compOpt execOpt source | regex -> compOpt execOpt, compOpt -> regex execOpt, execOpt -> regex compOpt where). They are using multi-parameter type classes and functional dependencies. The signature really won't give you any clue to how to actually use this API, which is a science in itself. Haskell is a language where memoization is a PhD-level topic. Some people even employ C/C++ macros to work around Haskell's complexity (http://lukepalmer.wordpress.com/2007/07/26/making-haskell-nicer-for-game-programming/)
- Haskell programming relies on mathematical modeling with the type system (a version of mathematical Set Theory). If one does not use the type system for anything useful, it obviously will be nothing but a burden. Programs are limited by the expressiveness of the type system of the language - e.g. heterogeneous data structures aren't possible without reinventing explicit tagging. All that makes Haskell bad for prototyping or anything new, due to the need for a design document with all types beforehand, which changes often during prototyping. Complex projects are forced to reinvent dynamic typing. For instance, Grempa uses dynamic typing because the semantic action functions are put in an array mapping rule and production numbers (Ints) to functions, and they all have different types and so can not be put in an ordinary array expecting the same type for each element.
- The IDE options cannot be as good as those of dynamic programming languages, due to the absence of run-time information and access to the running program's state. Haskell's necrophilia forces you to work with "dead" code. Like other static languages, Haskell isn't well-known for its "reload on the fly" productivity. No eval or self-modifying code. Haskell code can't be changed without recompiling half the application and restarting the process. GHCi is the best Haskell's interactivity can get, and it still won't allow you to change types during runtime, while the single-assignment semantics prevents redefinition of functions. As Simon Peyton-Jones said, "In the end, any program must manipulate state. A program that has no side effects whatsoever is a kind of black box. All you can tell is that the box gets hotter."
- Compile-time and link-time errors produced by the type system are distracting and make it harder to run and test your code, while type-checking isn't a substitute for testing: it is about correspondence to a mathematical model, which has nothing to do with correctness - i.e. two numbers can be integers, but their quotient may still produce division by zero. Static-typing advocates say, "When your program type-checks, you'll often find that it just works", but this is simply not true for large, intricate programs: although type-checking may help you find model-related errors, it can't replace testing.
- Absence of dynamic scope, implicit open recursion, late binding, and duck typing severely limits Haskell, since there are things that can't be done easily without these features: you can't implement dynamic scope in general (and be type-safe) without converting your entire program to use tagged values. So in this respect, Haskell is inferior to dynamic typing languages.
- Syntax craziness: Haskell has a>>b>>c, do{a; b; c}, and "do a; b; c", where `;` can be substituted by newlines, which is incredibly confusing, especially when combined with currying and if/then/else blocks.
- Haskell makes it easy to write cryptic programs that no one understands, not even yourself a few days later. Rich, baroque syntax, auto-currying, lazy evaluation and a tradition of defining an operator for every function - all help obfuscation a lot. As a general rule, Haskell syntax is incredibly impenetrable: who in their right mind thought up the operators named .&., <|> and >>=? Currying everywhere, besides making the language slower, leads to spaghetti code and surprising Perl-style errors, where a call with a missing or superfluous argument produces an error in some unrelated part of the code.
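For what it's worth, the three spellings called out in the syntax point really are the same program after desugaring; a small sketch, using Maybe as an arbitrary monad:

```haskell
-- Three equivalent spellings of sequencing actions in a monad.
p1, p2, p3 :: Maybe Int
p1 = Just 1 >> Just 2 >> Just 3     -- raw operator form

p2 = do { Just 1; Just 2; Just 3 }  -- braces and semicolons

p3 = do                             -- layout: newlines instead of `;`
  Just 1
  Just 2
  Just 3
```

All three desugar to the same chain of >>, so they evaluate to the same value.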

Name: Anonymous 2013-07-30 8:43

>>18
Reading what seemed like legit criticism of Haskell, I can't help but get the feeling that the person who wrote this is a Lisp troll. In all his points, I was like 'oh but Lisp does this'. I searched the internet for this sort of post, since it is obviously copypasta, and the only place I've found it is a Russian imageboard called 0chan.hk, which is surprising because the author of the post seems to have a good grasp of the English language. Nevermind, I found a link from /prog/ with the person (I think he's just one person, right?) who writes Symta (his language, of which I don't believe an implementation exists) admitting to having posted said kopipe to 0chan.hk.

So, I'm too bored to search further about the origins of this kopipe and figure out the context in which this was written.

Name: Anonymous 2013-07-30 8:45

>>19
Some of this criticism is related to Common Lisp and was fixed in Scheme or Racket. So it is mostly about why you should prefer Racket to Common Lisp or Clojure.

1. CONS pairs' simplicity is deceiving: CONS introduces mutability, together with silly notions of proper and improper lists, while impeding advanced and optimized list representations, so you can't have lists with O(log2(N)) random access, catenation and insertion, like Haskell's finger trees.
2. NIL punning (treatment of NIL as empty list, false and void) breaks strong typing and goes against lambda calculus, which hints to us that IF should accept only TRUE and FALSE, which should be functions (Church Booleans), so all objects must be explicitly coerced to boolean for use with IF. Common Lisp has a confusing lot of NIL/CONS-related predicates: there is CONSP, LISTP, ATOM, ENDP and NULL, while TYPECASE discerns between CONS, LIST, ATOM, BOOLEAN and NULL. The ATOM predicate considers NIL an atom, so if you have some code that should be invoked on every list, you can miss some empty lists. Hash-table access returns NIL when a key isn't present, making it harder to store empty lists. Some Lisps demonstrate half-broken behavior: for example, Clojure, while discerning between nil and empty list, still allows using nil in place of empty list; moreover, Clojure introduces true and false, duplicating nil and confusing semantics even further. NIL punning is the single worst Lisp wart, reminiscent of PHP and JavaScript horrors.
3. Non-Lispy behavior: the FORMAT function competes with regular expressions in providing a cryptic and unreadable DSL, indulging obfuscated code like "~{~#[<empty>~;~a~;~a and ~a~:;~@{~a~#[~;, and ~:;, ~]~}~]~:}", which could have been easily replaced with a SEXP-based format. On the other hand, Lisp lacks string interpolation, like "Name=$name, Age=$age", so you have to carry FORMAT everywhere. The LOOP macro employs baroque Pascal-like syntax with a lot of clauses having complex interplay with each other, which confuses newbies and annoys Lispers, who love simpler solutions that play nicely with lambda calculus, like letrec.
4. Over-engineered OOP: despite Lambda Calculus prescribing using lambdas to construct objects, most Lisps implement full-blown class-based OOP, like CLOS, conflicting with Lisp's minimalism and complicating semantics. Moreover, lists/conses and booleans don't have first-class-citizen rights in these ad-hoc object systems, meaning you can't overload CAR or CDR, so you can't implement your own lists - like, for example, a list-like interface to the filesystem, where a directory could behave as a list of files and every file as a list of bytes. CLOS generics (multi-methods) clutter the global scope and collide with functions. Package symbols have dynamic scope, which is arguably bad and shouldn't be indulged, while packages are global and first-class, so you can't easily get a sandboxed environment (like Unix's chroot) by making a package with only safe functions. Instead of providing viable encapsulation, some Lisps treat the symptoms by introducing a second namespace for variables, so that identifiers have less chance to collide. Other Lisps, like Clojure, disapprove of OOP, using ad-hoc package systems and kludges like "protocols".
5. Duplication is not the right thing: CONS duplicates arrays and lists; QUOTE duplicates QUASIQUOTE, which for some reason is implemented as a reader macro, so you can't overload it for your own use; LOOP duplicates DO; symbols duplicate strings; chars duplicate single-letter symbols. Package encapsulation duplicates OOP, which in turn duplicates the encapsulation features provided by lexical scope. Growing from broken OOP encapsulation, there is an explosion of comparison functions; just to compare for equality we have: =, char=, string=, eq, eql, equal, equalp, tree-equal, string-equal, char-equal; worse, `eq` compares pointers and has undefined behavior, so it should be part of the FFI instead of being exposed through the normal interface. Analogously, AREF, SVREF, ELT, NTH, CHAR, SCHAR, BIT, SBIT - all do exactly the same.
6. Verbosity and inconsistent naming: define and lambda are the most used keywords, yet take six characters each; then we have monstrosities like destructuring-bind and remove-if-not, which could be named just bind and keep. Verbosities like MAKE-HASH-TABLE, MAKE-ARRAY and (DECLARE (INTEGER X)) ensure that you will avoid optimized structures and type-checking at all cost, making your code slower. In the rare cases when names aren't verbose, they are just cryptic, like PSETF, CDAADR and RPLACA. The macros LET and COND have especially bloated syntax: where a simpler (let name value …) would have been enough, LET adds 2 additional levels of parentheses, (let ((name value)) …), just to annoy you. CLOS and defstruct syntaxes are especially verbose, which is aggravated by the absence of a self/this context. Many people complain about the absence of infix expressions, which could have been easily supported through a reader macro, like {a*b+c}; and while Scheme does support infix expressions, it misses operator precedence, making it incompatible with formulas you may copy-paste from your math textbook. Lisp leaves a lot of special characters unused, which otherwise would have made code succinct and provided visual cues: for example, you have to write (list 1 2 3) instead of [1 2 3]. The coercion naming scheme impedes composition: (y->z (x->y ...)) hides the `y` entrepot, while (z<-y (y<-x ...)) would have underlined it.
7. Missing features: While Lisp does support complex and rational numbers, it doesn't support vector arithmetic, so you have to write something like (map 'vector #'+ (vector x1 y1 z1) (vector x2 y2 z2)), making geometry and physics code exceedingly verbose. A few important functions, like SPLIT and JOIN, are missing from the standard library, which for some reason includes the rarely used string-trim and string-right-trim; JOIN is usually simulated with (format nil "~{~a~^-~}" '("a" "b" "c")), making code impenetrably cryptic. Also absent are a good type system, call-by-name (lazy evaluation) and immutability, which really make functional programming shine. Although Qi does provide an acceptable type system and Clojure introduces immutability, we can't have all this in a single production-quality Lisp. Call-by-name is more of an on-demand feature to be used in some contexts (like implementing an if/then/else special form), but no Lisp features it, despite call-by-name being the natural semantics of Lambda Calculus. Some popular Lisps, like Clojure and Common Lisp, don't even guarantee TCO (tail-call optimization), meaning that expressing advanced control structures would be hard. No Lisp besides Scheme supports continuations - a silver-bullet control-flow feature - although some Lisps do support goto, a limited form of continuations. "It needs to be said very firmly that LISP is not a functional language at all. My suspicion is that the success of Lisp set back the development of a properly functional style of programming by at least ten years." -- David Turner.
8. Horrible community: Lispers are unhelpful, inert and elitist, so much so that they continuously fail to coordinate the development of Lisp. The backwardness of Lispers shows even in their choice of the long-obsolete GNU/Emacs (an orthodox command-line text editor without mouse support or any GUI at all) as the only true IDE and tool for everything. Lispers are the ones who will make fun of newbies instead of helping them. Lispers will call you "retard" if you don't know the meaning of buzzwords like "pandoric eval", "CPS" or "reversible computation".

Name: Anonymous 2013-07-30 9:31

>>20
Very silly criticism, that of >>20

Name: Anonymous 2013-07-30 10:10

>>18

This is written from the perspective of a person:

* Who dislikes math and thinks mathematics and computations aren't linked.
* Who likes a weak and dynamic type system and doesn't understand how to let the type system work for you.
* Who believes dynamic languages are superior to static languages.
* Who doesn't understand type classes.
* Who doesn't update his critique, which is a little bit dated.
* Who clearly doesn't like syntax.
* Who hasn't used Haskell a lot.
* But he has a lot of valid points.

The only two points which, I think, are bunk (even from his perspective) are the IDE point and the type-error point. I don't see how dynamic languages can provide more information to the IDE; they miss all kinds of static information, which surely makes them slightly inferior. And I don't see why it is bad that you are distracted by type errors. They are errors; they have to be fixed. Better early than late, I would argue.

Duck typing is, from the point of view of a Haskell programmer, a horribly imprecise way to implement polymorphism. He would rather name it with type classes.

About the cryptic programs: I actually found it difficult to write the code, but really easy to read it. And if I don't understand it, I can always work it out. Lambda calculus has only one operator: apply. It isn't that difficult.

Compare with PHP: PHP is easy to write, hellish to read.
 
About performance: Haskell is somewhere between Java and C++. That's not too bad. In web environments Haskell is quite good; it has very low latency compared with other frameworks. The stop-the-world GC has been replaced by a parallel GC, which is somewhat better, but it could be upgraded to more efficient models.

And sometimes one wants to build a dynamic type system; it can be handy. But it is not used often enough to make it a language builtin. As a library, such a type system is fine.

What I actually miss are some real problems Haskell has at the moment. The type system is not type-safe. Here is some sample code:



import Control.Exception
import Data.Typeable

newtype Foo = Foo (() -> IO ())

{- set Foo's TypeRep to be the same as ErrorCall's -}
instance Typeable Foo where
  typeOf _ = typeOf (undefined :: ErrorCall)

instance Show Foo where
  show _ = ""

instance Exception Foo

main = Control.Exception.catch (error "kaboom") (\(Foo f) -> f ())

Taken from: http://existentialtype.wordpress.com/2012/08/14/haskell-is-exceptionally-unsafe/

This is really interesting: hand-written Typeable instances are inherently unsafe.

The numerical classes are complete junk. Nobody seems to be able to remember how all the classes are connected. We have Fractional, Floating, Integral, Num, RealFloat, Real and Enum. Sometimes, with the help of type inference, this leads to types which have no inhabitants:


f :: (Integral a, Fractional a) => a -> a -> a 
f x y = div x 2 + y / 2


In this case the type is simply uninhabited, because no standard type is both Integral and Fractional.
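The usual way out, sketched under the assumption that you really do want integer division on one argument: convert explicitly with fromIntegral, so the two constraints land on different type variables (the name g and its call sites are just illustrative):

```haskell
-- Explicit conversion keeps the Integral and Fractional constraints
-- on separate type variables, so both are satisfiable.
g :: (Integral a, Fractional b) => a -> b -> b
g x y = fromIntegral (div x 2) + y / 2
```

For example, g (5 :: Int) (3.0 :: Double) evaluates to 3.5.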

The relations between Monad, Applicative and Functor are not stated in the class hierarchy. Code duplication follows from this problem, and sometimes the applicative functor is a different functor than the one we could have derived from the monad instance. Some code duplication examples: (liftA, liftM, fmap), (pure, return), (*>, >>), (<<, <*). Every group does the same action. It gets even worse: *> and >> should be interchangeable, but with different instances for Applicative and Monad, it is possible that they are not interchangeable.

Instances have the spooky ability to pollute other modules' scopes, even if a module did not import them directly.

And another fun one:

Dead code can affect the type system:

bla c = do
  let b = c + 1 :: Integer
  return c

In larger functions this will lead to problems.
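A hedged illustration of the point above: the unused binding pins the argument to Integer, so a call at any other numeric type is rejected (the type signature and the call sites are mine, added only for the demonstration):

```haskell
-- The binding b is dead code, yet its `:: Integer` annotation
-- still constrains c.
bla :: Monad m => Integer -> m Integer
bla c = do
  let b = c + 1 :: Integer
  return c

ok :: Maybe Integer
ok = bla 5            -- fine

-- bad :: Maybe Double
-- bad = bla 5.0      -- rejected: the dead let forces Integer
```

Deleting the let-binding would let bla work at any Num type; with it, the inferred type is exactly the one spelled out above.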

Name: Anonymous 2013-07-30 11:17

>>20
you are fucking retarded

Name: Anonymous 2013-07-30 13:00

>>3
covariant endofunctors
