
The Zen of Python is a LIE

Name: Anonymous 2009-12-13 3:09

Flat is better than nested.

Is that why implementing a function decorator means returning nested functions? Or why context managers require indented blocks, so that opening two files in a class method starts me off four levels deep?
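
For what it's worth, the decorator complaint is easy to reproduce: any decorator that takes arguments forces three levels of nested functions before the wrapped body is even reached. A minimal sketch (`repeat` and `greet` are invented names for illustration):

```python
import functools

# A decorator that takes an argument: three nesting levels deep
# before the real work happens.
def repeat(times):                      # level 1: receives the argument
    def decorator(func):                # level 2: receives the function
        @functools.wraps(func)
        def wrapper(*args, **kwargs):   # level 3: the actual call site
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(times=3)
def greet(name):
    return "hello " + name
```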

Python code is incredibly nested. Modules in any moderately complex Python framework go quite deep, because Python programmers apparently refuse to put more than a dozen functions in a module. Witness os, os.path, shutil, dircache, io, etc. for the most basic file manipulation.

Namespaces are one honking great idea -- let's do more of those!

...which directly contradicts "flat is better than nested". That explains all the fucking modules.

Beautiful is better than ugly. / Readability counts.

What's with all the fucking underscores? Why are built-in methods like __init__() spelled this way instead of just claiming init()? Plenty of languages do exactly that; Objective-C comes to mind. And why is a *leading underscore*, a naming convention the interpreter doesn't even enforce, the closest thing the language has to encapsulation?

Why is the syntax for list and dict comprehensions so ass backwards? How is it even possible that you can nest them, so that [(i,f) for i in nums for f in fruit] is valid Python? And there aren't even any filters there!
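
And yes, that nested comprehension is valid: the for-clauses unroll left to right, exactly like the equivalent nested loops. A quick check (sample data invented):

```python
nums = [1, 2]
fruit = ["apple", "pear"]

# Two for-clauses, no filter: the leftmost clause is the OUTER loop.
pairs = [(i, f) for i in nums for f in fruit]

# The same thing spelled out:
expected = []
for i in nums:
    for f in fruit:
        expected.append((i, f))
```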

Although practicality beats purity.

...according to the language designers' twisted definitions of those words. People aren't asking for proper lambdas for 'purity' reasons; they are just fucking practical! The current lambdas are so crippled as to be nearly unusable, but they want to keep the language 'pure' from pervasive anonymous functions.
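
The "crippled" part is specifically that a lambda body must be a single expression; any statement forces a full def. A small illustration (`clamp` is a made-up example):

```python
# Fine: a single expression.
double = lambda x: x * 2

# Not fine: statements are forbidden inside a lambda body.
#   lambda x: y = x + 1        # SyntaxError
#   lambda x: if x > 0: x      # SyntaxError
# Anything like that has to become a named function:
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
```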

Another great example is the interpreter itself. In order to keep the language 'pure', basically everything is a mutable dict, including the contents of modules; this means in order to call any named function, you have to lock its module to make sure no other thread is modifying it. Hence the GIL, and the general impossibility of making Python run fast.
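
The "everything is a mutable dict" point is easy to demonstrate: module attributes can be rebound at any time by any code, so the interpreter can never treat a function lookup as fixed. A sketch using the stdlib math module:

```python
import math

# A module's namespace is an ordinary mutable dict; nothing stops a rebind.
original = math.sqrt
math.sqrt = lambda x: -1.0     # any thread could do this at any moment

hijacked = math.sqrt(9)        # the lookup finds the replacement

math.sqrt = original           # put the real function back
restored = math.sqrt(9)
```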

There should be one-- and preferably only one --obvious way to do it.

You mean like how you have to call super(CurrentClass, self).__init__(*args, **kwargs) even if you inherit directly from object, in case your class is used in a multiple inheritance tree and needs to construct the next class in the MRO? (And God help you if you write super(self.__class__, self), which recurses forever the moment someone subclasses you.) Yeah, that's pretty fucking obvious. Don't forget to take variable keyword arguments and pop the ones you want.
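
A sketch of the cooperative-constructor dance being complained about, with invented class names. Each __init__ pops the keywords it owns and forwards the rest up the MRO; the class must be named explicitly in super():

```python
class Base(object):
    def __init__(self, **kwargs):
        # Forward leftover kwargs so object.__init__ gets nothing extra.
        super(Base, self).__init__(**kwargs)

class Left(Base):
    def __init__(self, **kwargs):
        self.left = kwargs.pop("left")       # take what's ours
        super(Left, self).__init__(**kwargs) # pass the rest along the MRO

class Right(Base):
    def __init__(self, **kwargs):
        self.right = kwargs.pop("right")
        super(Right, self).__init__(**kwargs)

class Both(Left, Right):
    pass

# MRO: Both -> Left -> Right -> Base -> object, so Left's super() call
# actually constructs Right, a class Left knows nothing about.
b = Both(left=1, right=2)
```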

How about the fact that you have to create a blank file called __init__.py in a folder to make it a package? Is that fucking obvious? Who the fuck thought that was a good idea?
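
The ritual in question, runnable end to end (the package name mypkg and its contents are invented; the blank __init__.py was strictly required until namespace packages arrived in Python 3.3):

```python
import os
import sys
import tempfile

# Build a package on disk: a directory is only importable once it
# contains an __init__.py, even an empty one.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()   # the blank file
with open(os.path.join(pkg, "mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
from mypkg import mod       # works only because __init__.py exists
```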

Although that way may not be obvious at first unless you're Dutch.

A continuation of the above; the whole syntax of the language violates the 'one way' rule. Why is there a separate dunder syntax for certain special methods? For instance, you implement X.__len__() so that your class can be used as len(x). Why the fuck isn't it just X.len(), called as x.len(), like every other method?
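
The dispatch in question, with an invented Box class: the len() builtin looks only for __len__, and a method merely named len would be ignored by it:

```python
class Box(object):
    def __init__(self, items):
        self.items = list(items)

    def __len__(self):          # this is what len(box) dispatches to
        return len(self.items)

box = Box(["a", "b", "c"])
```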

Why is it x.sort() and sorted(x)? Why couldn't it just be x.sort() and x.sorted(), or vice versa? The behaviour could stay exactly the same, folks: x.sort() returns None and x.sorted() returns a copy; this is purely syntactic. Even C++ is not this confused about standard method call syntax, and it's the only other language I can think of that allows both.
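
The split really is just spelling: the method mutates in place and returns None, the builtin returns a new list. A quick check:

```python
x = [3, 1, 2]

copy = sorted(x)       # builtin function: returns a new sorted list
result = x.sort()      # method: sorts x in place and returns None
```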

Explicit is better than implicit.

I've generally found this to be the biggest lie of all. A common idiom for Python libraries and tools is to masquerade as a built-in class: you often get a 'file-like' object, or a 'dict-like' object from a callback. It behaves *almost* like the real thing, but there's always a gotcha. They are fucking full of 'em. Sometimes they don't implement the context manager protocol. Sometimes you have to 'flush' them. But one thing is for sure: you will always fucking stumble upon a gotcha, some magic behavior that there was no possible way for you to have known about.
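
An invented minimal example of the kind of 'file-like' object being described: it quacks just enough (it has read()) for most code to accept it, right up until someone tries to use it in a with statement:

```python
class FakeFile(object):            # hypothetical library object
    def __init__(self, data):
        self._data = data

    def read(self):                # enough to pass for a file... mostly
        return self._data

f = FakeFile("hello")
data = f.read()                    # works fine

blew_up = False
try:
    with FakeFile("oops"):         # the gotcha: no __enter__/__exit__
        pass
except (AttributeError, TypeError):
    blew_up = True
```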

Name: Anonymous 2009-12-14 3:30

I had to use some Python script written by someone else today:
- I run the script.
- It pops up a dialog box asking me for some input.
- I enter the input correctly.
- The script crashes with a division by zero.
- I examine the source code: division by zero was used to signal an error (poor man's exception handling: it executed 1/0 when something went wrong). Further reading shows that the script also relies on the user's cursor position as a second source of input. There is exactly one valid spot the cursor can point at, but nothing hints at this without reading the source and working out what it expects; if the cursor is anywhere else, the script signals the error by executing 1/0.
- Luckily for me, the script did its job fine, and I was able to use its output which will be processed by a mixed Lisp/C toolset that I'm developing myself.

As someone who does not have much of a clue about Python: are such strange hacks idiomatic? Dividing by zero to cause an error? Adding 0.0 to an integer to cast it to a float? Seems awfully hacky to me.
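
For the record, neither hack is idiomatic: errors are signalled by raising exceptions, and the explicit cast is float(). A side-by-side sketch, where cursor_ok stands in for whatever check the script was actually doing:

```python
def hacky(cursor_ok):
    if not cursor_ok:
        1 / 0                      # the poor man's error from the script
    return "output"

def idiomatic(cursor_ok):
    if not cursor_ok:
        raise ValueError("cursor is not at the expected position")
    return "output"

def status(fn):
    """Run fn with a failing check and report what it raised."""
    try:
        fn(False)
        return "no error"
    except ZeroDivisionError:
        return "ZeroDivisionError"
    except ValueError:
        return "ValueError"
```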
