because Hindley-Milner rocks
I’ll get to Python in just a second but first, something uplifting and positive. I love me some Scala. I really, really do. With the new 2.8 release it has only gotten better. They’ve learned quite a bit about their implementation of the Actor model and melded it quite well with the Java fork/join. Don’t let me ruin it, here’s their own statement:
New Reactors provide more lightweight, purely event-based actors with optional, implicit sender identification. Support for actors with daemon-style semantics was added. Actors can be configured to use the efficient JSR166y fork/join pool, resulting in significant performance improvements on 1.6 JVMs. Schedulers are now pluggable and easier to customize.
Which is great. Lighter weight and more performant? Sign me up! And here I was looking at the Akka project's Actors implementation for potential improvements (not to mention its integration with an STM).
Speaking of STM, I read a nice article on the subject recently, linked via Hacker News, that basically attempts to shoot it down. There are some convincing links to papers and such in it, but in the end it's really a question of "does it solve problems or create more?" From what I can see, people are down on STM mostly when they're not coming from an immutable-state language. Haskell and Clojure both seem to be going great guns with an STM. I guess the jury is still out, and I have nothing to add since I've never used either, but it's strange nevertheless.
Finally, there's Python and its little friend the GIL. That damned thing is why we have to use the multiprocessing library, with its pipes, shared memory mapping, and whatnot, to work around the lack of good, implicit threading support. I mean, look at one of the guidelines for using this "environment" within your code:
If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow raising an exception.
Even the semaphore isn't tied to the life cycle of the tasks it's associated with.
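To make the guideline concrete, here's a minimal sketch of the JoinableQueue pattern (the worker function, sentinel value, and process count are my own illustrative choices, not from the docs): every item pulled off the queue must be matched by exactly one task_done() call, or join() will hang and the internal counter can eventually overflow.

```python
# Minimal JoinableQueue sketch: one task_done() per get(), or join() never returns.
import multiprocessing as mp

def worker(tasks, results):
    while True:
        item = tasks.get()
        if item is None:          # sentinel value: shut this worker down
            tasks.task_done()
            break
        results.put(item * item)  # the "work": square the number
        tasks.task_done()         # must pair with the get() above

def run(numbers, nworkers=2):
    tasks = mp.JoinableQueue()
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    for n in numbers:
        tasks.put(n)
    for _ in procs:
        tasks.put(None)           # one sentinel per worker
    tasks.join()                  # blocks until every put() has a task_done()
    for p in procs:
        p.join()
    return sorted(results.get() for _ in numbers)

if __name__ == "__main__":
    print(run([1, 2, 3, 4]))      # [1, 4, 9, 16]
```

Note how much ceremony there is: sentinels, explicit bookkeeping calls, and a result queue on the side, all to do what a thread pool does implicitly in languages without a GIL.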
Using multiple cores in Python. How to best express it? It's ugly. It's too "complicated." It involves too much hand twiddling to get right. There are rules for Mac OS, rules for Windows, rules for Linux, and things like that, which turns what should be a portable script into a maintenance nightmare. (By the way, I'm not trying to rip on the authors of this module. They've done a tremendous job with the limitations imposed upon them. My hat's off to them.)
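To be fair, the simpler end of the API does hide some of the plumbing. This is a hedged sketch (function and process count are my own) of Pool.map, which works well when the job is embarrassingly parallel, though the cross-platform rules about picklable functions and the __main__ guard still apply:

```python
# Pool.map hides the queue plumbing for embarrassingly parallel work.
from multiprocessing import Pool

def square(n):
    # Must be a top-level function so child processes can import it
    # (a hard requirement on platforms that spawn rather than fork).
    return n * n

if __name__ == "__main__":        # required on Windows and macOS
    with Pool(processes=4) as pool:
        print(pool.map(square, range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]
```

Even here, the __main__ guard and the picklability requirement are exactly the kind of per-platform rules I'm complaining about.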
Guido is still fooling himself by going on about how his view of the future of hardware design will render the point of "multi-threaded applications" moot. He thinks that an Actor-based model that side-steps the issues associated with the GIL is the right approach. I'm happy to see him seriously exploring Actors, but I'm worried that not adapting to the changing world is going to cost us engineers.
Python has got to be one of the easiest languages to learn on and one of the easiest languages to script in. It combines a number of features from functional and OO languages well. It's got its warts (that's several more articles). Perhaps its place will always be as a scripting language. After all, I've got nice statically typed languages like C++, Java, or Scala to fall back on for large-scale development.
I'll have to wait to find out the "future" of concurrency. What I do know, or at least understand, is that for the present one of the oldest forms of concurrency support, Actors, is taking a special bow.