It's really essential to distinguish between a language and its implementation / execution model. This is especially true in the case of so-called 'interpreted' languages, because it's fundamentally wrong to call a *language* 'interpreted.' A language is a specification: a grammar, semantics, and the precise functionality that those semantics should provide. 'Interpreted' refers to an execution model for a piece of code, whereby the code being interpreted is not run as native machine code, but rather by proxy through a bytecode engine, which itself generally *is* native machine code (at least, a bytecode engine is the usual form an interpreter takes, and is in the case of Python).
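To make the bytecode model concrete, CPython's standard `dis` module lets you look at the bytecode a function compiles to (this is plain CPython, nothing LT-specific):

```python
import dis

def add(a, b):
    return a + b

# CPython compiles this function to bytecode instructions; the
# interpreter loop (itself native machine code) then executes them
# one at a time. dis.dis prints that bytecode.
dis.dis(add)
```

Running it shows the load/add/return instructions that the interpreter loop walks through, which is exactly the indirection a static compiler would eliminate.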
However, there is nothing preventing the static compilation of a language like Python. And I'm not even talking about Cython or requiring static typing; I'm saying there's nothing preventing the static compilation of a dynamically-typed program. It's hard but feasible (and yes, there is always a performance hit for dynamic typing, but it's nothing compared to the performance hit of running through an interpreter). OTOH, if we restrict ourselves to static typing, statically compiling Python becomes downright easy. And on the flip side of the coin, there are interpreted versions of C++ (AngelScript) and C (CINT).
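To illustrate what that residual dynamic-typing cost looks like, here's a toy sketch (my own, purely illustrative) of what a `+` operation might compile to for a dynamically-typed program: a type check and a branch, but no bytecode dispatch loop.

```python
# Toy sketch: what '+' might compile down to for a dynamically-typed
# program. The type checks still cost something at runtime (the
# "performance hit for dynamic typing"), but there's no interpreter
# loop fetching and dispatching bytecode instructions.

def compiled_add(a, b):
    if isinstance(a, int) and isinstance(b, int):
        return a + b          # fast path: native integer add
    if isinstance(a, str) and isinstance(b, str):
        return a + b          # string concatenation path
    raise TypeError(f"unsupported operand types: {type(a)}, {type(b)}")

print(compiled_add(2, 3))        # 5
print(compiled_add("ab", "cd"))  # abcd
```

A real compiler would emit this branching in machine code (and could often eliminate it entirely when types are provable), but the shape of the problem is the same.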
LT's Python looks like Python and, in some places, behaves like interpreted Python. In other places, it looks like Python and behaves like beautifully-crafted-and-optimized statically-compiled C.
Also, four more points:
1. Python's numpy library has been shown to match the performance of native C for large computations. That's because it's calling native C libraries...so essentially there is very little difference.
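For example, summing an array element-by-element in the interpreter versus handing the whole array to numpy's compiled C code (requires numpy; the speed gap depends on array size and hardware, but the results are identical):

```python
import numpy as np

n = 100_000
xs = np.arange(n, dtype=np.float64)

# The loop runs through the interpreter one element at a time.
loop_sum = 0.0
for x in xs:
    loop_sum += x

# The vectorized call executes inside numpy's native C code.
vec_sum = xs.sum()

# Same answer either way; the difference is purely who does the work.
assert loop_sum == vec_sum
print(vec_sum)  # 4999950000.0
```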
2. Python's default memory management is refcounting; the cycle-collecting GC is optional. This is, in my mind, perfect. For those who aren't great with memory management, leave the GC on. For those who know how to use refcounting without ever needing GC (i.e., who know how to never create reference cycles), we can simply turn off the GC (as, of course, will be the case for LT) and have the performance of pure refcounting (which, by the way, is what the old LT engine used, just in C++).
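Concretely, CPython's `gc` module only exists to collect reference *cycles*; ordinary acyclic objects are freed by refcounting the moment their count hits zero, collector or no collector:

```python
import gc

# Refcounting always runs; the gc module only handles cycles.
gc.disable()

class Node:
    def __init__(self):
        self.ref = None

a = Node()
b = Node()
a.ref = b   # acyclic: freed promptly by refcounting alone
del a, b

c = Node()
c.ref = c   # a reference cycle: refcounting alone can never free this
del c

# Only a cycle collection reclaims the self-referencing Node.
reclaimed = gc.collect()
gc.enable()
print(reclaimed)  # at least 1 unreachable object found (the cycle)
```

So "turn off the GC" really means "promise not to make cycles," and everything else is cleaned up exactly as before.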
3. For those who want to run serious computation in Python, PyPy's dynamic analysis and JIT compilation are nothing short of magical. Use it and enjoy the order-of-magnitude performance increase for general computation.
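The kind of code that benefits is exactly a pure-Python hot loop like the one below: it runs unchanged on both CPython and PyPy, and PyPy's tracing JIT compiles the loop to machine code once it warms up (the actual speedup is workload-dependent):

```python
# A pure-Python numeric hot loop: the shape of code a tracing JIT
# excels at. No C extensions, no type annotations, just a tight loop.
def dot(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

xs = [float(i) for i in range(10_000)]
ys = [2.0] * 10_000

print(dot(xs, ys))  # 99990000.0
```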
4. Above all else, the ease of manipulating sequences, collections, strings, etc. in Python makes it an incredible choice for just about anything: if not as the language that executes the program, then as the language that generates the C that will execute the program. (LT's Python-to-native magic is written in...surprise...Python! At some point, once it is powerful enough, I will be able to use it to compile itself to native code so that Python-to-native can be even faster.)
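As a toy illustration of that "Python generates the C" idea (this is my own minimal sketch, not LT's actual code generator), Python's string and sequence tools make emitting source text almost effortless:

```python
# Toy code generator: Python string/sequence manipulation used to
# emit C source. A minimal sketch of the idea only -- a real
# generator would translate a full AST, not a list of constants.

def gen_c_sum_function(name, values):
    terms = " + ".join(str(v) for v in values)
    return (
        f"int {name}(void) {{\n"
        f"    return {terms};\n"
        f"}}\n"
    )

c_source = gen_c_sum_function("sum_constants", [1, 2, 3, 4])
print(c_source)
```

The emitted text is ordinary C that a native compiler can then build, which is the whole trick: the expressive language writes the fast one.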
(Just to balance out all this positivity: there are plenty of design decisions that I don't like about Python. But above all, it was the ideal choice for LT.)