Home   Archive   Permalink

Rebol performance?

Just wondering how Rebol compares in terms of performance with other interpreted languages like Python and Perl? I write quite a lot of number crunching code, and although it obviously can't compare with a language like C or Fortran, it doesn't seem too bad. R3's performance will be better, we are told...

posted by:   Jules       31-Oct-2010/8:49:58-7:00

I too would be interested to see these stats.
There was another post where I was told someone was building a JIT that was going to be available mid-2010. However, I haven't seen anything since.

posted by:   yuem       31-Oct-2010/9:50:55-7:00

I have only some coarse indications, but when I tested pure processing once, REBOL 2 was twice as fast as Ruby. Now Ruby is fairly slow, so REBOL is probably slower than the likes of Python and Perl.
My estimate is that REBOL is generally on the order of a hundred times slower than a compiled C program. This is the kind of thing where JITs and RebCode can help; they usually yield speed improvements of tens of times, like twenty to thirty times faster. Cyphre's JIT is for REBOL 3 and is in private development, but you can ask him for it.
However, this seldom matters. In most programs, algorithms dominate overall performance, not pure processing power. In REBOL, a smarter solution can often be found, leading to smaller code, which is a performance advantage in itself.
I'm developing a web framework on top of Cheyenne. Cheyenne's performance compares well with other web servers, and the performance of my framework can also compete with web frameworks in other languages.

posted by:   Kaj       31-Oct-2010/13:35:14-7:00

Thanks Kaj. I'm happy to trade the increase in running time of Rebol programs for the vast decrease in development time, and of course you can always write a routine in C if you really need that extra speed, then call it from Rebol.
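As a sketch of that route: REBOL builds that include the library component (such as /Command or /Pro) can bind a C function from a shared library as a routine! value. The library path here is Linux-specific and the whole snippet is illustrative, not from this thread:

     ; Hedged sketch: call the C library's sqrt directly.
     ; Requires a REBOL build with the library component.
     libm: load/library %libm.so.6        ; path is an assumption
     c-sqrt: make routine! [
         x [decimal!]
         return: [decimal!]
     ] libm "sqrt"
     print c-sqrt 2.0

For heavy number crunching, though, the real win comes from moving the whole loop into C rather than calling a routine once per element, since each routine call still pays interpreter overhead.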

posted by:   Jules       31-Oct-2010/14:29:11-7:00

There will always be ways to put REBOL in a place where it performs badly compared to other languages. It is often in the implementation of an algorithm or a principle that you can find the places where REBOL is fast.
For example, PARSE is much faster than you'd expect, so when you write your REBOL scripts to be dialect heavy, they can be very fast. Doing the same thing in other languages might be very slow.
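As a rough illustration of that point, a small PARSE rule that collects the digits from a string; the rule and names are a sketch of mine, not from the thread:

     digits: charset "0123456789"
     extract-digits: func [s /local out d] [
         out: copy ""
         parse/all s [some [copy d some digits (append out d) | skip]]
         out
     ]
     print extract-digits "abc123def45"    ; "12345"

The whole scan runs inside the native PARSE engine, where an equivalent character-by-character loop in interpreted code would be much slower.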
Math is not very fast in REBOL, not even in R3, so now that we have extensions, people should take advantage of C-based math libraries if speed is critical.
Also, as Kaj says, REBOL is very small and produces small scripts, and these have real-world side effects in very quick download and startup times.

posted by:   Henrik       31-Oct-2010/17:28:45-7:00

I found some useful info here for those wanting to optimise loops:
REPEAT is the winner, and it seems sensible to assume that using native functions will in general be faster than mezzanines.
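A quick way to check that on your own machine; absolute times will vary, so this is only a sketch:

     ; Compare native REPEAT with mezzanine FOR over the same work.
     t: now/precise
     repeat i 1000000 [i + 1]
     print ["repeat:" difference now/precise t]

     t: now/precise
     for i 1 1000000 1 [i + 1]
     print ["for:" difference now/precise t]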

posted by:   Jules       31-Oct-2010/17:55:52-7:00

Natives are always faster than other functions, as they run at the lowest level.
As Kaj and Henrik said, the algorithm is the most important thing for performance.
Here are 2 functions I use to test the speed of running a block:
     benchmark: func [b /local s] [s: now/precise loop 1000000 [do b] difference now/precise s]
     benchmark2: func [
         a b /local sa sb r
     ] [
         sa: now/precise loop 1000000 [do a] sa: difference now/precise sa
         sb: now/precise loop 1000000 [do b] sb: difference now/precise sb
         r: (to-decimal sa) / (to-decimal sb)
         print [
             "Execution time for the #1 job:" sa newline
             "Execution time for the #2 job:" sb newline
             either sa > sb [
                 "#1 is slower than #2 by factor ~"
             ] [
                 "#1 is faster than #2 by factor ~"
             ]
             either r > 1 [r] [1 / r]
         ]
     ]
The first one executes the given block 1,000,000 times; you can change the iteration count, because simple loops usually take less than a second.
The benchmark2 function takes 2 blocks and compares their execution times.
Here is a quick example (creating a string one million times):
     >> benchmark [s: make string! 1024]
     == 0:00:01.297
     >> benchmark [s: make string! 256]
     == 0:00:00.61
     >> benchmark2 [s: make string! 1024] [s: make string! 256]
     Execution time for the #1 job: 0:00:01.313
     Execution time for the #2 job: 0:00:00.593
     #1 is slower than #2 by factor ~ 2.2141652613828

posted by:   Endo       1-Nov-2010/4:57:17-7:00

And here is a real example: I needed 10 million randomly generated rows (to insert into a database).
First, I did it by simply creating a string and appending lines in a loop, then saving it to a file.
This was the slowest one, because it is too expensive to grow a string.
Second, I created a block, appended 10 million blocks, then FORMed (converted) it to a string and saved that to a file. This wasn't that bad.
And last, I opened a file in direct mode (without buffering) and appended the lines to the file. That takes just seconds.
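A sketch of that last approach; the file name, row format, and exact open refinements are my assumptions, not Endo's actual code:

     ; Open the file as a direct, line-based write port and push rows
     ; out as they are generated, instead of growing one big string.
     port: open/direct/lines/write %rows.txt
     loop 10 [    ; 10'000'000 in the real case
         insert port rejoin [random 1000 tab random 1000]
     ]
     close port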
There are some other minor things; for example, if you have loops inside loops, you may need to choose the right functions/commands.
For example, FORALL is a mezzanine function which uses FORSKIP, which is another mezzanine that uses WHILE (native) to iterate.
If you don't really need it, you can use FOREACH (native) instead.
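For comparison, the same sum written both ways (a trivial sketch):

     data: [1 2 3 4 5]

     total: 0
     foreach x data [total: total + x]         ; native FOREACH

     total: 0
     forall data [total: total + first data]   ; mezzanine FORALL
     data: head data   ; older FORALLs leave the series at its tail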
But I would prefer readability over speed most of the time; for example, if a loop runs only once or twice, there is no need to optimize.
Trying to optimize everything is an illness. I know because I had it too. I came from C64 and 68K assembly, where we tried to optimize every single opcode. But that is not the case anymore.
Write readable code and let the compiler, interpreter, JIT, etc. do the optimizations.

posted by:   Endo       1-Nov-2010/5:43:45-7:00