Hi guys,

As some of you have already noticed, we have been working on a branch for a few weeks, and I thought it's now time to describe what has been done and where we are going exactly. I wrote a blog entry here: http://www.macruby.org/blog/2009/03/28/experimental-branch.html

There are two big features in this branch: an LLVM-based JIT compiler and a new IO subsystem. Both are performance-related; we really need to be faster.

Current status of the branch:

- The compiler can now handle most of the Ruby AST. It compiles nodes directly from the parser in a lazy fashion.
- Only JIT compilation for the moment (AOT is planned for later).
- The VM is still under development; it's not as complete as the compiler yet.
- Early performance benchmarks are very promising.
- At the time of this writing, we pass the vast majority of the language RubySpecs (a week ago we weren't even able to bootstrap mspec!), about 1190 expectations. Work is very active in this area; we are also making sure the specs run with the original Ruby 1.9 and writing missing specs.
- The new IO subsystem is mostly functional, but there are still many methods that are not implemented yet, and we are working on this. Once it is complete, we will integrate a default runloop in the VM and expose asynchronous IOs.

For the near future, the goals are:

- Improve the compiler. Compile time has not really been optimized yet.
- Be able to run IRB. We are almost there.
- Remove the libffi code used to call C/ObjC implementations and instead JIT-compile stubs and insert them in the dispatcher cache. We should then be much closer to ObjC (and maybe faster once we enable secondary compilations of hotspots and inline the stubs).
- Rewrite the BridgeSupport side using LLVM types.
- Implement full concurrent threading! (This is a big one :-))
- And many more (check the TODO file for more info).

Once the branch is as functional as trunk, it will be merged into trunk, and we will then work on stabilizing it and later release it as 0.5. The schedule for this release is unknown; it will be released when it is ready (preferably this year, though).

If you want to help, let me know!

Laurent
Will OSA support be a 0.5 or a 0.6 task?
Most likely 0.6, unless someone volunteers to do it now :-)

Laurent

On Mar 28, 2009, at 12:48 PM, Jordan Breeding wrote:
Will OSA support be a 0.5 or a 0.6 task?
Well, if you offer guidance and the target for 0.5 really is just "this year", then I might be able to give it a shot and help out. Of course, that also depends on whether I get a job immediately after graduation in August and how busy that keeps me.

On Mar 28, 2009, at 14:56, Laurent Sansonetti <lsansonetti@apple.com> wrote:
Most likely 0.6, unless someone volunteers to do it now :-)
Laurent
So, I think there are two things to do in order to achieve that goal:

1) Make MacRuby an OSA component, so that it's recognized by the system (osalang, Script Editor, etc.). I have no idea how to implement this. If you're willing to check it out, the documentation is here: http://developer.apple.com/DOCUMENTATION/Carbon/Reference/Open_Scripti_Archi... Unfortunately, I don't think anybody has made an open-source OSA component yet. In fact, AFAIK the only other OSA component is a JavaScript implementation which is no longer maintained (and closed-source).

2) Bundle an API to send Apple events. This part is not hard, since we do have RubyOSA, rb-appscript, and ScriptingBridge.framework to reuse. I guess we would have to have a design discussion first and see which model we would pick; it's even possible to reuse existing code.

Let me know if you want to investigate that and I will try to help :-)

Laurent
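For illustration only, here is a minimal sketch of what option 2 could look like if the ScriptingBridge route were chosen. The framework method is MacRuby's existing bridge loader; the bundle identifier and the calls below are just an example, not a committed API:

  framework 'ScriptingBridge'

  # Ask Scripting Bridge for a proxy object representing the Finder, then
  # send it an Apple event by calling one of its Objective-C methods.
  finder = SBApplication.applicationWithBundleIdentifier("com.apple.finder")
  finder.activate   # brings the Finder to the front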
On Mar 28, 2009, at 1:00 PM, Jordan Breeding wrote:

Well if you offer guidance and the target for 0.5 really is just "this year" then I might be able to give it a shot and help out.
Of course that also depends on whether I get a job immediately after graduation in August and how busy that keeps me.
Hi Laurent,

On Mar 29, 2009, at 11:29 AM, Laurent Sansonetti wrote:
1) making MacRuby an OSA component, so that it's recognized by the system (osalang, Script Edit, etc.). I have no idea on how to implement this. If you're willing to check it out, the documentation is there:
http://developer.apple.com/DOCUMENTATION/Carbon/Reference/Open_Scripti_Archi...
I unfortunately think that nobody made an open-source OSA component yet. In fact, AFAIK the only other OSA component is a JavaScript implementation which is not maintained anymore (and closed-source).
Hmm, have you looked at:

OSA Components Release Notes
http://www.vcn.bc.ca/~philip/osa/
The OSA components included in this release enable scripting languages like Perl, Python, Ruby, sh, and Tcl to work in the OSA environment at peer level with AppleScript.

TclOSAScript - Exec for MacTcl
http://www.sagecertification.org/publications/library/proceedings/tcl97/full...
TclOSAScript provides the ability for MacTcl scripts to run scripts in any other OSA compatible language on the Macintosh. Since the OSA is the standard mechanism for interapplication communication on the Mac, this allows MacTcl to run other applications, and provides an exec like facility (though arguably using a much richer communication model).

Frontier / UserLand (Mac OS 9?)
http://docserver.userland.com/osa/
http://www.scripting.com/frontier/snippets/userTalkEverywhere.html

Not sure if any of these is what you were looking for, but figured I may as well pass them along, if only for background.

-- Ernie P.
This is awesome news, Laurent! You have done amazing work to get this far, and I know you will get it all the way. Eloy and Vincent also helped a lot with both the VM and the specs and tests. Thanks to all of you!

Do you have any plans for enlisting specific support you need to move things along faster?

Best,
Rich
Laurent Sansonetti wrote:
Hi guys,
As some of you already noticed we have been working on a branch for a few weeks and I thought it's now time to describe what has been done and where we are going exactly.
Very cool stuff... some low-level benchmarks seem to show really excellent performance. I know you probably don't need bugs filed against 'experimental' yet, but I had a couple of questions:

1. Is there a way to tell what's compiling and what isn't? Some benchmarks I run are fast, and some others are incredibly slow. I'd like to know what's representative of actual performance.

2. Is there a way to turn off peephole optimizations? Lots of benchmarks out there, including some we have in JRuby, have a lot of dead code (like repeated assignments to a local variable). That's going to confound a lot of benchmarking, since most of the benchmark isn't actually being run. JRuby has a flag to turn off peephole optimizations (though I often forget to turn it on).

3. You probably know about these, but I noticed there are numerous problems with eval:
* Running an eval benchmark caused the system to blow up, claiming it ran out of space for machine code.
* Bindings appear to be missing altogether.
* The binding associated with a block does not appear to work, as in p = proc { }; eval 'something', p.binding. This troublesome feature is one reason many local variable optimizations are much more difficult.

I'm looking forward to seeing future results and getting some guidance on where we can expect to see the best performance right now. I'd also love to talk about some of the techniques you're using to see if they'd be applicable to JRuby.

- Charlie
Hi Charles,

On Mar 28, 2009, at 9:46 PM, Charles Oliver Nutter wrote:
Laurent Sansonetti wrote:
Hi guys, As some of you already noticed we have been working on a branch for a few weeks and I thought it's now time to describe what has been done and where we are going exactly.
Very cool stuff...some low-level benchmarks seem to have really excellent performance.
Thank you :)
I know you probably don't need bugs filed against 'experimental' yet, but I had a couple questions:
1. Is there a way to tell what's compiling and what isn't? Some benchmarks I run are fast, and some others are incredibly slow. I'd like to know what's representative of actual performance.
Currently everything is compiled. It is possible to dump the LLVM IR by turning on the ROXOR_DUMP_IR variable in roxor.cpp. Generally when something is incredibly slow it's because of a runtime bug and not the compiler, though.
2. Is there a way to turn off peephole optimizations? Lots of benchmarks out there, including some we have in JRuby, have a lot of dead code (like repeated assignments to a local variable). That's going to confound a lot of benchmarking, since most of the benchmark isn't actually being run. JRuby has a flag to turn off peephole optz (though I often forget to turn it on).
It's possible by modifying the source code and commenting out the calls to createInstructionCombiningPass() and createCFGSimplificationPass(), but I do not recommend removing these because I think it would break the way we compile DWARF exception handlers in blocks.

In my personal benchmark suite I try to make sure these optimizations do not produce false-positive numbers when comparing against YARV.
3. You probably know about these, but I noticed there are numerous problems with eval:
* Running an eval benchmark caused the system to blow up, claiming it ran out of space for machine code
Yes, currently calling eval with a literal string will call the JIT, so doing this in a loop will most likely eat all your memory :-)

I plan to address that later by falling back to the LLVM interpreter in some cases, but I doubt this will affect real-world applications. Also, there are ways to improve the JIT (nothing has been done yet).
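To make the pattern concrete, here is a hypothetical micro-benchmark of the shape being discussed; under the behavior Laurent describes, each iteration of the first loop hands a string to eval and therefore triggers a new JIT compilation, while the second loop compiles its body only once:

  100_000.times do |i|
    eval("i * 2")   # a fresh compilation per iteration; memory keeps growing
  end

  100_000.times do |i|
    i * 2           # compiled once, reused on every iteration
  end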
* bindings appear to be missing altogether
* The binding associated with a block does not appear to work, as in p = proc { }; eval 'something', p.binding. This troublesome feature is one reason many local variable optimizations are much more difficult.
Yes, as you noticed, Binding has not been implemented yet :-( It is at the very top of my TODO list (it's needed for IRB), and I already know how to implement it without disabling our current "local variables into CPU registers" optimization.
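For readers following along, this is the behavior a compliant Binding has to provide (plain Ruby, nothing MacRuby-specific): a binding captures the local variables of its scope, and eval can read and write them through it, which is exactly the kind of access a locals-in-registers optimization has to account for:

  def make_binding
    x = 1
    binding           # capture this method's local scope
  end

  b = make_binding
  eval("x = x + 41", b)   # mutate the captured local through the binding
  puts eval("x", b)       # => 42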
I'm looking forward to seeing future results and getting some guidance on where we can expect to see the best performance right now. I'd also love to talk about some techniques you're using to see if they'd be applicable for JRuby.
Absolutely :-) Laurent
Laurent Sansonetti wrote:
It's possible by modifying the source code and comment the call to createInstructionCombiningPass() and createCFGSimplificationPass(), but I do not recommend to remove these because I think it would break the way we compile Dwarf exception handlers in blocks.
# In my personal benchmark suite I try to make sure these optimizations do not provide false positive numbers when comparing against YARV.
Ok, it would be nice if there were a simpler way to turn those off; I don't want to break 'experimental' completely, but it would be nice to get real benchmark results in these cases.
Yes, currently calling eval with a literal string will call the JIT, so doing this in a loop will most likely eat all your memory :-)
I plan to address that later by fallbacking to the LLVM interpreter in some cases, but I doubt this will affect real-world applications. Also, there are ways to improve the JIT (nothing has been done yet).
In Rails, as recently as 2.2 (I haven't checked 2.3) there's a small bit of eval'ed code used to look up constants. Before we fixed our parser performance, we found it was a bottleneck. So that's at least one real-world case.
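For context, the pattern being referred to is constant lookup via a string eval, roughly of this shape (a paraphrase of old ActiveSupport's constantize, not the exact Rails code):

  def constantize(camel_cased_word)
    # builds "::Foo::Bar" as a string and lets the interpreter resolve it
    Object.module_eval("::#{camel_cased_word}", __FILE__, __LINE__)
  end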
Yes as you noticed Binding has not been implemented yet :-( This is on the very top of my TODO list (needed for IRB) and I already know how to implement it without disabling our current "local variables into CPU registers" optimization.
How will you do that? Given that a block can be used as a binding, you can't statically inspect contained blocks to determine which variables are used and which aren't. For example, this code:

  def foo
    a = 1
    bar { }
    puts a
  end

  def bar(&b)
    eval "a = 2", b.binding
  end

  foo

This should print out "2" but prints out "1" in 'experimental' right now.

This one feature is the primary reason JRuby can only put local variables in registers when there are no blocks present. When there's a block present, any variable can be accessed via its binding at any time. I've argued for this feature to be removed, but I have been unsuccessful. Current JRuby also puts local variables in registers (via HotSpot doing so for Java locals) when there are no blocks present.

- Charlie
On Mar 28, 2009, at 10:38 PM, Charles Oliver Nutter wrote:
Laurent Sansonetti wrote:
It's possible by modifying the source code and comment the call to createInstructionCombiningPass() and createCFGSimplificationPass(), but I do not recommend to remove these because I think it would break the way we compile Dwarf exception handlers in blocks. # In my personal benchmark suite I try to make sure these optimizations do not provide false positive numbers when comparing against YARV.
Ok, it would be nice if there were a simpler way to turn those off; I don't want to break 'experimental' completely, but it would be nice to get real benchmark results in these cases.
I don't think it's a good idea to provide a way to turn off optimizations and I do not see the point in benchmarking dead code in general (I would never do this).
Yes, currently calling eval with a literal string will call the JIT, so doing this in a loop will most likely eat all your memory :-) I plan to address that later by fallbacking to the LLVM interpreter in some cases, but I doubt this will affect real-world applications. Also, there are ways to improve the JIT (nothing has been done yet).
In Rails, as recently as 2.2 (I haven't checked 2.3) there's a small bit of eval'ed code used to look up constants. Before we fixed our parser performance, we found it was a bottleneck. So that's at least one real-world case.
Good to know, I just hope they are not doing this 30 million times in a loop or something :-)
Yes as you noticed Binding has not been implemented yet :-( This is on the very top of my TODO list (needed for IRB) and I already know how to implement it without disabling our current "local variables into CPU registers" optimization.
How will you do that? Given that a block can be used as a binding, you can't statically inspect contained blocks to determine which variables are used and which aren't. For example, this code:
  def foo
    a = 1
    bar { }
    puts a
  end

  def bar(&b)
    eval "a = 2", b.binding
  end
foo
This should print out "2" but prints out "1" in 'experimental' right now.
Yes, Binding is not implemented yet. Don't worry, I have read the MRI source code and know how Binding works and how to provide a compliant implementation. Please stay tuned.

Laurent
Laurent Sansonetti wrote:
I don't think it's a good idea to provide a way to turn off optimizations and I do not see the point in benchmarking dead code in general (I would never do this).
I think it's actually very useful to provide a way to turn off specific optimizations, if only because they may eventually run into cases where they break something. But they're also useful when writing benchmarks that have dead code on purpose...

For some benchmarks it's very difficult to get a reasonable measurement without forcing some dead code to run. For example, benchmarking a single local variable access gets completely lost in the method or block invocation that surrounds it. By forcing several successive local variable accesses to execute, you get a better picture of what the actual cost is.

At any rate, if you have good benchmarks for things like local variables, we can certainly use those for now.
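As an illustration of the kind of micro-benchmark being described (an assumed example, not taken from the JRuby or MacRuby suites), the repeated reads of a below are the operation being measured, even though an optimizer is free to treat most of them as dead code:

  require 'benchmark'

  def local_access
    a = 1
    i = 0
    while i < 1_000_000
      a; a; a; a; a; a; a; a; a; a   # ten successive local variable reads
      i += 1
    end
  end

  puts Benchmark.measure { local_access }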
Good to know, I just hope they are not doing this 30 million times in a loop or something :-)
Well, it gets called numerous times per request.

In the end, though, Rails performance has not actually been very execution-bound. We've had Ruby code running faster than Ruby 1.8 for almost two years, but we only recently started to post 10-20% performance gains for Rails itself. Rails performance, and probably most large applications' performance, seems heavily dependent on core classes being as blazing fast as possible. It's a balancing act, and we often completely ignore execution performance for a whole release to work on core classes instead.
Yes, Binding is not implemented yet. Do not worry I have read the MRI source code and know how Binding works and how to provide a compliant implementation. Please stay tuned.
Well, I'd certainly like to hear what you're planning for this particular case. Just let me know when you're ready to talk about it. I've gone over several options when optimizing JRuby, and the block-as-binding issue makes most of them infeasible. - Charlie
Hi Charlie,

I don't think/hope you do it on purpose, but it seems that you're asking questions just to prove that Laurent is wrong and that whatever he does will end up slowing down the current experimental branch. I understand that you are upset about Antonio Cangiano's blog post with early benchmarks, but I have a hard time telling if you are trying to help or trying to hurt the project. From my viewpoint, it seems like you are disseminating negative information (speculation) designed to undermine the credibility of MacRuby.

Let's just wait and see.

Regards,
- Matt
Matt Aimonetti wrote:
Hi Charlie,
I don't think/hope you do it on purpose, but it seems that you're asking questions just to prove that Laurent is wrong and that whatever he will do will end up slowing down the current experimental branch.
I think you're misinterpreting me. I'd love for Laurent to be right, and I'd love to know how to get around the cases that end up slowing down JRuby. I sincerely hope it's possible, since it would mean JRuby can probably do whatever MacRuby does in those cases. And I may be able to help if some of the optimization ideas are discussed more openly; I had similar discussions with IronRuby folks at RubyConf 2007 and saved them going down a path that I knew would eventually break code. - Charlie
Charles Oliver Nutter wrote:
I think you're misinterpreting me. I'd love for Laurent to be right, and I'd love to know how to get around the cases that end up slowing down JRuby. I sincerely hope it's possible, since it would mean JRuby can probably do whatever MacRuby does in those cases. And I may be able to help if some of the optimization ideas are discussed more openly; I had similar discussions with IronRuby folks at RubyConf 2007 and saved them going down a path that I knew would eventually break code.
Hmm, I just realized "more openly" sounded bad. I just mean I'm here if anyone wants to discuss ideas, and I might be able to help. Carry on! :) - Charlie
Thanks, Charles, for clarifying the situation :) For once, I'm glad to see I was wrong.

- Matt
participants (6)
- Charles Oliver Nutter
- Ernest N. Prabhakar, Ph.D.
- Jordan Breeding
- Laurent Sansonetti
- Matt Aimonetti
- Richard Kilmer