> ruby test.rb
I, [2018-09-14T05:14:28.929558 #30046]  INFO -- : true
Warming up --------------------------------------
              native     1.000  i/100ms
            baseline     2.000  i/100ms
                lazy     1.000  i/100ms
       via_transduce     1.000  i/100ms
Calculating -------------------------------------
              native     11.293  (± 0.0%) i/s -     57.000  in   5.054247s
            baseline     21.711  (± 4.6%) i/s -    110.000  in   5.077083s
                lazy      5.798  (± 0.0%) i/s -     29.000  in   5.009277s
       via_transduce      3.549  (± 0.0%) i/s -     18.000  in   5.078086s

Comparison:
            baseline:       21.7 i/s
              native:       11.3 i/s - 1.92x  slower
                lazy:        5.8 i/s - 3.74x  slower
       via_transduce:        3.5 i/s - 6.12x  slower

Calculating -------------------------------------
              native   169.602M memsize (     0.000  retained)
                        40.000  objects (     0.000  retained)
                         0.000  strings (     0.000  retained)
            baseline    10.216M memsize (     0.000  retained)
                        10.000  objects (     0.000  retained)
                         0.000  strings (     0.000  retained)
                lazy    52.649M memsize (     0.000  retained)
                         1.060M objects (     0.000  retained)
                         0.000  strings (     0.000  retained)
       via_transduce    10.236M memsize (     0.000  retained)
                       340.000  objects (     0.000  retained)
                         0.000  strings (     0.000  retained)

Comparison:
            baseline:   10215920 allocated
       via_transduce:   10235920 allocated - 1.00x more
                lazy:   52648720 allocated - 5.15x more
              native:  169601920 allocated - 16.60x more
# frozen_string_literal: true

require 'benchmark/ips'
require 'benchmark/memory'
require 'logger'

ARRAY = (0..530000).to_a.freeze

def native
  ARRAY
    .map { |x| x + 10 }
    .map { |x| x * 2 }
    .select { |x| x.even? }
    .select { |x| x % 5 == 0 }
end

def lazy
  ARRAY
    .lazy
    .map { |x| x + 10 }
    .map { |x| x * 2 }
    .select { |x| x.even? }
    .select { |x| x % 5 == 0 }
    .to_a
end

def baseline
  ARRAY.each_with_object([]) do |x, result|
    x = (x + 10) * 2
    result << x if x.even? && x % 5 == 0
  end
end

require 'ramda'

def transduce(transformation, reducing_fn, initial, input)
  input.reduce(initial, &transformation.call(reducing_fn))
end

PUSHES = -> list, item { list.push(item) }

def mapping(&fn)
  -> reducing_fn {
    -> result, input {
      reducing_fn.call(result, fn.call(input))
    }
  }
end

def selecting(&predicate_fn)
  -> reducing_fn {
    -> result, input {
      if predicate_fn.call(input)
        reducing_fn.call(result, input)
      else
        result
      end
    }
  }
end

def via_transduce
  transformation = Ramda.compose(
    mapping { |x| x + 10 },
    mapping { |x| x * 2 },
    selecting { |x| x.even? },
    selecting { |x| x % 5 == 0 }
  )

  transduce(
    transformation,
    PUSHES,
    [],
    ARRAY
  )
end

def logger
  return @logger if defined? @logger

  @logger = Logger.new($stdout)
  @logger.level = Logger::INFO
  @logger
end

def test
  logger.debug native
  logger.debug lazy
  logger.debug baseline
  logger.debug via_transduce

  result = (native == lazy) && (lazy == baseline) && (baseline == via_transduce)
  logger.info result
  result
end

exit unless test

Benchmark.ips do |x|
  x.report('native') { native }
  x.report('baseline') { baseline }
  x.report('lazy') { lazy }
  x.report('via_transduce') { via_transduce }

  x.compare!
end

Benchmark.memory do |x|
  x.report('native') { 10.times { native } }
  x.report('baseline') { 10.times { baseline } }
  x.report('lazy') { 10.times { lazy } }
  x.report('via_transduce') { 10.times { via_transduce } }

  x.compare!
end
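To make the transducer plumbing above concrete, here is a small worked example. It is only a sketch: it reuses the mapping, selecting, transduce and PUSHES definitions from test.rb on a hypothetical 10-element input and is not part of the original gist.

# Sketch: the composed transformation makes a single pass over the input,
# pushing each surviving value into the accumulator, with no intermediate arrays.
small = (1..10).to_a

transformation = Ramda.compose(
  mapping   { |x| x + 10 },
  mapping   { |x| x * 2 },
  selecting { |x| x.even? },
  selecting { |x| x % 5 == 0 }
)

p transduce(transformation, PUSHES, [], small)
# => [30, 40]  (only x = 5 and x = 10 make (x + 10) * 2 divisible by 5)

Note that although Ramda.compose composes functions right to left, the data flows through the listed steps left to right, in the same order as the chained map/select calls, because each transducer wraps the reducing function produced by the ones after it.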
@konung, speed is almost the same for all three cases. There is a big difference in memory consumption, and not in favor of lazy.
The other thing to consider is that Enumerable methods are all optimized. If you were to throw this up before your test, it'd skew the result quite a bit:
class Array
  def map
    new_array = []
    each { |v| new_array << yield(v) }
    new_array
  end

  def select
    new_array = []
    each { |v| new_array << v if yield(v) }
    new_array
  end
end

Now if someone could figure out how to write both transduction and composition in a C extension, those numbers would get really interesting. Unfortunately I do not have that skillset currently, but may pursue it later for the sake of amusement.
This JS benchmark was something else: https://jlongster.com/Transducers.js-Round-2-with-Benchmarks
Thanks @konung for the benchmarks!
I'd meant to reply, but I was still digging through my own benchmarks to see if there was a clean way to demonstrate a few corner cases; this does quite nicely.
> If you were to throw this up before your test, it'd skew the result quite a bit:
It won't be lazy anymore; this code is not about laziness.
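One way to see that: the lazy chain dispatches to Enumerator::Lazy#map rather than Array#map, so a pure-Ruby redefinition of Array#map only affects the eager native variant. Here is a minimal standalone check (a sketch, not part of the gist; the names are made up for illustration):

# Sketch: redefining Array#map leaves the lazy pipeline untouched.
numbers = (0..10).to_a.freeze

class Array
  alias_method :original_map, :map

  def map(&block)
    warn 'Array#map called' # fires only for the eager call below
    original_map(&block)
  end
end

eager = numbers.map { |x| x + 10 }            # prints "Array#map called"
lazy  = numbers.lazy.map { |x| x + 10 }.to_a  # silent: Enumerator::Lazy#map is used

p eager == lazy # => true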
V8 doesn't have optimizations for basic JavaScript functions?
Thanks for the reply and for putting up a benchmark. My point was that lazy is slow, and slower than 'baseline'; the other commenter made it out like lazy was fast, which it is not.
Also, there is a big caveat about lazy: its efficiency highly depends on the size of the array. (By the way, my quip about it being slow comes from the horse's mouth, the author of lazy himself: https://railsware.com/blog/2012/03/13/ruby-2-0-enumerablelazy/)
Here are my results for your benchmark, running on a 2015 MacBook with High Sierra and the Ruby 2.6.0 previews.
Memory consumption is exactly the same, but the i/s numbers are off.
HOWEVER
If you run it for a smaller array (let's say 50 elements or 1000 elements), the numbers actually jump around quite a bit. In my initial test I ran it for 50 elements, which is the size of the sets I often have to deal with (lots of small arrays); a sketch for re-running the suite at different sizes follows below.
For 50 elements
For 1000 elements
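For reference, one way to reproduce the size dependence is to parameterize the array length instead of hard-coding 530000. This is only a sketch under the assumption that the rest of test.rb stays unchanged; the ARRAY_SIZE environment variable is made up for illustration and would replace the existing ARRAY line.

# Sketch: read the array size from the environment, e.g.
#   ARRAY_SIZE=50 ruby test.rb
#   ARRAY_SIZE=1000 ruby test.rb
size  = Integer(ENV.fetch('ARRAY_SIZE', 530000))
ARRAY = (0..size).to_a.freeze

With arrays that small, each reported iteration is dominated by fixed per-call overhead (building the Enumerator::Lazy chain, the lambda indirection in the transducer) rather than per-element work, which would help explain why the numbers jump around between runs.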