[r6rs-discuss] [Formal] SRFI-39 should be made an R6RS library

From: Robby Findler <robby>
Date: Fri Feb 23 19:27:51 2007

I don't know how interested you are in continuing this benchmarking,
but MzScheme has a model like Chez's, except that it only allocates
space for the parameters when they are changed. Different tradeoffs, I
expect, but it would be at least one other data point.

Robby

On 2/23/07, R. Kent Dybvig <dyb_at_cs.indiana.edu> wrote:
> > > In implementation, yes. But my thread system is based on pthreads, and
> > > you'd have to have quite a few parameters to notice the difference in
> > > thread-creation or memory overhead. Applications don't typically use
> > > very many thread parameters in any case.
> >
> > It's still a problem that you pay for something even if you don't use
> > it: as the number of loaded libraries that use parameters grows, so
> > does the size of every thread. This is probably not a problem in a
> > system with (comparatively) heavyweight threads such as pthreads
>
> Right, which was my point. While I doubt this would be a problem even
> with a lightweight thread system (it hasn't been with SWL), one could use
> instead a sparse representation that creates per-thread locations lazily,
> which should be possible at least for the ones that have their original
> values at the point where the thread is forked.
>
> > , but it is a
> > problem in systems where threads take up only a few words in the
> > absence of parameters.
> > (For reference, the Scheme 48 Scheme code contains 33 fluids---if
> > those were parameters, they would take up 33 slots in *each* thread,
> > which is significant given the lightweight nature of Scheme 48's
> > threads.)
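To make the space concern concrete, here is a sketch (my own illustration, not Scheme 48's code) of the eager shallow-binding scheme being discussed: every parameter is assigned a fixed slot index at creation time, and every new thread allocates a vector with one slot per existing parameter, whether or not it ever touches one.

```scheme
;; Eager shallow binding in miniature (illustrative).  Each parameter
;; claims a slot index; each thread carries a vector of that many slots.
(define *parameter-count* 0)

(define (make-shallow-parameter default)
  (let ((slot *parameter-count*))
    (set! *parameter-count* (+ *parameter-count* 1))
    (cons slot default)))                    ; (slot-index . default)

;; Spawning a thread pays for *parameter-count* slots up front --
;; with 33 parameters, 33 words per thread, used or not.
(define (make-thread-parameter-vector)
  (make-vector *parameter-count* 'unassigned))
```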
>
> Maybe. Being skeptical, however, I installed Scheme48 on my Mac laptop
> and did a couple of quick tests.
>
> Test 1: attempt to determine the space overhead for maintaining a
> length-33 parameter vector. For both tests, I ran the following
> boilerplate code:
>
> ,open threads
> ,open spatial
> (define make-list (lambda (n x) (if (= n 0) '() (cons x (make-list (- n 1) x)))))
> (define spin1 (lambda () (let f () (f))))
> (define spin2 (lambda () (let f ((v (make-vector 33))) (f v))))
> ,collect
>
> For both runs, collect reported 772039 words free after collection.
>
> For run 1, I created 10 threads running spin1, which doesn't allocate a
> vector:
>
> (map spawn (make-list 10 spin1))
> ,collect
>
> Collect reported 766974 words free after collection, a net difference of
> 5065 words.
>
> For run 2, I created 10 threads running spin2, which allocates a vector
> of length 33 (per thread):
>
> (map spawn (make-list 10 spin2))
> ,collect
>
> In this case, collect reported 766604 words free after collection, a net
> difference of 5435 words, which is an increase of 7.3% over the net for
> run 1.
>
> I ran a different test to see how the thread creation-time cost might be
> affected. Again I used two runs, this time with the following boilerplate
> code:
>
> ,open threads
> ,open spatial
> (define make-list (lambda (n x) (if (= n 0) '() (cons x (make-list (- n 1) x)))))
> (define quick1 (lambda () 0))
> (define quick2 (lambda () (make-vector 33)))
>
> For the first run I created 10,000 threads running quick1, which doesn't
> allocate a vector.
>
> ,time (begin (map spawn (make-list 10000 quick1)) 0)
>
> For the second run I created 10,000 threads running quick2, which allocates
> a vector of length 33 (per thread).
>
> ,time (begin (map spawn (make-list 10000 quick2)) 0)
>
> The times reported over 10 tries of the first run ranged from .47 to .52
> seconds, and for 10 tries of the second, .47 to .53 seconds. The average
> times were .482 and .483 seconds respectively, or essentially the same
> given the time variances.
>
> So, while 7.3% might or might not be considered significant space
> overhead, there's essentially no difference in time. Of course, with more
> thread parameters, both space and time overhead would go up. It's
> impossible to extrapolate from the timing numbers, but extrapolating
> from the space numbers, we'd pass 100% overhead, i.e., double the space
> taken up by each running thread, at about 451 thread parameters.
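The 451 figure follows directly from the two measured differences; the arithmetic can be replayed at the REPL:

```scheme
;; Measured above: 10 threads cost a net 5065 words without the
;; vectors and 5435 words with length-33 vectors -- 370 extra words.
(define base-words-per-thread (/ 5065 10))        ; 506.5 words/thread
(define words-per-parameter (/ 370 (* 10 33)))    ; ~1.12 words/param/thread
;; 100% overhead is reached when per-parameter cost equals the base:
(exact->inexact (/ base-words-per-thread words-per-parameter))
;; ~451.7, i.e., about 451 thread parameters
```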
>
> > (Pure deep binding doesn't have this problem.)
>
> Deep binding has undesirable overhead of its own, as I'm sure you are
> aware, i.e., reference cost proportional to the number of current dynamic
> bindings.
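That reference cost is easy to see in a minimal deep-binding sketch (again illustrative; a real implementation would use dynamic-wind and per-thread state rather than a global):

```scheme
;; Deep binding in miniature: dynamic bindings live in a list of
;; (parameter . value) frames, so every reference walks the list --
;; cost proportional to the number of current dynamic bindings.
(define *dynamic-bindings* '())

(define (make-deep-parameter default)
  (letrec ((p (lambda ()
                (cond ((assq p *dynamic-bindings*) => cdr)  ; O(n) walk
                      (else default)))))
    p))

(define (with-binding param value thunk)
  (let ((outer *dynamic-bindings*))
    (set! *dynamic-bindings* (cons (cons param value) *dynamic-bindings*))
    (let ((result (thunk)))
      (set! *dynamic-bindings* outer)       ; restore on the way out
      result)))
```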
>
> Kent
>
> _______________________________________________
> r6rs-discuss mailing list
> r6rs-discuss_at_lists.r6rs.org
> http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
>
Received on Fri Feb 23 2007 - 19:27:45 UTC

This archive was generated by hypermail 2.3.0 : Wed Oct 23 2024 - 09:15:01 UTC