It's not supported in the text protocol because it'd be very, very complicated to express, no clients would support it, and it would provide very little that you can't already do from the text protocol. It's supported in the binary protocol because it's a trivial use case of binary operations. Spymemcached supports it implicitly -- just do a bunch of sets and magic happens: dustin.github.com/2009/09/23/spymemcache....
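To make "a trivial use case of binary operations" concrete: the binary protocol defines a quiet set (SETQ, opcode 0x11) whose success response is suppressed, so a client can stream many of them back-to-back and get an implicit multi-set. Below is a minimal sketch of building one such packet by hand, following the published binary protocol header layout; the class and method names are mine, for illustration only, and a real client like spymemcached does all of this for you.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: hand-building a binary-protocol SETQ ("quiet set") request.
// Because SETQ suppresses the success response, a client can write many
// of these packets in a row on one connection -- an implicit multi-set.
public class QuietSet {
    static byte[] buildSetQ(String key, byte[] value, int flags, int expiry) {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        int extrasLen = 8;                      // 4-byte flags + 4-byte expiration
        int bodyLen = extrasLen + k.length + value.length;
        ByteBuffer buf = ByteBuffer.allocate(24 + bodyLen); // 24-byte fixed header
        buf.put((byte) 0x80);                   // magic: request
        buf.put((byte) 0x11);                   // opcode: SETQ (quiet set)
        buf.putShort((short) k.length);         // key length
        buf.put((byte) extrasLen);              // extras length
        buf.put((byte) 0x00);                   // data type
        buf.putShort((short) 0x0000);           // vbucket id / reserved
        buf.putInt(bodyLen);                    // total body length
        buf.putInt(0);                          // opaque (echoed back on error)
        buf.putLong(0L);                        // CAS (0 = unconditional set)
        buf.putInt(flags);                      // extras: flags
        buf.putInt(expiry);                     // extras: expiration
        buf.put(k);                             // key
        buf.put(value);                         // value
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] pkt = buildSetQ("greeting",
                "hello".getBytes(StandardCharsets.UTF_8), 0, 0);
        System.out.println("packet length: " + pkt.length);
    }
}
```

Note how each packet carries its own flags and expiration, which is also why a single text-protocol command line for many keys would be so awkward to express.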
Also, I read through your memcached optimizations and am glad that you already support it - thank you! – serverman Jul 21 '10 at 22:18
How would you express multi-set in the memcached protocol differently than you currently express it? Keep in mind, each key has flags, expiration, and arbitrary data associated with it. – Dustin Jul 21 '10 at 22:57
I don't know a lot about memcached internals, but I assume writes have to be blocking, atomic operations. I assume that by allowing multiple set operations to be batched, you could block all reads for a long time (or risk a get occurring while only half of a batch of writes had been applied). Forcing writes to be done individually allows them to be interleaved fairly with gets.
Thank you - yes, I thought about that, and to some extent it makes sense, but I believe it could be resolved in the implementation. If you are doing 100 sets individually, one after the other, you are still better served by doing one network round trip and then performing the sets interleaved with small (configurable) "yield" times in between, so you do not lock the associated data structures for long periods of time. – serverman Jul 21 '10 at 20:16
100 sets in a batch isn't different from 100 sets in series. The difference in a "multi-set" -- much like the difference in a "multi-get" -- is in reducing network IO. – Dustin Jul 21 '10 at 21:44
Dustin, yes, that is what I meant to say - you said it better... – serverman Jul 21 '10 at 22:34
I'm not sure I understand. If I send off a "multi-set" with 100 requests, am I willing to accept the possibility that if a get is done, it may get serviced after only 50 of those sets have been applied? That seems unacceptable to me. – Gian Jul 21 '10 at 22:42
Neither multi-get nor multi-set implies any sort of transaction isolation. Memcached has no such concept. – Dustin Jul 21 '10 at 3:11
I would imagine that the restriction against multi-set exists to avoid collisions when writing cached values to memcached. As an object cache, I can't foresee a case where you would need transaction-style writes. That use case seems less suited to a caching layer and better suited to the underlying database.
If sets come in interleaved from different clients, it is most likely the case that for one key, the last one wins, or is at least close enough, until the cache is invalidated and a newer value is written. As Gian mentions, there don't seem to be any good reasons to block reads from the cache while several or many writes to the cache happen.
The problem is that if we assume doing X writes takes N*X seconds, we could potentially block everything else for a long time. We're also forced to queue all the writes that are waiting, as well as read requests. If we choose instead to allow reads to be serviced while there are writes in the queue, then we are almost certainly going to return a lot of stale data.
– Gian Jul 21 '10 at 19:01.
Can anyone explain why the memcached folks decided to support multi-get but not multi-set? By "multi" I mean an operation involving more than one key (see the protocol at code.google.com/p/memcached/wiki/NewComm...). So you can get multiple keys in one shot (the basic advantage being the standard saving you get from doing fewer round trips), but why can you not do bulk sets?
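To illustrate the asymmetry, here is roughly what it looks like on the wire in the text protocol (an illustrative sketch of the documented command format, not a real capture):

```
get key1 key2 key3        <- one request line can name many keys
VALUE key1 0 5
hello
VALUE key2 0 5
world
END

set key1 0 0 5            <- but each set is its own command, with its
hello                        own <flags> <exptime> <bytes> and data block
STORED
```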
My theory is that memcached was meant to handle a small number of sets, done individually (e.g. on a cache read miss). But I still do not see how multi-set really conflicts with the general philosophy of memcached. I looked at the client features at code.google.com/p/memcached/wiki/NewComm... and it seems that some clients potentially do support "multi-set" (why only in the binary protocol?). I am using spymemcached for Java, btw.
memcached – asked Jul 21 '10 at 18:28 – serverman