Sharing from my own experience (a one-year-old project with thousands of user requests per second). Usually there were three scenarios for me:

1. Add new information to existing structures (like an "email" field to a user)
2. Remove or change existing values in existing structures (like changing the format of some field)
3. Drop stuff from the database

For 1 I follow a simple strategy: degrade gracefully. E.g. if a user doesn't have an email record, treat it as an empty email. That has worked every time; see the sketch below.
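A minimal sketch of that graceful-degradation read, using the redis-py client. The "user:<id>" hash layout and the field name are illustrative assumptions, not the actual schema:

    import redis

    r = redis.Redis()

    def get_user_email(user_id):
        # Records written before the "email" field existed return None;
        # treat that exactly like an empty email instead of failing.
        value = r.hget(f"user:{user_id}", "email")
        return value.decode() if value is not None else ""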
For 2 and 3 it depends on whether the data can be changed/calculated/fixed before releasing or only after. I run a job on the database that does all the work for me; for a few million keys it takes considerable time (minutes). If that job can be run only after I release the new code, then degrading gracefully helps a lot: I simply release and then run the job.

PS: If you affect a lot of keys in Redis, it is very important to use redis.io/topics/pipelining. It saves a lot of time:

1. Take a list of all affected keys or records (i.e. everything you want to fix in any way) in a pipeline
2. Do whatever you want on them; if possible, queue the write operations into a pipeline too
3. Send the queued operations to Redis
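A sketch of such a pipelined fix-up job with redis-py; the "user:ids" index set and the email normalization are assumed examples, not the original job:

    import redis

    r = redis.Redis()

    # 1. Take the list of all affected ids from an index set
    #    (see the indexing note below).
    user_ids = list(r.smembers("user:ids"))

    # 2. Read every affected record in one pipelined round trip
    #    instead of millions of individual commands.
    pipe = r.pipeline(transaction=False)
    for uid in user_ids:
        pipe.hget(f"user:{uid.decode()}", "email")
    emails = pipe.execute()

    # 3. Queue the write operations into a pipeline too, then send
    #    them to Redis in one shot.
    pipe = r.pipeline(transaction=False)
    for uid, email in zip(user_ids, emails):
        if email is not None:
            # Example fix-up: normalize the field's format.
            pipe.hset(f"user:{uid.decode()}", "email", email.decode().lower())
    pipe.execute()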
It is also very important to build indexes of your structures. I keep sets with ids and then simply iterate over SMEMBERS(set_with_ids). That is much, much better than scanning the whole keyspace with the KEYS command.
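For example (same illustrative key names as above), the index set is kept in sync at write time, so a later migration job never needs KEYS:

    import redis

    r = redis.Redis()

    def create_user(user_id, name):
        pipe = r.pipeline(transaction=False)
        pipe.hset(f"user:{user_id}", mapping={"name": name})
        pipe.sadd("user:ids", user_id)  # keep the index in sync with the data
        pipe.execute()

    # Iterating the index touches only real user ids, unlike KEYS "user:*",
    # which scans the entire keyspace and blocks the server while it runs.
    for uid in r.smembers("user:ids"):
        print(uid.decode())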
For extremely simple versioning, you could use different database numbers. This could be quite limiting in cases where almost everything is the same between two versions, but it is also a very clean way to do it if it works for you.
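A minimal sketch of that idea with redis-py, where the SELECT-able database index stands in for the schema version (the version-to-db mapping is an assumption for illustration):

    import redis

    # db 0 holds the old schema, db 1 the migrated copy; flipping the
    # application to the new version is just a change of the db argument.
    old = redis.Redis(db=0)
    new = redis.Redis(db=1)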
Databases have been discouraged for a long time already; they won't be supported in the future (soon). – kmerenkov Aug 5 at 12:28