How to dynamically scale StarCluster/qsub/EC2 to run parallel jobs across multiple nodes?

So, here is my suggestion:

1. Extract your files to S3.
2. Launch StarCluster.
3. From your bash script, qsub a job for every few files (it is usually more efficient for one job to work on, say, 10 files than to have a separate job for every single file); see the sketch after this list.
4. Your application must read its input from, and write its output to, S3.
5. When the queue is empty, have a script check the results to make sure all jobs ran correctly, and reschedule any job whose output is missing.
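Below is a minimal sketch of steps 3-5, assuming the input keys are listed in a `manifest.txt` file, that the worker job script is named `process_batch.sh` (hypothetical), and that the AWS CLI (`aws s3 ...`) is installed on the nodes; older StarCluster setups may use `s3cmd` or boto instead.

```bash
#!/bin/bash
# submit_batches.sh -- split the input list into batches of 10
# and qsub one SGE job per batch (all names here are hypothetical).
BUCKET="s3://my-bucket"   # assumed bucket; replace with yours
BATCH_SIZE=10

split -l "$BATCH_SIZE" manifest.txt batch_
for batch in batch_*; do
    # -v passes environment variables through to the job script
    qsub -N "job_${batch}" -v BUCKET="$BUCKET",BATCH="$batch" process_batch.sh
done
```

```bash
#!/bin/bash
# process_batch.sh -- runs on a worker node: pull each input file
# from S3, process it, and push the result back (step 4).
while read -r key; do
    f=$(basename "$key")
    aws s3 cp "${BUCKET}/${key}" "$f"           # download input
    my_app "$f" "out_${f}"                      # hypothetical application
    aws s3 cp "out_${f}" "${BUCKET}/results/"   # upload output
done < "$BATCH"
```

Once `qstat` shows an empty queue, step 5 can be as simple as listing each expected output and collecting the misses for resubmission:

```bash
#!/bin/bash
# verify_results.sh -- flag inputs whose output is missing in S3 (step 5)
BUCKET="s3://my-bucket"   # same assumed bucket as above
while read -r key; do
    aws s3 ls "${BUCKET}/results/out_$(basename "$key")" > /dev/null \
        || echo "$key" >> retry.txt   # reschedule these
done < manifest.txt
```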

After some time researching the various options available for dynamic scaling, I decided to use a queue mechanism to distribute jobs to multiple workers.
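For the cluster-size half of the problem, StarCluster also ships an Elastic Load Balancer that watches the SGE queue and adds or removes nodes to match the load. A minimal sketch follows (the cluster name is hypothetical and the flag names are from StarCluster's load balancer docs; confirm them with `starcluster loadbalance --help` on your installed version):

```bash
# Start the cluster, then let the load balancer grow it to at most
# 20 nodes while jobs are queued and shrink it back when idle.
starcluster start mycluster
starcluster loadbalance -n 1 -m 20 mycluster
```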

