Tornado: process data in a request handler after returning?

No, it's not "easy" out of the box. What you're describing is "fire and forget". Even if you use a thread pool to farm out the work, that thread pool still belongs to the main Python process running Tornado.
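For illustration, here is roughly what that in-process thread-pool approach looks like (the handler and function names are mine, not from the answer). It shows why this isn't true fire and forget: the pool lives inside the Tornado process, so the work dies if Tornado does.

    from concurrent.futures import ThreadPoolExecutor
    import tornado.web

    pool = ThreadPoolExecutor(max_workers=4)

    def heavy_work():
        # still tied to the Tornado process: if the server dies, so does this
        ...

    class Handler(tornado.web.RequestHandler):
        def get(self):
            pool.submit(heavy_work)  # work continues after the response
            self.write("accepted")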

The best approach is a message queue, something like Carrot. Suppose you have a page where users can kick off generation of a huge report: you put the job on the queue, finish the Tornado request immediately, and with some AJAX magic and other tricks (outside the scope of Tornado) the browser sits back and waits until the queue has finished its job (which could technically be happening on a distributed server in a different physical location). A sketch follows below.
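As a minimal sketch of the pattern, here is the same idea using Celery (the modern successor to Carrot; the task name and broker URL are illustrative, not from the answer):

    from celery import Celery

    app = Celery("tasks", broker="amqp://guest@localhost//")

    @app.task
    def generate_report(user_id):
        # heavy work runs in a Celery worker process,
        # possibly on an entirely different machine
        ...

Inside the Tornado handler you would then call generate_report.delay(user_id), finish the request right away, and let the browser poll for completion.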

IOLoop.add_callback: Tornado will execute the callback on the next IOLoop iteration.
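A minimal sketch of that approach (handler and method names are illustrative): respond first, then schedule the work. Note the callback still runs on the IOLoop thread, so genuinely heavy work here will block other requests.

    import tornado.ioloop
    import tornado.web

    class ReportHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("report started")
            self.finish()  # the response goes out to the client now
            # run generate_report on the next IOLoop iteration
            tornado.ioloop.IOLoop.current().add_callback(self.generate_report)

        def generate_report(self):
            # executes after the response has been sent, but still on the
            # IOLoop thread, so long-running work will block other requests
            ...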

Bad-advice warning: you can use multiprocessing (docs.python.org/library/multiprocessing....), but be careful to close all of your database connections in the spawned code, and do whatever else Tornado would normally do when it completes a request without a subprocess. The other answers sound better.

But you can do this. Don't do this.
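For completeness, the pattern the comment warns against looks roughly like this (handler and function names are illustrative):

    import multiprocessing
    import tornado.web

    def build_report(report_id):
        # runs in a separate process: open fresh DB connections here,
        # never reuse the parent's
        ...

    class ReportHandler(tornado.web.RequestHandler):
        def post(self):
            proc = multiprocessing.Process(
                target=build_report, args=(self.get_argument("id"),))
            proc.daemon = True  # killed when the Tornado process exits
            proc.start()  # fire and forget; outlives this request
            self.write("started")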

