Why don't you call updateSimulation in the success callback (the second parameter of $.getJSON)? It will issue the next request only after the previous one has finished.
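A minimal sketch of that approach, assuming the /simulation/ URL and the updateSimulation name from the question:

    function updateSimulation() {
        $.getJSON('/simulation/', function(data) {
            // ...use data to update the page here...

            // issue the next request only after this response has been handled
            updateSimulation();
        });
    }

    // kick off the first request
    updateSimulation();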
Let's just say... I feel like an idiot :) This is totally the way to go, thanks! – jsalonen Dec 22 '10 at 13:19
When you are thinking so deeply about how to implement something, you don't always think of the simplest solution :) – cx42net Dec 22 '10 at 13:21
I would suggest not taking this direction: 1. you'll put a heavy load on your processor, and 2. you'll end up with a huge stack, because you'll be nesting calls. Don't call it directly within itself. – Robert Koritnik Dec 22 '10 at 13:26
The problem lies more in the initial idea of executing a request "continuously"; maybe reduce the requests by running setInterval on a 5 s period? – cx42net Dec 22 '10 at 13:39
But I didn't know all the implications of calling a function repeatedly within itself. Do you have any information (links?) about it, @Robert Koritnik? Thanks! – cx42net Dec 22 '10 at 13:44
Firstly, when using setInterval() or setTimeout(), don't pass the method in as a string, as that invokes the infamous eval(), which is poor coding practice. Instead, pass the method reference like this: setInterval(updateSimulation, 50); To make sure the AJAX request has completed before another one is sent off, use setTimeout() to call the updateSimulation() method only once, and then, in jQuery's ajax success callback, call updateSimulation() again.
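A minimal sketch contrasting the two forms, assuming the updateSimulation name from the question:

    // avoid: the string argument gets evaluated, much like eval()
    setInterval('updateSimulation()', 50);

    // prefer: pass the function reference directly
    setInterval(updateSimulation, 50);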
Thanks for the note on eval! setTimeout could help here as well. – jsalonen Dec 22 '10 at 14:23
Use setTimeout instead, and call it when the previous call returns. It will then run 50 ms after the response arrives (by the way, 50 ms is far too small for AJAX; think about 500 ms or 1000 ms). You can also add a timestamp to the sent data and compare it against the last known one:

    function updateSimulation() {
        $.getJSON('/simulation/', function(data) {
            window.setTimeout(updateSimulation, 500);
            if (data.sent > last_data_sent) {
                // use data
                last_data_sent = data.sent;
            }
        });
    }
This may overlap as well... the part that uses the data is the one that takes an arbitrary amount of time (depending on browser, JavaScript engine and other factors). This isn't right. – Robert Koritnik Dec 22 '10 at 13:28
Yes, thank you for the tips, but please note that I am really trying to achieve latency lower than 50 ms, so this isn't an ordinary AJAX scenario. – jsalonen Dec 22 '10 at 14:24
You can just place window.setTimeout(updateSimulation, latency_ms); at the end of the data processing. That way the latency will be (latency_ms + time of data processing). But in the real world, if you are doing things over HTTP, it will be really hard to get 50 ms (hint: look at the autocompletion speed on the Google search page, which is quite optimised in terms of both code and infrastructure). – ts. Dec 22 '10 at 15:07
Parse and use the data first, then issue a deferred call. Use window.setTimeout() instead and reissue another call after you've finished processing the current one. This way there will be no overlapping.

    function updateSimulation() {
        $.getJSON('/simulation/', function(data) {
            // ...parse data here
            window.setTimeout(updateSimulation, 250);
        });
    }

Adjust the timing as you wish...
Oh yes, very nice point indeed! – jsalonen Dec 22 '10 at 14:25.
You could also use setInterval with the period you want (50 ms for example) and add a flag indicating whether the previous request has finished. This flag doesn't have to live in the global namespace, though.

    var requesting = false;

    function updateSimulation() {
        if (!requesting) {
            requesting = true;
            $.getJSON('/simulation/', function(data) {
                requesting = false;
            });
        }
    }

    setInterval(updateSimulation, 50);

The flag could instead be kept as namespace.requesting, where namespace is an object you use.
2) sounds good, but 50 ms is a little bit quick; I don't think the server will like it.
It probably doesn't make any sense to users either. – Robert Koritnik Dec 22 '10 at 13:24
Of course it makes sense; why do you think people optimize web pages, use CDNs, CSS sprites and other techniques? To improve performance (which is what makes sense to the user), or out of boredom? 50 ms means 20 requests a second, 1200 a minute, so the server has a lot to do and less capacity left for other requests. – Dr.Molle Dec 22 '10 at 13:28
@Dr.Molle: Yes, they do optimize things. But making requests every 50 ms doesn't make much sense. Even if you were building a chat web application it wouldn't make any obvious difference whether it was 50, 250 or maybe even 500 ms. Chat would be smooth either way, but 500 ms would put much less load on the server, as well as on the client, because there would be much less processing involved. – Robert Koritnik Dec 22 '10 at 13:31
@Dr.Molle: All those optimisations you mentioned are related to network lag, and issuing frequent requests doesn't really help. We'd need some more information about the end application to see the benefit of such short intervals. – Robert Koritnik Dec 22 '10 at 13:34
If you really need real real-time, consider using WebSockets. – ts. Dec 22 '10 at 15:17
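A bare-bones WebSocket sketch, assuming a hypothetical ws://example.com/simulation endpoint that pushes the same JSON the polling endpoint returns:

    var socket = new WebSocket('ws://example.com/simulation');

    socket.onmessage = function(event) {
        var data = JSON.parse(event.data);
        // ...update the simulation with data; no polling loop needed...
    };

    socket.onerror = function() {
        // fall back to the setTimeout-based polling shown above
    };

With a push connection like this, latency is bounded by the network round trip rather than the polling interval.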