Definitely slower. (But remember that you need to balance every malloc with a free, otherwise you get a memory leak.) If the length varies, you can use realloc to expand the buffer:

    void *v = malloc(1024);
    size_t bufsize = 1024;
    while (cond) {
        size_t reqbufsize = get_length();
        if (reqbufsize > bufsize) {
            bufsize = reqbufsize * 2;
            v = realloc(v, bufsize);
        }
        // you may shrink it also
        do_something_with_buffer(v);
    }
    free(v);
– Billy ONeal Mar 17 '10 at 16:13
@BillyONeal: That's actually a pretty common way to do things. – Dinah Mar 17 '10 at 16:43
Buffers usually grow in proportion to their old size because that gives a better amortized asymptotic running time. If you add one slot per realloc and insert n elements, there are 1 + 2 + ... + (n - 1) + n = O(n^2) copies required. If, however, you double the buffer's size, inserting n elements costs only O(1) amortized copies per insertion. You can find a more rigorous explanation in the hash table Wikipedia article: en.wikipedia.org/wiki/… – Michael Koval Mar 17 '10 at 17:29
If the realloc fails, you've leaked the buffer. You need to realloc into a temp pointer; if it succeeds, copy that to the original pointer, else free the original pointer and abort with an error message. – Arthur Kalliokoski Mar 19 '10 at 10:38
@Arthur: Yes. But you've got a bigger problem than leaking if realloc fails :). – KennyTM Mar 19 '10 at 11:04
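A minimal sketch of the temp-pointer pattern Arthur describes (the helper name grow_buffer and the abort-on-failure policy are illustrative assumptions, not from the thread):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: grow a buffer without losing the old block on failure. */
    static void *grow_buffer(void *buf, size_t new_size)
    {
        void *tmp = realloc(buf, new_size);   /* realloc into a temporary pointer */
        if (tmp == NULL) {                    /* on failure the old block is untouched, */
            free(buf);                        /* so free it before bailing out */
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return tmp;                           /* on success, use the (possibly moved) block */
    }

With a helper like this, v = realloc(v, bufsize); in the answer above would become v = grow_buffer(v, bufsize);.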
For 20 iterations, you shouldn't be worrying about the performance of malloc/free. Even for vastly more (a couple orders of magnitude), you shouldn't start thinking about optimizations until you profile the code and understand what is slow. Finally, if you're going to free the buffer, there is no need to clear it first.
Even if you were to move the malloc/free outside the loop (using a maximum buffer as suggested by Justin), you shouldn't need to explicitly clear the buffer.
You can't call free outside the loop if you are calling malloc inside: only the pointer from the last iteration gets freed, and every earlier allocation leaks.
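A sketch of the problem (the loop bound and allocation size are arbitrary placeholders):

    char *buffer;
    for (int i = 0; i < 20; i++) {
        buffer = malloc(1024);    /* a fresh block every iteration */
        /* ... use buffer ... */
    }
    free(buffer);                 /* frees only the last block; the rest leak */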
If you know the maximum length of the buffer - or can set a reasonable maximum - then you could use the same buffer for every iteration. Otherwise what you are doing should be fine.
It depends on what you need the buffer for. Do you really need to clear it after every iteration, or would a '\0' char at the end suffice to mark the end of a string? After all, that's what the various str library calls use. If you really need to clear it you can use bzero() (or the standard memset(); see the short sketch at the end of this answer). Certainly malloc'ing and free'ing at every iteration is a waste of resources, since you can happily re-use the buffer. A different problem would arise if you were to parallelize the for loop, i.e. to have several concurrent threads using it.
Simple, real-life example: using a bucket to carry water. Suppose you need to make several trips with that bucket: would it make sense to pick it up, use it, put it down, pick it up again, use it, and so on? You can re-use the bucket as many times as you need. If, on the other hand, the bucket has to be used by you and other people, either you organize access to the bucket or you need more buckets.
Last suggestion: do not worry about performance now. They say that premature optimization is the root of all evil, and you'll soon understand why. Firstly, understand the problem: write code that can be thrown away. Experiment. Secondly, test it: make sure it does what you need. Thirdly, optimize it: make the loop run ten thousand times and measure how long it takes, then move the malloc outside the loop and measure again (use the shell command time if under UNIX). Fourthly, rewrite it, because your first experiment will most likely be a mess of patches of try-retry-not-working code. Rinse, repeat. PS: have fun in the meantime. It's supposed to be interesting, not frustrating.
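As promised above, a short sketch of the two clearing options (the buffer size is arbitrary; memset is the standard equivalent of the legacy bzero):

    #include <string.h>

    char buffer[1024];

    buffer[0] = '\0';                    /* option 1: just terminate - enough for the str* calls */

    memset(buffer, 0, sizeof buffer);    /* option 2: clear the whole buffer                     */
                                         /* (same effect as bzero(buffer, sizeof buffer))        */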
It depends on the implementation of malloc and free. The best way to respond to your question is to build a benchmark...
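A rough sketch of such a benchmark (not from the answer; the iteration count, buffer size, and the memset standing in for "real work" are arbitrary assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define ITERATIONS 10000
    #define BUF_SIZE   1024

    int main(void)
    {
        /* Variant 1: malloc/free inside the loop. */
        clock_t start = clock();
        for (int i = 0; i < ITERATIONS; i++) {
            char *buf = malloc(BUF_SIZE);
            memset(buf, 0, BUF_SIZE);            /* stand-in for real work */
            free(buf);
        }
        printf("malloc/free each iteration: %f s\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);

        /* Variant 2: one allocation reused across iterations. */
        start = clock();
        char *buf = malloc(BUF_SIZE);
        for (int i = 0; i < ITERATIONS; i++) {
            memset(buf, 0, BUF_SIZE);            /* same work, buffer reused */
        }
        free(buf);
        printf("single allocation reused:   %f s\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);
        return 0;
    }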
Generally anything that can be moved outside of a loop should be. Why repeat the same action when you can just do it once? Justin Ethier is right, allocate a buffer that will comfortably fit the largest string and reuse that.
Process the strings with a single buffer. Here's some pseudo code:

    #define BLOCK_SIZE 1024   /* or about the size of your biggest string */

    char *buffer = (char *) malloc(BLOCK_SIZE);
    for (int i = 0; i < 20; i++) {    /* 20 strings, as in the question */
        /* process the i-th string in buffer */
    }
    free(buffer);