Glibc detected *** free(): invalid next size (normal): 0x0a03c978?



I'm writing a socket program to download images. The problem is that when I test my code on small pics like GIFs it works fine, but when I run it with JPG pics (bigger than GIFs) I get this error message:

    *** glibc detected *** /home/ubuntu/NetBeansProjects/myDownloader/dist/Debug/GNU-Linux-x86/mydownloader: free(): invalid next size (normal): 0x0a03c978 ***

Please see the code and I'll give more information about the error.

    FILE* pFile;
    long lSize;
    unsigned char* buffer;
    size_t result;

    FILE* combinedFile = fopen("mypic.jpg", "wb+");
    for (i = 1; i <= numberOfPartitions; i++)
    {
        sprintf(filename, "part%d", i);
        pFile = fopen(filename, "rb");

        /* obtain the file size */
        fseek(pFile, 0, SEEK_END);
        lSize = ftell(pFile);
        rewind(pFile);

        /* allocate memory to hold the whole file */
        buffer = malloc(lSize + 1);
        if (buffer == NULL) { fputs("Memory error", stderr); exit(2); }

        /* copy the file into the buffer */
        result = fread(buffer, 1, lSize, pFile);
        if (result != lSize) { fputs("Reading error", stderr); exit(3); }

        /* skip the HTTP header that precedes the image data */
        char* temp = strstr((char*)buffer, "\r\n\r\n");
        temp = temp + 4;
        int len = lSize - (temp - (char*)buffer);

        if (i >= numberOfPartitions)
            fwrite(temp, 1, len - 1, combinedFile);
        else
            fwrite(temp, 1, len, combinedFile);

        fclose(pFile);
        printf("crash here\n");
        free(buffer);
    }
    fclose(combinedFile);

I get the error from this part. As I said, when the image size is small it works fine, but with a bigger size it breaks!

P.S.: The program divides the pic into several files and then re-combines them, so the combining part is the one that causes the error. Any help will be very much appreciated, since I've been stuck on this error for more than 3 days!

c linux sockets gcc

asked Feb 12 '11 at 21:53 by user614578, edited Feb 13 '11 at 5:37 by Jonathan Leffler

The only thing that jumps out at me is strstr(buffer, "\r\n\r\n");. Why are you searching for (Windows-style) return characters in a binary file? – Joey Adams Feb 12 '11 at 22:09

To determine the end of the header that comes with the response through the socket. This is the only way to discard this header and start copying the binary data. – user614578 Feb 12 '11 at 22:15

When there are multiple "partitions", do all such "partitions" include the "\r\n\r\n" sequence? – pmg Feb 12 '11 at 22:24

Hi, actually each partition has been generated by executing a different GET request, so each of them includes "\r\n\r\n". The problem happens when the loop finishes the first iteration: when it arrives at free(buffer), it crashes! – user614578 Feb 13 '11 at 0:31

FYI, Valgrind, which is probably available in your distribution's package manager, is a great tool to help with this type of thing. – Paul Fisher Feb '11 at 2:21

You don't verify that the fopen() calls all succeed; this is a recipe for trouble. You don't check that ftell() gives you a plausible value in lSize. You don't verify that the strstr() operation actually finds the marker string.

If it doesn't, it will return NULL and the following length operations are then bogus. But the error suggests that your code has written out of bounds, rather than just read data out of bounds. You could declare the first four variables in the body of the loop instead of outside it.
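For illustration, here is a minimal sketch of those checks, reusing the question's variable names (read_part is a hypothetical helper, not part of the original program):

    #include <stdio.h>
    #include <stdlib.h>

    /* Open one "part" file, size it, and read it, failing loudly at each step. */
    static char *read_part(const char *filename, long *lSize)
    {
        FILE *pFile = fopen(filename, "rb");
        if (pFile == NULL)                       /* fopen() can fail */
        {
            perror(filename);
            exit(1);
        }
        if (fseek(pFile, 0, SEEK_END) != 0 || (*lSize = ftell(pFile)) < 0)
        {
            fprintf(stderr, "cannot determine the size of %s\n", filename);
            exit(1);                             /* ftell() returns -1 on failure */
        }
        rewind(pFile);

        char *buffer = malloc(*lSize + 1);       /* +1 leaves room for a terminating NUL */
        if (buffer == NULL)
        {
            fputs("Memory error\n", stderr);
            exit(2);
        }
        if (fread(buffer, 1, *lSize, pFile) != (size_t)*lSize)
        {
            fputs("Reading error\n", stderr);
            exit(3);
        }
        buffer[*lSize] = '\0';                   /* so strstr() cannot run off the end */
        fclose(pFile);
        return buffer;
    }

The strstr() result needs the same treatment: if it returns NULL, don't go on to compute temp + 4 from it.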

You don't show the declaration of the variable filename; could that be a char pointer with no space allocated? Or is it an array that is big enough? It is an odds-on bet that something has written beyond the end of some allocated space.

It is not immediately obvious that there's anything wrong with this code, but the trouble could be elsewhere yet it is this code that suffers the effects of transgressions elsewhere. This is quite common with memory problems; the code that finds the problem isn't the code that causes it. Does the malloc() on your machine return null or a non-null pointer when you allocate zero bytes?

Both are legitimate responses. If ftell() returns -1, then malloc() would allocate a buffer for 0 bytes, but the fread() would attempt to read up to 4 GB of data, which might overflow the space. OTOH, if ftell() fails, it is likely that fread() will fail too.
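If you are curious, a quick probe shows which behaviour your C library has (both results are permitted by the C standard):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(0);
        /* Either NULL or a unique pointer is a conforming result for malloc(0). */
        printf("malloc(0) returned %p\n", p);
        free(p);    /* free(NULL) is a no-op, so this is safe in both cases */
        return 0;
    }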

Have you printed out the sizes of the files? Is it the second partial file that crashes, or a later file? I've taken the code you supplied, wrapped it up as a main() function, supplied missing variables and headers, and run it under valgrind.

(MacOS X 10.6.6, GCC 4.5.2, Valgrind 3.6.0.) It shows no problem. So, your trouble is most probably not in this code per se; something else earlier in your program trampled out of bounds of allocated memory and caused this to fail. I generated the 4 part files using the script:

    { echo "Header:control-V control-M return control-V control-M"; dd if=/dev/random bs=1k count=4; } > part1

(Here control-V control-M enters a literal carriage return, and "return" is a literal newline inside the quoted string, so each header is "Header:\r\n\r\n".) So each file was 4107 bytes long.

Working Code

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>

    int main(void)
    {
        char filename[32];
        FILE* pFile;
        long lSize;
        char *buffer;
        ssize_t result;

        FILE* combinedFile = fopen("mypic.jpg", "wb+");
        int numberOfPartitions = 4;
        int i;
        for (i = 1; i <= numberOfPartitions; i++)
        {
            sprintf(filename, "part%d", i);
            pFile = fopen(filename, "rb");

            fseek(pFile, 0, SEEK_END);
            lSize = ftell(pFile);
            rewind(pFile);

            buffer = malloc(lSize + 1);
            if (buffer == NULL)
            {
                fputs("Memory error", stderr);
                exit(2);
            }

            result = fread(buffer, 1, lSize, pFile);
            if (result != lSize)
            {
                fputs("Reading error", stderr);
                exit(3);
            }

            char* temp = strstr(buffer, "\r\n\r\n");
            temp = temp + 4;
            int len = lSize - (temp - buffer);

            if (i >= numberOfPartitions)
                fwrite(temp, 1, len - 1, combinedFile);
            else
                fwrite(temp, 1, len, combinedFile);

            fclose(pFile);
            printf("crash here\n");
            free(buffer);
        }
        fclose(combinedFile);
        return 0;
    }

I've not inserted all the error checking that I would if it were my own program. The output file in my scheme is 16381 bytes long; that is 3 bytes short.

The problem there is the fwrite() calls. The fread() code told you how many bytes it read; you subtracted the bytes for the header, and then subtracted one more. So, that if/else code reduces to just the fwrite() in the else.
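To make that concrete, here is a minimal sketch of the simplified write step, reusing the question's names (write_payload is a hypothetical helper; it assumes buffer has been NUL-terminated so that strstr() is safe on it):

    #include <stdio.h>
    #include <string.h>

    /* Write everything after the first "\r\n\r\n" in buffer to out. */
    static void write_payload(char *buffer, long lSize, FILE *out)
    {
        char *temp = strstr(buffer, "\r\n\r\n");
        if (temp == NULL)
            return;                              /* no header terminator: nothing to write */
        temp += 4;                               /* first byte of the image data */
        size_t len = (size_t)lSize - (size_t)(temp - buffer);
        fwrite(temp, 1, len, out);               /* same call for every partition, no len - 1 */
    }

Every partition is handled the same way; the last one is not special.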

All the partitions have data with size more than 1. It works when I move free(buffer) out of the loop, but it crashes after about 17 URLs. My machine returns null when malloc() allocates zero bytes. Thanks for your suggestions. – user614578 Feb 13 '11 at 18:18

Thanks for your effort to help fix this bug. I'm sorry for bothering you, but I've sent an email to your Gmail account with the complete program, to find out whether other parts of my code cause this problem. Again, thank you very much for your help. – user614578 Feb 14 '11 at 1:30

The problem was not in the code shown; it was in code allocating space for messages to send over the internet. Changing '-6' into '+32' for the allocation size cured the 'write out of bounds' and everything else worked magically. It also leaked a lot of memory. – Jonathan Leffler Feb 14 '11 at 5:02
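For what it's worth, here is a minimal sketch of a safer pattern for that kind of allocation, assuming the request is built with a printf-style format (build_request, host and path are hypothetical names; the question's actual request-building code is not shown):

    #include <stdio.h>
    #include <stdlib.h>

    /* Size the buffer from the formatted length instead of guessing with a constant. */
    static char *build_request(const char *host, const char *path)
    {
        const char *fmt = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n";
        int needed = snprintf(NULL, 0, fmt, path, host);   /* exact length, excluding the NUL */
        if (needed < 0)
            return NULL;
        char *req = malloc((size_t)needed + 1);            /* +1 for the terminating NUL */
        if (req == NULL)
            return NULL;
        snprintf(req, (size_t)needed + 1, fmt, path, host);
        return req;                                        /* caller must free() */
    }

Computing the size from the formatted length removes the guesswork that led to the out-of-bounds write, and freeing the returned buffer after each request avoids the leak mentioned above.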


Actually, I can't find anything obviously wrong with your memory or file handling in the code above; the crash on free() might just be a symptom of something in your code writing into malloc()'s personal space... You could use memory checkers such as Valgrind or debuggers like gdb to take a closer look. The only possibly wrong thing that comes to mind is that buffer is not necessarily NUL-terminated, and as such the strstr() search can happily run past its end; buffer[lSize] = '\0'; after the malloc-NULL-check should fix that. Also, just to be sure, check that strstr() actually found what it was looking for (it returns NULL if it didn't).
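Since the buffer holds binary data, another option is a bounded search that does not rely on NUL termination at all; a minimal sketch (find_header_end is a hypothetical helper, not from the question's code):

    #include <stddef.h>

    /* Return a pointer to the first byte after "\r\n\r\n" in buf, or NULL if absent. */
    static char *find_header_end(char *buf, size_t len)
    {
        for (size_t i = 0; i + 4 <= len; i++)
        {
            if (buf[i] == '\r' && buf[i + 1] == '\n' &&
                buf[i + 2] == '\r' && buf[i + 3] == '\n')
                return buf + i + 4;
        }
        return NULL;
    }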

You may also want to check that all your fopen() calls actually succeed (return non-NULL). If none of this helps, printouts of len, lSize, temp and buffer's values just before the fwrite() calls would be helpful.

I've tried buffer[lSize] = '\0' but nothing changes! When I move free(buffer) out of the loop it works fine until iteration number 17 out of 20, then it crashes! This means the whole problem is with memory allocation, don't you think so?

– user614578 Feb 13 '11 at 18:02.

