How to save an image with a specific filesize?


Since there is no way to compress to a target size that I know of, I suggest looking into an indirect solution:

1. Do some quick stats (possibly by compressing lots of images with different JPEG compression factors) to find out what the mean compressed size and its standard deviation are for a given quality level.
2. When accepting an image, try to compress it with a suitable quality.
3. If the resulting file is too big, decrease the quality and try again. Optionally, if it turns out too small, you can increase the quality too.
4. Stop when the resulting image is "close" to your target size, but smaller.

To elaborate a bit on the maths in step 2: if you choose your starting quality such that the mean compressed size + 3 * standard deviation is below your target size, then nearly all images (about 99.7%, assuming the sizes are roughly normally distributed) will come in under the target on the first attempt. You can tweak the starting quality and the logic that increases or decreases it as you wish, balancing between less server load and files closer to your maximum size ("making better use" of your restrictions).
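A minimal sketch of that loop, assuming the PHP Imagick extension (Gmagick, mentioned elsewhere on this page, has a similar API); the starting quality and the step of 5 are illustrative values you would tune from your stats:

```php
<?php
// Sketch of the decrease-until-it-fits loop, assuming the Imagick extension.
// $startQuality would come from your stats (mean + 3 * stddev below target);
// the step size of 5 is an arbitrary choice.
function compressToTarget(string $path, int $targetBytes, int $startQuality = 75): ?string
{
    for ($quality = $startQuality; $quality >= 10; $quality -= 5) {
        $img = new Imagick($path);
        $img->setImageFormat('jpeg');
        $img->setImageCompressionQuality($quality);
        $blob = $img->getImageBlob();
        $img->destroy();

        if (strlen($blob) <= $targetBytes) {
            return $blob; // "close" to the target, but smaller
        }
    }
    return null; // even the lowest quality was too big
}
```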

This is exactly how I do it (not programmatically, but in GIMP, where the slider shows you what the file size will be if you have preview checked - it obviously creates temp files for the preview) when I need an image to be a certain file size for site upload limits. – billythekid Nov 27 '10 at 1:42

+1 This seems to be the general consensus, given that no one knows of a way to do it algorithmically. Whilst a potential solution (and well thought out, thank you!), I'd still prefer to wait and see if anyone else can come up with an algorithmic solution. – Jess Telford Nov 27 '10 at 2:35

I have utilized this class in the final solution. Thank you for pointing it out! – Jess Telford Dec 20 '10 at 0:59

This class MUST use some sort of pre-scanning or trial and error within itself though... so it is an abstraction of the 'trial and error'. – Jeremy Collake Apr 15 at 8:38

– Jess Telford Nov 26 '10 at 21:17

Ah yeah, I think you're right on re-reading through. I think perhaps the trial-and-error methods above are looking more and more like the resolution, unfortunately. – billythekid Nov 26 '10 at 23:32

Thanks all the same :) I have down-voted as it doesn't answer the question, and to give another answer the opportunity for the bounty. – Jess Telford Nov 27 '10 at 0:00

No worries, perhaps it should have been a comment really. ;oD – billythekid Nov 27 '10 at 1:46

I do not know of a way to automatically determine the resulting file size of an image - that's the task of the library generating the image. Unless you implement the compression yourself, you cannot pre-calculate the resulting size. What you could do is collect statistical data (image height and width and file size vs. different compression options) and do your estimations based on that data for new images.

Example: a 50k JPEG at 100x200 compresses to 30k; a 100k JPEG at 100x200 compresses to 60k. So when you get a 100x202px image of 59k, the compressed size can be roughly estimated at 35k.
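As a sketch, the estimation could be a simple proportional model derived from the collected data; the 0.6 ratio and the function name here are hypothetical, taken from the example above:

```php
<?php
// Hypothetical proportional estimate: assume the compression ratio observed
// on sample images of similar dimensions carries over to new images.
// From the example: 50k -> 30k and 100k -> 60k, i.e. a ratio of 0.6.
function estimateCompressedSize(int $originalBytes, float $observedRatio = 0.6): int
{
    return (int) round($originalBytes * $observedRatio);
}

// estimateCompressedSize(59 * 1024) is roughly 35k, matching the estimate above.
```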

The problem being, the images could vary wildly. With JPEG compression, width/height/compression ratio have no close relation to the final compressed size. – Jess Telford Nov 23 '10 at 10:37

This is why you could collect data about original file size and height/width vs. the compressed file size. I expect the original file size to be an indicator for the compressed one. – cweiske Nov 23 '10 at 10:39

The file size of the image is not exactly proportional to its dimensions, because the amount of compression depends on the image's complexity. You could get an upper bound for this estimate from one of the most complex images, but you lose some KBs by always considering the worst case.

– nuaavee Nov 24 '10 at 19:39 Yes, that's why I said "estimate" :) – cweiske Nov 28 '10 at 17:35.

So, if I understand this correctly - you are building a system which allows users to write some text on one of the template images that you provide. If that is correct, why are you saving the image at all? You can easily remove the 50Kb size limit by saving the user actions themselves.

You might be able to do something like this - save the text (along with its properties and location) and the template it is on.
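For illustration, the saved record could be a few hundred bytes at most; all field names here are hypothetical, not from the original question:

```php
<?php
// Hypothetical record of the user's action instead of the rendered image.
$action = [
    'template' => 'template-07.png',
    'text'     => 'Happy Birthday!',
    'font'     => 'DejaVuSans.ttf',
    'fontSize' => 24,
    'color'    => '#ff0000',
    'x'        => 120,
    'y'        => 80,
];
file_put_contents('action.json', json_encode($action)); // well under 50Kb
```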

– nuaavee Nov 24 '10 at 22:35 I have updated the question with a description of the requirements. – Jess Telford Nov 25 '10 at 2:42.

There are only three ways to reduce the final size of any given image:

1. reduce resolution
2. reduce color depth
3. reduce image complexity

The first two are under your control. If the uploaded image comes out with a file size over the limit, you can try shrinking it to the next smaller available size and see if it fits into that. Given you've only got 3 target resolutions, this wouldn't be too expensive.

But if you need the "large" size to be available, then you're left with option 3. Reducing image complexity is a nasty beast. You can try to reduce inter-pixel "noise" to produce larger areas of the same color, which will compress in GIF/PNG images very nicely.

A simple blur filter could accomplish this, but it might also destroy the legibility of any fine print/text within the image. For a JPG target, you can try to lower the compression quality, but again, this can trash images if you take the quality down too low. If simple server-side transforms can't handle this, the image would have to be redone by the artist who created it in the first place.
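As a sketch of the blur approach, assuming the PHP Imagick extension; the radius/sigma values are arbitrary:

```php
<?php
// Sketch: reduce inter-pixel noise with a light blur before saving as PNG,
// assuming Imagick. Stronger blurs compress better but can destroy fine text.
$img = new Imagick('large.png');
$img->blurImage(1, 0.5);            // light blur: radius 1, sigma 0.5
$img->setImageFormat('png');
$img->writeImage('large-blurred.png');
```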

The only practical method I can see to automate this is the compress-save-test loop you mention. Given that the final images are relatively small, this won't be that large a burden on the server. Saving a gif/png is a lightweight operation.

JPG compression takes more CPU power, but again, with small images, a modern server shouldn't have any trouble doing 10 or 20 test images within a second or two.

That's mostly a description of the alternative solution (which I have considered since asking the question). What I was hoping for was, instead of 'guessing' by reducing the image complexity, color depth, etc., an algorithmic way to push the compression toward a target file size. Somewhat like what A* is to search compared with breadth-first or depth-first and simply stopping at the first valid result.

Thank you for your comments, though! – Jess Telford Nov 26 '10 at 4:23.

I concur with the other answers; there is no lossy or lossless image compression algorithm I know of that will allow you to input a 'desired target size'. You will have to create an algorithm that does compression-parameter trial and error until you get it right. However, you can greatly increase your likelihood of success by doing a lot of research to determine how most of your specific type of images tend to compress (if there is any standardization to them at all).

As a compression guy: the reason no compression algorithm offers this is that there is no way to know beforehand how big the compressed image will be. You don't know that until you are done with the compression. So, it is theoretically impossible for a compression algorithm to accept an input such as 'make it size X' without resorting to re-compressing with changed parameters until it hits the target size (as you would be doing).

Variable compression levels throughout different stages, etc., could push the result toward a desired goal. I'm still in the learning stages of compression, so if you've got any theory behind your statements, I'd be happy to read further. – Jess Telford Nov 30 '10 at 0:24

Sure, you can obviously guide compression towards being better or worse, but can you really guide it to output a data set of a specific size?

It is an unanswered question, and impossible (at least for the moment). Maybe it is possible, but that is a hard thing to prove when it has never been done ;) – Jeremy Collake Dec 1 '10 at 3:15

I'm sure you understand the compression process and see why I say it is impossible (at least for now). I've authored a few lossless compression algorithms, so I know that much at least.

The goal here requires a very reliable prediction of how compressible future data will be, and .. well.. the only way to make a prediction is to compress the data (or have it go through a dry-run). Hence, going back to the trial and error process. – Jeremy Collake Dec 1 '10 at 3:47.

Whilst the programmatic option of looping until the target file size is reached seems to be the popular answer, there are two ways to do this with JPEG compression.

The first is a patented method by Kuo, Chun-ming, so I'm not sure of the commercial viability of utilizing it: "Method and electronic device for adjusting compression ratio of JPEG image". It is based on this formula:

log(NSF) = (log(SF1 / SF2) / log(FileSize1 / FileSize2)) * log(Target / FileSize1) + log(SF1)

where:

- SF1 is a first compression parameter
- SF2 is a second compression parameter
- FileSize1 is the size of the image compressed with SF1
- FileSize2 is the size of the image compressed with SF2
- Target is the target file size
- NSF is the target compression parameter

It's not clear whether SF1, SF2, and NSF are in the range 0-1 or 0-100, etc., nor whether FileSize1, FileSize2, and Target are in bytes, kilobytes, etc. Experiment with combinations to find the correct units.
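A sketch of how that formula might be applied after two trial compressions, assuming the PHP Imagick extension, quality settings in the 0-100 range, and sizes in bytes (the patent leaves the units unspecified, so treat these as assumptions):

```php
<?php
// Sketch of Kuo's formula: derive a new quality setting (NSF) from two
// trial compressions. Units are an assumption; the patent does not specify
// whether SF is 0-1 or 0-100, nor whether sizes are bytes or KB.
function estimateQualityForTarget(string $path, int $targetBytes, int $sf1 = 80, int $sf2 = 40): float
{
    $sizeAt = function (int $quality) use ($path): int {
        $img = new Imagick($path);
        $img->setImageFormat('jpeg');
        $img->setImageCompressionQuality($quality);
        $bytes = strlen($img->getImageBlob());
        $img->destroy();
        return $bytes;
    };

    $fileSize1 = $sizeAt($sf1);
    $fileSize2 = $sizeAt($sf2);

    // log(NSF) = (log(SF1/SF2) / log(FileSize1/FileSize2)) * log(Target/FileSize1) + log(SF1)
    $logNsf = (log($sf1 / $sf2) / log($fileSize1 / $fileSize2))
            * log($targetBytes / $fileSize1)
            + log($sf1);

    return exp($logNsf); // compress once more with this quality
}
```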

The second method comes from Ricky D. Nguyen at MIT: Rate control and bit allocations for JPEG transcoding He suggests varying the data that is used for the compression while it's happening. This option may not be as feasible to implement as it requires modifications to the actual compression code itself.

From both of these examples, it is certainly possible to save out a JPEG file with a certain target file size.

Try 24-bit PNG first. If that fits, it will be the best quality, and you're done.

Alternatively, you could test some typical images, and if none of them fit, you could eliminate the format from consideration altogether.

For GIF and JPG you'll need to search for the best fit; neither one can be predicted with enough certainty, and neither algorithm is suitable for constant bit rate encoding. You can use a binary search to find the best fit. You can choose compression factors from a predetermined list to limit the number of test compressions you need to do; for example, if your list of JPG compression factors is 4, 6, 8, 12, 18, 27, 44, 66, you would need to do 4 test compressions at most (see the sketch below). GIF and paletted PNG are similar enough that you should just pick one and forget about the other.

It will be tough to choose between GIF/PNG and JPG based on the compression results; the artifacts introduced by each process are completely different. Again you might be best off compressing a number of test images to your target size and eliminating one format or the other based on eyeball testing.
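A sketch of that bounded search, again assuming the PHP Imagick extension and using the quality list suggested above:

```php
<?php
// Binary search over a predetermined, ascending list of JPEG quality
// factors; with 8 candidates this needs at most 4 trial compressions.
function bestQualityUnderTarget(string $path, int $targetBytes): ?int
{
    $qualities = [4, 6, 8, 12, 18, 27, 44, 66];
    $best = null;
    $lo = 0;
    $hi = count($qualities) - 1;

    while ($lo <= $hi) {
        $mid = intdiv($lo + $hi, 2);
        $img = new Imagick($path);
        $img->setImageFormat('jpeg');
        $img->setImageCompressionQuality($qualities[$mid]);
        $size = strlen($img->getImageBlob());
        $img->destroy();

        if ($size <= $targetBytes) {
            $best = $qualities[$mid]; // fits; see if a higher quality also fits
            $lo = $mid + 1;
        } else {
            $hi = $mid - 1;           // too big; drop the quality
        }
    }
    return $best; // null if even quality 4 exceeds the target
}
```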

Perhaps you are looking at the problem from the wrong angle. By selecting PNG and minimizing the metadata stored in the image, you will minimize the file size. This is because PNG is a bitmap structure; so long as Gmagick does not store the text as metadata, it will have no impact on the file size.

Only the color depth (which you can also control) will impact the size of the file. Non-interlaced and without filtering, the file size should be essentially the same as the template size. So long as the templates are less than 50Kb, you should be alright.
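For illustration, a sketch of stripping metadata before saving, assuming the PHP Imagick extension (Gmagick offers a similar stripimage() method); file names are hypothetical:

```php
<?php
// Sketch: save a PNG with metadata stripped, assuming Imagick.
$img = new Imagick('template.png');
$img->stripImage();               // drop profiles/comments that inflate size
$img->setImageFormat('png');
$img->writeImage('output.png');
```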

