Why does new String(bytes, enc).getBytes(enc) not return the original byte array?

An unmappable byte is decoded to the Unicode replacement character. When you convert back, that character is normally converted to the byte value 63 ('?'), which isn't what it was before.

Awesome. I was actually looking for the answer in .NET, but they are both similar enough in behaviour that I gleaned it from this. Thanks.

– John K Mar 30 '10 at 12:33.

What is the reason for this? The reason is that character encodings are not necessarily bijective, and there is no good reason to expect them to be. Not all bytes or byte sequences are legal in all encodings, and illegal sequences are usually decoded to some sort of placeholder character like '?' or U+FFFD, which of course does not produce the same bytes when re-encoded.
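The failing round trip can be sketched as follows (a minimal illustration, not code from the answer above; it uses US-ASCII, where any byte above 127 is illegal and decodes to the replacement character):

```java
import java.nio.charset.Charset;

public class RoundTrip {
    // Decode a byte array to a String and encode it back with the same charset.
    static byte[] roundTrip(byte[] input, String charsetName) {
        Charset cs = Charset.forName(charsetName);
        return new String(input, cs).getBytes(cs);
    }

    public static void main(String[] args) {
        // 0x80 is illegal in US-ASCII: decoding yields U+FFFD, and
        // re-encoding U+FFFD yields '?' (byte value 63).
        byte[] original = { (byte) 0x80 };
        byte[] result = roundTrip(original, "US-ASCII");
        System.out.println(result[0]); // 63, not -128
    }
}
```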

Additionally, some encodings may map several different legal byte sequences to the same string.

Actually there will be one difference: a byte of value 24 is converted to a char of value 0xFFFD; that's the "Unicode replacement character", used for untranslatable bytes. When converted back, you get a question mark (value 63). In CP1251, the code 24 means "end of input" and cannot be part of a proper string, which is why Java deems it untranslatable.

It appears that both cp1251 and cp1252 have byte values that do not correspond to defined characters; i.e., they are "unmappable". The javadoc for String(byte[], String) says this: "The behavior of this constructor when the given bytes are not valid in the given charset is unspecified.

The CharsetDecoder class should be used when more control over the decoding process is required." Other constructors say this: "This method always replaces malformed-input and unmappable-character sequences with this charset's default replacement string." If you see this kind of thing happening in practice, it indicates that either you are using the wrong character set, or you've been given some bad data.
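A sketch of the CharsetDecoder approach the javadoc points to (my illustration, under the assumption that you would rather fail fast than silently accept substitutions): configuring the decoder with CodingErrorAction.REPORT makes bad input raise an exception instead of being replaced.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

public class StrictDecode {
    // Decode bytes, throwing CharacterCodingException instead of
    // silently substituting the replacement character.
    static String decodeStrict(byte[] bytes, String charsetName)
            throws CharacterCodingException {
        CharsetDecoder dec = Charset.forName(charsetName).newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        return dec.decode(ByteBuffer.wrap(bytes)).toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(decodeStrict("abc".getBytes("US-ASCII"), "US-ASCII"));
        try {
            decodeStrict(new byte[]{ (byte) 0x80 }, "US-ASCII");
        } catch (CharacterCodingException e) {
            System.out.println("bad input detected: " + e);
        }
    }
}
```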

Either way, it is probably not a good idea to carry on as if there were no problem. I've been trying to figure out whether there is a way to get a CharsetDecoder to "preserve" unmappable characters, and I don't think it is possible unless you are willing to implement a custom decoder/encoder pair. But I've also concluded that it does not make sense to even try. It is (theoretically) wrong to map those unmappable characters to real Unicode code points.

And if you do, how is your application going to handle them?

Historical reason: in the ancient character encodings (EBCDIC, ASCII) the first 32 codes have a special 'control' meaning and may not map to readable characters. Examples: backspace, bell, carriage return. Newer character encoding standards usually inherit this, and they don't define Unicode characters for every one of the first 32 positions.

Java characters are Unicode.
