Transcoding characters on-the-fly using iostreams and ICU?


Yes, but it is not the way you are expected to do it in modern (as in 1997) iostreams. The behaviour of outputting through a basic_streambuf is defined by the overflow(int_type c) virtual function, and the standard's description of basic_filebuf::overflow(int_type c = traits::eof()) includes a codecvt facet named a_codecvt.

a_codecvt.out(state, b, p, end, xbuf, xbuf + XSIZE, xbuf_end);

where a_codecvt is defined as:

const codecvt<charT, char, typename traits::state_type>& a_codecvt =
    use_facet<codecvt<charT, char, typename traits::state_type>>(getloc());

so you are expected to imbue a locale with the appropriate codecvt converter. The class codecvt<internT, externT, stateT> is for use when converting from one character encoding to another, such as from wide characters to multibyte characters, or between wide character encodings such as Unicode and EUC. The standard library's support for Unicode has made some progress since 1997: the specialization codecvt<char32_t, char, mbstate_t> converts between the UTF-32 and UTF-8 encoding schemes.

This seems to be what you want (ISO-8859-1 code points are also UCS-4/UTF-32 code points). If not, what would be better? I would introduce a distinct type for UTF-8, like:

struct utf8 { unsigned char d; };    // d for data
struct latin1 { unsigned char c; };  // c for character

This way you cannot accidentally pass UTF-8 data where ISO-8859-* is expected.

But then you would have to write some interface code, and the type of your streams won't be istream/ostream. Disclaimer: I never actually did such a thing, so I don't know if it is workable in practice.

