Is this an idiomatic C way to convert longs to a binary (char *) representation?


Looks approximately idiomatic to me, except that I'd write the loop something like:

    char *writeptr = ptr_binarray + needed_digits;
    *writeptr = 0;
    do {
        --writeptr;
        *writeptr = (input % 2) + '0';
        input /= 2;
    } while (input > 0);

No need for an integer index. For this particular case, I wouldn't bother with malloc since you free in the same function. Just allocate a big enough char array on the stack:

    char binarray[sizeof(unsigned long) * CHAR_BIT + 1];

Or make use of C99's variable length arrays:

    char binarray[needed_digits + 1];

Also, if you're only using gcc, then rather than taking a logarithm you could consider using __builtin_clz to calculate needed_digits.

That's not about idiomatic C, though, since it's the gcc dialect. But even without it, you don't need floating point math to figure out how many digits are needed: graphics.stanford.edu/~seander/bithacks.... Just noticed a probable error in that line, too - your do/while loop neatly handles the case where input is 0, but the first line doesn't, since you can't take the log of 0. "Is there a better way (considering I would want the ability of bit-shifting etc...)" - not sure what you mean here.

If you want to do operations like bit-shifting on a value, then don't convert it to a string like this. Keep it as a long int, and do your bit-shifting there. Other minor things, since you're asking for general opinions.

None of these are things I'd really criticise as long as you have a reason you've done them: Remove pointless parens around (needed_digits), it's just noise. Error message should probably go to stderr rather than stdout. I would always check the return value from malloc (or any other function that returns an error value) immediately, rather than have a line of code in between. So move the int idx = needed_digits line down to just before the do/while loop (since you're using -std=c99.

If it was c89, then you could still do that except that I'm going to recommend...). I wouldn't put an "else" after a conditional exit or return. But other people would do as you have, and the argument can probably get tribal.

Personally I wouldn't multiply by sizeof(char) in malloc, since the size of the buffer allocated by malloc is measured in chars by definition. But others put it there so that every malloc consistently has a sizeof, so again I can't claim my way is idiomatic. It's just better ;-) Clearing pointers after free is arguably worthwhile when they're in a structure, but not so much for automatics. For each of the last three things, good C programming practice would not necessarily be to do as I do, but to agree a coding style with your colleagues / collaborators.

The coding standard is allowed to be "do as you prefer", just so long as you agree not to argue, and not to "tidy up" each other's code.

1 - good depth, and the code example is clean and elegant as well. – Matthew Iselin Aug 10 '09 at 0:12 Perfect! These are exactly the kind of answers I was looking for.

I'll try to update the code in question if I find any spare time tomorrow evening. – ChristopheD Aug 10 '09 at 20:10.

There is no need to "convert" numbers to binary representation; they are already represented in binary inside memory. Using bitwise operators it's quite easy to play with the binary representation:

    #include <stdio.h>
    #include <limits.h>

    static void binary_repr(unsigned long input);

    int main(void)
    {
        binary_repr(0);
        binary_repr(1);
        binary_repr(16);
        return 0;
    }

    static void binary_repr(unsigned long input)
    {
        unsigned int i;
        unsigned int nb_bits = sizeof(input) * CHAR_BIT;

        for (i = 0; i < nb_bits; i++)
            putchar((input & (1UL << (nb_bits - 1 - i))) ? '1' : '0');
        putchar('\n');
    }

Very nice, concise way. Thanks a lot! – ChristopheD Aug 10 '09 at 20:11.

itoa(value, output_buffer, base); If you use 2 for base, you'll get the binary version in the string. Note that I'm only answering "Is there a better way", not any other component of the question. EDIT: Also, you may want to look at popular implementations of itoa to see how they did the multi-base conversion without requiring math functions (from -lm).

I know a lot of itoa's I've seen are quite small and elegant and still quite powerful.

Funny, I've never seen itoa(). What standard is that in? – Chris Lutz Aug 9 '09 at 23:24 I don't think that itoa() is standard.

– Karl Voigtland Aug 9 '09 at 23:27 en.wikipedia.org/wiki/Itoa, "The itoa function is a widespread non-standard extension to the standard C programming language". – Matthew Iselin Aug 9 '09 at 23:27 Alternatively you can also look at ltostr, however that's not really standardised either.

– Matthew Iselin Aug 9 '09 at 23:30 I don't have an itoa() on OS X, so you're going to have to roll your own for portability anyway, unless you don't care about us OS X users. *tear* – Chris Lutz Aug 9 '09 at 23:37.

You could use glibc to register a custom printf conversion specifier: Customizing printf Then you could do: printf("Binary Representation: %b\n", num); This would be more flexible than having a call to printf() inside your function. You would still need to specify a function to do the conversion; but then you could use (s)printf throughout the code. Bastien Léonard's answer has a more idiomatic function to do the conversion, using bitwise-and instead of modulo 2, bit-shifting instead of division, and the ternary operator instead of if else.

Here is a related question: is-there-a-printf-converter-to-print-in-binary-format.

Ok, another possible solution using a look-up table:

    #include <stdio.h>

    #undef BIGENDIAN

    #ifdef BIGENDIAN
    enum { TSIZE = sizeof(int), INIT = 0, END = TSIZE };
    #define op(x)   ++(x)
    #define cond(x) ((x) < END)
    #else
    enum { TSIZE = sizeof(int), INIT = TSIZE - 1, END = 0 };
    #define op(x)   --(x)
    #define cond(x) ((x) >= END)
    #endif

    static char *binstr[] = {
        "0000", // 0x0
        "0001", // 0x1
        "0010", // 0x2
        "0011", // 0x3
        "0100", // 0x4
        "0101", // 0x5
        "0110", // 0x6
        "0111", // 0x7
        "1000", // 0x8
        "1001", // 0x9
        "1010", // 0xA
        "1011", // 0xB
        "1100", // 0xC
        "1101", // 0xD
        "1110", // 0xE
        "1111", // 0xF
    };

    int main(void)
    {
        int num, i;
        unsigned char *hex;

        hex = ((unsigned char *) &num);
        while (fscanf(stdin, "%i", &num) != EOF) {
            for (i = INIT; cond(i); op(i))
                printf("%s%s", binstr[hex[i] >> 4], binstr[hex[i] & 0xF]);
            printf("\n");
        }
        return 0;
    }

PD: I only check with little-endian memory organization.

Nice alternative solution, thanks! – ChristopheD Aug 10 '09 at 20:15.

    short int n = 1;  // 0x0001
    int i = 1 << (sizeof(n) * 8 - 1);
    while (i > 0) {
        if (n & i)
            printf("1");
        else
            printf("0");
        i >>= 1;
    }

Well basically I want to convert longs to char * for playing with binary multiplication (where the product is bigger than the long datatype limits). I should probably have included this information in my question. Thanks for taking the time to answer (and the insight)!

– ChristopheD Aug 10 '09 at 20:14.

Yet another option. This simply loops over all bits from the most significant to the least significant and checks if they are set.

    void binary_repr(unsigned long input)
    {
        int i = sizeof(input) * 8 - 1;
        for (; i >= 0; --i) {
            putchar((input & (1UL << i)) == 0 ? '0' : '1');
        }
        putchar('\n');
    }

This doesn't do anything that hasn't already been suggested by other people here. It's just a method that's easier to remember for me.

It's one of the great tragedies of Arabic notation that we put the most significant digit first. Almost all computations are easier when we start with the least significant digit:

    void fprint_binary(FILE *fp, unsigned long n)
    {
        char digits[8 * sizeof(n) + 1];
        char *p = digits + sizeof(digits) - 1;
        *p = '\0';
        unsigned long mask;
        for (mask = 1; mask; mask <<= 1)
            *--p = (n & mask) ? '1' : '0';
        fputs(p, fp);
    }

The comments about representation go double if you ever write code for a Turing machine (exercise for students, not practical).

Nice code, thanks! – ChristopheD Aug 10 '09 at 20:15.

