Neither; I suggest ISO C. ANSI has not been the standards body for C since 1990, and ISO C90 should be ubiquitous.

Depending on the capabilities of the target platform, the accompanying standard library may be a subset, or may require some stubs to be implemented in order to port it to a particular target. The Newlib C library, often used in embedded systems built with GCC, for example, requires basic I/O stubs (not needed here, since you have specified that no I/O takes place) and an implementation of sbrk(). sbrk() provides memory to the heap allocator; it requires no OS, or it can request memory from one.

Whether to write an abstraction layer that encapsulates an underlying OS and CPU? Given the restrictions you have imposed on this library design, it seems there are no OS issues.

Should this abstraction layer include its own memory manager (malloc/free routines), or do all platforms today already provide one? Answered in (1): provide them as part of the standard library, or use the existing implementation for the platform.

In the end, what you need to do is write your library against C90 and the standard library; then it is simply a matter of porting the standard library to the target, if that has not been done already.
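To make the Newlib point concrete, here is a minimal sketch of the sbrk() stub for a bare-metal target with no OS. It assumes the linker script exports an `end` symbol marking the first free address after .bss and that the stack grows down toward an upward-growing heap; the exact stub name and prototype (`_sbrk` vs `sbrk`, `int` vs `ptrdiff_t`) depend on how Newlib was configured, so treat this as a template rather than the definitive interface.

```c
/* Minimal _sbrk() sketch for Newlib on a bare-metal target (no OS).
 * Assumes the linker script provides `end` (first free RAM after .bss)
 * and that the stack grows down from the top of RAM toward the heap. */
#include <errno.h>

extern char end;            /* provided by the linker script (assumption) */
static char *heap_end = 0;  /* current top of the heap */

void *_sbrk(int incr)
{
    char *prev;

    if (heap_end == 0)
        heap_end = &end;
    prev = heap_end;

    /* Crude collision check: the address of a local variable is used as
       an approximation of the current stack pointer. */
    if (heap_end + incr > (char *)&prev) {
        errno = ENOMEM;
        return (void *)-1;
    }
    heap_end += incr;
    return prev;
}
```

On a target that does have an OS, the same stub would instead forward the request to the OS's memory service.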
In reverse order:

(3). You need your own memory management routines, if for no other reason than that you expect the library to be ported to a bare platform without an underlying OS. You then also need an implementation of pretty much everything else: string routines, math routines, and so on. You have to either code them yourself or find ready-made libraries that provide them for each platform.
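As one illustration of "your own memory management routines", here is a minimal C90-style sketch in which the host application hands the library a fixed pool at start-up. The mylib_* names are purely illustrative, not an existing API:

```c
/* Library-private allocator for a bare platform: the caller supplies a
 * fixed memory pool, and the library hands pieces of it out. */
#include <stddef.h>

static unsigned char *pool_base;  /* start of the caller-supplied pool */
static size_t pool_size;          /* total size of the pool */
static size_t pool_used;          /* bytes handed out so far */

void mylib_mem_init(void *pool, size_t size)
{
    pool_base = pool;
    pool_size = size;
    pool_used = 0;
}

/* Simple bump allocator: no free(), adequate for init-time allocations. */
void *mylib_alloc(size_t n)
{
    void *p;

    n = (n + 7u) & ~(size_t)7u;   /* 8-byte alignment (assumption) */
    if (pool_used + n > pool_size)
        return NULL;
    p = pool_base + pool_used;
    pool_used += n;
    return p;
}
```

A real implementation would add a free list and configurable alignment, but even this much is enough on a bare platform that has no malloc() at all.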
(2). The OS and CPU are somewhat orthogonal variables. You would probably have an easier time creating two abstraction layers (one for different operating systems, one for different hardware platforms) and then including or overriding definitions as necessary for each new platform. But yes, an abstraction layer is a more manageable solution than riddling your code with whole hordes of #ifdefs; a sketch of what such a split might look like follows.
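For instance, the porting surface could be confined to a single header, with one implementation file per OS and one header per CPU. Every name below (files, macros, functions) is a hypothetical sketch, not an existing convention:

```c
/* mylib_port.h -- hypothetical porting layer, split along the two axes
 * described above: OS services and CPU/hardware specifics. */
#ifndef MYLIB_PORT_H
#define MYLIB_PORT_H

#include <stddef.h>

/* --- OS abstraction: one implementation file per operating system --- */
void  mylib_os_lock(void);      /* no-op on bare metal, mutex on an RTOS */
void  mylib_os_unlock(void);
void *mylib_os_alloc(size_t n); /* malloc() where it exists, pool otherwise */
void  mylib_os_free(void *p);

/* --- CPU/hardware abstraction: one header per architecture ---------- */
#if defined(MYLIB_CPU_CORTEX_M)
#  include "port/cpu_cortex_m.h"   /* alignment, endianness, barriers */
#elif defined(MYLIB_CPU_X86)
#  include "port/cpu_x86.h"
#else
#  error "unsupported CPU: add a port/cpu_*.h for it"
#endif

#endif /* MYLIB_PORT_H */
```

The #ifdefs still exist, but they are confined to this one header rather than scattered through the library code.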
(1). This is not an easy question to answer. For example, if you expect your library to run on an embedded system or even a microcontroller, it is quite probable that not all ANSI C features are available, depending on the maturity of the development tools. There can also be restrictions not necessarily related to the language itself: hardware floating-point units are relatively rare in embedded systems, stack sizes may be limited, and so on. I suggest that you survey the platforms you are interested in and try to select a common subset.

(0). You may very well find, from an economic (or even a feasibility) point of view, that it is preferable to develop for a rather rich subset that is supported on your most common target platforms, and to refactor your code if you encounter a new platform. Trying to restrict yourself to the most common subset could essentially cripple both your development effort and the effectiveness of your library on slightly more capable systems.

(-1). You should realise, from the scarcity (or even complete lack) of libraries with your required level of portability, that what you want to achieve is not going to be easy. Be prepared!
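One way to reconcile (1) and (0) is to gate the few features that need an FPU or a full libm behind a compile-time switch, so capable platforms get the rich subset while constrained ones still build. MYLIB_HAS_FLOAT and the function names here are illustrative assumptions only:

```c
/* Sketch of keeping the "rich subset" optional: ports without hardware
 * floating point define MYLIB_HAS_FLOAT as 0 and get an integer API. */
#ifndef MYLIB_HAS_FLOAT
#define MYLIB_HAS_FLOAT 1
#endif

#if MYLIB_HAS_FLOAT
double mylib_scale(double x, double factor);
#else
/* Fixed-point fallback: values carried as thousandths (assumption). */
long mylib_scale_milli(long x_milli, long factor_milli);
#endif
```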
There are lots of different styles of system architecture, and writing something non-trivial that is easily portable to all of them is probably not possible. I suggest, though, that you write your interfaces so that they take a pointer to a structure containing all of the system functions that are needed. You could do this either in every call or just once in an init routine.

This way you confine the places where the code differs between architectures to the code that fills in that structure of function pointers. It also lets you easily test how the library handles failure of one or more of these routines by altering just the structure: to test proper handling of malloc failure, you simply replace the malloc pointer in the struct with one that fails. You might also wrap the system functions on a particular platform with functions that present the interface your library expects, and that at least partially handle errors (for example, translating to/from errno).
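A minimal sketch of that pattern, using illustrative mylib_* names rather than an existing API:

```c
/* The library receives all system services through a table of function
 * pointers at init time; only the code that fills in this table differs
 * between platforms. */
#include <stddef.h>

struct mylib_sys {
    void *(*alloc)(size_t n);    /* memory allocation */
    void  (*release)(void *p);   /* memory release */
    long  (*ticks)(void);        /* monotonic time source, if any */
};

static struct mylib_sys sys;     /* copy kept by the library */

int mylib_init(const struct mylib_sys *s)
{
    if (s == NULL || s->alloc == NULL || s->release == NULL)
        return -1;               /* reject an incomplete table */
    sys = *s;
    return 0;
}

/* Inside the library, every allocation goes through the table. */
void *mylib_internal_alloc(size_t n)
{
    return sys.alloc(n);
}
```

Testing malloc-failure handling is then just a matter of passing a table whose alloc member always returns NULL.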