I've written and maintained 30 embedded products on a variety of target micros (including MSP430s). The "rules of thumb" I have been most successful with are:
- Modularize generic concepts as much as possible. It makes for easier maintenance and for reuse/porting of a project to another target micro in the future (e.g. separate driver code from application code; a minimal sketch of that split follows below).
- DO NOT start by worrying about optimized code at the very beginning. Try to solve the domain's problem first and optimize second. Your target micro can handle a lot more "stuff" than you might expect.
- Work to ensure readability. Although most embedded projects seem to have short development cycles, the projects often live longer than you might expect, and another developer will undoubtedly have to work with your codebase.
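As a minimal sketch of that driver/application split (the file and function names here are purely illustrative), the application only ever sees a small, target-independent interface:

    /* adc_driver.h -- illustrative driver interface; the application
     * includes only this header and never touches peripheral registers. */
    #ifndef ADC_DRIVER_H
    #define ADC_DRIVER_H
    #include <stdint.h>

    void     adc_init(void);
    uint16_t adc_read(uint8_t channel);

    #endif

    /* app_thermostat.c -- application logic stays portable; only
     * adc_driver_msp430.c (not shown) knows about the actual hardware. */
    #include "adc_driver.h"

    uint16_t read_temperature_raw(void)
    {
        return adc_read(0);  /* which channel to use is application policy */
    }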
I've worked on 8-bit PIC processors with similar limitations. One restriction you don't have is on how many comments you write or what you choose to name your methods, variables, etc. Take advantage of that. Speed and size constraints do sometimes trump organization, but you can always explain.
Another tip is to break up a logical source file into even more pieces than you need, then bind them by #include-ing them in a compilation unit. This allows you to have lots of reusable code (even one routine per file) but combine it in whatever order you need. This is useful, e.g., when trying to meet compilation unit size restrictions, or to pick and choose which common subroutines you need on the next project.
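For example, a single compilation unit might be assembled like this (the file names are made up for illustration); each included file holds one small, reusable routine and you pick only the ones this project needs:

    /* app_build.c -- hypothetical aggregate compilation unit */
    #include "crc16.c"      /* one routine per file */
    #include "ringbuf.c"
    #include "uart_poll.c"
    /* e.g. leave out "uart_irq.c" on a project that only polls */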
It's important to note that with some embedded micros/compilers, it's advantageous to combine multiple files into one big #included mess, while with others, it's best to split them into smaller pieces. I wish the compilers I used would give somewhat better control of memory organization without having to artificially rearrange the source code. – supercat Feb 2 at 5:47.
I try to organize it as if I had unlimited RAM and ROM, and it usually works out fine. As mentioned elsewhere, do not try to optimize it until you absolutely need to. If you can get a pin-compatible processor that has more resources, it's better to get it working on that, concentrating on good structure and layout, then optimize for size later when you understand the code better.
I've worked with some sensor platforms like the Tmote Sky, and I too have seen poor organization; I have to admit I have contributed to it. That said, some compromise is unavoidable, because loading in too many modules or too many layers of program structure can itself kill your resources, so try to be aware of the threshold between organization and usability when resources are this low. Obviously this doesn't mean letting chaos take over, but, for example, take a look at how the TinyOS source code and applications are organized; it gives an idea of what I'm trying to say.
Although it is a bit painful, one organization technique that is somewhat common with embedded C libraries is to split every single function and variable into a separate C source file, and then aggregate the resulting collection of object files into a library file. The motivation for doing this is that for most normal linkers the unit of linkage is an object file: for every object you either get the whole thing or none of it. Since there is a 1-to-1 relationship between C files and object files, putting each symbol in its own C file gives each one its own object.
This in turn lets the linker pull in only the subset of functions and variables that are actually used. This sort of game doesn't help at all for headers; they can happily be left as single files.
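A rough sketch of the idea, assuming a conventional toolchain (the file names and the CRC routine are just an example):

    /* crc16.c -- exactly one public symbol in this file, so the linker
     * can pull crc16() from the library without dragging in anything else. */
    #include <stdint.h>

    uint16_t crc16(const uint8_t *p, uint16_t len)
    {
        uint16_t crc = 0xFFFF;
        while (len--) {
            crc ^= *p++;
            for (uint8_t i = 0; i < 8; i++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xA001u : (crc >> 1);
        }
        return crc;
    }

    /* Build sketch (commands are illustrative):
     *   cc -c crc16.c ringbuf.c uart_poll.c
     *   ar rcs libutil.a crc16.o ringbuf.o uart_poll.o
     * Linking against libutil.a then pulls in only the object files
     * whose symbols are actually referenced. */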
When writing libraries that are generic/reused, this is definitely important. – Toybuilder Oct 21 '08 at 20:33
I'm not sure about all compilers, but the IAR linker only includes functions that are actually called, while leaving out uncalled ones. No need to butcher your C files. – Kyle Heironimus Oct 22 '08 at 2:37
You can massage newer versions of gcc into doing this as well, but it requires changing flags on both the compiler and linker, and if you have a link script that will require modifications as well. – tolomea Oct 22 '08 at 11:10
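For reference, the GCC route that comment alludes to is section-based garbage collection; a minimal sketch (the flags are the standard GCC/GNU ld ones, the file is made up):

    /* gc_demo.c -- with -ffunction-sections each function gets its own
     * section, and the linker's --gc-sections pass discards unused ones:
     *   gcc -ffunction-sections -fdata-sections -Os -c gc_demo.c
     *   gcc -Wl,--gc-sections gc_demo.o -o gc_demo
     */
    #include <stdio.h>

    int unused_helper(int x) { return x * 42; }  /* section dropped at link time */

    int main(void)
    {
        puts("only the code main() reaches survives the link");
        return 0;
    }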
Except under exceptional circumstances (see the note below), the organisation of your code will have no impact on the final product (the contents of the code are obviously a different matter). So with that in mind, you should organise your code as you would any other project. With that said, the following are fairly typical: If this is a processor that you've worked on before, or will be working on in the future, you will usually want to keep a dedicated hardware abstraction layer (HAL) that can be shared between projects in the future.
Typically this module would contain items like routines for managing any UARTs, timers, etc. It's usually reasonable to maintain a set of platform-specific code for initialisation and setup that performs all of the configuration and initialisation up to the point where your executive takes over and runs your application. It will also include the platform-specific HAL routines. The executive/application is probably maintained as a separate module.
All of the hardware-specific code should be hidden in the HAL (as mentioned above). By splitting your code up like this you also have the option of compiling and running your application as a simulation, on a completely different platform, just by replacing the hardware-specific code with routines that mimic the hardware. This can be good for unit testing and for debugging algorithmic problems you might have.
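As a sketch of what that swap can look like (the HAL names below are invented for the example), the application is written against a small interface, and the simulation build just links a different implementation:

    /* hal_uart.h -- hypothetical HAL interface shared by all builds */
    #ifndef HAL_UART_H
    #define HAL_UART_H
    #include <stdint.h>

    void    hal_uart_init(uint32_t baud);
    void    hal_uart_putc(uint8_t c);
    int16_t hal_uart_getc(void);   /* returns -1 if no byte is waiting */

    #endif

    /* hal_uart_sim.c -- simulation stub for building on a PC;
     * hal_uart_msp430.c (not shown) would touch the real registers instead. */
    #include <stdio.h>
    #include "hal_uart.h"

    void    hal_uart_init(uint32_t baud) { (void)baud; }
    void    hal_uart_putc(uint8_t c)     { putchar(c); }
    int16_t hal_uart_getc(void)
    {
        int c = getchar();
        return (c == EOF) ? -1 : (int16_t)c;
    }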
Note: exceptional circumstances might be imposed by unusual compiler restrictions. E.g. I've come across some compilers that expect all interrupt service routines to be compiled within a single object file.