What represents the most mentally challenging form of coding?

For me, it's threading. Even relatively "simple" threading is challenging, and if you delve into the realms of lock-free code it gets even hairier. There are certainly threading paradigms which don't raise as many mental headaches (actors, message passing etc.), but they tend to come with their own trade-offs.

This is a level of "deep" complexity in my view, but there are other areas of coding which are challenging in different ways. Security, i18n and date/time handling (or pretty much anything related to actual human characteristics) are very finicky, with lots of corner cases to learn and watch out for. This is certainly hard, but in a different way to concurrency.

EDIT: In response to twk's answer: yes, there are lots of people trying to make concurrency easier. While there are already various platforms which support concurrency well (e.g. Erlang), there's more of a move at the moment to bring simpler concurrency to already-mainstream platforms. From my point of view as a .NET developer (well, an amateur/enthusiast .NET developer anyway; professionally Java at the moment), the Parallel Extensions and the Concurrency and Coordination Runtime are the two most interesting recent developments. I don't expect these to make concurrency easy - just feasible for mortals.
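To give a feel for why even "simple" lock-free code is hairy, here is a minimal sketch of a lock-free increment using a compare-and-swap retry loop. It is written in C with C11 atomics purely for illustration (the answer itself is about .NET and Java); nothing in it comes from the original answer.

    /* Minimal lock-free increment sketch: a compare-and-swap retry loop
       using C11 <stdatomic.h>. Build with e.g. gcc -std=c11 -pthread. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int counter;

    static void lock_free_increment(void)
    {
        int observed = atomic_load_explicit(&counter, memory_order_relaxed);
        /* Try to swap in observed+1; if another thread won the race, the CAS
           fails, 'observed' is refreshed with the current value, and we retry. */
        while (!atomic_compare_exchange_weak_explicit(
                   &counter, &observed, observed + 1,
                   memory_order_release,    /* success: publish the new value */
                   memory_order_relaxed))   /* failure: just reload and retry */
        {
            /* spin until the CAS succeeds */
        }
    }

    int main(void)
    {
        lock_free_increment();
        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }

Even this toy forces you to decide which memory orderings are safe and whether the weak CAS's spurious failures matter - exactly the kind of detail that makes real lock-free structures so much harder than locked ones.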

If you think you understand threading - you don't ;) – ChrisF Feb 28 '09 at 21:30

Strong" artificial intelligence and quantum computing would be my votes for equivalence with nanotechnology.

Strong artificial intelligence is in another ballpark. Weak AI would be much closer to nanotechnology imo. – Simucal Feb 28 '09 at 21:16
Upon further thought, you're right. Weak AI and nanotech exist; Strong AI is still theoretical. – Dave Swersky Feb 28 '09 at 21:21

Natural language processing (NLP) is another field that should probably be mentioned.

HTML/CSS. Okay, it's not exactly coding, but man is it a headache.

It's not even coding. – Thomas Feb 28 '09 at 20:50
@Welbog: Hmm, not a very productive comment. Have you tried doing complex layouts that are supported across all modern browser/OS combos? If you haven't, I'd suggest you hold judgment until you've tried it. – TJB Feb 28 '09 at 20:50
He may not be able to help it... read his user page bio... – Dave Swersky Feb 28 '09 at 20:50
Bah, it can become a headache for smart people too in some circumstances, especially if legacy browsers like IE6 need to be supported. The original answer was kinda funny, nanotechnology vs HTML/CSS. – Jonik Feb 28 '09 at 20:51
@Thomas - I'm not sure what the exact definition of coding is, but it does involve a language that is 'executed' in a way by the browser. Like I said, it's not exactly coding. – TJB Feb 28 '09 at 20:52

I sure hope there are some people out there busy making it easy to use more processor cores automatically. Sure, there is stuff like Intel Threading Building Blocks or even languages like Erlang, but I hope we see a lot more progress on that front in the next 10 years.
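As a hedged sketch of what "using more cores automatically" can already look like (using OpenMP here rather than the Intel library mentioned above, and C purely for illustration), a single pragma is enough to spread a loop across the available cores:

    /* Parallel sum of a large array: OpenMP splits the loop across cores and
       the reduction clause safely combines the per-thread partial sums.
       Build with e.g. gcc -std=c11 -fopenmp. */
    #include <stdio.h>

    #define N 1000000

    static double data[N];

    int main(void)
    {
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            data[i] = i * 0.5;
            sum += data[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }

Without the compiler flag the pragma is simply ignored and the loop runs serially, which is part of the appeal of this style.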

At least locally, the hot field is informatics, especially bioinformatics. Collecting, assembling, associating, and analyzing information from many sources and deriving additional information from the collection represents the strongest connection of computer science to other activities. It's the newest degree offered by our CS department and is drawing the most graduate students, from many disciplines, into CS classes.

I don't know how mentally challenging it is, but being new there are a lot of ideas out there that haven't been thought of, let alone investigated. Coming up with original ideas is pretty challenging, though the actual programming part behind them probably isn't.

I concur with Jon Skeet that threading/parallel execution is a hot topic, and that it is opening up new fields and out-of-the-box technologies such as Transactional Memory. It's an open field for new concepts.
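As a sketch of the transactional-memory idea (not any specific product the answer had in mind; this uses GCC's experimental -fgnu-tm extension, which may not be available on every toolchain), a marked block either commits all of its writes or none of them, with no explicit locks in the source:

    /* Transactional-memory sketch using GCC's experimental -fgnu-tm support
       (requires a GCC build with libitm). The block behaves atomically with
       respect to other transactions: both balances change, or neither does. */
    static long account_a = 100;
    static long account_b = 0;

    void transfer(long amount)
    {
        __transaction_atomic {
            account_a -= amount;
            account_b += amount;
        }
    }

The attraction is the same as with database transactions: you declare what must happen atomically and let the runtime worry about conflicts and retries.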

Embedded firmware development.

"Right now, many talented people in my area are headed towards nanotechnology. What is the equivalent field in modern computer science?"

Again, embedded firmware development. And for any desktop or internet developers: if you don't think embedded coding is far more difficult than any other type of coding, then you don't know embedded. I can do anything a desktop or internet programmer can do, but it's very doubtful that most, if any, desktop/internet programmers can do embedded well, or at all.

Case in point: I've got a bug I'm working on right now where, after 4 bytes are received successfully on a serial communications port (interrupt-driven reception), successive bytes don't cause the UART to generate an interrupt (as they should). So, Windows and Internet programmers - what would you consider the most likely causes? What would you do?

Oh, I forgot - Windows and Internet programmers don't have full and direct access to the hardware, so they've probably never even experienced such a situation. In the embedded development world you can't say "it's the hardware" and leave it at that - you are responsible for making it work yourself, all the way from the UI down to the power supply (software, firmware, hardware: it's all the responsibility of the embedded developer). This is a greater challenge than any Windows or Internet programmer can encounter.

RogerD: it's greater, unless they've encountered it before in their pre-Windows life. In that case, they'll suggest you check why you aren't re-enabling interrupts on the UART after processing four bytes. – John Saunders Dec 29 '09 at 22:38.
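To make the bug discussion concrete, here is a hypothetical sketch of the kind of receive ISR involved. Every register and macro name below is invented for illustration (real names depend on the particular MCU); the two classic causes hinted at above are failing to drain the receive FIFO and leaving the handler without the receive interrupt enabled.

    /* Hypothetical UART receive ISR sketch - register names are made up.
       On many parts, failing to drain the receive FIFO or to leave the RX
       interrupt enabled means no further interrupts after a few bytes. */
    #include <stdint.h>

    #define RX_BUF_SIZE 64

    /* Hypothetical memory-mapped registers, for illustration only. */
    #define UART_STATUS_REG   (*(volatile uint8_t *)0x4000u)
    #define UART_RX_REG       (*(volatile uint8_t *)0x4001u)
    #define UART_INT_ENABLE   (*(volatile uint8_t *)0x4002u)
    #define RX_READY_BIT      0x01u
    #define RX_INT_ENABLE_BIT 0x01u

    static volatile uint8_t rx_buf[RX_BUF_SIZE];
    static volatile uint8_t rx_head;

    void uart_rx_isr(void)
    {
        /* Drain everything the UART has buffered; leaving bytes behind can
           stop a level-triggered receive interrupt from ever firing again. */
        while (UART_STATUS_REG & RX_READY_BIT) {
            rx_buf[rx_head] = UART_RX_REG;   /* reading usually clears RX_READY */
            rx_head = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
        }

        /* The check suggested in the comment above: make sure the receive
           interrupt is still (re-)enabled when the handler returns. */
        UART_INT_ENABLE |= RX_INT_ENABLE_BIT;
    }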

