You would usually extract features from the data and feed those to the network. It is not advisable to take raw data and feed it straight to the net. In practice, pre-processing and choosing the right features decide your success and the performance of the neural net.

Unfortunately, IMHO it takes experience to develop a sense for that, and it's not something one can learn from a book. Summing up: "Garbage in, garbage out."
What about the case where you want the neural network to extract the features and feed them to another network for classification/interpretation? Or you want the network to learn a grammar from a set of examples? In both these cases the network would need to process variable-length data sets. – Jeremy E Nov 19 '09 at 21:55

There are times when you want a bidirectional associative memory and the sizes of the items to associate are different (name of person, picture of person). – Jeremy E Nov 19 '09 at 21:57
Some problems can be solved by a recurrent neural network. For example, it is good for calculating parity over a sequence of inputs. A recurrent neural network for calculating parity would have just one input feature.

The bits can be fed into it over time, and its output is also fed back to the hidden layer. That allows it to learn parity with just two hidden units.

A normal feed-forward two-layer neural network would require 2**sequence_length hidden units to represent the parity. This limitation holds for any architecture with just two layers (e.g., an SVM).
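To make the two-hidden-unit claim concrete, here is a hand-weighted sketch (the weights are chosen by hand rather than learned, and the unit names are illustrative) of a recurrent net with one input, two threshold hidden units, and an output fed back as state:

```python
def rnn_parity(bits):
    """Compute parity of a bit sequence with a tiny recurrent net.

    One input per time step; the previous output is fed back as state.
    Two threshold hidden units implement XOR(bit, state).
    """
    state = 0
    for b in bits:
        h1 = 1 if b + state >= 1 else 0  # fires if at least one input is on (OR)
        h2 = 1 if b + state >= 2 else 0  # fires only if both are on (AND)
        state = 1 if h1 - h2 >= 1 else 0  # OR minus AND gives XOR
    return state
```

Because the state carries the running parity, the network's size is fixed no matter how long the input sequence is, which is exactly what the feed-forward architecture cannot do.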
– Jeremy E Apr 22 '10 at 17:07

It is more similar to a neural network with some output fed to the next input. Unimportant history will be forgotten over time. – Ivo Danihelka Apr 22 '10 at 18:23
I'm not entirely sure, but I'd say: use the maximum number of inputs. For example, for words, say no word will be longer than 45 characters (the longest word found in a dictionary, according to Wikipedia), and if a shorter word is encountered, set the remaining inputs to a whitespace character. With binary data, set them to 0. The only problem with this approach is if an input padded with whitespace characters/zeros/whatever collides with a valid full-length input (not so much a problem with words as it is with numbers).
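The padding idea above can be sketched in a few lines; the function name and the 45-character default are just illustrative choices, not part of any particular library:

```python
def pad_input(word, max_len=45, pad_char=" "):
    """Fix the input length: pad short words with a filler character,
    truncate anything longer than max_len."""
    return word[:max_len].ljust(max_len, pad_char)
```

Every padded string then maps onto the same fixed number of input units, at the cost of the collision risk mentioned above (a word ending in real whitespace is indistinguishable from padding).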
I guess one way to do it is to add a temporal component to the input (a recurrent neural net) and stream the input to the net a chunk at a time, basically creating the neural-network equivalent of a lexer and parser. This would allow the input to be quite large, but would have the disadvantage that there would not necessarily be a stop symbol to separate different sequences of input from each other (the equivalent of a period in sentences).
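One way around the missing-separator problem is to append an explicit stop symbol before chunking. A minimal sketch, assuming a sentinel character (`"\x00"` here is an arbitrary choice) that never occurs in the real data:

```python
END = "\x00"  # assumed stop symbol; must not appear in the input itself

def stream_chunks(sequence, chunk_size=4):
    """Yield the input a fixed-size chunk at a time, terminated by an
    explicit stop symbol so consecutive sequences stay separable."""
    data = sequence + END
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]
```

The network (or whatever consumes the chunks) can then reset its state whenever it sees `END`, which plays the role of the period in a sentence.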