What properties of sound can be represented / computed in code?

There are two sets of properties. The "Frequency Domain" is the amplitude of each overtone present in a specific sample.

The "Time Domain" -- the sequence of amplitude samples through time. You can, using Fourier Transforms, convert between the two. The time domain is what sound "is" -- a sequence of amplitudes.

The frequency domain is what we "hear" -- a set of overtones and pitches that determine instruments, harmonies, and dissonance. A mixture of the two -- frequencies varying through time -- is the perception of melody.
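Here is a minimal sketch of that conversion, assuming NumPy is available: a tone is synthesized in the time domain, np.fft.rfft moves it into the frequency domain (where the fundamental and its overtone show up as peaks), and the inverse transform recovers the original samples.

    import numpy as np

    sample_rate = 44100                       # samples per second
    t = np.arange(sample_rate) / sample_rate  # one second of timestamps

    # Time domain: a sequence of amplitudes (440 Hz sine plus one overtone).
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

    # Frequency domain: the amplitude of each frequency bin.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    magnitudes = np.abs(spectrum) / len(signal)

    # The two strongest bins are the fundamental and its overtone.
    peaks = np.sort(freqs[np.argsort(magnitudes)[-2:]])
    print(peaks)  # [440. 880.]

    # The inverse transform recovers the original time-domain samples.
    recovered = np.fft.irfft(spectrum)
    assert np.allclose(recovered, signal)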

The Echo Nest has easy-to-use analysis APIs that can tell you just about anything you might want to know about a piece of music. You might find the analyze documentation (warning: PDF link) helpful.

Any and all properties of sound can be represented / computed -- you just need to know how. One of the more interesting techniques is spectral analysis / spectrogramming (see en.wikipedia.org/wiki/Spectrogram).
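As a hedged illustration (SciPy is assumed, and the window length here is arbitrary), a spectrogram is just a sequence of short-window FFTs, which is why it captures frequencies varying through time:

    import numpy as np
    from scipy.signal import spectrogram

    sample_rate = 44100
    t = np.arange(2 * sample_rate) / sample_rate

    # A two-note "melody": 440 Hz for the first second, 660 Hz for the next.
    signal = np.where(t < 1.0,
                      np.sin(2 * np.pi * 440 * t),
                      np.sin(2 * np.pi * 660 * t))

    # freqs: frequency bins, times: window centres, power: energy per bin/window.
    freqs, times, power = spectrogram(signal, fs=sample_rate, nperseg=4096)

    # The dominant frequency per window traces the melody: ~440 then ~660 Hz.
    dominant = freqs[power.argmax(axis=0)]
    print(dominant.round())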

Any properties you want can be measured or represented in code. What do you want? Do you want to test if two samples came from the same instrument?

That two samples of different instruments have the same pitch? That two samples have the same amplitude? The same decay?

That two sounds have similar spectral centroids? That two samples are identical? That they're identical but maybe one has been reverberated or passed through a filter?
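Several of those comparisons reduce to one-line measurements. Here is a rough sketch with NumPy (the pitch estimator is deliberately crude -- just the strongest FFT bin -- and a real system would want something more robust):

    import numpy as np

    def rms_amplitude(samples):
        """Overall loudness of a buffer of samples."""
        return np.sqrt(np.mean(samples ** 2))

    def dominant_pitch(samples, sample_rate):
        """Frequency of the strongest FFT bin -- a crude pitch estimate."""
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)
        return freqs[spectrum.argmax()]

    def spectral_centroid(samples, sample_rate):
        """Amplitude-weighted mean frequency; higher means a 'brighter' timbre."""
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)
        return np.sum(freqs * spectrum) / np.sum(spectrum)

    # Two samples with the same pitch but different timbre: a pure sine
    # and an overtone-rich square wave, both with a 440 Hz fundamental.
    rate = 44100
    t = np.arange(rate) / rate
    sine = np.sin(2 * np.pi * 440 * t)
    square = np.sign(sine)

    print(dominant_pitch(sine, rate), dominant_pitch(square, rate))  # 440.0 440.0
    print(spectral_centroid(sine, rate) < spectral_centroid(square, rate))  # True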

Ignore all the arbitrary human-created terms that you may be unfamiliar with, and consider a simpler description of reality. Sound, like anything else that we perceive, is simply a spatial-temporal pattern, in this case "of movement"... of atoms (air particles, piano strings, etc.). Movement of objects leads to movement of air, which creates pressure waves in our ears, which we interpret as sound.

Computationally, this is easy to model; however, because this movement can be any pattern at all -- from violent random shaking to a highly regular oscillation -- there is often no constant, identifiable "frequency", because the motion is rarely a perfectly regular oscillation. The shape of the moving object, waves reverberating through it, and so on all cause very complex patterns in the air... like the waves you'd see if you punched a pool of water. The problem reduces to identifying common patterns and features of movement (at very high speeds).

Because patterns are arbitrary, you really need a system that learns to classify common patterns of movement (i.e., movement represented numerically in the computer) into various conceptual buckets of some sort.
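A toy sketch of that idea, assuming scikit-learn is available and using two hypothetical stand-in "instruments" (pure tones vs. noisy ones): reduce each clip to a small numeric feature vector, then let a standard classifier learn which bucket a new clip belongs to.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rate = 44100
    t = np.arange(rate // 4) / rate  # quarter-second clips

    def features(samples):
        """Two crude features per clip: loudness and spectral centroid."""
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1 / rate)
        centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
        return [np.sqrt(np.mean(samples ** 2)), centroid]

    def tone(freq, noisy):
        """Hypothetical stand-ins for two instruments: pure vs. noisy tones."""
        clip = np.sin(2 * np.pi * freq * t)
        if noisy:
            clip = clip + 0.3 * np.random.randn(len(t))
        return clip

    # Training data: label 0 = the "pure" instrument, 1 = the "noisy" one.
    X = [features(tone(f, noisy))
         for noisy in (False, True) for f in (220, 330, 440)]
    y = [0, 0, 0, 1, 1, 1]

    model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    print(model.predict([features(tone(550, True))]))  # [1]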
