Return to Romy the Cat's Site

Audio Discussions
Topic: Further on the subject of difference frequency measurements...

Page 1 of 1 (11 items)


Posted by decoud on 03-02-2008
It seems there is broad agreement here with what I take to be Romy's position, which is that one cannot have a sophisticated understanding of music reproduction if one does not have a sophisticated knowledge of music. Indeed, it ought to be obvious that for the person whose only exposure to live music is pop concerts - where everything is crudely amplified - "realism" will be only a matter of loudness. But knowledge of live music is not sufficient: one also needs to have a deep understanding of the possibilities of reproduction, which - as this site illustrates - is hard to achieve without long and arduous exploration.

Now all of this would be easier if one had some better measures by which the fidelity of reproduction could be gauged - not as a substitute for listening but as an adjunct. The various frequency-domain measures commonly quoted are not very useful because they assume that the sound is stationary, which of course it is not. The only non-stationary test one tends to see - the response to a step function - is of course a very artificial thing.

So, what we need is a method for analysing non-stationary, non-linear, time-varying signals. The best of these is probably empirical mode decomposition.  Romy, would you be persuaded to give it a try? (there is a version for MATLAB here http://perso.ens-lyon.fr/patrick.flandrin/emd.html) I do not for a minute believe it would be as good as a human ear, but it may be interesting to show that it can capture aspects of "reality" missed by crude things such as bandwidth and distortion values.

Posted by Romy the Cat on 03-03-2008

 decoud wrote:
It seems there is broad agreement here with what I take to be Romy's position, which is that one cannot have a sophisticated understanding of music reproduction if one does not have a sophisticated knowledge of music.

That was never my position, sorry. I never stressed any “knowledge of music”, not least because I do not have this knowledge myself. So what you said is your perception, not my expression.

 decoud wrote:
Now all of this would be easier if one had some better measures by which the fidelity of reproduction could be gauged - not as a substitute for listening but as an adjunct. The various frequency-domain measures commonly quoted are not very useful because they assume that the sound is stationary, which of course it is not. The only non-stationary test one tends to see - the response to a step function - is of course a very artificial thing.

So, what we need is a method for analysing non-stationary, non-linear, time-varying signals. The best of these is probably empirical mode decomposition.  Romy, would you be persuaded to give it a try? (there is a version for MATLAB here http://perso.ens-lyon.fr/patrick.flandrin/emd.html) I do not for a minute believe it would be as good as a human ear, but it may be interesting to show that it can capture aspects of "reality" missed by crude things such as bandwidth and distortion values.

Sounds interesting, or at least entertaining. I have no personal experience with EMD, and even if I had, I doubt I have the expertise to turn the results into some kind of useful conclusions. I generally do not use any of the contemporary speaker design and evaluation techniques. All those TEF analyses and maximum-length-sequence system analyses are fine, but I do not do them. It is not that I am against time-domain metrics; I have just never needed them, and my thinking about sound does not incorporate any time-domain analysis. You would probably need to find someone more intelligent on the subject even to support this conversation…

Do not forget, decoud, that none of the methods of mathematical approximation of reality measure actual Reality. The time-domain metrics, the decomposition method and anything else measure just a reflection of Reality, in the terms of a language for describing Reality, nothing more. It is a powerful tool… for those who need it. However, the adoption of this language to describe Reality has some intrinsic problems: in my view, a person then no longer deals with Reality but with a semi-scientific surrogate of Reality.

Rgs, Romy the caT

Posted by decoud on 03-03-2008
Apologies: by knowledge of music I meant familiarity with the experience of music rather than declarative knowledge of musical theory or practice. I meant to contrast it with being familiar only - or mostly - with music already processed by an amplifier and reproduced by a loudspeaker, which is the condition that most people find themselves in (how many people  listen to much more live music than reproduced music?).

One might argue that what the majority hears as their "reality" is therefore the reproduction and not the real thing. To get closer to the actual reality these people need to be guided by something other than their ears: hence the idea of using a measure that might get at what *you* hear, even if - as you say - it can only be an imperfect surrogate.

But perhaps a better way forward is just to drag them out of their cave and into the sunlight of live music.

Best, D

Posted by N-set on 03-04-2008
I second the need to go beyond the 19th-century Fourier transform to describe musical events. With all Romy's reservations (you could apply the same reasoning to all the other languages we use, e.g. that of musical expression itself), I'd be very happy to see some new language for talking about music reproduction. Are you familiar with wavelets? Are they useful here?


Best,
N-set

Posted by decoud on 03-04-2008
I suspect wavelets would be much better than a Fourier transform and rather less good than EMD. The reason is that since we do not know which features of the signal are decisive in perceiving it as real, a signal-adaptive technique such as EMD is more likely to capture them than a decomposition onto a set of arbitrarily chosen basis functions, as in wavelet analysis. The beauty of EMD is that the decomposition is determined by the signal itself and so is highly efficient at extracting its essential features.
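To make the idea concrete for anyone curious, the sifting procedure at the heart of EMD can be sketched in a few dozen lines. This is a rough sketch in Python (NumPy/SciPy), not Flandrin's MATLAB toolbox: the stopping criteria and envelope end-conditions are deliberately simplified.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iters=10):
    """Extract one intrinsic mode function (IMF): repeatedly subtract the
    mean of the upper/lower cubic-spline envelopes of the local extrema."""
    h = x.copy()
    for _ in range(n_iters):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build sensible envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

def emd(x, t, max_imfs=5):
    """Decompose x into IMFs plus a residue; the decomposition is driven
    entirely by the signal itself, with no fixed basis functions."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        if len(argrelextrema(residue, np.greater)[0]) < 4:
            break  # residue is (nearly) monotonic: stop
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

# Two-tone example: sifting should pull the faster oscillation out first.
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*40*t)
imfs, residue = emd(x, t)
```

By construction the IMFs plus the residue sum back to the original signal, so the decomposition can serve for synthesis as well as analysis; a serious implementation (e.g. Flandrin's) adds proper boundary handling and a principled sifting stop criterion.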

Of course, I have no idea how it would work out in practice: my only experience of these things is in analysing neurophysiological signals, but it is not a wholly dissimilar situation in that we generally have no idea about the relation between the signal and what actually goes on in the subject's brain.

Best, D


Posted by el`Ol on 03-05-2008
I don't know whether the audio industry's products are so sophisticated that they need mathematical steam hammers to improve them. In a German forum there is a discussion about a diploma thesis from the TU Berlin, where the student found in psychoacoustic tests that the subtraction frequency of the intermodulation products is the non-linearity that interferes most with perceived sound quality, and built a valve amplifier that is optimized in this respect.

Quote:

"The "BLACK CAT 2" has a difference-tone factor of 0.002% and a total harmonic distortion of 0.033%, while the Telefunken HA990, as an example of a commercially available HiFi transistor amplifier, has a difference-tone factor of 0.366 (183 times that of the BC2) and a total harmonic distortion of 0.0048."

What can be criticised is that he compares his amp with a quite old mass-market product, but he says it is representative of what average people have in their living rooms. However, finding that the difference-frequency factor of this mass-market product is two orders of magnitude above its THD, whereas in his amp it is one order of magnitude below, is quite shocking.
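The arithmetic in the quote is easy to check (a quick sanity check in Python; the figures are taken exactly as printed, mixed units and all):

```python
# Figures as printed in the quoted diploma-thesis comparison.
black_cat_2 = {"df": 0.002, "thd": 0.033}       # difference-tone factor, THD
telefunken_ha990 = {"df": 0.366, "thd": 0.0048}

# The quoted "183 times that of the BC2" for the difference-tone factor:
df_ratio = telefunken_ha990["df"] / black_cat_2["df"]
print(round(df_ratio))                                   # 183

# Difference-tone factor vs THD: roughly two orders of magnitude
# above for the mass product, about one order below for the BC2.
print(telefunken_ha990["df"] / telefunken_ha990["thd"])  # ~76x
print(black_cat_2["thd"] / black_cat_2["df"])            # ~16.5x
```

So "two orders of magnitude" is a slight overstatement (the ratio is ~76), but the qualitative point stands: the two figures of merit rank the amplifiers in opposite ways.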

Posted by N-set on 03-05-2008
 decoud wrote:
I suspect wavelets would be much better than a fourier transform and rather less good than emd. The reason is that since we do not know what features of the signal are decisive in perceiving it as real, a signal-adaptive technique such as emd is more likely to capture them than a decomposition based on a set of arbitrarily chosen functions such as wavelet analysis. The beauty of emd is that the decomposition is determined by the signal itself and so is highly efficient at extracting its essential features.


Could you please briefly explain the idea? Is it something you can use only for analysis, or for synthesis as well? Apologies for my laziness, but I really have no time at the moment to read any papers beyond those I have to.
Thanks!
N-set

Posted by Paul S on 03-05-2008
el 'Ol, I find this sort of experiment very interesting.  There have been a few such studies that attempt to determine what most annoys us in sound and/or music, and some then attempt to rectify the isolated problems by manipulating selected data into "optimal" types, ranges and levels.  The thing that sets high-end audio apart, I think, is that listeners continue listening over time, becoming more sensitive to other factors not "dealt with" and/or not "fixed".

I don't know if it is appropriate to mention here, but Lamm is supposed to have incorporated such thinking into his designs, which may or may not explain the way they affect the sound they produce, although I believe that it does.

One thing people often seem to fail to consider when modeling "corrective" processes for audio is the overall end quality of the "target" sound with respect to music.

Best regards,
Paul S

Posted by Andy Simpson on 02-08-2009
 el`Ol wrote:
I don't know whether the audio industry's products are so sophisticated that they need mathematical steam hammers to improve them. In a German forum there is a discussion about a diploma thesis from the TU Berlin, where the student found in psychoacoustic tests that the subtraction frequency of the intermodulation products is the non-linearity that interferes most with perceived sound quality, and built a valve amplifier that is optimized in this respect.

Quote:

"The "BLACK CAT 2" has a difference-tone factor of 0.002% and a total harmonic distortion of 0.033%, while the Telefunken HA990, as an example of a commercially available HiFi transistor amplifier, has a difference-tone factor of 0.366 (183 times that of the BC2) and a total harmonic distortion of 0.0048."

What can be criticised is that he compares his amp with a quite old mass-market product, but he says it is representative of what average people have in their living rooms. However, finding that the difference-frequency factor of this mass-market product is two orders of magnitude above its THD, whereas in his amp it is one order of magnitude below, is quite shocking.


It is quite shocking and I would like to hear about the mechanism involved....

I realise that my reply is late here - but do you have any further info on this?

This type of test is more commonly called DFD (difference frequency distortion) and is sometimes used in microphone measurement to measure acoustic/mechanical non-linearity.

I have been up to my eyes in exactly this kind of test with my microphones in order to prove the improvement in linearity and am finding that the microphone capsule distortion in the average studio condenser mic is often hugely worse than any amplifier at musical SPLs.

For my work, this test involves a pair of loudspeakers, each cranking out a sine wave at 110dB SPL. The two sine waves differ in frequency by 200Hz, and the microphone output is measured for the amplitude of a 'difference frequency' product at 200Hz. Since each speaker carries only one of the two tones, this product cannot have been introduced by distortion in the speakers, so the test is reliable.
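This two-tone setup is easy to mimic numerically. The sketch below (Python; the square-law coefficient is an arbitrary assumption standing in for a hypothetical capsule or amplifier nonlinearity, not a measured value) pushes two equal-level tones 200 Hz apart through a weak second-order term and reads off the level of the 200 Hz difference product:

```python
import numpy as np

fs = 48000                       # sample rate (Hz)
n = fs                           # one second of signal: 1 Hz bin spacing
t = np.arange(n) / fs
f1, f2 = 5000.0, 5200.0          # two equal-level tones, 200 Hz apart
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

a2 = 0.01                        # hypothetical second-order nonlinearity
y = x + a2 * x**2                # device under test (a sketch, not a model)

# Amplitude spectrum; the tones land on exact FFT bins, so no window needed.
spec = np.abs(np.fft.rfft(y)) / (n / 2)
bin_of = lambda f: int(round(f * n / fs))

diff_amp = spec[bin_of(f2 - f1)]      # second-order difference tone at 200 Hz
primary = spec[bin_of(f1)]            # one of the stimulus tones
dfd_pct = 100.0 * diff_amp / primary  # difference-frequency distortion, in %
print(round(dfd_pct, 3))              # 1.0: x**2 maps a2 straight to 200 Hz
```

The square-law identity behind it is the same one the measurement exploits: cos(w1·t)·cos(w2·t) contains a term at (w2-w1), so any second-order nonlinearity after the point where the two tones are mixed acoustically deposits energy at exactly 200 Hz.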

Shockingly, in the case of a condenser microphone used on a high-SPL source and played back at low SPL by the end user, we might expect the distortion from the microphone to exceed that of the playback speaker system.

I have intimated this to Romy on several occasions but it is clearer now than it has been before.

Andy

PS - Romy, in comparing those two A/B files, on the orchestral peaks (assuming peaks of 130dB @2kHz) a distortion figure of some 10% (-20dB) applies to microphone A and more like 0.1% (-60dB) to microphone B. Was that audible to you?

For anybody else interested, A/B files here: http://www.simpsonmicrophonesarchives.com/AB/

If anybody is interested in listening, I am interested in perception of differences - dynamics, clarity, TUNING (ie. audible presence of inharmonic products), etc.

Posted by drdna on 02-08-2009
 Andy Simpson wrote:
This type of test is more commonly called DFD (difference frequency distortion) and is sometimes used in microphone measurement to measure acoustic/mechanical non-linearity.
This is interesting. What physical methods are employed to minimize the distortion?

 Andy Simpson wrote:
If anybody is interested in listening, I am interested in perception of differences - dynamics, clarity, TUNING (ie. audible presence of inharmonic products), etc.
Well I listened to these files. There was an obvious difference. It did not seem to be blinded: the "A" files seem to have more distortion. I perceived the distortion as a loss of some of the correct sound which was then made into noise. The analogy: taking a fine wood carving, sanding it lightly and sprinkling the surface with the resulting sawdust. The noise floor and hence clarity and dynamics suffer, as well as correct timbre of sound.

In general, my impressions of distortion are:

harmonic distortion: usually too slight to be heard
intermodulation distortion: affects timbre, focus, presence
frequency response: affects "connection", emotional color

So far...

Adrian

Posted by Andy Simpson on 02-15-2009
 drdna wrote:
 Andy Simpson wrote:
This type of test is more commonly called DFD (difference frequency distortion) and is sometimes used in microphone measurement to measure acoustic/mechanical non-linearity.
This is interesting. What physical methods are employed to minimize the distortion?


I'm not sure if you mean distortion of the measurements or of the microphone?


 Andy Simpson wrote:
If anybody is interested in listening, I am interested in perception of differences - dynamics, clarity, TUNING (ie. audible presence of inharmonic products), etc.
Well I listened to these files. There was an obvious difference. It did not seem to be blinded: the "A" files seem to have more distortion. I perceived the distortion as a loss of some of the correct sound which was then made into noise. The analogy: taking a fine wood carving, sanding it lightly and sprinkling the surface with the resulting sawdust. The noise floor and hence clarity and dynamics suffer, as well as correct timbre of sound.

In general, my impressions of distortion are:

harmonic distortion: usually too slight to be heard
intermodulation distortion: affects timbre, focus, presence
frequency response: affects "connection", emotional color

So far...

Adrian


Thanks Adrian - your wood-shavings description is interesting, not least because, where energy is 'shaved off' the top and redistributed as harmonic & inharmonic products, it is close enough to the truth.

While we could perhaps argue that the perception of dynamics can suffer simply from an added constant noise floor, I would not say that it is directly a noise-floor effect in this case, but actual compression.

We could measure a compressor unit the same way and would see similar products, but the perception of compression is usually attributed to the actual non-linear gain reduction, rather than the products directly.

I think I recall a conclusion from a paper which studied the audibility of IMD in amplifiers, hypothesizing that the ear is sensitive not to the distortion products themselves but to the violated expectation of a linear projection of the sound - which would agree with the compression perception.

Andy
