Friday, September 18, 2009

Probabilities

Protassov, Konstantin. Probabilities and Uncertainties in the Analysis of Experimental Data. Grenoble Sciences, 2002.

INTRODUCTION

Why do uncertainties exist?

The purpose of most experiments in physics is to understand a phenomenon and to construct a correct model of it. We take measurements and often ask ourselves: “What is the value of this or that magnitude?”, sometimes without first asking whether this formulation is correct and whether we will be able to find an answer.

The necessity for this prior questioning becomes evident as soon as one measures the same magnitude many times. The experimenter doing this is frequently confronted with a rather interesting situation: with sufficiently precise instruments, he notices that repeated measurements of the same magnitude give results that differ slightly from the first measurement. This phenomenon is general, whether the measurements are simple or sophisticated. Even repeated measurements of the length of a metallic rod can give different results. Repeating the experiment shows that this difference is not, in general, very large: in most cases the results stay close to a certain average value, but once in a while one finds values that deviate from it. The farther results lie from this average, the rarer they are.

Why does this dispersion exist? Where does this variation come from? A first reason for this effect is evident: the conditions under which an experiment is conducted always vary slightly, which modifies the magnitude to be measured. For example, when one determines the length of a metallic rod many times, it is the ambient temperature which may vary and thus cause the length to vary. This variation in external circumstances (and the corresponding variation in the physical values) may be more or less important, but it is inevitable and, in the real circumstances of a physical experiment, one cannot escape it.

We are ‘condemned’ to measure magnitudes which are never constant. This is why the very question of what the value of a parameter is may not be entirely well posed. One must ask this question in a pertinent manner and find adequate means to describe physical magnitudes. One must find a definition which can express this physical particularity: a definition which reflects the fact that a physical value always varies, but that these variations group themselves around an average value.

The solution is to characterize a physical magnitude not by a single value, but rather by the probability of finding this or that value in an experiment. For this one introduces a function called the probability distribution of detection of a physical value, or simply the distribution of a physical value, which shows which values are the most frequent and which are the most rare. One must emphasize once again that, in this approach, what matters is not so much the concrete value of a physical magnitude as the probability of obtaining different values.

One will see later that this function – the distribution of a physical value – is happily rather simple (in any event, in the majority of experiments). It has two characteristics. The first is the average value, which is also the most probable. The second characteristic of this distribution function indicates, grosso modo, the region surrounding this average in which the majority of the results of the measurements are grouped. It characterizes the width of this distribution and is called the uncertainty. As we will see later, this width has a rigorous interpretation in terms of probability. For reasons of simplicity, we will call this the ‘natural’ or ‘initial’ uncertainty of the physical magnitude itself. This is not altogether true, because this error or uncertainty is often due to experimental conditions. Although this definition is not perfectly rigorous, it is very useful for understanding.
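
To make the two characteristics concrete, here is a minimal Python sketch that computes the average and the width (taken here as the sample standard deviation) of a handful of repeated measurements. The numbers are invented for illustration.

# Hypothetical repeated measurements of a rod's length, in millimetres.
measurements = [100.2, 99.8, 100.1, 100.0, 99.9, 100.3, 99.7, 100.0]

n = len(measurements)
average = sum(measurements) / n

# Sample standard deviation, used here as the "width" of the distribution.
variance = sum((x - average) ** 2 for x in measurements) / (n - 1)
width = variance ** 0.5

print(f"average = {average:.2f} mm, width = {width:.2f} mm")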

The fact that, in most experiments, the result can be characterized by a mere two values permits one to return to the question with which we began our discussion: “Can one ask what the value of a physical parameter is?” It follows that, in the case where two parameters are necessary and sufficient to characterize a physical magnitude, one can reconcile our desire to ask this question with the interpretation of the results in terms of probabilities. The solution exists: the average value of this distribution will be called the physical value, and the width of the distribution will be called the uncertainty or error. It is an accepted convention to say that “the physical magnitude has a given value with a given error”. This means we are presenting an average value and the width of a distribution, and that this answer has a precise interpretation in terms of probabilities. In this work, we use both terms, ‘uncertainty’ and ‘error’, to describe the width of the distribution because, for historical reasons, both are used in physics.

The aim of physical measurements is the determination of this distribution function or, at least, of its two major parameters: the average and the width. To determine a distribution one must repeat a measurement many times to find the frequency of appearance of the values. To obtain the whole of the possible values, as well as their probability of appearance, one would in fact have to make an infinite number of measurements. That would take too long, cost too much, and no one needs it.

One thus limits oneself to a finite number of measurements. Of course this introduces an additional error (uncertainty). This uncertainty, due to the impossibility of measuring the initial (natural) distribution with absolute precision, is called the statistical error. It is easy enough, at least in theory, to diminish this error: it is sufficient to increase the number of measurements. In principle, one can make it negligible with respect to the initial uncertainty of the physical magnitude. However, a more delicate problem appears.
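
As an illustration of why increasing the number of measurements shrinks the statistical error, here is a short Python sketch using the standard result that the uncertainty on the average of N independent measurements is roughly the natural width divided by the square root of N. The width value is invented.

import math

natural_width = 0.5  # hypothetical width of the underlying distribution, in mm

for n in (4, 16, 100, 400):
    statistical_error = natural_width / math.sqrt(n)
    print(f"N = {n:3d}  ->  statistical error ~ {statistical_error:.3f} mm")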

It is linked to the fact that, in each physical experiment, an apparatus, more or less complicated, stands between the experimenter and the object being measured. This apparatus inevitably brings modifications to the initial distribution: it deforms it. In the simplest case, these changes can be of two types: the apparatus can ‘displace’ the average value, or it can enlarge the distribution.

The displacement of the average value is an example of what one calls “systematic errors”. The name indicates that such errors appear in all measurements: the apparatus systematically gives a value different (larger or smaller) from the ‘real’ one. Measuring with an apparatus whose zero is badly calibrated is the most frequent example of this kind of error. Unfortunately, it is very difficult to combat this type of error: it is difficult both to detect and to correct. There are no general methods for this, and one must proceed on a case-by-case basis.

However, it is easier to come to grips with the enlargement of the distribution introduced by the apparatus. One will see that this uncertainty, having the same origin as the initial (natural) uncertainty, “simply” adds to it. In a large number of cases, the enlargement due to the apparatus permits simplifying the measurements: suppose that we know the uncertainty (width) introduced by a given apparatus, and that it is clearly larger than the original uncertainty. It is then possible to neglect the natural uncertainty with respect to the apparatus uncertainty. It is thus sufficient to take but one measurement and to take the uncertainty of the apparatus as the uncertainty of the measurement. Obviously, in this kind of experiment, one must be certain that the apparatus uncertainty dominates the natural uncertainty, but one can always verify this by taking repeated measurements. An apparatus of poor precision will not permit finding the variations due to the initial width.
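
As a sketch of the situation just described, the following Python lines combine a hypothetical natural width with a much larger apparatus width. They are combined in quadrature, a common convention for independent sources of uncertainty and one reading of the “simply adds” above; the numbers are invented.

import math

natural_width = 0.05   # mm, hypothetical
apparatus_width = 0.5  # mm, hypothetical, and clearly dominant

total_width = math.sqrt(natural_width**2 + apparatus_width**2)
print(f"combined width = {total_width:.3f} mm")  # ~0.502 mm, essentially the apparatus width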

One must note that the separation between apparatus uncertainty and natural uncertainty remains somewhat conventional. Indeed, one can always say that variations in experimental conditions are part and parcel of the apparatus uncertainty. In this work, we do not deal with measurements from quantum physics, where there is an uncertainty in physical measurement due to the Heisenberg Uncertainty Principle. Furthermore, in quantum mechanics the interference between apparatus and object becomes more complicated and interesting. Nonetheless, our general conclusions are not modified since, in quantum mechanics, the notion of probability is not only useful and natural, it is also impossible to do without.

We have understood that, to experimentally determine a physical value, it is necessary (but not always sufficient) to find the average (value) and the width (uncertainty). Without a determination of the uncertainty, the experiment is not complete: one cannot compare it to either a theory or another experiment. We have also seen that this uncertainty contains three possible contributing factors. The first is the uncertainty due to changes in experimental conditions or to the very nature of the magnitudes (in statistical or quantum physics). The second is the statistical uncertainty due to the impossibility of measuring the initial distribution precisely. The third is the apparatus uncertainty due to the imperfection of the experimenter’s working tools.

An experimenter always asks two questions. First, how can one measure a physical magnitude, that is to say, the characteristics of its distribution: the average and the width? Secondly, how and to what extent must this uncertainty be diminished? This is why the experimenter must understand the relations between the three components of uncertainty and find how to minimize them: one can minimize the natural uncertainty by changing the experimental conditions, the statistical uncertainty by increasing the number of measurements, and the apparatus uncertainty by using more precise instruments.

Yet one cannot reduce uncertainty indefinitely; there is a reasonable limit. Evaluating this limit is not only a question of time and money spent, but also an essential question within physics. One must not forget that, whatever the magnitude to be measured, one will never be able to take into account all the physical factors that can influence its value. Further, all of our reasoning and discussion takes place within the context of a model or, more generally, of our view of the world, and that context may not be exact.

This is why our problem is to choose experimental methods, and methods for estimating uncertainties, adequate to a desirable and attainable level of precision.

Various situations exist as a function of the desired precision. In the first, we are merely seeking the order of magnitude of the measured value; in that case, the uncertainty too need only be roughly evaluated. In the second, we want precision on the order of one to ten percent; one must be careful in determining the uncertainties, and the methods used will vary with the precision sought. The more precision one looks for, the more elaborate the methods, but the price to pay is the length and volume of the calculations. In the third, we are looking for a precision comparable to that of the standard corresponding to the physical parameter being measured; here the problem of the uncertainty can become more important than that of the value itself.

In this work, we will only consider methods for estimating errors in the second situation, which corresponds to the majority of experiments undertaken as practical work. Most paragraphs answer a concrete question: How does one calculate uncertainties for an experiment where few measurements are taken? How may one adjust the parameters of a curve? How does one compare an experiment with a theory? What is the correct number of significant digits? Etc. The reader who knows the basics of statistics can safely skip the first paragraphs and look directly for the answer to his problem. Otherwise, this work gives him the necessary information on those parts of statistics useful for the treatment of uncertainties.

Thursday, September 17, 2009

Poisson

A problem I find charming from my recent readings in mathematics is the following: let us suppose that it is known that accidents along a given stretch of road average three per day. What are the chances of there being precisely three on a given day? This is all the information we are given.


An accident is a multi-causal event: it is impossible to nail down precisely what caused any given accident or series of accidents. The occurrence of an accident is, well, accidental; and the knowledge that they average three per day is statistical, i.e. empirical. Someone counted them.


Our problem, then, is that we know the average (which, for a Poisson distribution, is also equal to the variance) and want to reason back to the probability of a given number of occurrences. The Poisson function turns this little trick for us.


P(X = k) = e^(-λ) · λ^k / k!,  here for λ = 3 and k = 3

= (0.049787) × (27 / 6)

= 0.224, thus 22.4 %
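
A quick check of the arithmetic in Python, assuming nothing beyond the formula above:

import math

def poisson(k, lam):
    # Probability of exactly k events when the average rate is lam.
    return math.exp(-lam) * lam**k / math.factorial(k)

print(poisson(3, 3))  # ~0.224, i.e. about 22.4 %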


It could have been three of anything, as long as the occurrence was too complex to untangle but the average well established. Luv’s it!

