
Let’s sum up the last few episodes:

Suppose we have a distribution of doubles, p, and a function f from double to double. We often want to answer the question “what is the average value of f when it is given samples from p?” This quantity is called the expected value.

The obvious (or “naive”) way to do it is: take a bunch of samples, evaluate the function on those samples, take the average. Easy! However, this can lead to problems if there are “black swans”: values that are rarely sampled, but massively affect the value of the average when run through f. We would like to get a good estimate without having to massively increase the number of samples in our average.

We developed two techniques to estimate the expected value:

First, abandon sampling entirely and do numerical integral calculus:

  • Use quadrature to compute two areas: the area under f(x)*p.Weight(x) and the area under p.Weight(x) (which is the normalization constant of p)
  • Their quotient is an extremely accurate estimate of the expected value
  • But we have to know what region to do quadrature over.

Second, use importance sampling:

  • Find a helper distribution q whose weight is large where f(x)*p.Weight(x) bounds a lot of area.
  • Use the naive algorithm to estimate the expected value of x => f(x) * p.Weight(x) / q.Weight(x) from samples of q
  • That is proportional to the expected value of f with respect to p
  • We gave a technique for estimating the proportionality constant by sampling from q also.

The problem with importance sampling then is finding a good q. We discussed some techniques:

  • If you know the range, just use a uniform distribution over that range.
  • Stretch and shift p so that the transformed PDF doesn’t have a “black swan”, but the normalization constant is the same.
  • Use the Metropolis algorithm to generate a helper PDF from Abs(f*p), though in my experiments this worked poorly
  • If we know the range of interest, we can use the VEGAS algorithm. It makes cheap, naive estimates of the area of subranges, and then uses that information to gradually refine a piecewise-uniform helper PDF that targets spikes and avoids flat areas of f*p.
  • However, the VEGAS algorithm is complex, and I did not attempt to implement it for this series.

The question you may have been asking yourself these past few episodes is:

If quadrature is an accurate and cheap way to estimate the expected value of f over samples from p, then why are we even considering doing sampling at all? Surely we typically know at least approximately the range over which f*p has some area. What’s the point of all this?

Quadrature just splits up the range into some number — say, a thousand — equally-sized pieces, evaluates f*p on each of them, and takes the average. That sure seems cheaper and easier than all this mucking around with sampling. Have I just been wasting your time these past few episodes? And why has there been so much research and effort put into finding techniques for estimating expected value?

This series is called “Fixing Random” because the built-in base class library tools we have in C# for representing probabilities are weak. I’ve approached everything in this series from the perspective of “I want to have an object that represents probabilities in my business domain, and I want to use that object to solve my business problems”.

“What is the expected value of this function given this distribution?” is a very natural question to ask when solving business problems that involve probabilities, and as we’ve seen, you can answer that question by simulating integral calculus through quadrature.

But, as I keep on saying: things equal to the same are equal to each other. Flip the script. Suppose our business domain involves solving integral calculus problems. And suppose there is an integral calculus problem that we cannot efficiently solve with quadrature. What do we do?

  • We can solve expected value problems with integral calculus techniques such as quadrature.
  • We can solve expected value problems with sampling techniques
  • Things equal to the same are equal to each other.
  • Therefore we can solve integral calculus problems with sampling techniques. 

That is why there has been so much research into computing expected values: the expected value is the area under the function f(x)*p.Weight(x) (divided by the normalization constant of p), so if we can compute the expected value by sampling, then we can compute that area and solve the integral calculus problem without doing quadrature!

I said above “if quadrature is accurate and cheap”, but there are many scenarios in which quadrature is not a cheap way to compute an area.

What’s an example? Well, let’s generalize. So far in this series I’ve assumed that f is a Func<double, double>. What if f is a Func<double, double, double> — a function from pairs of doubles to double? That is, f is not a line in two dimensions; it is a surface in three.

Let’s suppose we have f being such a function, and we would like to solve a calculus problem: what is the volume under f on the range (0,0) to (1, 1)?

We could do it by quadrature, but remember, in my example we split up the range 0-to-1 into a thousand points. If we do quadrature in two dimensions with the same granularity of 0.001, that’s a million points we have to evaluate and sum. If we only have computational resources to do a thousand points, then we have to have a granularity of around 0.03.

What if the function is zero at most of those points? We could then have a really crappy estimate of the total area because our granularity is so low.
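
To make that concrete, here is a minimal sketch (my own helper, not code from the series) of midpoint-rule quadrature over the unit square with a budget of roughly a thousand evaluations:

public static double VolumeByQuadrature(
    Func<double, double, double> f,
    int perAxis = 31)
{
  // A 31-by-31 grid is about a thousand evaluations, so the cell
  // width is roughly 0.032.
  double sum = 0.0;
  for (int i = 0; i < perAxis; i += 1)
    for (int j = 0; j < perAxis; j += 1)
      sum += f((i + 0.5) / perAxis, (j + 0.5) / perAxis);
  // The average height times the area of the unit square (1.0)
  // estimates the volume under f.
  return sum / (perAxis * perAxis);
}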

We now reason as follows: take a two-dimensional probability distribution. Let’s say we have the standard continuous uniform implementation of IWeightedDistribution<(double, double)>.

All the techniques I have explored in this series work equally well in two dimensions as in one! So we can use those techniques. Let’s do so:

  • What is the expected value of f when applied to samples from this distribution?
  • It is equal to the volume under f(x,y)*p.Weight((x,y)). 
  • But p.Weight((x,y)) is always 1.0 on the region we care about; it’s the standard continuous uniform distribution, after all.
  • Therefore the estimated expected value of f when evaluated on samples from p is an estimate of the volume we care about. (A sketch of this estimate follows.)
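
Here is a minimal sketch of that estimate; I’m using System.Random directly as a stand-in for the standard continuous uniform distribution on the unit square, rather than the series’ IWeightedDistribution infrastructure:

public static double VolumeBySampling(
    Func<double, double, double> f,
    int samples = 1000)
{
  var random = new Random();
  double sum = 0.0;
  // The uniform weight is 1.0 everywhere on the unit square, so the
  // average of f over uniform samples estimates the volume under f.
  for (int i = 0; i < samples; i += 1)
    sum += f(random.NextDouble(), random.NextDouble());
  return sum / samples;
}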

How does that help?

It doesn’t.

If we’re taking a thousand points by quadrature or a thousand points by sampling from a uniform distribution over the same range, it doesn’t matter. We’re still computing a value at a thousand points and taking an average.

But now here’s the trick.

Suppose we can find a helper distribution q that is large where f(x,y) has a lot of volume and very small where it has little volume.

We can then use importance sampling to compute a more accurate estimate of the desired expected value, and therefore the desired volume, because most of the points we sample from q are in high-volume regions. Our thousand points from q will give us a better estimate!

Now, up the dimensionality further. Suppose we’ve got a function that takes three doubles and returns a double, and we wish to know the hypervolume under it over (0, 0, 0) to (1, 1, 1).

With quadrature, we’re either doing a billion computations at a granularity of 0.001, or, if we can only afford to do a thousand evaluations, that’s a granularity of 0.1.

Every time we add a dimension, either the cost of our quadrature goes up by a factor of a thousand, or the cost stays the same but the granularity is enormously coarsened.

Oh, but it gets worse.

When you are evaluating the hypervolume of a 3-d surface embedded in 4 dimensions, there are a lot more points where the function can be zero! There is just so much room in high dimensions for stuff to be. The higher the dimensionality gets, the more important it is that you find the spikes and avoid the flats. 

Exercise: Consider an n-dimensional cube of side 1. That thing always has a hypervolume of 1, no matter what n is.

Now consider a concentric n-dimensional cube inside it where the sides are 0.9 long.

  • For a 1-dimensional cube — a line — the inner line is 90% of the length of the outer line, so we’ll say that 10% of the length of the outer line is “close to the surface”.
  • For a 2-dimensional cube — a square — the inner square has 81% of the area of the outer square, so 19% of the area of the outer square is “close to the surface”.

At what dimensionality is more than 50% of the hypervolume of the outer hypercube “close to the surface”?
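
If you want to check your answer numerically, the fraction of the outer cube “close to the surface” in n dimensions is 1 - 0.9^n; a two-line sketch:

for (int n = 1; n <= 10; n += 1)
  Console.WriteLine($"{n}: {1 - Math.Pow(0.9, n):0.###}");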

Exercise: Now consider an n-dimensional cube of side 1 again, and the concentric n-dimensional sphere. That is, a circle that exactly fits inside a square, a sphere that exactly fits inside a cube, and so on. The radius is 1/2.

  • The area of the circle is pi/4 = 79% of the area of the square.
  • The volume of the sphere is pi/6 = 52% of the volume of the cube.
  • … and so on

At what value for n does the volume of the hypersphere become 1% of the volume of the hypercube?
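
Again, here is a quick numerical check if you want one. The volume of the n-dimensional ball of radius r satisfies the standard recurrence V(n) = V(n - 2) * 2 * PI * r * r / n, and the hypercube volume is 1.0 throughout, so the ratio is just V(n) itself with r = 0.5:

double r = 0.5;
double previous = 2 * r;          // n = 1: a segment of length 1.0
double current = Math.PI * r * r; // n = 2: the circle, about 0.785
for (int n = 3; n <= 12; n += 1)
{
  double next = previous * 2 * Math.PI * r * r / n;
  previous = current;
  current = next;
  Console.WriteLine($"{n}: {current:0.####}");
}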

In high dimensions, any shape that is anywhere on the interior of a hypercube is tiny when compared to the massive hypervolume near the cube’s surface!

That means: if you’re trying to determine the hypervolume bounded by a function that has large values somewhere inside a hypercube, the samples must frequently hit that important region where the values are big. If you spend time “near the edges” where the values are small, you’ll spend >90% of your time sampling irrelevant values.

That’s why importance sampling is so useful, and why so much effort has gone into finding good helper distributions for estimating expected values. Importance sampling allows us to numerically solve multidimensional integral calculus problems with reasonable compute resources.

Aside: Now you know why I said earlier that I misled you when I said that the VEGAS algorithm was designed to find helpful distributions for importance sampling. The VEGAS algorithm absolutely does that, but that’s not what it was designed to do; it was designed to solve multidimensional integral calculus problems. Finding good helper distributions is how it does its job.

Exercise: Perhaps you can see how we would extend the algorithms we’ve implemented on distributions of doubles to distributions of tuples of doubles; I’m not going to do that in this series; give it a shot and see how it goes!

Next time on FAIC: This has been one of the longest blog series I’ve done, and looking back over the last sixteen years, I have never actually completed any of the really big projects I started: building a script engine, building a Zork implementation, explaining Milner’s paper, and so on. I’m going to complete this one!

There is so much more to say on this topic; people spend their careers studying this stuff. But I’m going to wrap it up in the next couple of episodes by giving some final thoughts, a summary of the work we’ve done, a list of some of the topics I did not cover that I’d hoped to, and a partial bibliography of the papers and other resources that I read when doing this series.


Last time on FAIC we were attacking our final problem in computing the expected value of a function f applied to a set of samples from a distribution p. We discovered that we could sometimes do a “stretch and shift” of p, and then run importance sampling on the stretched distribution; that way we are more likely to sample from “black swan” regions, and therefore the estimated expected value is more likely to be accurate.

However, determining what the parameters to Stretch.Distribution should be to get a good result is not obvious; it seems like we’d want to do what we did: actually look at the graphs and play around with parameters until we get something that looks right.

It seems like there ought to be a way to automate this process to get an accurate estimate of the expected value. Let’s take a step back and review what exactly it is we need from the helper distribution. Start with the things it must have:

  • Obviously it must be a weighted distribution of doubles that we can sample from!
  • That means that it must have a weight function that is always non-negative.
  • And the area under its weight function must be finite, though not necessarily 1.0.

And then the things we want:

  • The support of the helper distribution does not have to be exactly the support of p, but it’s nice if it is.
  • The helper’s weight function should be large in ranges where f(x) * p.Weight(x) bounds a large area, positive or negative.
  • And conversely, it’s helpful if the weight function is small in areas where the area is small.

Well, where is the area likely to be large? Precisely in the places where Abs(f(x)*p.Weight(x)) is large. Where is it likely to be small? Where that quantity is small… so…

why don’t we use that as the weight function for the helper distribution?

Great Scott, why didn’t we think of that before?

Aside: As I noted before in this series, all of these techniques require that the expected value actually exist. I’m sure you can imagine functions where f*p bounds a finite area, so the expected value exists, but Abs(f*p) does not bound a finite area, and therefore is not the weight function of a distribution. This technique will probably not work well in those weird cases.

If only we had a way to turn an arbitrary function into a non-normalized distribution we could sample from… oh wait, we do. (Code is here.)

var p = Normal.Distribution(0.75, 0.09);
double f(double x) => Atan(1000 * (x - .45)) * 20 - 31.2;
var m = Metropolis<double>.Distribution(
  x => Abs(f(x) * p.Weight(x)),
  p,
  x => Normal.Distribution(x, 0.15));

Let’s take a look at it:

Console.WriteLine(m.Histogram(0.3, 1));

                         ***            
                         ****           
                        *****           
                       *******          
        *              *******          
       **             *********         
       **             *********         
       **             *********         
       **            ***********        
       **            ***********        
       **           *************       
       **           *************       
      ***           **************      
      ***          ***************      
      ***          ***************      
      ***         *****************     
     ****        *******************    
     ****        ********************   
    *****       *********************** 
----------------------------------------

That sure looks like the distribution we want.

What happens if we try it as the helper distribution in importance sampling? Unfortunately, the results are not so good:

0.11, 0.14, 0.137, 0.126, 0.153, 0.094, ...

Recall that again, the correct result is 0.113. We’re getting worse results with this helper distribution than we did with the original black-swan-susceptible distribution.

I’m not sure what has gone wrong here. I tried experimenting with different proposal distributions and couldn’t find one that gave better results than just using the proposal distribution itself as the helper distribution.

So once again we’ve discovered that there’s some art here; this technique looks like it should work right out of the box, but there’s something that needs tweaking here. Any experts in this area who want to comment on why this didn’t work, please leave comments.

And of course all we’ve done here is pushed the problem off a level; our problem is to find a good helper distribution for this expected value problem, but to do that with Metropolis, we need to find a good proposal distribution for the Metropolis algorithm to consume, so it is not clear that we’ve made much progress here. Sampling efficiently and accurately is hard!

I’ll finish up this topic with a sketch of a rather complicated algorithm called VEGAS; this is an algorithm for solving the problem “how do we generate a good helper distribution for importance sampling knowing only p and f?”

Aside: The statement above is slightly misleading, but we’ll say why in the next episode!

This technique, like quadrature, does require us to have a “range” over which we know that the bulk of the area of f(x)*p.Weight(x) is found. Like our disappointing attempt above, the idea is to find a distribution whose weight function is large where it needs to be, and small where it is not.

I am not an expert on this algorithm by any means, but I can give you a quick sketch of the idea. The first thing we do is divide up our range of interest into some number of equally-sized subranges. On each of those subranges we make a uniform distribution and use it to make an estimate of the area of the function on that subrange.

How do we do that? Remember that the expected value of a function evaluated on samples drawn from a distribution is equal to the area of the function divided by the area of the distribution. We can construct a uniform distribution to have area of 1.0, so the expected value is equal to the area. But we can estimate the expected value by sampling. So we can estimate areas by sampling too! Again: things equal to the same are equal to each other; if we need to find an area, we can find it by sampling to determine an expected value.
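
Here is a minimal sketch of such a cheap estimate (my own helper, not code from the series). The expected value of a function g over a normalized uniform distribution on [low, high] is the area of g on that range divided by the width, so the area is just the sample average times the width:

public static double EstimateArea(
    Func<double, double> g,
    double low,
    double high,
    int samples = 100)
{
  var random = new Random();
  double sum = 0.0;
  for (int i = 0; i < samples; i += 1)
    sum += g(low + (high - low) * random.NextDouble());
  // Average height times width approximates the area on this subrange.
  return (high - low) * sum / samples;
}

Here g would be x => f(x) * p.Weight(x), and low and high would be the endpoints of each subrange.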

So we estimate the expected value of a uniform distribution restricted to each sub-range. Again, here’s the function of interest, f(x)*p.Weight(x)

Ultimately we want to accurately find the area of this thing, but we need a black-swan-free distribution that samples a lot where the area of this thing is big.

Let’s start by making some cheap estimates of the area of subranges. We’ll split this thing up into ten sub-ranges, and do a super cheap estimate of the area of the subrange by sampling over a uniform distribution confined to that subrange.

Let’s suppose our cheap estimate finds the area of each subrange as follows:

Now, you might say, hey, the sum of all of these is an estimate of the area, and that’s what we’re after; and sure, in this case it would be pretty good. But stay focussed: what we’re after here with this technique is a distribution that we can sample from that is likely to have high weight where the area is high.

So what do we do? We now have an estimate of where the area of the function is big — where the expected value of the sub-range is far from zero — and where it is small.

We could just take the absolute value and stitch it all together:

And then use this as our helper distribution; as we prefer, it will be large when the area is likely to be large, and small where it is likely to be small. We’ll spend almost no time sampling from 0.0 to 0.3 where the contribution to the expected value is very small, but lots of time sampling near both the big lumps.

Aside: This is an interesting distribution: it’s a piecewise uniform distribution. We have not shown how to sample from such a distribution in this series, but if you’ve been following along, I’m sure you can see how to do it efficiently; after all, our “unfair die” distribution from way back is basically the same. You can efficiently implement distributions shaped like the above using similar techniques.
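
For concreteness, here is a hedged sketch of such a sampler; the class name and representation are mine, not the series’ code. Bucket selection here is a simple linear scan over cumulative areas, though the alias method would make it constant time:

public sealed class PiecewiseUniform
{
  private readonly double[] bounds;     // n + 1 bucket boundaries
  private readonly double[] cumulative; // running totals of bucket areas
  private readonly Random random = new Random();
  public PiecewiseUniform(double[] bounds, double[] heights)
  {
    this.bounds = bounds;
    cumulative = new double[heights.Length];
    double total = 0.0;
    for (int i = 0; i < heights.Length; i += 1)
    {
      total += heights[i] * (bounds[i + 1] - bounds[i]);
      cumulative[i] = total;
    }
  }
  public double Sample()
  {
    // Choose a bucket with probability proportional to its area, then
    // sample uniformly within that bucket.
    double target =
      random.NextDouble() * cumulative[cumulative.Length - 1];
    int i = 0;
    while (cumulative[i] < target)
      i += 1;
    return bounds[i] +
      random.NextDouble() * (bounds[i + 1] - bounds[i]);
  }
}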

This is already pretty good; we’ve done ten cheap area estimates and generated a reasonably high-quality helper PDF that we can then use for importance sampling. But you’ve probably noticed that it is far from perfect; it seems like the subranges on the right side are either way too big or way too small, and this might skew the results.

The insight of the VEGAS algorithm’s designer was: don’t stop now! We have information to refine our helper PDF further.

How?

We started with ten equally-sized subranges. Numbering them from the left, it sure looks like regions 1, 2, 3, 5 and 6 were useless in terms of providing area, and regions 4 and 9 were awesome, so let’s start over with ten unequally sized ranges. We’ll make regions 1, 2, and 3 into one big subrange, and also regions 5 and 6 into one big subrange, and then split up regions 4, 7, 8, 9 and 10 into eight smaller regions and do it again.

Aside: The exact details of how we rebalance the subranges involve a lot of fiddly bookkeeping, and that’s why I don’t want to go there in this series; getting the alias algorithm right was enough work, and this would be more. Maybe in a later series I’ll investigate this algorithm in more detail.

We can then keep on repeating that process until we have a helper PDF that is fine-grained where it needs to be: in the places where the area is large and changing rapidly. And it is then coarse-grained where there is not much change in area and the area is small.

Or, put another way: VEGAS looks for the spikes and the flats, and refines its estimate to be more accurate at the spikes because that’s where the area is at.

And bonus, the helper PDF is always piecewise continuous uniform, which as I noted above, is relatively easy to implement and very cheap to sample from.

This technique really does generate a high-quality helper PDF for importance sampling when given a probability distribution and a function. But, it sounds insanely complicated; why would we bother?

Next time on FAIC: I’ll wrap up this topic with some thoughts on why we have so many techniques for computing expected value.


Last time on FAIC we finally wrote a tiny handful of lines of code to correctly implement importance sampling; if we have a distribution p that we’re sampling from, and a function f that we’re running those samples through, we can compute the expected value of f even if there are “black swan” regions in p. All we need is a helper distribution q that has the same support as p, but no black swans.

Great. How are we going to find that?

A variation on this problem has come up before in this series: what should the initial and proposal distributions be when using Metropolis? If we’re using Metropolis to compute a posterior from a prior, then we can use the prior as the initial distribution. But it’s not at all clear in general how to choose a high-quality proposal distribution; there’s some art there.

There is also some art in choosing appropriate helper distributions when doing importance sampling. Let’s once again take a look at our “black swan” situation:

As we’ve discussed, I contrived the “black swan” situation by ensuring that there was a region of the graph where the orange line bounded a large area, but the blue line bounded a very tiny area there.

First off: in my initial description of the problem I made the assumption that I only cared about the function on the range of 0.0 to 1.0. If you know ahead of time that there is a specific range of interest, you can always use a uniform distribution over that range as a good guess at a helper distribution. As we saw in a previous episode, doing so here gave good results. Sure, we spend a lot of time sampling a region of low area, but we could tweak that to exclude the region between 0.0 and 0.3.

What if we don’t know the region of interest ahead of time though? What if the PDF and the function we’re evaluating are defined over the whole real line and we’re not sure where to put a uniform distribution? Let’s think about some quick-and-dirty hacks for solving that problem.

What if we “stretched” the blue line a little bit around 0.75, and “squashed” it down a little bit?

The light blue line is not perfect by any means, but we are now likely to get at least a few samples from the part of the orange line that has large negative area.

Since the original distribution is just a normal, we can easily make this helper distribution by increasing the standard deviation. (Code for this episode can be found here.)

var p = Normal.Distribution(0.75, 0.09);
var p2 = Normal.Distribution(0.75, 0.15);
double f(double x) => Atan(1000 * (x - .45)) * 20 - 31.2;
for (int i = 0; i < 10; ++i)
  Console.WriteLine(
    $"{p.ExpectedValueByImportance(f, 1.0, p2):0.###}");

0.119, 0.114, 0.108, 0.117, 0.116, 0.108, 0.121, ...

Recall that the correct value is 0.113. This is not perfect, but it’s a lot better than the original.

Now, it is all well and good to say that we know how to solve the problem when the nominal distribution is normal; what if it is some arbitrary distribution and we want to “stretch” it?

That’s actually pretty straightforward. Let’s suppose we want to stretch a distribution around a particular center, and, optionally, also move it to the left or right on the real line. Here’s the code:

public sealed class Stretch : IWeightedDistribution<double>
{
  private IWeightedDistribution<double> d;
  private double shift;
  private double stretch;
  public static IWeightedDistribution<double> Distribution(
    IWeightedDistribution<double> d,
    double stretch,
    double shift = 0.0,
    double around = 0.0)
  {
    if (stretch == 1.0 && shift == 0.0) return d;
    return new Stretch(
      d, stretch, shift + around - around * stretch);
  }
  private Stretch(
    IWeightedDistribution<double> d,
    double stretch,
    double shift)
  {
    this.d = d;
    this.stretch = stretch;
    this.shift = shift;
  }
  public double Sample() =>
    d.Sample() * stretch + shift;
  // Dividing the weight by stretch preserves
  // the normalization constant 
  public double Weight(double x) =>
    d.Weight((x - shift) / stretch) / stretch;
}

And now we can get similar results:

var p = Normal.Distribution(0.75, 0.09);
var ps = Stretch.Distribution(p, 2.0, 0, 0.75);
double f(double x) => Atan(1000 * (x - .45)) * 20 - 31.2;
for (int i = 0; i < 10; ++i)
  Console.WriteLine(
    $"{p.ExpectedValueByImportance(f, 1.0, ps):0.###}");

0.113, 0.117, 0.121, 0.124, 0.12 ...

Again, not perfect, but we’re getting at least reasonable results here.

But like I said before, these are both “more art than science” techniques; they are useful if you have a particular distribution and a particular function, and you’re looking for an expected value, and you’re willing to spend some time writing different programs and trying out different techniques and parameters to tweak it to get good results. We still have not got an algorithm where a distribution and a function go in, and an accurate estimate of the expected value comes out.

Next time on FAIC: I’ll make one more attempt at writing code to get a good result automatically that will fail for reasons I do not know, and sketch out a much more sophisticated algorithm but not implement it.


One more time! Suppose we have our nominal distribution p that possibly has “black swans” and our helper distribution q which has the same support, but no black swans.

We wish to compute the expected value of f when applied to samples from p, and we’ve seen that we can estimate it by computing the expected value of g:

x => f(x) * p.Weight(x) / q.Weight(x)

applied to samples of q.

Unfortunately, the last two times on FAIC we saw that the result will be wrong by a constant factor; the constant factor is the quotient of the normalization constants of q and p.

It seems like we’re stuck; it can be expensive or difficult to determine the normalization factor for an arbitrary distribution. We’ve created infrastructure for building weighted distributions and computing posteriors and all sorts of fun stuff, and none of it assumes that weights are normalized so that the area under the PDF is 1.0.

But… we don’t need to know the normalization factors. We never did, and every time I said we did, I lied to you. Because I am devious.

What do we really need to know? We need to know the quotient of two normalization constants. That is less information than knowing two normalization constants, and maybe there is a cheap way to compute that fraction.

Well, let’s play around with computing fractions of weights; our intuition is: maybe the quotient of the normalization constants is the average of the quotients of the weights. So let’s make a function and call it h:

x => p.Weight(x) / q.Weight(x)

What is the expected value of h when applied to samples drawn from q?

Well, we know that it could be computed by:

Area(x => h(x) * q.Weight(x)) / Area(q.Weight)

But do the algebra: that’s equal to

Area(p.Weight) / Area(q.Weight)

Which is the inverse of the quantity that we need, so we can just divide by it instead of multiplying!

Here’s our logic:

  • We can estimate the expected value of g on samples of q by sampling.
  • We can estimate the expected value of h on samples of q by sampling.
  • The quotient of these two estimates is an estimate of the expected value of f on samples of p, which is what we’ve been after this whole time.

Whew!

Aside: I would be remiss if I did not point out that there is one additional restriction that we’ve got to put on the helper distribution q: there must be no likely values of x in the support of q such that q.Weight(x) is tiny but p.Weight(x) is extremely large, because their quotient is then going to blow up huge if we happen to sample that value, and that’s going to wreck the average.

We can now actually implement some code that computes expected values using importance sampling and no quadrature. Let’s put the whole thing together, finally: (All the code can be found here.)

public static double ExpectedValueBySampling<T>(
    this IDistribution<T> d,
    Func<T, double> f,
    int samples = 1000) =>
  d.Samples().Take(samples).Select(f).Average();

public static double ExpectedValueByImportance(
    this IWeightedDistribution<double> p,
    Func<double, double> f,
    double qOverP,
    IWeightedDistribution<double> q,
    int n = 1000) =>
  qOverP * q.ExpectedValueBySampling(
    x => f(x) * p.Weight(x) / q.Weight(x), n);

public static double ExpectedValueByImportance(
  this IWeightedDistribution<double> p,
  Func<double, double> f,
  IWeightedDistribution<double> q,
  int n = 1000)
{
  var pOverQ = q.ExpectedValueBySampling(
    x => p.Weight(x) / q.Weight(x), n);
  return p.ExpectedValueByImportance(f, 1.0 / pOverQ, q, n);
}

Look at that; the signatures of the methods are longer than the method bodies! Basically there’s only four lines of code here. Obviously I’m omitting error handling and parameter checking and all that stuff that would be necessary in a robust implementation, but the point is: even though it took me six math-heavy episodes to justify why this is the correct code to write, actually writing the code to solve this problem is very straightforward.

Once we have that code, we can use importance sampling and never have to do any quadrature, even if we are not given the ratio of the two normalization constants:

var p = Normal.Distribution(0.75, 0.09);
double f(double x) => Atan(1000 * (x - .45)) * 20 - 31.2;
var u = StandardContinuousUniform.Distribution;
var expected = p.ExpectedValueByImportance(f, u);

Summing up:

  • If we have two distributions p and q with the same support…
  • and a function f that we would like to evaluate on samples of p…
  • and we want to estimate the average value of f …
  • but p has “black swans” and q does not, then:
  • we can still efficiently get an estimate by sampling q
  • bonus: we can compute an estimate of the ratio of the normalization constants of p and q
  • extra bonus: if we already know one of the normalization constants, we can compute an estimate of the other from the ratio.

Super; are we done?

In the last two episodes we pointed out that there are two problems: we don’t know the correction factor, and we don’t know how to pick a good q. We’ve only solved the first of those problems.

Next time on FAIC: We’ll dig into the problem of finding a good helper distribution q.


Last time on FAIC we deduced the idea behind the “importance sampling” technique for determining the average value of a function from double to double — call it f — when it is applied to samples from a possibly-non-normalized weighted distribution of doubles — call it p.

Just for fun, I’m going to do the derivation all over again. I’ll again be using the technique “things equal to the same are equal to each other”, but this time we’re going to start from the other end. Let’s jump right in!

Again, for pedagogical purposes I’m going to consider only distributions with support from 0.0 to 1.0; we’ll eliminate that restriction when we can.

We discovered a while back that the expected value of f applied to samples of p is equal to the quotient of Area(x => f(x) * p.Weight(x)) divided by Area(x => p.Weight(x)). The latter term is the normalization constant for p, which we might not know.

Let’s take any weighted distribution q also with support from 0.0 to 1.0; it also might not be normalized.

We’re now going to do some manipulations to our expression that are obviously identities. We’ll start with the fact that

Area(x => f(x) * p.Weight(x))

obviously must be equal to:

Area(x => (f(x) * p.Weight(x) / q.Weight(x)) * q.Weight(x))

We’ve just divided and multiplied by the same quantity, so that is no change. And we’ve assumed that q has the same support as p, so the weight we’re dividing by is always non-zero.

Similarly,

Area(p.Weight)

must be equal to

(Area(q.Weight) * (Area(p.Weight) / Area(q.Weight)))

for the same reason.

So the quotient of these two quantities must also be equal to the expected value of f applied to samples from p; they’re the same quantities! Our original expression

Area(x => f(x) * p.Weight(x)) /
  Area(x => p.Weight(x))

is equal to:

Area(x => (f(x) * p.Weight(x) / q.Weight(x)) * q.Weight(x)) / 
  (Area(q.Weight) * (Area(p.Weight) / Area(q.Weight)))

For any suitable q.

Let’s call that value exp_fp for “expected value of f on samples of p”. We’ve just written that value in two ways, one very straightforward, and one excessively complicated.

Unsurprisingly, my next question is: what is the expected value of function g

x => f(x) * p.Weight(x) / q.Weight(x)

over samples from q?

We know that it is estimated by this quotient of areas:

Area(x => g(x) * q.Weight(x)) /
Area(q.Weight)

The denominator is the normalization constant of q which we might not know.

Call that value exp_gq, for “expected value of g on samples of q”.

What is the relationship between exp_gq and exp_fp?

Well, just look at the two expressions carefully; plainly they differ by no more than a constant factor. The q.Weight(x) factors in g(x) * q.Weight(x) cancel, so exp_gq is Area(x => f(x) * p.Weight(x)) / Area(q.Weight), while exp_fp is that same numerator divided by Area(p.Weight). Therefore exp_fp is equal to exp_gq * Area(q.Weight) / Area(p.Weight).

And now we are right where we ended last time. Summing up:

  • Once again, we have deduced that importance sampling works:
    • An estimate of the expected value of g applied to samples from q is proportional to the expected value of f applied to samples from p
    • the proportionality constant is exactly the quotient of the normalization constants of q and p
    • If q and p are known to be normalized, then that constant is 1.
  • Once again, we can extend this result to q and p with any support
    • All we really need for an accurate estimate is that q have support in places where f(x) * p.Weight(x) has a lot of area.
    • But it is also nice if q has low weight where that function has small area.
  • Once again, we have two serious problems:
    • how do we find a good q?
    • we are required to know the normalization constants of both p and q, either of which might be hard to compute in general
  • Once again, the previous statement contains a subtle error.
    • I’m so devious.
    • I asked what the error was in the previous episode as well, and there is already an answer in the comments to that episode, so beware of spoilers if you want to try to figure it out for yourself.

We are so close to success, but it seems to be just out of our grasp. It’s vexing!

Next time on FAIC: Amazingly, we’ll implement a version of this algorithm entirely based on sampling; we do not actually need to do the quadrature to compute the normalization factors!


Last time on FAIC I showed why our naïve implementation of computing the expected value can be fatally flawed: there could be a “black swan” region where the “profit” function f is different enough to make a big difference in the average, but the region is just small enough to be missed sometimes when we’re sampling from our distribution p.

It looks so harmless

The obvious solution is to work harder, not smarter: just do more random samples when we’re taking the average! But doesn’t it seem to be a little wasteful to be running ten thousand samples in order to get 9990 results that are mostly redundant and 10 results that are extremely relevant outliers?

Perhaps we can be smarter.

We know how to compute the expected value in a discrete non-uniform distribution of doubles: multiply each value by its weight, sum those, and divide by the total weight. But we should think for a moment about why that works.

If we have an unfair two-sided die — a coin — stamped with value 1.23 on one side, and -5.87 on the other, and 1.23 is twice as likely as -5.87, then that is the same as a fair three sided die — whatever that looks like — with 1.23 on two sides and -5.87 on the other. One third of the time we get the first 1.23, one third of the time we get the second 1.23, and one third of the time we get -5.87, so the expected value is 1.23/3 + 1.23/3 - 5.87/3, and that’s equivalent to (2 * 1.23 - 5.87) / 3. This justifies our algorithm.
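
In code, that computation is just a weighted average; here is a tiny sketch using LINQ’s Zip:

double[] values = { 1.23, -5.87 };
double[] weights = { 2.0, 1.0 };
double expectedValue =
  values.Zip(weights, (v, w) => v * w).Sum() / weights.Sum();
// Prints -1.137, the same as (2 * 1.23 - 5.87) / 3.
Console.WriteLine($"{expectedValue:0.###}");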

Can we use this insight to get a better estimate of the expected value in the continuous case? What if we thought about our continuous distribution as just a special kind of discrete distribution?

Aside: In the presentation of discrete distributions I made in this series, of course we had integer weights. But that doesn’t make any difference in the computation of expected value; we can still multiply values by weights, take the sum, and divide by the total weight.

Our original PDF — that shifted, truncated bell curve — has support from 0.0 to 1.0. Let’s suppose that instead we have an unfair 1000-sided die, by dividing up the range into 1000 slices of size 0.001 each.

  • The weight of each side of our unfair die is the probability of rolling that side.
    • Since we have a normalized PDF, the probability is the area of that slice. 
    • Since the slices are very thin, we can ignore the fact that the top of the shape is not “level”; let’s just treat it as a rectangle.
    • The width is 0.001; the height is the value of the PDF at that point.
    • That gives us enough information to compute the area.
  • Since we have a normalized PDF, the total weight that we have to divide through is 1.0, so we can ignore it. Dividing by 1.0 is an identity.
  • The value on each side of the die is the value of our profit function at that point.

Now we have enough information to make an estimate of the expected value using our technique for discrete distributions.

Aside: Had I made our discrete distributions take double weights instead of integer weights, at this point I could simply implement a “discretize this distribution into 1000 buckets” operation that turns weighted continuous distributions into weighted discrete distributions.

However, I don’t really regret making the simplifying choice to go with integer weights early in this series; we’re immediately going to refactor this code away anyways, so turning it into a discrete distribution would have been a distraction.

Let’s write the code: (Code for this episode is here.)

public static double ExpectedValue(
    this IWeightedDistribution<double> p,
    Func<double, double> f) =>
  // Let’s make a 1000-sided die:
  Enumerable.Range(0, 1000)
  // … from 0.0 to 1.0:
  .Select(i => ((double)i) / 1000)
  // The value on the “face of the die” is f(x);
  // the weight of that face is the probability
  // of choosing this slot.
  .Select(x => f(x) * p.Weight(x) / 1000)
  .Sum();
// No need to divide by the total weight since it is 1.0.

And if we run that:

Console.WriteLine($"{p.ExpectedValue(f):0.###}");

we get

0.113

which is a close approximation of the true expected value of this profit function over this distribution. Total success, finally!

Or, maybe not.

I mean, that answer is correct, so that’s good, but we haven’t solved the problem in general.

The obvious problem with this implementation is: it only works on normalized distributions whose support is between 0.0 and 1.0. Also, it assumes that 1000 is a magic number that always works. It would be nice if this worked on non-normalized distributions over any range with any number of buckets.

Fortunately, we can solve these problems by making our implementation only slightly more complicated:

public static double ExpectedValue(
  this IWeightedDistribution<double> p,
  Func<double, double> f,
  double start = 0.0,
  double end = 1.0,
  int buckets = 1000)
{
  double sum = 0.0;
  double total = 0.0;
  for (int i = 0; i < buckets; i += 1)
  {
    double x = start + (end - start) * i / buckets;
    double w = p.Weight(x) / buckets;
    sum += f(x) * w;
    total += w;
  }
  return sum / total;
}

That totally works, but take a closer look at what this is really doing. We’re computing two sums, sum and total, in exactly the same manner. Let’s make this a bit more elegant by extracting out the summation into its own method:

public static double Area(
    Func<double, double> f,
    double start = 0.0,
    double end = 1.0,
    int buckets = 1000) =>
  Enumerable.Range(0, buckets)
    .Select(i => start + (end - start) * i / buckets)
    .Select(x => f(x) / buckets)
    .Sum();
public static double ExpectedValue(
    this IWeightedDistribution<double> p,
    Func<double, double> f,
    double start = 0.0,
    double end = 1.0,
    int buckets = 1000) =>
  Area(x => f(x) * p.Weight(x), start, end, buckets) /
    Area(p.Weight, start, end, buckets);

As often happens, by making code more elegant, we gain insights into the meaning of the code. That first function should look awfully familiar, and I’ve renamed it for a reason. The first helper function computes an approximation of the area under a curve; the second one computes the expected value as the quotient of two areas. 

It might be easier to understand it with a graph; here I’ve graphed the distribution p.Weight(x) as the blue line and the profit times the distribution, f(x) * p.Weight(x), as the orange line:

The total area under the blue curve is 1.0; this is a normalized distribution.

The orange curve is the blue curve multiplied by the profit function at that point. The total area under the orange curve — remembering that area below the zero line is negative area — divided by the area of the blue curve (1.0) is the expected value.

Aside: You can see from the graph how carefully I had to contrive this “black swan” scenario. I needed a region of the graph where the area under the blue line is close to 0.001 and the profit function is so negative that it makes a large negative area there when multiplied, but without making a large negative area anywhere else.

Of course this example is contrived, but it is not unrealistic; unlikely things happen all the time, and sometimes those unlikely things have important consequences.

An interesting feature of this scenario is: look at how wide the negative region is! It looks like it is around 10% of the total support of the distribution; the problem is that we sample from this range only 0.1% of the time because the blue line is so low here. We’ll return to this point in the next episode.

Aside: The day I wrote this article I learned that this concept of computing expected value of a function applied to a distribution by computing the area of the product has a delightful name: LOTUS, which stands for the Law Of The Unconscious Statistician.

The tongue-in-cheek name is apparently because statistics students frequently believe that “the expected value is the area under this curve” is the definition of expected value. I hope I avoided any possible accusation of falling prey to the LOTUS fallacy. We started with a correct definition of expected value: the average value of a function applied to a bunch of samples, as the size of the bunch gets large. I then gave an admittedly unrigorous, informal and hand-wavy justification for computing it by approximating area, but it was an argument.

We’ve now got two ways of computing an approximation of the expected value when given a distribution and a function:

  • Compute a bunch of samples and take their average.
  • Compute approximate values of two areas and divide them.

As we know, the first has problems: we might need a very large set of samples to find all the relevant “black swan” events, and therefore we spend most of our time sampling the same boring high-probability region over and over.

However, the second has some problems too:

  • We need to know the support of the distribution; in my contrived example I chose a distribution over 0.0 to 1.0, but of course many distributions have much larger ranges.
  • We need to make a guess about the appropriate number of buckets to get an accurate answer.
  • We are doing a lot of seemingly unnecessary math; between 0.0 and, say, 0.3, the contribution of both the blue and orange curves to the total area is basically zero. Seems like we could have skipped that, but then again, skipping a region with total probability of one-in-a-thousand led to a bad result before, so it’s not entirely clear when it is possible to save on work.
  • Our first algorithm was fundamentally about sampling, which seems appropriate, since “average of a set of samples” is the definition of expected value. This algorithm is just doing an approximation of integral calculus; something seems “off” about that.

It seems like we ought to be able to find an algorithm for computing expected value that is more accurate than our naive algorithm, but does not rely so much on calculus.

Next time on FAIC: We’ll keep working on this problem!


I’m continuing with my project to port over, reformat and update a decade of old blog posts. Today, a few days in mid-October 2003; this is still my second month of blogging and I am writing at what I would consider today to be a ridiculous rate.

Why is there no #include?

Once again I fail to understand the customer mindset.

How do I script a non-default dispatch?

First, COM is too complicated. Second, sometimes defense in depth can go too far.

What everyone should know about character encoding

I had been meaning to write this post, and then Joel ended up doing a far better job than I would have. That’s a win in my opinion.

It never leaks but it pours

Some thoughts on memory leaks and language design to avoid them.

I take exception to that

Long jumps considered way more harmful than exceptions

Whether you should use exception handling or not is a decision to be made on its merits; but trying to mix code that uses exception handling with code that uses return codes is going to be a mess. Pick one.

Designing JScript .NET

The first of what will be a lot of articles describing the JScript .NET language design process.

Microsoft’s under-funding of this language and the long-term strategic ramifications of that choice continues to be the corporate strategy decision affecting me directly that I most strongly disagreed with. It’s super irritating to have a vision of the future, try to build towards it, and have all those efforts rebuked, only to see many aspects of that vision realized a decade later at many times the cost. I’m glad Microsoft got there eventually, but it could have been a lot sooner, a lot cheaper.

Expect to see a bunch more of this rant as I port over those articles; you’ve been warned.

Wrox is dead, long live Wrox
Digging a security hole all the way to China
Dead trees vs bits

Some thoughts on the economics of publishing computer books in 2003, before the e-book revolution.

VBScript is to VB as ping pong is to volleyball

Is it more important to learn the language, or the object model? Plus, some musings on the 2003 mobile phone operating system business.

How bad is good enough?

The first of many rants about how to think about improving performance of software.

More soon!


Last time on FAIC we reviewed the meaning of “expected value”: when you get a whole bunch of samples from a distribution, and a function on those samples, what is the average value of the function’s value as the number of samples gets large?

I gave a naive implementation:

public static double ExpectedValue<T>(
    this IDistribution<T> d,
    Func<T, double> f) =>
  d.Samples().Take(1000).Select(f).Average();

public static double ExpectedValue(
    this IDistribution<double> d) =>
  d.ExpectedValue(x => x);

Though short and sweet, this implementation has some problems; the most obvious one is that hard-coded 1000 in there; where did it come from? Nowhere in particular, that’s where.

It seems highly likely that this is either too big, and we can compute a good estimate in fewer samples, or that it is too small, and we’re missing some important values.

Let’s explore a scenario where the number of samples is too small. The scenario will be contrived, but not entirely unrealistic.

Let’s suppose we have an investment strategy; we invest a certain amount of money with this strategy, and when the strategy is complete, we have either more or less money than we started with. To simplify things, let’s say that the “outcome” of this strategy is just a number between 0.0 and 1.0; 0.0 indicates that the strategy has completely failed, resulting in a loss, and 1.0 indicates that the strategy has completely succeeded, resulting in the maximum possible return.

Before we go on, I want to talk a bit about that “resulting in a loss” part. If you’re a normal, sensible investor like me and you have $100 to invest, you buy a stock or a mutual fund for $100 because you believe it will increase in value. If it goes up to $110 you sell it and pocket the $10 profit. If it goes down to $90, you sell it and take the $10 loss. But in no case do you ever lose more than the $100 you invested. (Though of course you do pay fees on each trade whether it goes up or down; let’s suppose those are negligible.) Our goal is to get that 10% return on investment.

Now consider the following much riskier strategy for spending $100 to speculate in the market: suppose there is a stock currently at $100 which we believe will go down, not up. We borrow a hundred shares from someone willing to lend them to us, and sell them for $10000. We pay the lender $100 interest for their trouble. Now if the stock goes down to $90, we buy the hundred shares back for $9000, return them to the owner, and we’ve spent $100 but received $1000; we’ve made a 900% return on the $100 we “invested”. This is a “short sale”, and as you can see, you get a 900% return instead of a 10% return.

(I say “invested” in scare quotes because this isn’t really investing; it’s speculation. Which is a genteel word for “gambling”.)

But perhaps you also see the danger here. Suppose the stock goes down but only to $99.50. We buy back the shares for $9950, and we’ve only gained $50 on the trade, so we’ve lost half of our $100; in the “long” strategy, we would only have lost fifty cents; your losses can easily be bigger with the short strategy than with the long strategy.

But worse, what if the stock goes up to $110. We have to buy back those shares for $11000, so we’ve “invested” $100 and gotten a “return” of -$1000, for a total loss of $1100 on a $100 investment. In a short sale if things go catastrophically wrong you can end up losing more than your original investment. A lot more!

As I often say, foreshadowing is the sign of a quality blog; let’s continue with our example.

We have a process that produces values from 0.0 to 1.0 that indicates the “success level” of the strategy. What is the distribution of success level? Let’s suppose it’s a straightforward bell-shaped curve with the mean on the “success” side:

This is a PDF, and extra bonus, it is normalized: the total area under the curve is 1.0. The vast majority of the time — 99.9% — our investment strategy produces an outcome between 0.48 and 1.0, so that’s the area of the graph I’m going to focus on.

Aside: I know it looks a little weird seeing a bell-shaped curve that just cuts off abruptly at 1.0, but let’s not stress about it. Again, this is a contrived example, and we’ll see why it is irrelevant later.

Now, I don’t intend to imply that the “success level” is the return: I’m not saying that most of the time we get a 75% return. Let’s suppose that we’ve worked out a function that translates “strategy outcome level” to the percentage gain on our investment. That is, if the function is 0.0, we’ve not gained anything but we’ve not lost anything either. If it is 0.5 then our profits are 50% of our investment; if it’s -0.25 then we’ve taken a net loss of 25% of our investment, and so on.

Let’s propose such a function, and zoom in on the right side of the diagram where the 99.9% of cases are found: from 0.46 to 1.0. Here’s our function:

If our process produces an outcome of 0.54 or larger, we make between a 0% and an 18% return, which seems pretty good; after all the vast majority of the bulk of the distribution is to the right of 0.54. Now, in the unlikely case where we get an outcome from 0.48 to 0.54, we lose money; on the left hand end, we’re losing almost 200%; that is, if we put in $100 we not only fail to make it back, we end up owing $100.

The question at hand, of course, is what is the average value of the “profit function” given the distribution of outcomes? That is, if we ran this strategy a thousand times, on average what return would we get?

Well, we already wrote the code, so let’s run it: (Code can be found here.)

var p = Normal.Distribution(0.75, 0.09);
double f(double x) =>
  Atan(1000 * (x - .45)) * 20 - 31.2;
Console.WriteLine($"{p.ExpectedValue(f):0.##}");

Aside: I’m ignoring the fact that if we get a sample out of the normal distribution that is greater than 1.0 or less than 0.0 we do not discard it. The region to the left of 0.0 is absurdly unlikely, and the region to the right of 1.0 has low probability and the value of the profit function is very flat. Writing a lot of fiddly code to correct for this problem would not change the estimated expected value much and would be a distraction from the far deeper problem we’re about to discover.

Also, I’m limiting the distribution to a support of 0.0 to 1.0 for pedagogical reasons; as we’ll see in later episodes, we will eventually remove this restriction, so let’s just ignore it now.

We get the answer:

0.14

Super, we will get on average a 14% return on our investment with this strategy. Seems like a good investment; though we have a small chance of losing big, the much larger chance of getting a solid 0% to 18% return outweighs it. On average.

Aside: It is important to realize that this 14% figure does not necessarily imply that typically we get a 14% return when we run this strategy, any more than the expected value of 3.5 implies that we’ll typically get 3.5 when we roll a d6. The expected value only tells you what the average behaviour is; it doesn’t tell you, for example, what the probability is that you’ll lose everything. It tells you that if you ran this strategy a million times, on average you’d get a 14% return, and that’s all it tells you. Be careful when making decisions relying upon expected values!

This 14% expected result totally makes sense, right? We know most of the time the result will be between 0% and 18%, so 14% seems like a totally reasonable expected value.

As one of my managers at Microsoft used to say: would you bet your car on it?

Let’s just run the code again to be sure.

0.14

Whew. That’s a relief.

You know what they say: doing the same thing twice and expecting different results is the definition of practicing the piano, but I’m still not satisfied. Let’s run it another hundred times:

0.14, 0.08, 0.09, 0.14, 0.08, 0.14, 0.14, 0.07, 0.07, 0.14, ...

Oh dear me.

The expected return of our strategy is usually 14%; why is it sometimes half that? Yes, there is randomness in our algorithm, but surely it should not cause the computed value to vary so widely!

It seems like we cannot rely on this implementation, but why not? The code looks right.

As it turns out, I showed you exactly the wrong part of the graphs above. (Because I am devious.) I showed you the parts that make up 99.9% of the likely cases. Let’s look at the graph of the profit-or-loss function over the whole range of outcomes from 0.0 to 1.0:

OUCH. It’s an approximation of a step function; in the original graph that started at 0.46, we should have noticed that the extremely steep part of the curve was trending downwards to the left at an alarming rate. If you have an outcome of 0.48 from our process, you’re going to lose half your money; at 0.46, you lose all your money and a little bit more. But by the time we get to 0.44, you’re losing over 60 times your original investment; this is a catastrophic outcome.

How likely is that catastrophic outcome? Let’s zoom in on just that part of the probability distribution. I’ll re-scale the profit function so that both curves fit on the same graph, and you can see exactly where the step happens:

This is a normalized PDF, so the probability of getting a particular outcome is equal to the area of the region under the curve. The region under the curve from 0.38 to 0.45 is a little less than a triangle with base 0.07 and height 0.02, so its area is in the ballpark of 0.0007. We will generate a sample in that region roughly once every 1400 samples.
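If you’d like to check that ballpark, a brute-force estimate is a quick sketch; this assumes the Samples extension method from earlier in the series:

var p = Normal.Distribution(0.75, 0.09);
// Count how many of a million samples land in the catastrophic region.
int hits = p.Samples()
  .Take(1000000)
  .Count(x => x >= 0.38 && x <= 0.45);
// Prints the estimated probability; it should land within a factor
// of two or so of our eyeballed 0.0007.
Console.WriteLine((double)hits / 1000000);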

But… we’re computing the expected value by taking only a thousand samples, so it is quite likely that we don’t hit this region at all. The losses in that tiny region are so enormous that every sample we do draw from it changes the average: a single −60 sample among a thousand drags the average down by about 0.06 all on its own, which is just about the difference between an estimate of 0.14 and an estimate of 0.08.

Aside: if you run the expected value estimate a few thousand times or more, it just gets crazier; sometimes the estimate is actually negative, if by bad luck we get more than one sample in this critical region.

I contrived this example to produce occasional “black swan events”; obviously, our naïve algorithm for computing expected value does not take into account the difficulty of sampling a black swan event, and therefore produces inaccurate and inconsistent results.

Next time on FAIC: We know how to solve this problem exactly for discrete distributions; can we use some of those ideas to improve our estimate of the expected value in this scenario?

Last time in this series we saw that we could compute a continuous posterior distribution when given a continuous prior and a discrete likelihood function; I hope it is clear how that is useful, but I’d like to switch gears for a moment and look at a different (but also extremely useful) computation: the expected value.

I’ll start with a quick refresher on how to compute the expected value of a discrete distribution.

You probably already know what the expected value of a discrete distribution is; we’ve seen it before in this series. But in case you don’t recall, the basic idea is: suppose we have a distribution of values of a type where we can meaningfully take an average; the “expected value” is the average value of a set of samples as the number of samples gets very large.

A simple example is: what’s the expected value of rolling a standard, fair six-sided die? You could compute it empirically by rolling 6000d6 and dividing by 6000, but that would take a while.

Aside: Again, recall that in Dungeons and Dragons, XdY is “roll a fair Y-sided die X times and take the sum”.
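Of course, we don’t have to roll physical dice; a quick empirical sketch, assuming the SDU (standard discrete uniform) distribution and the Samples extension method from earlier in the series:

var d6 = SDU.Distribution(1, 6);
// Roll 6000d6 and take the average; this prints a number close to 3.5.
Console.WriteLine(d6.Samples().Take(6000).Average());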

We could also compute this without doing any rolling; we’d expect that about 1000 of those rolls would be 1, 1000 would be 2, and so on. So the average should be about (1000 + 2000 + … + 6000) / 6000, which is just (1 + 2 + 3 + 4 + 5 + 6) / 6, which is 3.5. On average when you roll a fair six-sided die, you get 3.5.

We can then do variations on this scenario; what if the die is still fair, but the labels are not 1, 2, 3, 4, 5, 6, but instead -9, -1, 0, 1, 3, 30?  As I’m sure you can imagine, once again we can compute the expected value by just taking the average: (-9 - 1 + 0 + 1 + 3 + 30) / 6 = 4.

Aside: It’s a bit weird that the “expected value” of a distribution is a value that is not even in the support of the distribution; I’ve never once rolled a 3.5 on a d6. Beginners sometimes confuse the expected value with the mode: that is, the value that you’d expect to get more often than any other value. Remember, the expected value is just an average; it only makes sense in distributions where the sampled values can be averaged.

What if the die isn’t fair? In that case we can compute a weighted average: multiply the value of each side by the weight of that side, sum those products, and divide by the total weight. As we saw in a previous episode:

public static double ExpectedValue(
    this IDiscreteDistribution<int> d) =>
  d.Support()
   .Select(s => 
     (double)s * d.Weight(s)).Sum() / d.TotalWeight();
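For example, a hypothetical usage, assuming the SDU (standard discrete uniform) distribution from earlier in the series; note that this computes the answer directly from the weights, with no sampling involved:

var d6 = SDU.Distribution(1, 6);
Console.WriteLine(d6.ExpectedValue()); // exactly 3.5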

And of course we could similarly define methods for discrete distributions of double and so on; for instance, the double version might look something like this sketch:
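public static double ExpectedValue(
    this IDiscreteDistribution<double> d) =>
  d.Support()
   .Select(s => s * d.Weight(s)).Sum() / d.TotalWeight();

Hopefully that is all clear.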

The question I want to explore in the next few episodes requires us to make a small extension of the meaning of “expected value”:

  • Suppose we have a distribution of outcomes d, in the form of an IWeightedDistribution<double>
  • Suppose we have a function f from double to double which assigns a value to each outcome.
  • We wish to accurately and efficiently estimate the average value of f(d.Sample()) as the number of samples becomes large.

Aside: If we had implemented Select on weighted distributions, that would be the same as the expected value of d.Select(f) — but we didn’t!

Aside: We could be slightly more general and say that the distribution is on any T, and f is a function from T to double, but for simplicity’s sake we’ll stick to continuous, one-dimensional distributions in this series. At least for now.

There’s an obvious and extremely concise solution; if we want the average as the number of samples gets large, just compute the average of a large number of samples! It’s like one line of code. Since we make no use of any weights, we can take any distribution:

public static double ExpectedValue(
    this IDistribution<double> d,
    Func<double, double> f) =>
  d.Samples().Take(1000).Select(f).Average();

In fact, we could make this even more general; we only need to get a number out of the function:

public static double ExpectedValue<T>(
    this IDistribution<T> d,
    Func<T, double> f) =>
  d.Samples().Take(1000).Select(f).Average();
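For example, a hypothetical usage with a discrete distribution of integers, again assuming the SDU distribution from earlier in the series; since this estimate is based on a thousand samples, the result is only approximate:

var d6 = SDU.Distribution(1, 6);
// The true expected value of the square of a d6 roll is 91 / 6, about 15.17.
Console.WriteLine(d6.ExpectedValue(x => (double)(x * x)));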

We could also make it more specific, for the common case where the function is an identity:

public static double ExpectedValue(
    this IDistribution<double> d) =>
  d.ExpectedValue(x => x);

Let’s look at a couple of examples. (Code for this episode can be found here.) Suppose we have a distribution from 0.0 to 1.0, say the beta distribution from last time, but we’ll skew it a bit:

var distribution = Beta.Distribution(2, 5);
Console.WriteLine(distribution.Histogram(0, 1));

      ****                              
     *******                            
    *********                           
    *********                           
   ***********                          
   ************                         
   *************                        
   **************                       
  ***************                       
  ****************                      
  *****************                     
  ******************                    
 *******************                    
 ********************                   
 **********************                 
 ***********************                
 ************************               
***************************             
*****************************           
----------------------------------------

It looks like we’ve heavily biased these coins towards flipping tails; suppose we draw a coin from this mint; what is the average fairness of the coins we draw? We can just draw a thousand of them and take the average to get an estimate of the expected value:

Console.WriteLine(distribution.ExpectedValue());

0.28099740981762

That is, we expect that a coin drawn from this mint will come up heads about 28% of the time and tails 72% of the time, which conforms to our intuition that this mint produces coins that are heavily weighted towards tails.

Or, here’s an idea; remember the distribution we determined last time: the posterior distribution of fairness of a coin drawn from a Beta(5, 5) mint, flipped once, that turned up heads. On average, what is the fairness of such a coin? (Remember, this is the average given that we’ve discarded all the coins that came up tails on their first flip.)

var prior = Beta.Distribution(5, 5);
IWeightedDistribution<Result> likelihood(double d) =>
  Flip<Result>.Distribution(Heads, Tails, d);
var posterior = prior.Posterior(likelihood)(Heads);
Console.WriteLine(posterior.ExpectedValue());

0.55313807698807

As we’d expect, if we draw a coin from this mint, flip it once, and it comes up heads, on average if we did this scenario a lot of times, the coins would be biased to about 55% heads to 45% tails.

So, once again we’ve implemented a powerful tool in a single line of code! That’s awesome.

Right?

I hope?

Unfortunately, this naive implementation has a number of problems.

Exercise: What are the potential problems with this implementation? List them in the comments!

Next time on FAIC: We’ll start to address some of the problems with this naive implementation.
