
Simple introduction to resource estimation part 1

This post aims to introduce geoscientists with little or no prior knowledge of estimation to the fundamentals of geostatistics. We are aware that a whole swath of other estimation methods exists. However, since geostatistics is now a dominant force in estimation, we will focus on it and gently show how it produces its estimates. If you are interested in alternatives, we recommend "Local Models for Spatial Analysis" by C. D. Lloyd for a complete overview.

NOTE: do not be afraid to read further if you are scared of maths: this post does not contain any maths (well, only one formula, to summarize)! For the more adventurous readers, we cover the mathematics behind Kriging, RBFs and Machine Learning in a separate post. They are just modifications of what we describe below.

With this out of the way, we can start.

A simple analogy

Before we start with resource estimation techniques, let us begin with an analogy: say we wish to estimate a person's height when fully grown. Assume a new baby is born and we wish to estimate how tall they will be as an adult. The estimate in this case is the expected value at some point in the future rather than at an unknown location, but we will come to that later.

Intuitively, we understand that we cannot predict that final height with any certainty, so we have to estimate what it will be. But how can we estimate something in the future, when the future is unknown?

Think briefly: how would you estimate how tall somebody will be when they are grown up?

A typical way would be to measure the heights of existing adults and average the results. The expected height will be close to this average. Most of you will know how this is done, but we will calculate it here so we can build on it later.

A simple average to produce a first estimate:

Let us assume we measured the heights of 6 adults:

Person Height (cm)
1 171
2 168
3 170
4 200
5 190
6 155

The average height is then a simple matter of adding all the heights together and dividing by the number of people:

Total height: 1054 cm
Number of persons: 6
Average: 1054 / 6 = 175.67 cm

So, for every new child, we estimate they will be around 176 cm tall when grown up.
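For readers who like to see the arithmetic in code, here is a minimal Python sketch of the same calculation (the variable names are our own, purely illustrative):

```python
# Heights (cm) of the six measured adults
heights = [171, 168, 170, 200, 190, 155]

# Simple average: add everything together and divide by the number of measurements
average = sum(heights) / len(heights)
print(round(average, 2))  # 175.67
```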

A first estimate

This type of average will be familiar to most. We use it all the time: for example, the average earnings per day or per year when selling something, or the average spending per day when on holiday.

It is the simplest and most commonly used way to produce an 'estimate'. You can do exactly the same for assay grades by adding all the assay values together and dividing by the number of samples. You then have a grade estimate.

But…

As you might realize, this estimate will not be very accurate. Being of Dutch origin, I belong to the tallest population in the world. If we measured the average height of adults elsewhere in the world, say in Türkiye, that average would not reflect the expected adult height of a Dutch baby. For a better estimate, we would want to include this knowledge in our estimate.

Let’s take our example again, but now apply an adjustment based on where the baby was born.

Person Height (cm) Country
1 171 NL
2 168 TR
3 170 TR
4 200 NL
5 190 NL
6 155 TR

Local estimate: Netherlands

If we know a baby was born in the Netherlands, we would possibly get a better estimate if we did not include any of the Turkish measurements in our average. We then get:

Total height: 561 cm
Number of persons: 3
Average: 561 / 3 = 187 cm

Local estimate: Türkiye

Similarly, if a new baby is born in Türkiye, we would omit the Dutch measurements:

Total height: 493 cm
Number of persons: 3
Average: 493 / 3 ≈ 164.33 cm
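A minimal sketch of these local averages in Python, assuming the six measurements above are stored as (height, country) pairs (the measurements list and local_average helper are our own illustrative names):

```python
# Each measurement: (height in cm, country where the adult was measured)
measurements = [
    (171, "NL"), (168, "TR"), (170, "TR"),
    (200, "NL"), (190, "NL"), (155, "TR"),
]

def local_average(country):
    """Average only the heights measured in the given country."""
    local = [h for h, c in measurements if c == country]
    return sum(local) / len(local)

print(round(local_average("NL"), 2))  # 187.0
print(round(local_average("TR"), 2))  # 164.33
```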

And elsewhere?

Based on our prior knowledge, we think we can get a better estimate by only using the values from the baby's own country (thus throwing half our measurements away…). But what happens if the baby is born in Austria, or Belgium? We cannot simply ignore all measurements just because we have none from those countries. How can we still use the knowledge of where the baby was born?

The weighted average

Here we introduce how weighting factors can be used to further improve our estimates. Belgium, for example, is very close to the Netherlands and far away from Türkiye. Austria, on the other hand, is nearly halfway, but slightly closer to the Netherlands. To get reasonable estimates we can use weighting factors in our average calculation.

This concept is perhaps the most important part of the whole post. Once you grasp it (and it is actually not that hard), you will understand the essence of interpolation.

If we slightly adjust our average calculation for a baby born in the Netherlands, we can gently introduce how weights can be used and adjusted to get better estimates. To this end, we do not simply discard the Turkish measurements; instead, we set their weights to zero.

Height (cm) Country Weighting Factor
171 NL 1
168 TR 0
170 TR 0
200 NL 1
190 NL 1
155 TR 0

In a more general form, we multiply each height by a weighting factor (WF) and then divide by the sum of all WFs. To illustrate, let's again calculate the average for NL first before adapting it for a Belgian baby.

The baby was born in NL, but this time we estimate using WFs to produce the total height, like so:

171 x 1

168 x 0

170 x 0

200 x 1

190 x 1

155 x 0

This gives our total height of 561 cm.

Likewise, we no longer divide by the total number of measurements, but by the sum of the WFs:

1 + 0 + 0 + 1 + 1 + 0 = 3

This again gives an average of 561 / 3 = 187 cm.

From simple average to weighted average: summary

To change our standard average into a weighted average, we multiply each measurement by its weighting factor (WF) and sum these products. We then divide that total by the sum of all the WFs:

Weighted average = sum of (measurement × WF) / sum of all WFs

We cannot overstate how important it is to grasp this, as it forms the basis of most geostatistical methods!
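A minimal sketch of that recipe in Python (weighted_average is our own illustrative helper, not a function from any particular library):

```python
def weighted_average(values, weights):
    """Weighted average: sum of (value * weight) divided by the sum of the weights."""
    total = sum(v * w for v, w in zip(values, weights))
    return total / sum(weights)

heights = [171, 168, 170, 200, 190, 155]

# The 0/1 weights for a Dutch baby reproduce the local NL average
print(weighted_average(heights, [1, 0, 0, 1, 1, 0]))  # 187.0
```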

Inverse Distance Weighting (IDW)

A natural extension of the general weighted average is the IDW method. In IDW, the WFs are derived from distances. This means that instead of using just 0 and 1 as we did before, each WF is determined by a distance. To demonstrate, we will use this to create an estimate for somebody in Belgium. Belgium is about 200 km from NL and about 2500 km from TR. Intuitively, we understand that since Belgium is so close to NL, the estimate should emphasize the measurements from NL and reduce the WFs for measurements far away, such as those in TR. To this end we do not use the distances themselves, but their inverse: 1 / distance. This means that when the distance is large, the weight will be (very) small.

With this definition of our WF, we create our estimate for a Belgian baby. This means we multiply each height by its own WF, which is 1 / distance, like so:

171 x 1 / 200 = 171 x 0.005

168 x 1 / 2500 = 168 x 0.0004

170 x 1 / 2500 = 170 x 0.0004

200 x 1 / 200 = 200 x 0.005

190 x 1 / 200 = 190 x 0.005

155 x 1 / 2500 = 155 x 0.0004

Adding these together gives a total of height × WF of 3.0022. Similarly, the total of all weights is 0.0162.

This gives a weighted average of 3.0022 / 0.0162 = 185.32 cm.

As you can see, for a Belgian baby we estimate a slightly smaller adult height than for a Dutch baby, but a taller one than for a Turkish baby.
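Reusing the hypothetical weighted_average helper from the sketch above, the Belgian estimate can be reproduced like this (the 200 km and 2500 km figures are the rough distances used above):

```python
heights   = [171, 168, 170, 200, 190, 155]
distances = [200, 2500, 2500, 200, 200, 2500]  # rough km from Belgium to each sample's country

# Inverse distance weights: nearby samples get large weights, distant ones small weights
weights = [1 / d for d in distances]

print(round(weighted_average(heights, weights), 2))  # 185.32
```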

WARNING: formula coming up!

We can summarize what we have explained in many words with the following little formula:

\[e = \frac{\sum_i \omega_i \, v_i}{\sum_i \omega_i}\]

Well, actually, since the reciprocal of the sum of the weights ($\frac{1}{\sum_i \omega_i}$) is just a constant, it can be folded into the original weights. We can then simplify things even further to:

\[e = \sum_i \omega_i \, v_i\]

In this simplified form, estimation means multiplying each value ($v_i$) by a WF ($\omega_i$) and adding the results together, where the WFs are normalized (which simply means that, added together, they sum to 1).
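As a tiny sketch of that normalization step, reusing the heights and weights lists from the IDW sketch above:

```python
# Normalize the inverse-distance weights so that they add up to 1
total_w = sum(weights)
normalized = [w / total_w for w in weights]

# The estimate is now just a plain sum of weight * value
estimate = sum(w * h for w, h in zip(normalized, heights))
print(round(estimate, 2), round(sum(normalized), 3))  # 185.32 1.0
```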

One reason we write the formula here is to highlight how flexible this fundamental method actually is. Until now we have used very simple weights, but even in Inverse Distance Weighting there is normally the option to use not just the distance itself, but the squared or even cubed distance. The overall formula does not change when we change the way we establish the weights.

To establish the weights we can be as creative as we want: a sine, a cosine, fancy polynomials… whatever we think works for the problem at hand.
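As a sketch of that flexibility, we can reuse the Belgian heights and distances and simply swap the function that turns distances into weights (the idw_weights helper and the chosen powers are purely illustrative):

```python
def idw_weights(distances, power=2):
    """Inverse distance weights with an adjustable power (1, 2, 3, ...)."""
    return [1 / d**power for d in distances]

# Same heights and distances as in the Belgian example; only the weight
# function changes. Higher powers give the nearby Dutch samples even more
# influence, pulling the estimate towards the Dutch-only average of 187 cm.
for p in (1, 2, 3):
    w = idw_weights(distances, power=p)
    print(p, round(weighted_average(heights, w), 1))  # 185.3, 186.9, 187.0
```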

The thing to remember here (your take home message if you want) is:

If the samples remain the same across different estimation methods, the estimate depends only on the choice of the weights!

Trends

Before moving on to more advanced ways of establishing the weights, let us briefly look at how trends can be used. In our formula, the $\omega$'s are the weighting factors, and these can be anything. If we take the distances again as an example, we could add a directional adjustment to each weight. This is how trends in the data can be used: the weights of points in the direction of the trend are set higher than those of points perpendicular to it. Another, similar, way is to scale the data along the trend so that the distances are adjusted to reflect the trend (see the sketch below). The formula itself remains the same; only the weights are adjusted to reflect distances and trends.
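As a small sketch of the second approach, under the assumption of a single trend direction and a scaling factor of our own choosing, distances can be measured in a coordinate system that is squashed along the trend before the inverse-distance weights are computed:

```python
import math

def anisotropic_distance(dx, dy, trend_angle_deg=0.0, scale_along_trend=0.5):
    """Distance in which separation along the trend counts for less,
    so samples along the trend end up with larger weights."""
    angle = math.radians(trend_angle_deg)
    # Rotate the separation vector into the trend's coordinate system
    along = dx * math.cos(angle) + dy * math.sin(angle)
    across = -dx * math.sin(angle) + dy * math.cos(angle)
    # Shrink the along-trend component before measuring the distance
    return math.hypot(along * scale_along_trend, across)

# A sample 100 m east of the estimation point, with an east-west trend:
print(anisotropic_distance(100, 0, trend_angle_deg=0))  # 50.0 -> appears closer, gets a larger weight
# The same separation, but perpendicular to the trend:
print(anisotropic_distance(0, 100, trend_angle_deg=0))  # 100.0
```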

Final words

We have shown that the key to estimation is establishing weighting factors (WFs) that, when multiplied by the original sample values and added together, produce estimates at unsampled locations. This process is commonly referred to as interpolation. The way the weights are determined and processed determines the shape of the interpolation. This holds as long as the number of sample points included in the estimation remains the same across the different methods, an assumption that will become important when we consider ways to speed up interpolation.