LCC snowfall analysis

A brief overview

February 9, 2026

Introduction

We define total snowfall as $s(d,y)$, known for all days $d \in D_y = \{1, 2, \ldots, n_y\}$ and years $y \in Y = \{199X, \ldots, 2025\}$. We are given a set of $N$ models $\{m_1, m_2, \ldots, m_N\}$, where each model produces a prediction $m_i(d,y) = \hat{s}_i(d,y)$, with $\hat{s}_i$ approximating total snowfall.

Naive model

We simply identify the model that has historically shown the lowest error across all days. For any model mim_i, we can define

$$\textrm{Total Model Error}(i)=\sum_{y \in Y}\sum_{d\in D_y} \left[s(d,y)-\hat{s}_i(d,y)\right]^2$$

We can then call our naive prediction:

$$\textrm{Naive Total}=\sum_{d \in D_{2026}}\hat{s}_I(d,2026)$$

where

$$I = \textrm{argmin}_{i \in \{1,\ldots,N\}}\left(\textrm{Total Model Error}(i)\right)$$

Note the argmin: we select the model with the *lowest* historical error.
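The naive model above can be sketched in a few lines. The data layout here (dictionaries of daily arrays keyed by year and model index) is a hypothetical choice, not from the source, and the synthetic data is for illustration only:

```python
import numpy as np

# Hypothetical data layout (an assumption, not from the source): per year, a
# vector of observed daily snowfall s(d, y), and per model a matching
# prediction vector \hat{s}_i(d, y). Synthetic data for illustration.
rng = np.random.default_rng(0)
years = list(range(2020, 2026))          # stand-in for the historical years Y
n_models = 3                             # stand-in for N
n_days = 120
actual = {y: rng.gamma(2.0, 1.5, size=n_days) for y in years}
preds = {i: {y: actual[y] + rng.normal(0.0, 0.5 * (i + 1), size=n_days)
             for y in years}
         for i in range(n_models)}       # model i gets noisier as i grows

def total_model_error(i):
    """Sum of squared daily errors for model i across all historical years."""
    return sum(float(np.sum((actual[y] - preds[i][y]) ** 2)) for y in years)

errors = np.array([total_model_error(i) for i in range(n_models)])
I = int(np.argmin(errors))               # index of the lowest-error model

# Naive total: sum model I's daily predictions over the target season.
preds_2026 = {i: rng.gamma(2.0, 1.5, size=n_days) for i in range(n_models)}
naive_total = float(np.sum(preds_2026[I]))
print(I, naive_total)
```

Since the synthetic noise grows with the model index, model 0 should come out with the lowest historical error here.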

Year-agnostic performance weighted model

Instead of blindly accepting one model, we can weight the outputs of each of the models based on how trustworthy they've been in the past. The softmax function gives us a clean, tunable way to weight the impact of the errors of each of the models. We define

$$w_i = \frac{e^{-\beta E_i}}{\sum_{j=1}^{N}e^{-\beta E_j}}$$

where $E_i = \textrm{Total Model Error}(i)$, and thus calculate

$$\textrm{Performance Weighted Total}\big|_\beta = \sum_{i=1}^{N}w_i\sum_{d \in D_{2026}}\hat{s}_i(d,2026)$$

Here $\beta$ is a tunable parameter that interpolates from uniform weighting ($\beta \rightarrow 0$) to winner-take-all ($\beta \rightarrow \infty$), recovering the naive model in the latter limit.

Yearly performance weighted model

One issue here is that we treat all years as equal. However, years where all models performed badly should matter less for grading than years where most models were more accurate. [MORE TO COME]

Yearly correlation weighted model

Given that different years may follow different patterns driven by macro factors (e.g., El Niño), we can preferentially weight models that performed well in the years most similar to this year. [MORE TO COME]