Data integrity

Data integrity is the maintenance of, and the assurance of, data accuracy and consistency over its entire life-cycle,[1] and is a critical aspect of the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context – even under the same general umbrella of computing. It is at times used as a proxy term for data quality,[2] while data validation is a prerequisite for data integrity.[3] Data integrity is the opposite of data corruption.[4] The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities). Moreover, upon later retrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.

Linear regression

In statistics, linear regression is a linear approach to modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression.[1] This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.[2]
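As a minimal sketch of simple linear regression, the following fits a line by ordinary least squares; the data points here are hypothetical:

```python
import numpy as np

# Hypothetical data: one explanatory variable x, one scalar response y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Simple linear regression: fit y ≈ b0 + b1*x by ordinary least squares.
X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficients
b0, b1 = beta                                  # intercept and slope
```

With more than one column of explanatory variables in X, the same call performs multiple linear regression.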

Expected shortfall

Expected shortfall (ES) is a risk measure—a concept used in the field of financial risk measurement to evaluate the market risk or credit risk of a portfolio. The “expected shortfall at q% level” is the expected return on the portfolio in the worst q% of cases. ES is an alternative to value at risk that is more sensitive to the shape of the tail of the loss distribution.

Expected shortfall is also called conditional value at risk (CVaR),[1] average value at risk (AVaR), expected tail loss (ETL), and superquantile.[2]
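The “worst q% of cases” definition lends itself to a simple empirical estimator: sort the returns and average the worst q fraction. A minimal sketch (the return series here is simulated, not real data):

```python
import numpy as np

def expected_shortfall(returns, q=0.05):
    # Average of the worst q fraction of outcomes, reported as a positive loss.
    sorted_returns = np.sort(returns)                      # worst outcomes first
    n_tail = max(1, int(np.ceil(q * len(sorted_returns))))
    return -sorted_returns[:n_tail].mean()

# Simulated daily returns: mean 0, standard deviation 1%.
rng = np.random.default_rng(0)
simulated = rng.normal(0.0, 0.01, size=100_000)
es_5pct = expected_shortfall(simulated, q=0.05)
```

For normally distributed returns the 5% ES is about 2.06 standard deviations, noticeably deeper in the tail than the 5% value at risk of about 1.64 standard deviations.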

Bachelier model

The Bachelier model is the name given to a model of an asset price under Brownian motion, presented by Louis Bachelier in his PhD thesis The Theory of Speculation (Théorie de la spéculation, published 1900). It is equivalently called the “Normal Model” (as opposed to the “Log-Normal Model” or “Black–Scholes Model”).

On 2020-04-08, CME Group posted the note CME Clearing Plan to Address the Potential of a Negative Underlying in Certain Energy Options Contracts,[1] saying that after a threshold on price, it would change its energy options model from the geometric Brownian motion and Black–Scholes model to the Bachelier model. On 2020-04-20, oil prices reached negative values for the first time in history,[2] and the Bachelier model took on an important role in option pricing and risk management.

The European analytic formula for this model, based on a risk-neutral argument, is derived in Analytic Formula for the European Normal Black Scholes Formula (Kazuhiro Iwasawa, New York University, December 2, 2001).[3]
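Under the Bachelier model, the standard undiscounted price of a European call on a forward F with strike K, normal volatility σ, and expiry T is C = (F − K)Φ(d) + σ√T φ(d), with d = (F − K)/(σ√T); unlike Black–Scholes, it remains well defined for negative forwards and strikes. A sketch:

```python
from math import erf, exp, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bachelier_call(F, K, sigma, T):
    # Undiscounted call price under arithmetic Brownian motion dF = sigma dW.
    # Well defined even when the forward F or strike K is negative.
    d = (F - K) / (sigma * sqrt(T))
    return (F - K) * norm_cdf(d) + sigma * sqrt(T) * norm_pdf(d)
```

At the money (F = K) the formula reduces to σ√(T/2π), a convenient sanity check.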


  1. ^ “CME Clearing Plan to Address the Potential of a Negative Underlying in Certain Energy Options Contracts”. Retrieved 2020-04-21.
  2. ^ “An oil futures contract expiring Tuesday went negative in bizarre move showing a demand collapse”. CNBC. 20 April 2020. Retrieved 21 April 2020.
  3. ^ “Analytic Formula for the European Normal Black Scholes Formula”. New York University. 2 December 2001.

TED spread

TED spread (in red) and components during the Financial crisis of 2007–08

TED spread (in green), 1986 to 2015.

The TED spread is the difference between the interest rates on interbank loans and on short-term U.S. government debt (“T-bills”). TED is an acronym formed from T-Bill and ED, the ticker symbol for the Eurodollar futures contract.

Initially, the TED spread was the difference between the interest rates for three-month U.S. Treasuries contracts and the three-month Eurodollars contract as represented by the London Interbank Offered Rate (LIBOR). However, since the Chicago Mercantile Exchange dropped T-bill futures after the 1987 crash,[1] the TED spread is now calculated as the difference between the three-month LIBOR and the three-month T-bill interest rate.

Formula and reading

TED spread = 3-month LIBOR rate − 3-month T-bill interest rate

The size of the spread is usually denominated in basis points (bps). For example, if the T-bill rate is 5.10% and ED trades at 5.50%, the TED spread is 40 bps. The TED spread fluctuates over time but generally has remained within the range of 10 to 50 bps (0.1% to 0.5%) except in times of financial crisis. A rising TED spread often presages a downturn in the U.S. stock market, as it indicates that liquidity is being withdrawn.
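The basis-point arithmetic from the example above is trivial to encode; a sketch (the function name is ours):

```python
def ted_spread_bps(libor_3m_pct, tbill_3m_pct):
    # Difference between 3-month LIBOR and 3-month T-bill rates, in basis points.
    return round((libor_3m_pct - tbill_3m_pct) * 100)

spread = ted_spread_bps(5.50, 5.10)  # the example from the text: 40 bps
```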


The TED spread is an indicator of perceived credit risk in the general economy,[2] since T-bills are considered risk-free while LIBOR reflects the credit risk of lending to commercial banks. An increase in the TED spread is a sign that lenders believe the risk of default on interbank loans (also known as counterparty risk) is increasing. Interbank lenders, therefore, demand a higher rate of interest, or accept lower returns on safe investments such as T-bills. When the risk of bank defaults is considered to be decreasing, the TED spread decreases.[3] Boudt, Paulus, and Rosenthal show that a TED spread above 48 basis points is indicative of economic crisis.[4]

Historical levels


The long-term average of the TED spread has been 30 basis points with a maximum of 50 bps. During 2007, the subprime mortgage crisis ballooned the TED spread to a region of 150–200 bps. On September 17, 2008, the TED spread exceeded 300 bps, breaking the previous record set after the Black Monday crash of 1987.[5] Some higher readings for the spread were due to the inability to obtain accurate LIBOR rates in the absence of a liquid unsecured lending market.[6] On October 10, 2008, the TED spread reached another new high of 457 basis points.


In October 2013, due to worries regarding a potential default on US debt, the 1-month TED went negative for the first time since it started being tracked.[7][8]

See also

  • LIBOR-OIS spread
  • Overnight indexed swap
  • Treasury Bill
  • Treasury security


  1. ^
  2. ^ Financial Glossary
  3. ^ Mission not accomplished not yet, anyway – Paul Krugman – Op-Ed Columnist – New York Times Blog
  4. ^ Boudt, K.; Paulus, E.; Rosenthal, D.W.R. (2017). “Funding liquidity, market liquidity and TED spread: A two-regime model”. Journal of Empirical Finance 43: 143–158. doi:10.1016/j.jempfin.2017.06.002. hdl:10419/144456.
  5. ^ Financial Times. (2008). Panic grips credit markets
  6. ^ Bloomberg – Libor Jumps as Banks Seek Cash to Shore Up Finances
  7. ^ Obama Says Real Boss in Default Showdown Means Bonds Call Shots, 11 October 2013
  8. ^ UBS Asset Management Taps Derivatives to Hedge U.S. Debt Risk, 10 October 2013

Natural rate of interest

The natural rate of interest, sometimes called the neutral rate of interest,[1] is the interest rate that supports the economy at full employment/maximum output while keeping inflation constant.[2] It cannot be observed directly. Rather, policy makers and economic researchers aim to estimate the natural rate of interest as a guide to monetary policy, usually using various economic models to help them do so.


Eugen von Böhm-Bawerk used the term “natural interest” in his Capital and Interest, first written in the 1880s, but the concept itself originated with the Swedish economist Knut Wicksell. Wicksell published a study in 1898 defining the natural rate of interest as the rate that would bring an economy into aggregate price equilibrium if all lending were done without reference to money. Wicksell defined the natural rate of interest as “a certain rate of interest on loans which is neutral in respect to commodity prices and tends neither to raise nor to lower them”.[3]

Following Wicksell, J. M. Keynes introduced the term “natural rate of interest” in his A Treatise on Money (1930).[4]

No further significant work on the idea of a natural rate of interest followed, partly because Wicksell’s study was originally published in German and did not become easily available in English until 1936. Further, at the time, central banks were not targeting interest rates, and the level of interest rates was not a main focus of policy attention. When central bank policy targets started changing during the 1990s, the concept of the natural rate of interest began attracting attention. The US Federal Reserve’s decision to adopt the short-term interest rate as its primary control of inflation led to growing research interest in the topic of the natural rate of interest. Using macroeconomic models, the natural rate of interest can be defined as the rate of interest at which the IS curve intersects the potential output line (a vertical line cutting the x-axis at the value of potential GDP).[5]

Recent discussion

A good deal of recent[when?] discussion about economic policy, both in the US and internationally, has centered on the idea of the natural rate of interest.[6] Following the financial crisis of 2007–08 (sometimes referred to as the “global financial crisis”), key central banks in major countries around the world expanded liquidity quickly and encouraged interest rates (especially short-term interest rates) to move to very low levels. This approach led to much discussion among economic policy makers as to what the appropriate levels of interest rates (both in the short-term, and in the long-term) might be. In 2017, for example, analysts in the Canadian central bank, the Bank of Canada, argued that the neutral rate of interest in Canada had declined significantly following the global financial crisis.[7]


Among economic policy makers, in official and academic papers, the natural rate of interest is often depicted as r* (“r-star”).[8] The president of the New York Fed, John Williams, has written extensively about the natural rate of interest[9] and has even said, lightheartedly, that he “has a passion for r-star”.

R-star (the natural rate of interest) is of particular interest because key economic issues for economic policy makers, at any time, revolve around the relationship between current long-term interest rates and r-star. Questions arise, for example, as to whether current rates are below or above r-star, and if there is a significant gap between current rates and r-star, how quickly the gap should be closed. Broader issues also attract much debate, such as whether the global natural rate of interest (“global r-star”) is stable or whether it is tending to drift up or down over time – and if it is tending to drift, what are the underlying factors (economic, social, demographic) that are causing the change.[10]

Williams has argued that it is increasingly clear that global r* has declined in recent years:[11]

The evidence of a sizable decline in r-star across economies is compelling. The weighted average of estimates for five major economic areas—Canada, the euro area, Japan, the United Kingdom, and the United States—has declined to half a percent. That’s 2 percentage points below the average natural rate that prevailed in the two decades before the financial crisis. A striking aspect of these estimates is that they show no signs of moving back to previously normal levels, even though economies have recovered from the crisis. Given the demographic waves and sustained productivity growth slowdown around the world, I see no reason to expect r-star to revert to higher levels in the foreseeable future. … The global decline in r-star will continue to pose significant challenges for monetary policy.

Other “star” variables

Senior economic policy makers and other economists often discuss the level of the natural rate of interest (r-star) in relation to several other key economic variables, sometimes also seen as “stars”. In August 2018, the Chairman of the United States Federal Reserve System (the “Fed”), Jay Powell, discussed the relationships between several of these main variables in some detail.[12] Powell noted that the natural rate of interest needed to be considered in relation to the “natural rate of unemployment” (which Powell noted is often referred to as “u-star”, written u*) and the inflation objective (“pi-star”, written π*). Powell then went on to note that the conventional approach to economic policy making was that “policymakers should navigate by these stars”. However, he said, although navigating by the stars can sound straightforward, in practice “guiding policy by the stars … has been quite challenging of late because our best estimates of the location of the stars have been changing significantly”. Powell reviewed the history of attempts to estimate the location of the “stars” over the 40-year period 1960–2000 and noted that over time, there had been significant revisions of estimates of the positions of the stars.

Variations between countries

Estimates of the natural rate of interest vary between countries. This is because the underlying factors influencing the natural rate of interest are believed to vary between countries. Estimates of the natural rate of interest in Australia carried out in 2017 in the Reserve Bank of Australia, for example, suggest that the natural rate of interest in Australia is perhaps somewhat higher than in the US/Euro area.[13]


  1. ^ Federal Reserve Bank of San Francisco, ‘What is neutral monetary policy?’, April 2005.
  2. ^ Wessel, David; Olson, Peter (October 19, 2015). “The Hutchins Center Explains: the Natural Rate of Interest”. Brookings. Retrieved 2017-09-18.
  3. ^ The study, published in German in 1898, became available in English in 1936. See Wicksell, K. Interest and Prices., Mises Institute website.
  4. ^ The Collected Writings of John Maynard Keynes. Volume 5. P. 139.
  5. ^ Laubach, Thomas (March 13, 2006). “Measuring the Natural Rate of Interest”. The Review of Economics and Statistics 85 (4): 1063–1070. CiteSeerX doi:10.1162/003465303772815934.
  6. ^ “A natural long-term rate”, The Economist, 26 October 2013. Retrieved 28 May 2018.
  7. ^ Jose Dorich, Abeer Reza and Subrata Sarker, ‘An Update on the Neutral Rate of Interest’, Bank of Canada Review, Autumn 2017.
  8. ^ Gavyn Davies, ‘What investors should know about R star’, The Financial Times, 12 September 2016.
  9. ^ President’s speeches, Speeches by John C. Williams, President and CEO, Federal Reserve Bank of San Francisco.
  10. ^ John Williams, for example, refers to a “trio” of factors – demographics, productivity growth, and the global demand for safe assets.
  11. ^ John C. Williams, ‘When the facts change…’. Remarks at a High level conference in Zurich, 14 May 2019.
  12. ^ Jerome H. Powell. 2018. “Monetary Policy in a Changing Economy”, Jackson Hole Symposium, Wyoming, United States, 24 August.
  13. ^ Rachel McCririck and Daniel Rees, ‘The Neutral Interest Rate’, Bulletin, Reserve Bank of Australia, September quarter 2017.

Hull–White model

In financial mathematics, the Hull–White model is a model of future interest rates. In its most generic formulation, it belongs to the class of no-arbitrage models that are able to fit today’s term structure of interest rates. It is relatively straightforward to translate the mathematical description of the evolution of future interest rates to a tree or lattice, and so interest rate derivatives such as Bermudan swaptions can be valued in the model.

The first Hull–White model was described by John C. Hull and Alan White in 1990. The model is still popular in the market today.

The model

One-factor model

The model is a short-rate model. In general, it has the following dynamics:

dr(t)=\left[\theta(t)-\alpha(t)r(t)\right]dt+\sigma(t)\,dW(t).

There is a degree of ambiguity among practitioners about exactly which parameters in the model are time-dependent or what name to apply to the model in each case. The most commonly accepted naming convention is the following:

  • θ has t (time) dependence — the Hull–White model.
  • θ and α are both time-dependent — the extended Vasicek model.
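The one-factor dynamics above can be sketched with a simple Euler scheme; here α and σ are held constant and θ is supplied as a function of time (all parameter values are illustrative):

```python
import numpy as np

def simulate_hull_white(r0, theta, alpha, sigma, T, n_steps, n_paths, seed=0):
    # Euler scheme for dr = [theta(t) - alpha*r] dt + sigma dW,
    # with constant alpha and sigma and a time-dependent theta.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0, dtype=float)
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        r += (theta(i * dt) - alpha * r) * dt + sigma * dW
    return r

# Constant theta: a mean-reverting OU process with long-run mean theta/alpha = 0.03.
rates = simulate_hull_white(0.02, lambda t: 0.012, alpha=0.4, sigma=0.01,
                            T=10.0, n_steps=1000, n_paths=50_000)
```

With constant parameters, the sample mean should match the closed-form value e^{−αT} r(0) + (θ/α)(1 − e^{−αT}) ≈ 0.0298 and the standard deviation √(σ²(1 − e^{−2αT})/2α) ≈ 0.0112.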

Two-factor model

The two-factor Hull–White model (Hull 2006:657–658) contains an additional disturbance term whose mean reverts to zero, and is of the form:

d\,f(r(t))=\left[\theta(t)+u-\alpha(t)\,f(r(t))\right]dt+\sigma_1(t)\,dW_1(t),

where u has an initial value of 0 and follows the process:

du=-bu\,dt+\sigma_2\,dW_2(t).

Analysis of the one-factor model

For the rest of this article we assume only θ has t-dependence. Neglecting the stochastic term for a moment, notice that for α > 0 the change in r is negative if r is currently “large” (greater than θ(t)/α) and positive if the current value is small. That is, the stochastic process is a mean-reverting Ornstein–Uhlenbeck process.

θ is calculated from the initial yield curve describing the current term structure of interest rates. Typically α is left as a user input (for example it may be estimated from historical data). σ is determined via calibration to a set of caplets and swaptions readily tradeable in the market.

When α, θ, and σ are constant, Itô’s lemma can be used to prove that

r(t)=e^{-\alpha t}r(0)+\frac{\theta}{\alpha}\left(1-e^{-\alpha t}\right)+\sigma e^{-\alpha t}\int_0^t e^{\alpha u}\,dW(u),

which has distribution

r(t)\sim\mathcal{N}\left(e^{-\alpha t}r(0)+\frac{\theta}{\alpha}\left(1-e^{-\alpha t}\right),\ \frac{\sigma^2}{2\alpha}\left(1-e^{-2\alpha t}\right)\right),

where N(μ, σ²) is the normal distribution with mean μ and variance σ².

When θ(t) is time-dependent,

r(t)=e^{-\alpha t}r(0)+\int_0^t e^{\alpha(s-t)}\theta(s)\,ds+\sigma e^{-\alpha t}\int_0^t e^{\alpha u}\,dW(u),

which has distribution

r(t)\sim\mathcal{N}\left(e^{-\alpha t}r(0)+\int_0^t e^{\alpha(s-t)}\theta(s)\,ds,\ \frac{\sigma^2}{2\alpha}\left(1-e^{-2\alpha t}\right)\right).

Bond pricing using the Hull–White model

It turns out that the time-S value of the T-maturity discount bond has the distribution (note the affine term structure here!)

P(S,T)=A(S,T)\exp(-B(S,T)r(S)),

where

B(S,T)=\frac{1-\exp(-\alpha(T-S))}{\alpha},

A(S,T)=\frac{P(0,T)}{P(0,S)}\exp\left(-B(S,T)\frac{\partial\log(P(0,S))}{\partial S}-\frac{\sigma^2(\exp(-\alpha T)-\exp(-\alpha S))^2(\exp(2\alpha S)-1)}{4\alpha^3}\right).

Note that the terminal distribution of P(S,T) is log-normal.

Derivative pricing

By selecting as numeraire the time-S bond (which corresponds to switching to the S-forward measure), we have from the fundamental theorem of arbitrage-free pricing the value at time t of a derivative with payoff at time S:

V(t)=P(t,S)\,\mathbb{E}_S[V(S)\mid\mathcal{F}(t)].

Here, E_S is the expectation taken with respect to the forward measure. Moreover, standard arbitrage arguments show that the time-T forward price F_V(t,T) for a payoff at time T given by V(T) must satisfy F_V(t,T) = V(t)/P(t,T), thus

F_V(t,T)=\mathbb{E}_T[V(T)\mid\mathcal{F}(t)].

Thus it is possible to value many derivatives V dependent solely on a single bond P(S,T) analytically when working in the Hull–White model. For example, in the case of a bond put,

V(S)=(K-P(S,T))^{+}.

Because P(S,T) is lognormally distributed, the general calculation used for the Black–Scholes model shows that

\mathbb{E}_S[(K-P(S,T))^{+}]=KN(-d_2)-F(t,S,T)N(-d_1),

where

d_1=\frac{\log(F/K)+\sigma_P^2 S/2}{\sigma_P\sqrt{S}}

and

d_2=d_1-\sigma_P\sqrt{S}.

Thus today’s value (with the P(0,S) multiplied back in and t set to 0) is:

P(0,S)KN(-d_2)-P(0,T)N(-d_1).

Here σ_P is the standard deviation (relative volatility) of the log-normal distribution for P(S,T). A fairly substantial amount of algebra shows that it is related to the original parameters via

\sqrt{S}\,\sigma_P=\frac{\sigma}{\alpha}(1-\exp(-\alpha(T-S)))\sqrt{\frac{1-\exp(-2\alpha S)}{2\alpha}}.
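Putting the pieces together, the bond-put price under constant α and σ can be sketched as follows, with σ_P√S computed from the relation above (function and argument names are ours):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hw_bond_put(P0S, P0T, K, alpha, sigma, S, T):
    # European put (strike K, expiry S) on a discount bond maturing at T,
    # in the one-factor Hull-White model with constant alpha and sigma.
    # v is sigma_P * sqrt(S), taken from the closed-form relation in the text.
    v = (sigma / alpha) * (1.0 - exp(-alpha * (T - S))) \
        * sqrt((1.0 - exp(-2.0 * alpha * S)) / (2.0 * alpha))
    F = P0T / P0S                              # forward bond price
    d1 = (log(F / K) + 0.5 * v * v) / v
    d2 = d1 - v
    return P0S * K * norm_cdf(-d2) - P0T * norm_cdf(-d1)
```

As σ tends to zero, the price collapses to the discounted intrinsic value P(0,S)·K − P(0,T) for an in-the-money put, a useful sanity check.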

Note that this expectation was done in the S-bond measure, whereas we did not specify a measure at all for the original Hull–White process. This does not matter — the volatility is all that matters and is measure-independent.

Because interest rate caps/floors are equivalent to bond puts and calls respectively, the above analysis shows that caps and floors can be priced analytically in the Hull–White model. Jamshidian’s trick applies to Hull–White (as today’s value of a swaption in the Hull–White model is a monotonic function of today’s short rate). Thus knowing how to price caps is also sufficient for pricing swaptions. In the event that the underlying is a compounded backward-looking rate rather than a (forward-looking) LIBOR term rate, Turfus (2020) shows how this formula can be straightforwardly modified to take into account the additional convexity.

Swaptions can also be priced directly as described in Henrard (2003). Direct implementations are usually more efficient.

Monte-Carlo simulation, trees and lattices

However, valuing vanilla instruments such as caps and swaptions is useful primarily for calibration. The real use of the model is to value somewhat more exotic derivatives such as bermudan swaptions on a lattice, or other derivatives in a multi-currency context such as Quanto Constant Maturity Swaps, as explained for example in Brigo and Mercurio (2001). The efficient and exact Monte-Carlo simulation of the Hull–White model with time dependent parameters can be easily performed, see Ostrovski (2013) and (2016).

See also

  • Vasicek model
  • Cox–Ingersoll–Ross model
  • Black–Karasinski model


Primary references

  • John Hull and Alan White, “Using Hull–White interest rate trees,” Journal of Derivatives, Vol. 3, No. 3 (Spring 1996), pp. 26–36
  • John Hull and Alan White, “Numerical procedures for implementing term structure models I,” Journal of Derivatives, Fall 1994, pp. 7–16.
  • John Hull and Alan White, “Numerical procedures for implementing term structure models II,” Journal of Derivatives, Winter 1994, pp. 37–48.
  • John Hull and Alan White, “The pricing of options on interest rate caps and floors using the Hull–White model” in Advanced Strategies in Financial Risk Management, Chapter 4, pp. 59–67.
  • John Hull and Alan White, “One factor interest rate models and the valuation of interest rate derivative securities,” Journal of Financial and Quantitative Analysis, Vol 28, No 2, (June 1993) pp. 235–254.
  • John Hull and Alan White, “Pricing interest-rate derivative securities”, The Review of Financial Studies, Vol 3, No. 4 (1990) pp. 573–592.

Other references

  • Hull, John C. (2006). “Interest Rate Derivatives: Models of the Short Rate”. Options, Futures, and Other Derivatives (6th ed.). Upper Saddle River, N.J: Prentice Hall. pp. 657–658. ISBN 0-13-149908-4. LCCN 2005047692. OCLC 60321487.
  • Damiano Brigo, Fabio Mercurio (2001). Interest Rate Models — Theory and Practice with Smile, Inflation and Credit (2nd ed., 2006). Springer Verlag. ISBN 978-3-540-22149-4.
  • Henrard, Marc (2003). “Explicit Bond Option and Swaption Formula in Heath–Jarrow–Morton One Factor Model,” International Journal of Theoretical and Applied Finance, 6(1), 57–72. Preprint SSRN.
  • Henrard, Marc (2009). Efficient swaptions price in Hull–White one factor model, arXiv, 0901.1776v1. Preprint arXiv.
  • Ostrovski, Vladimir (2013). Efficient and Exact Simulation of the Hull–White Model, Preprint SSRN.
  • Ostrovski, Vladimir (2016). Efficient and Exact Simulation of the Gaussian Affine Interest Rate Models., International Journal of Financial Engineering, Vol. 3, No. 02.,Preprint SSRN.
  • Puschkarski, Eugen. Implementation of Hull–White’s No-Arbitrage Term Structure Model, Diploma Thesis, Center for Central European Financial Markets
  • Turfus, Colin (2020). Caplet Pricing with Backward-Looking Rates., Preprint SSRN.
  • Letian Wang, Hull–White Model, Fixed Income Quant Group, DTCC (detailed numeric example and derivation)

Cox–Ingersoll–Ross model

Three trajectories of CIR processes

In mathematical finance, the Cox–Ingersoll–Ross (CIR) model describes the evolution of interest rates. It is a type of “one factor model” (short rate model) as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives. It was introduced in 1985 by John C. Cox, Jonathan E. Ingersoll and Stephen A. Ross as an extension of the Vasicek model.

The model

CIR process

The CIR model specifies that the instantaneous interest rate r_t follows the stochastic differential equation, also named the CIR process:

dr_t=a(b-r_t)\,dt+\sigma\sqrt{r_t}\,dW_t,

where W_t is a Wiener process (modelling the random market risk factor) and a, b, and σ are the parameters. The parameter a corresponds to the speed of adjustment to the mean b, and σ to volatility. The drift factor, a(b − r_t), is exactly the same as in the Vasicek model. It ensures mean reversion of the interest rate towards the long-run value b, with speed of adjustment governed by the strictly positive parameter a.

The standard deviation factor, σ√r_t, avoids the possibility of negative interest rates for all positive values of a and b. An interest rate of zero is also precluded if the condition

2ab\geq\sigma^2

is met. More generally, when the rate r_t is close to zero, the standard deviation σ√r_t also becomes very small, which dampens the effect of the random shock on the rate. Consequently, when the rate gets close to zero, its evolution becomes dominated by the drift factor, which pushes the rate upwards (towards equilibrium).
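A common way to simulate the CIR process is an Euler scheme with full truncation, which floors the rate at zero inside both the drift and the square root; a sketch with illustrative parameters (note 2ab ≥ σ² holds here):

```python
import numpy as np

def simulate_cir(r0, a, b, sigma, T, n_steps, n_paths, seed=0):
    # Euler scheme with full truncation: the rate entering the drift and the
    # square-root diffusion term is floored at zero.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0, dtype=float)
    for _ in range(n_steps):
        r_pos = np.maximum(r, 0.0)
        r = r + a * (b - r_pos) * dt \
            + sigma * np.sqrt(r_pos * dt) * rng.normal(size=n_paths)
    return np.maximum(r, 0.0)

# Illustrative parameters; 2ab = 0.05 >= sigma^2 = 0.01, so zero is unattainable.
rates = simulate_cir(0.03, a=0.5, b=0.05, sigma=0.1,
                     T=5.0, n_steps=500, n_paths=50_000)
```

The sample mean can be checked against the conditional mean r_0 e^{−at} + b(1 − e^{−at}) ≈ 0.0484 at t = 5.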

This process can be defined as a sum of squared Ornstein–Uhlenbeck processes. The CIR process is ergodic and possesses a stationary distribution. The same process is used in the Heston model to model stochastic volatility.


  • Future distribution
The distribution of future values of a CIR process can be computed in closed form:

r_{t+T}=\frac{Y}{2c},

where c=\frac{2a}{(1-e^{-aT})\sigma^2} and Y follows a non-central chi-squared distribution with \frac{4ab}{\sigma^2} degrees of freedom and non-centrality parameter 2cr_t e^{-aT}. Formally, the probability density function is:

f(r_{t+T};r_t,a,b,\sigma)=c\,e^{-u-v}\left(\frac{v}{u}\right)^{q/2}I_q(2\sqrt{uv}),

where q=\frac{2ab}{\sigma^2}-1, u=cr_t e^{-aT}, v=cr_{t+T}, and I_q(2\sqrt{uv}) is a modified Bessel function of the first kind of order q.
  • Asymptotic distribution
Due to mean reversion, as time becomes large, the distribution of r_∞ will approach a gamma distribution with the probability density

f(r_\infty;a,b,\sigma)=\frac{\beta^\alpha}{\Gamma(\alpha)}r_\infty^{\alpha-1}e^{-\beta r_\infty},

where β = 2a/σ² and α = 2ab/σ².
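As a quick consistency check, these gamma parameters imply a stationary mean of α/β = b and a stationary variance of α/β² = bσ²/(2a); parameter values below are illustrative:

```python
a, b, sigma = 0.5, 0.05, 0.1           # illustrative CIR parameters

beta = 2.0 * a / sigma**2              # gamma rate parameter
alpha_shape = 2.0 * a * b / sigma**2   # gamma shape parameter

stationary_mean = alpha_shape / beta   # simplifies to b
stationary_var = alpha_shape / beta**2 # simplifies to b * sigma^2 / (2a)
```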

Properties


  • Mean reversion,
  • Level-dependent volatility (σ√r_t),
  • For given positive r_0, the process will never touch zero if 2ab ≥ σ²; otherwise it can occasionally touch the zero point,
  • \operatorname{E}[r_t\mid r_0]=r_0e^{-at}+b(1-e^{-at}), so the long-term mean is b,
  • \operatorname{Var}[r_t\mid r_0]=r_0\frac{\sigma^2}{a}(e^{-at}-e^{-2at})+\frac{b\sigma^2}{2a}(1-e^{-at})^2.


Calibration

  • Ordinary least squares
The continuous SDE can be discretized as follows:
r_{t+\Delta t}-r_t=a(b-r_t)\,\Delta t+\sigma\sqrt{r_t\,\Delta t}\,\varepsilon_t,
which is equivalent to
\frac{r_{t+\Delta t}-r_t}{\sqrt{r_t}}=\frac{ab\,\Delta t}{\sqrt{r_t}}-a\sqrt{r_t}\,\Delta t+\sigma\sqrt{\Delta t}\,\varepsilon_t,
provided ε_t is i.i.d. standard normal. This equation can be used for a linear regression.
  • Martingale estimation
  • Maximum likelihood
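The least-squares discretization above can be tested by regressing on a simulated path with known parameters (a = 0.5, b = 0.05, σ = 0.1; all values illustrative):

```python
import numpy as np

def cir_ols(r, dt):
    # Regress (r_{t+dt} - r_t)/sqrt(r_t) on [dt/sqrt(r_t), sqrt(r_t)*dt]:
    # the coefficients are (a*b, -a), per the discretized SDE above.
    r0, r1 = r[:-1], r[1:]
    y = (r1 - r0) / np.sqrt(r0)
    X = np.column_stack([dt / np.sqrt(r0), np.sqrt(r0) * dt])
    (ab_hat, neg_a_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
    a_hat = -neg_a_hat
    return a_hat, ab_hat / a_hat

# Simulate a long daily path with known parameters, then try to recover them.
rng = np.random.default_rng(1)
dt, n = 1.0 / 252.0, 200_000
r = np.empty(n)
r[0] = 0.05
for i in range(n - 1):
    r[i + 1] = abs(r[i] + 0.5 * (0.05 - r[i]) * dt
                   + 0.1 * np.sqrt(r[i] * dt) * rng.normal())
a_hat, b_hat = cir_ols(r, dt)
```

The long-run level b is recovered tightly, while the mean-reversion speed a is a famously noisy estimate even on long samples, so the tolerances differ.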


Simulation

Stochastic simulation of the CIR process can be achieved using two variants:

  • Discretization
  • Exact

Bond pricing

Under the no-arbitrage assumption, a bond may be priced using this interest rate process. The bond price is exponential affine in the interest rate:

P(t,T)=A(t,T)\exp(-B(t,T)r_t),

where

A(t,T)=\left(\frac{2h\exp((a+h)(T-t)/2)}{2h+(a+h)(\exp((T-t)h)-1)}\right)^{2ab/\sigma^2},

B(t,T)=\frac{2(\exp((T-t)h)-1)}{2h+(a+h)(\exp((T-t)h)-1)},

h=\sqrt{a^2+2\sigma^2}.
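These bond-pricing formulas translate directly into code; a sketch (the function name is ours, parameter values in the checks are illustrative):

```python
from math import exp, sqrt

def cir_bond_price(r, t, T, a, b, sigma):
    # Zero-coupon bond price P(t,T) = A(t,T) * exp(-B(t,T) * r) under CIR.
    h = sqrt(a * a + 2.0 * sigma * sigma)
    tau = T - t
    denom = 2.0 * h + (a + h) * (exp(tau * h) - 1.0)
    A = (2.0 * h * exp((a + h) * tau / 2.0) / denom) ** (2.0 * a * b / sigma**2)
    B = 2.0 * (exp(tau * h) - 1.0) / denom
    return A * exp(-B * r)
```

At t = T the price is exactly 1, and for positive rates the price decreases as maturity lengthens.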


A CIR process is a special case of a basic affine jump diffusion, which still permits a closed-form expression for bond prices. Time-varying functions replacing coefficients can be introduced in the model in order to make it consistent with a pre-assigned term structure of interest rates and possibly volatilities. The most general approach is in Maghsoodi (1996). A more tractable approach is in Brigo and Mercurio (2001b), where an external time-dependent shift is added to the model for consistency with an input term structure of rates. A significant extension of the CIR model to the case of stochastic mean and stochastic volatility is given by Lin Chen (1996) and is known as the Chen model. A more recent extension is the so-called CIR# model by Orlando, Mininni and Bufalo (2018,[1] 2019[2][3]).

See also

  • Hull–White model
  • Vasicek model
  • Chen model


  1. ^ Orlando, Giuseppe; Mininni, Rosa Maria; Bufalo, Michele (2018). “A New Approach to CIR Short-Term Rates Modelling”. New Methods in Fixed Income Modeling. Contributions to Management Science. Springer International Publishing: 35–43. doi:10.1007/978-3-319-95285-7_2. ISBN 978-3-319-95284-0.
  2. ^ Orlando, Giuseppe; Mininni, Rosa Maria; Bufalo, Michele (1 January 2019). “A new approach to forecast market interest rates through the CIR model”. Studies in Economics and Finance. ahead-of-print (ahead-of-print). doi:10.1108/SEF-03-2019-0116. ISSN 1086-7376.
  3. ^ Orlando, Giuseppe; Mininni, Rosa Maria; Bufalo, Michele (19 August 2019). “Interest rates calibration with a CIR model”. The Journal of Risk Finance 20 (4): 370–387. doi:10.1108/JRF-05-2019-0080. ISSN 1526-5943.

Ohlson O-score

The Ohlson O-score for predicting bankruptcy is a multi-factor financial formula postulated in 1980 by Dr. James Ohlson of the New York University Stern Accounting Department as an alternative to the Altman Z-score for predicting financial distress.[1]

Calculation of the O-score

The Ohlson O-score is the result of a 9-factor linear combination of coefficient-weighted business ratios that are readily obtained or derived from the standard periodic financial disclosure statements provided by publicly traded corporations. Two of the factors utilized are widely considered to be dummies, as their value, and thus their impact upon the formula, typically is 0.[2] When using an O-score to evaluate the probability of a company’s failure, exp(O-score) is divided by 1 + exp(O-score).[3]

The calculation for Ohlson O-score appears below:[4]

\begin{aligned}T={}&-1.32-0.407\log(TA_t/GNP)+6.03\frac{TL_t}{TA_t}-1.43\frac{WC_t}{TA_t}+0.0757\frac{CL_t}{CA_t}\\&{}-1.72X-2.37\frac{NI_t}{TA_t}-1.83\frac{FFO_t}{TL_t}+0.285Y-0.521\frac{NI_t-NI_{t-1}}{|NI_t|+|NI_{t-1}|}\end{aligned}

where
  • TA = total assets
  • GNP = gross national product price index level (in USD, 1968 = 100)
  • TL = total liabilities
  • WC = working capital
  • CL = current liabilities
  • CA = current assets
  • X = 1 if TL > TA, 0 otherwise
  • NI = net income
  • FFO = funds from operations
  • Y = 1 if a net loss for the last two years, 0 otherwise
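The formula and the logistic transform described above can be encoded directly; the balance-sheet numbers below are hypothetical:

```python
from math import exp, log

def ohlson_o_score(TA, GNP, TL, WC, CL, CA, NI, NI_prev, FFO):
    X = 1 if TL > TA else 0                   # leverage dummy
    Y = 1 if (NI < 0 and NI_prev < 0) else 0  # two-year net-loss dummy
    return (-1.32
            - 0.407 * log(TA / GNP)
            + 6.03 * TL / TA
            - 1.43 * WC / TA
            + 0.0757 * CL / CA
            - 1.72 * X
            - 2.37 * NI / TA
            - 1.83 * FFO / TL
            + 0.285 * Y
            - 0.521 * (NI - NI_prev) / (abs(NI) + abs(NI_prev)))

def failure_probability(o_score):
    # Logistic transform: exp(T) / (1 + exp(T)).
    return exp(o_score) / (1.0 + exp(o_score))

# Hypothetical filing values, all in consistent units.
score = ohlson_o_score(TA=1000.0, GNP=100.0, TL=400.0, WC=100.0,
                       CL=200.0, CA=300.0, NI=50.0, NI_prev=40.0, FFO=80.0)
prob = failure_probability(score)
```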


The original model for the O-score was derived from the study of a pool of just over 2,000 companies, whereas by comparison its predecessor the Altman Z-score considered just 66 companies. As a result, the O-score is a significantly more accurate predictor of bankruptcy within a 2-year period. The original Z-score was estimated to be over 70% accurate, with its later variants reaching as high as 90% accuracy. The O-score is more accurate than this.

However, no mathematical model is 100% accurate, so while the O-score may forecast bankruptcy or solvency, factors both inside and outside of the formula can impact its accuracy. Furthermore, later bankruptcy prediction models, such as the hazard-based model proposed by Campbell, Hilscher, and Szilagyi in 2011,[5] have proven more accurate still. For the O-score, any result larger than 0.5 suggests that the firm will default within two years.


  1. ^ “Ohlson’s O-score definition”. Retrieved 2014-06-12.
  2. ^ Stokes, Jonathan (13 February 2013). “Improving On The Altman Z-Score, Part 2: The Ohlson O-Score”. Retrieved 2014-06-12.
  3. ^ Mitchell, Karlyn; Walker, Mark D. (7 January 2008). “Bankers on Boards, Financial Constraints, and Financial Distress (Preliminary and incomplete. Please do not quote.)” (PDF). Retrieved 2014-06-12.
  4. ^ James A. Ohlson. “Financial Ratios and the Probabilistic Prediction of Bankruptcy” (PDF). Retrieved 2021-02-15.
  5. ^ Campbell, John Y.; Hilscher, Jens; Szilagyi, Jan. “Predicting financial distress and the performance of distressed stocks”. CiteSeerX