
Linear Probability, Logit, and Probit Models: How Do They Differ?
Bruce Ratner, Ph.D.

At the beginning of every day for the regression modeler, whose tasks are to predict a continuous dependent variable (e.g., profit) and a binary dependent variable (e.g., yes-no response), the ordinary least squares (OLS) regression model and the logistic regression model, respectively, are likely to be put to use, giving promise of another workday of successful models. The essence of any prediction model is the fitness function, which quantifies the optimality (goodness or accuracy) of a solution (its predictions). The fitness function of the OLS regression model is mean squared error (MSE), which is minimized by calculus. Historians generally regard calculus as going back to the time of the ancient Greeks, circa 400 BC. Calculus made great strides in Europe toward the end of the 17th century, when Leibniz and Newton pulled their own "to-be-calculus" ideas together; they are credited with the independent invention of calculus. The OLS regression model is celebrating 204 years of popularity, as the method of least squares was introduced on March 6, 1805.
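A minimal sketch of how calculus minimizes MSE: setting the derivative of the squared-error sum to zero yields the normal equations, which give the OLS coefficients in closed form. The data below are toy values of my own, not from the article.

```python
import numpy as np

# Toy data: predict a continuous dependent (e.g., profit) from one predictor.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])   # first column is the intercept
y = np.array([2.1, 3.9, 6.2, 7.8])

# Setting the derivative of MSE to zero gives the normal equations:
#   (X'X) b = X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

# The resulting coefficients minimize mean squared error over the data.
mse = np.mean((y - X @ b) ** 2)
print(b, mse)
```

The same closed-form solution underlies every OLS fit, no matter how many predictors the model carries.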

The first use of OLS regression with a binary dependent variable has an intractable past: who used it, when, and why are not known. The pent-up need for a binary dependent-variable linear regression model was quite apparent; once it was employed, there was no turning back the clock. Users' enthusiasm for the new probability regression model resulted in its renaming as the linear probability model. The problems of the linear probability model are well known today, and its usage came to a quick halt when the probit model was invented.
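The best-known problem of the linear probability model can be shown in a few lines: fitting OLS to a 0/1 dependent variable produces a straight-line "probability" that can fall below 0 or above 1. The data here are a hypothetical illustration, not from the article.

```python
import numpy as np

# Hypothetical binary-response data: y is 0/1, x is a single predictor.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Fitting OLS to the binary dependent gives the linear probability model.
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.solve(X.T @ X, X.T @ y)

# Predicted "probabilities" lie on a straight line in x, so they can
# escape the [0, 1] interval away from the middle of the data.
preds = b0 + b1 * np.array([0.0, 3.5, 8.0])
print(preds)
```

For this toy fit, the prediction at x = 0 is negative and the prediction at x = 8 exceeds 1, which is exactly the defect the probit and logit models were designed to cure.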

The fitness function of the logistic regression model (LRM) is the likelihood function, which is maximized by calculus (i.e., the method of maximum likelihood). [The likelihood function represents the joint probability of observing the data that have been collected. The term "joint probability" means a probability that combines the contributions of all the individuals in the study.] The logistic function has roots going back to the 19th century, when the Belgian mathematician Verhulst invented the function, which he named logistic, to describe population growth. The rediscovery of the function in 1920 is due to Pearl and Reed, the survival of the term logistic to Yule, and the introduction of the function into statistics to Berkson. Berkson used the logistic function in his regression model as an alternative to the normal-probability probit model, usually credited to Bliss in 1934 and sometimes to Gaddum in 1933. (The probit can, however, be traced back to Fechner in 1860.) As of 1944, Berkson's LRM was not accepted as a viable alternative to Bliss's probit. After the ideological debate about the logistic and the probit abated in the 1960s, Berkson's logistic gained wide acceptance. Berkson was much derided for coining the term "logit" by analogy to the probit of Bliss, who had coined the term probit as shorthand for "probability unit."
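The maximum-likelihood fit described above can be sketched concretely. The standard approach (Newton-Raphson, which is what most statistical packages use under the hood, though the article does not name an algorithm) climbs the log-likelihood until its gradient is zero. The data are toy values of my own.

```python
import numpy as np

def sigmoid(z):
    # Verhulst's logistic function: p = 1 / (1 + exp(-z)).
    return 1.0 / (1.0 + np.exp(-z))

def fit_logit(X, y, iters=25):
    # Maximize the log-likelihood sum(y*log(p) + (1-y)*log(1-p))
    # by Newton-Raphson steps on the coefficient vector b.
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ b)
        grad = X.T @ (y - p)                 # score (gradient of log-likelihood)
        W = p * (1.0 - p)
        hess = X.T @ (X * W[:, None])        # observed information matrix
        b = b + np.linalg.solve(hess, grad)
    return b

# Toy 0/1 response with an intercept column.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
X = np.column_stack([np.ones_like(x), x])
b = fit_logit(X, y)
print(b)
```

At convergence the score vector is (numerically) zero, which is the calculus condition for the likelihood maximum, the direct analogue of the zero-derivative condition that defines the OLS solution.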

The purpose of this article (actually, my seminar handouts with green-highlighted main points to serve as a voice-over) is to address how the three probability models differ.
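As a numeric preview of that difference (toy linear-predictor values of my own, not from the article): the three models send the same linear predictor z through three different functions to obtain a probability.

```python
import numpy as np
from math import erf, sqrt

# Toy values of the linear predictor z = b0 + b1*x.
z = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])

linear = z                                    # linear probability model: identity, unbounded
logit = 1.0 / (1.0 + np.exp(-z))              # logistic CDF, bounded in (0, 1)
probit = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])  # standard normal CDF

for row in zip(z, linear, logit, probit):
    print("%6.2f  %8.3f  %6.3f  %6.3f" % row)
```

The identity link can leave [0, 1]; the logistic and normal CDFs cannot, and they differ mainly in how heavy their tails are, with the logistic giving slightly more probability to extreme z.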

For more information about this article, call Bruce Ratner at 516.791.3544 or 1 800 DM STAT-1; or e-mail at