Nowadays there is a plethora of machine learning algorithms we can try out to find the best fit for our particular problem. Some of the algorithms have a clear interpretation, others work as a black box, and we can use approaches such as LIME or SHAP to derive some interpretations.

In this article I would like to focus on the interpretation of the coefficients of the most basic regression model, namely **linear regression**, including the situations when the dependent/independent variables have been transformed (here I am talking about the log transformation).

### 1. level-level model

I assume the reader is familiar with linear regression (if not, there are plenty of good articles and Medium posts), so I will focus solely on the interpretation of the coefficients.

The basic formula for linear regression is *y* = *a* + *b x* (I omitted the residuals on purpose, to keep things simple and to the point). In the formula, *y* denotes the dependent variable and *x* is the independent variable. For simplicity let’s assume that it is univariate regression, but the principles obviously hold for the multivariate case as well.

To put it into perspective, let’s say that after fitting the model we receive: *y* = 3 + 5 *x*.

**Intercept (a)**

I will break down the interpretation of the intercept into three cases:

- *x* is continuous and centered (by subtracting the mean of *x* from each observation, the average of the transformed *x* becomes 0) — average *y* is 3 when *x* is equal to the sample mean
- *x* is continuous, but not centered — average *y* is 3 when *x* = 0
- *x* is categorical — average *y* is 3 when *x* = 0 (this time indicating a category, more on this below)
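The effect of centering on the intercept is easy to verify numerically. Below is a minimal sketch on simulated (hypothetical) data, using only numpy; the variable names and numbers are mine, chosen for illustration:

```python
import numpy as np

# Simulated (hypothetical) data: y = 3 + 5*x + noise, with x NOT centered
rng = np.random.default_rng(3)
x = rng.normal(loc=10, scale=2, size=1_000)
y = 3 + 5 * x + rng.normal(scale=0.5, size=1_000)

def ols(x, y):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

a_raw, b_raw = ols(x, y)             # intercept = expected y at x = 0
a_cen, b_cen = ols(x - x.mean(), y)  # intercept = expected y at the mean of x

# Centering leaves the slope unchanged and moves the intercept to mean(y)
print(abs(b_raw - b_cen) < 1e-6)    # True
print(abs(a_cen - y.mean()) < 1e-6) # True
```

Centering only shifts where "zero" is, so the intercept changes its meaning while the slope stays the same.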

**Coefficient (b)**

*x* is a continuous variable

Interpretation: a unit increase in *x* results in an increase in average *y* by 5 units, all other variables held constant.

*x* is a categorical variable

This requires a bit more explanation. Let’s say that *x* describes gender and can take values (‘male’, ‘female’). Now let’s convert it into a dummy variable which takes values 0 for males and 1 for females.

Interpretation: average *y* is higher by 5 units for females than for males, all other variables held constant.
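Both interpretations can be checked with a quick simulation. This is a sketch on hypothetical data generated to match the fitted coefficients discussed above (intercept 3, coefficient 5), using only numpy:

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(x, y):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Continuous x: b is the change in average y per unit increase in x
x = rng.normal(size=1_000)
y = 3 + 5 * x + rng.normal(scale=0.5, size=1_000)
a, b = ols(x, y)  # a ≈ 3, b ≈ 5

# Dummy x (0 = male, 1 = female): b is the difference in group means
female = rng.integers(0, 2, size=1_000).astype(float)
y2 = 3 + 5 * female + rng.normal(scale=0.5, size=1_000)
a2, b2 = ols(female, y2)  # b2 ≈ 5: average y is about 5 units higher for females
print(b, b2)
```

For the dummy variable, the coefficient is literally the difference between the two group means, which is why the "higher by 5 units for females" reading works.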

### 2. log-level model

Typically we use the log transformation to pull outlying data from a positively skewed distribution closer to the bulk of the data, making the variable's distribution closer to normal. In the case of linear regression, one additional benefit of the log transformation is interpretability.

As before, let’s say that after fitting the model we receive: log(*y*) = 3 + 0.01 *x*.

**Intercept (a)**

The interpretation is similar to that in the vanilla (level-level) case; however, we need to take the exponent of the intercept for interpretation: exp(3) ≈ 20.09. The difference is that this value stands for the geometric mean of *y* (as opposed to the arithmetic mean in the case of the level-level model).

**Coefficient (b)**

The principles are again similar to the level-level model when it comes to interpreting categorical/numeric variables. Analogously to the intercept, we need to take the exponent of the coefficient: exp(*b*) = exp(0.01) ≈ 1.01. This means that a unit increase in *x* causes a 1% increase in average (geometric) *y*, all other variables held constant.

Two things worth mentioning here:

- There is a rule of thumb when it comes to interpreting coefficients of such a model. For small coefficients (roughly abs(*b*) < 0.15), exp(*b*) ≈ 1 + *b*, so it is quite safe to say that *b* = 0.1 corresponds to a 10% increase in *y* for a unit change in *x*. For coefficients with a larger absolute value, it is recommended to calculate the exponent exactly.
- When dealing with variables in the [0, 1] range (like a percentage expressed as a fraction), it is more convenient for interpretation to first multiply the variable by 100 and then fit the model. This way the interpretation is more intuitive, as we consider an increase of 1 percentage point instead of a jump of 100 percentage points (from 0 to 1) at once.
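The log-level arithmetic can be checked on simulated (hypothetical) data built to match log(*y*) = 3 + 0.01·*x*; the data and seed below are illustrative, not from a real dataset:

```python
import numpy as np

# Simulated (hypothetical) log-level data: log(y) = 3 + 0.01*x + noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=2_000)
log_y = 3 + 0.01 * x + rng.normal(scale=0.05, size=2_000)

# Fit log(y) = a + b*x by ordinary least squares
X = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(X, log_y, rcond=None)

geo_mean_at_0 = np.exp(a)              # geometric mean of y at x = 0, ≈ exp(3) ≈ 20.09
exact_pct = (np.exp(b) - 1) * 100      # exact % change in y per unit of x
approx_pct = b * 100                   # rule-of-thumb version, fine for small b
print(geo_mean_at_0, exact_pct, approx_pct)
```

For *b* = 0.01 the exact and rule-of-thumb percentages are nearly identical, which is exactly what the abs(*b*) < 0.15 rule of thumb predicts.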

### 3. level-log model

Let’s assume that after fitting the model we receive: *y* = 3 + 5 log(*x*).

The interpretation of the intercept is the same as in case of the level-level model.

For the coefficient *b*: a 1% increase in *x* results in an approximate increase in average *y* by *b*/100 (0.05 in this case), all other variables held constant. To get the exact amount, we would need to take *b* × log(1.01), which in this case gives 0.0498.
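The approximate versus exact effect can be verified on simulated (hypothetical) level-log data generated from *y* = 3 + 5·log(*x*); again the numbers are illustrative:

```python
import numpy as np

# Simulated (hypothetical) level-log data: y = 3 + 5*log(x) + noise
rng = np.random.default_rng(1)
x = rng.uniform(1, 100, size=2_000)
y = 3 + 5 * np.log(x) + rng.normal(scale=0.1, size=2_000)

# Fit y = a + b*log(x) by ordinary least squares
X = np.column_stack([np.ones_like(x), np.log(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

approx = b / 100          # rule-of-thumb change in y for a 1% increase in x
exact = b * np.log(1.01)  # exact change in y for a 1% increase in x
print(approx, exact)      # ≈ 0.05 vs ≈ 0.0498
```

The gap between 0.05 and 0.0498 is small because log(1.01) ≈ 0.01, which is what makes the *b*/100 shortcut work.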

### 4. log-log model

Let’s assume that after fitting the model we receive: log(*y*) = 3 + 5 log(*x*).

Once again I focus on the interpretation of *b*. An increase in *x* by 1% results in a 5% increase in average (geometric) *y*, all other variables held constant. To obtain the exact amount, we need to take 1.01^*b* − 1 = 1.01^5 − 1 ≈ 0.051, which is ~5.1%.
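The same check for the log-log case, on simulated (hypothetical) data from log(*y*) = 3 + 5·log(*x*):

```python
import numpy as np

# Simulated (hypothetical) log-log data: log(y) = 3 + 5*log(x) + noise
rng = np.random.default_rng(2)
x = rng.uniform(1, 100, size=2_000)
log_y = 3 + 5 * np.log(x) + rng.normal(scale=0.05, size=2_000)

# Fit log(y) = a + b*log(x) by ordinary least squares
X = np.column_stack([np.ones_like(x), np.log(x)])
(a, b), *_ = np.linalg.lstsq(X, log_y, rcond=None)

approx_pct = b                      # rule of thumb: b% change in y per 1% change in x
exact_pct = (1.01 ** b - 1) * 100   # exact: (1.01^b - 1) * 100
print(approx_pct, exact_pct)        # ≈ 5.0 vs ≈ 5.1
```

In the log-log model the coefficient *b* is the elasticity of *y* with respect to *x*, which is why the "percent for percent" reading applies.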

### Conclusions

I hope this article has given you an overview of how to interpret coefficients of linear regression, including the cases when some of the variables have been log transformed. In case you have any comments or feedback, please let me know!


Interpreting the coefficients of linear regression was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.