This chapter still has a long way to go. I’d recommend exploring other portions of the draft in the meantime.
2.1 Tidymodels overhead
While the tidymodels team develops the infrastructure that users interact with directly, under the hood, we send calls out to other people’s modeling packages—or modeling engines—that provide the actual implementations that estimate parameters, generate predictions, etc. The process looks something like this:

1) Translate the tidymodels interface, which is consistent across engines, to the format the chosen engine expects.
2) Call the modeling engine, which fits the model and returns its output.
3) Translate the engine’s output back into a standardized tidymodels object.
When thinking about the time allotted to each of the three steps above, we refer to the two “translate” steps as the tidymodels overhead. The time it takes to translate interfaces in steps 1) and 3) is within our control, while the time the modeling engine takes to do its thing in step 2) is not.
Let’s demonstrate with an example classification problem. Generating some random data:
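As a minimal sketch, assume a 100-row data frame d with a two-level factor class and a few numeric predictors (the specific predictors here are placeholders):

```r
set.seed(1)

n <- 100
d <- data.frame(
  x1 = rnorm(n),
  x2 = rnorm(n),
  x3 = rnorm(n)
)

# a two-class outcome that depends on the predictors plus some noise
d$class <- factor(ifelse(d$x1 + d$x2 + rnorm(n) > 0, "one", "two"))
```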
…we’d like to model class using the remaining variables in this dataset with a logistic regression. We can use the following code to do so:
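```r
library(tidymodels)

# fit a logistic regression with the default engine
fit(logistic_reg(), class ~ ., data = d)
```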
The default engine for a logistic regression in tidymodels is stats::glm(). So, in terms of the three steps above, this code:
1) Translates the tidymodels code, which is consistent across engines, to the format that is specific to the chosen engine. In this case, there’s not a whole lot to do: it passes the preprocessor as formula, the data as data, and picks a family of stats::binomial.
2) Calls stats::glm() and collects its output.
3) Translates the output of stats::glm() back into a standardized model fit object.
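You can see the step-1) translation for yourself with parsnip’s translate() helper, which prints the engine-specific call template that parsnip will construct:

```r
library(parsnip)

# show the stats::glm() fit template for the default engine
translate(logistic_reg())
```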
Again, we can control what happens in steps 1) and 3), but step 2) belongs to the stats package.
The time that steps 1) and 3) take is relatively independent of the dimensionality of the training data. That is, regardless of whether we train on one hundred or a million data points, our code (as in, the translation) takes about the same time to run. Regardless of training set size, our code pushes around small, relational data structures to determine how to correctly interface with a given engine. The time it takes to run step 2), though, depends almost completely on the size of the data. Depending on the modeling engine, modeling 10 times as much data could result in step 2) taking twice as long, or 10x as long, or 100x as long as the original fit.
So, while the absolute time allotted to steps 1) and 3) is fixed, the portion of total time to fit a model with tidymodels that is “overhead” depends on how quick the engine code itself is. How quick is a logistic regression with glm() on 100 data points?
```r
bench::mark(
  fit = glm(class ~ ., family = binomial, data = d)
) %>%
  select(expression, median)
```

```
# A tibble: 1 × 2
  expression   median
* <bch:expr> <bch:tm>
1 fit          2.45ms
```
A couple of milliseconds. That means that, if the tidymodels overhead were one second, we’d have made this model fit hundreds of times slower!
In practice, the overhead here has hovered around a millisecond or two for the last couple of years, and machine learning practitioners usually fit much more computationally expensive models than a logistic regression on 100 data points. You’ll just have to believe me on that second point. Regarding the first:
```r
bm_logistic_reg <- bench::mark(
  parsnip = fit(logistic_reg(), class ~ ., d),
  stats = glm(class ~ ., family = binomial, data = d),
  check = FALSE
)
```
Remember that the first expression calls the second one, so the increase in time from the second to the first is the “overhead.” In this case, it’s 0.866125 milliseconds, or 27.3% of the total elapsed time.
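One way to compute those figures from the benchmark object (a sketch, assuming the parsnip expression comes first, as in the call above):

```r
medians <- bm_logistic_reg$median

# extra time attributable to the parsnip wrapper
overhead <- medians[1] - medians[2]
overhead

# overhead as a share of the total (parsnip) fit time
as.numeric(overhead) / as.numeric(medians[1])
```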
So, to fit a boosted tree model on 1,000,000 data points, step 2) might take a few seconds. Steps 1) and 3) don’t care about the size of the data, so they still take a few thousandths of a second. No biggie—the overhead is negligible. Let’s quickly back that up by fitting boosted tree models on simulated datasets of varying sizes, once with the XGBoost interface and once with parsnip’s wrapper around it.
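A sketch of how that comparison might be set up (the dataset sizes, number of boosting rounds, and the simulate_data() helper below are assumptions, not the exact code behind the figure):

```r
library(tidymodels)
library(xgboost)

# simulate `n` rows in the same shape as `d` above
simulate_data <- function(n) {
  x1 <- rnorm(n)
  x2 <- rnorm(n)
  x3 <- rnorm(n)
  class <- factor(ifelse(x1 + x2 + rnorm(n) > 0, "one", "two"))
  data.frame(x1 = x1, x2 = x2, x3 = x3, class = class)
}

# time the raw XGBoost interface against parsnip's wrapper
# at a few dataset sizes
sizes <- 10^(2:5)

bm_boost <- purrr::map(sizes, function(n) {
  d_n <- simulate_data(n)
  x <- as.matrix(d_n[c("x1", "x2", "x3")])
  y <- as.integer(d_n$class) - 1L

  bench::mark(
    xgboost = xgboost(data = x, label = y, nrounds = 10,
                      objective = "binary:logistic", verbose = 0),
    parsnip = fit(boost_tree(mode = "classification", trees = 10),
                  class ~ ., d_n),
    check = FALSE
  )
})
```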
The resulting graph shows the gist of tidymodels’ overhead for modeling engines: as dataset size and model complexity grow, model fitting and prediction take up increasingly large proportions of the total evaluation time.
Section 1.1.3 showed a number of ways users can cut down on the evaluation time of their tidymodels code. Making use of parallelism, reducing the total number of model fits needed to search a given grid, and carefully constructing the grid to search over are all major parts of the story.
2.2 Benchmarks
2.2.1 Linear models
2.2.2 Decision trees
2.2.3 Boosted trees
XGBoost and LightGBM – comparison timings for the same thing but from the Python interface?