in machine learning are the same.
Coding, waiting for results, interpreting them, going back to coding. Plus, the occasional presentation of one's progress. But things mostly being the same doesn't mean there's nothing to learn. Quite the contrary! Two to three years ago, I started a daily habit of writing down lessons learned from my ML work. Looking back through some of this month's lessons, I found three practical ones that stand out:
- Keep logging simple
- Use an experimental notebook
- Keep overnight runs in mind
Keep logging simple
For years, I used Weights & Biases (W&B)* as my go-to experiment logger. In fact, I was once among the top 5% of all active users. The stats in the figure below tell me that, to date, I've trained close to 25,000 models, used a cumulative 5,000 hours of compute, and ran more than 500 hyperparameter searches. I used it for papers, for big projects like weather prediction with large datasets, and for tracking countless small-scale experiments.
And W&B really is a great tool: if you want beautiful dashboards and are collaborating** with a team, W&B shines. Until recently, while reconstructing data from trained neural networks, I ran several hyperparameter sweeps, and W&B's visualization capabilities were invaluable; I could directly compare reconstructions across runs.
But I realized that for most of my research projects, W&B was overkill. I rarely revisited individual runs, and once a project was done, the logs just sat there untouched. So when I refactored the data reconstruction project mentioned above, I explicitly removed the W&B integration. Not because anything was wrong with it, but because it wasn't necessary.
Now, my setup is much simpler. I just log selected metrics to CSV and text files, writing directly to disk. For hyperparameter searches, I rely on Optuna. Not even the distributed version with a central server: just local Optuna, saving study states to a pickle file. If something crashes, I reload and continue. Pragmatic and sufficient (for my use cases).
The key insight is this: logging is not the work. It is a support system. Spending 99% of your time deciding what to log (gradients? weights? distributions? and at which frequency?) can easily distract you from the actual research. For me, simple, local logging covers all needs, with minimal setup effort.
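As one possible shape for such plain, local logging, here is a hypothetical CSV logger; the class name and metric columns are assumptions for illustration, not code from the post:

```python
# Sketch of a tiny CSV metric logger that writes straight to disk,
# one row per training step.
import csv
from pathlib import Path


class CSVLogger:
    """Append one row of metrics per step, flushed straight to disk."""

    def __init__(self, path: str, fields: list[str]):
        self.path = Path(path)
        self.fields = fields
        if not self.path.exists():
            with self.path.open("w", newline="") as f:
                csv.writer(f).writerow(fields)

    def log(self, **metrics: float) -> None:
        # Missing metrics are written as empty cells, keeping columns aligned.
        with self.path.open("a", newline="") as f:
            csv.writer(f).writerow(metrics.get(k, "") for k in self.fields)


logger = CSVLogger("metrics.csv", ["step", "train_loss", "val_loss"])
for step in range(3):
    logger.log(step=step, train_loss=1.0 / (step + 1), val_loss=2.0 / (step + 1))
```

Because each row is flushed on write, a crashed run still leaves a readable file behind.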
Keep experimental lab notebooks
In December 1939, William Shockley wrote down an idea in his lab notebook: replace vacuum tubes with semiconductors. Roughly 20 years later, Shockley and two colleagues at Bell Labs were awarded the Nobel Prize for the invention of the modern transistor.
While most of us aren't writing Nobel-worthy entries in our notebooks, we can still learn from the principle. Granted, in machine learning, our laboratories don't have chemicals or test tubes, as we all envision when we think of a laboratory. Instead, our labs typically are our computers; the same machine that I use to write these lines has trained countless models over the years. And these labs are inherently portable, especially when we develop remotely on high-performance compute clusters. Even better, thanks to highly skilled administrative staff, these clusters run 24/7, so there is always time to run an experiment!
But the question is: which experiment? Here, a former colleague introduced me to the idea of maintaining a lab notebook, and lately I've returned to it in the simplest form possible. Before starting long-running experiments, I write down:
what I’m testing, and why I’m testing it.
Then, when I come back later (usually the next morning), I can immediately see which results are ready and what I had hoped to learn. It's simple, but it changes the workflow. Instead of just "rerun until it works," these dedicated experiments become part of a documented feedback loop. Failures are easier to interpret. Successes are easier to replicate.
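This habit is easy to support with a few lines of code. A hypothetical helper, with a file name and entry format of my own choosing, might look like this:

```python
# Sketch of a lab-notebook helper: before launching a long run, append a
# timestamped "what / why" entry to a plain text notebook file.
from datetime import datetime
from pathlib import Path

NOTEBOOK = Path("lab_notebook.md")


def log_experiment(what: str, why: str) -> None:
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    entry = f"\n## {stamp}\n- What: {what}\n- Why: {why}\n"
    with NOTEBOOK.open("a") as f:
        f.write(entry)


log_experiment(
    what="Rerun ablation without weight decay",
    why="Check whether regularization explains the accuracy gap",
)
```

Calling it right before the job-submission command keeps the notebook and the experiments in sync.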
Run experiments overnight
That's a small but painful lesson that I (re-)learned this month.
On a Friday evening, I discovered a bug that might affect my experiment results. I patched it and reran the experiments to validate. By Saturday morning, the runs had finished, but when I inspected the results, I realized I had forgotten to include a key ablation. Which meant … another full day of waiting.
In ML, overnight time is precious. For us programmers, it's rest. For our experiments, it's work. If we don't have an experiment running while we sleep, we're effectively wasting free compute cycles.
That doesn't mean you should run experiments just for the sake of it. But whenever there is a meaningful one to launch, the evening is the right time to start it: clusters are often under-utilized, resources free up more quickly, and, most importantly, you'll have results to analyze the next morning.
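As a sketch of what kicking off such an evening run can look like from a terminal (the stand-in `train.py` and the paths are my own assumptions; in practice you would launch your real training script and log out):

```shell
# Create a stand-in training script so this sketch is runnable end to end;
# in real use, train.py would be your actual long-running job.
printf 'print("training done")\n' > train.py
mkdir -p logs
LOG="logs/run_$(date +%Y-%m-%d_%H%M).log"
# nohup detaches the job from the terminal so it survives logging out.
nohup python3 train.py > "$LOG" 2>&1 &
wait $!   # only for this demo; in real use you would simply log out
cat "$LOG"
```

The timestamped log file makes it easy to find the right results the next morning.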
A simple trick is to plan this deliberately. As Cal Newport mentions in his book "Deep Work", good workdays start the night before. If you know tomorrow's tasks today, you can set up the right experiments in time.
* This is not meant to bash W&B (the same would hold for, e.g., MLflow), but rather to ask users to evaluate what their project goals are, and then spend the majority of their time pursuing those goals with utmost focus.
** Footnote: mere collaboration is, in my eyes, not enough to warrant such shared dashboards. The insights gained from shared tools must outweigh the time spent setting them up.

