Lessons Learned After 8 Years of Machine Learning

The modern deep learning wave is nearly a decade old now.
At the time, OpenAI still felt like one (well-backed) startup among many. DeepMind was already around, but not yet fully integrated into Google. And back then, the “deep learning trio” – LeCun, Hinton, and Bengio – had just published Deep Learning in Nature*.
Today, AI is common knowledge. Back then, it was mainly academics and technically minded people who knew and cared about it. Today, even children know what AI is and interact with it (for better or, well, worse).
It's a fast-paced field, and I'm lucky to have joined it a bit later, “back then” – eight years ago, when the momentum was building but classical ML was still being taught in universities: clustering, k-means, SVMs. It also happened to be the year the field began to understand that attention was all we needed. It was, in other words, a good time to start learning about machine learning.
As the year draws to a close, it feels like the perfect time to take stock. Every month, I reflect on small practical lessons and publish them. About every half year, I then look for the larger themes underneath: patterns that keep repeating, even as the projects change.
This time, four threads run through my notes:
- Deep work (my all-time favorite)
- Over-identifying with one's work
- Sports (and movement in general)
- Blogging
Deep Work
Deep Work seems to be my favorite theme – and in machine learning it's everywhere.
Machine learning jobs can have several areas of focus, but most days revolve around a combination of:
- theory (mathematics, proofs, critical thinking),
- code (pipelines, training loops, debugging),
- writing (project reports, papers, documents).
All of them require sustained attention over a long period of time.
Working through a proof doesn't happen in five-minute chunks. Coding, needless to say, punishes interruptions: if you're deep into a bug and someone interrupts you, you don't just “resume” – you have to rebuild your mental context, which simply burns time**.
Writing is no different. Crafting good sentences requires attention, and attention is the first thing to disappear when your day dissolves into a jumble of small messages.
I am fortunate to work in an environment that allows for many hours of focused work, several times a week. This is not the norm – in fact, it may be the exception. But it is incredibly fulfilling. I can sink into a problem for hours and come out drained afterwards.
Tired, but satisfied.
For me, deep work has always consisted of two things, and I already highlighted this half a year ago:
- The skill: the ability to concentrate deeply for long stretches of time.
- The environment: conditions that enable and protect that concentration.
Generally, the skill is the easier one to acquire (or re-acquire) if you lack it. It is the environment that is hard to change. You can train your focus, but you can't single-handedly remove meetings from your calendar or change your company culture overnight.
Still, it is useful to name these two parts. If you struggle with deep work, it may not be a lack of discipline. Sometimes, in my experience, the environment you're in simply doesn't allow what you're trying to do.
Over-identifying with one's work
Do you like your job?
Let's hope so, because most of your waking hours are spent doing it. But even if you generally love your job, there will be times when you love it more – and times when you love it less.
Like everyone, I've had both.
There were times when I felt energized just because I was “doing something with ML.”
Yay!
Then there were times when a lack of progress – or a setback, because an idea just didn't work – dragged me down hard.
Oh no.
Over the years, I've come to believe that tying your identity too closely to your work is not a smart strategy. And working in ML is full of volatility: experiments fail, baselines beat your fancy ideas, reviewers miss the point, deadlines loom, data breaks, priorities shift. If your self-esteem rises and falls with the latest training run, you might as well skip Disneyland – you're already on a roller coaster.
A simple analogy: imagine you are a gymnast. You have been training for years. You are flexible, strong, in control of your movements. Then you break your ankle. Suddenly, you can't even manage a simple jump. You can't train the way you did for years. If you are only an athlete – if that's all you are – it will feel like you are losing yourself.
Thankfully, most people are more than their job. Even if they sometimes forget it.
The same applies to ML. You can be an ML engineer, or a researcher, or a “theory person” – and also a friend, a partner, a brother, a colleague, a student, a runner, a writer. When one part falls through, the others hold you steady.
This is not about “not caring about my job”. It's about caring without drowning in it.
Sports, or movement in general
Admittedly, this one is ironic.
ML jobs are not known for involving much movement. The only miles you cover are the miles your fingers travel across the keyboard. Meanwhile, the rest of the body stays motionless.
I don't need to spell out what happens if you let that go on for years.
The good news: it's easier than ever to counteract this. There are plenty of boring but effective options now:
- height-adjustable desks
- walking meetings (especially when the cameras are off)
- walking pads under the desk
- short walks (ideally between deep work blocks)
Over the years, movement has become an important part of my workday. It helps me start the day loose – not stiff, not hunched, not tense. And it helps me wind down after intense sessions. Deep concentration is mentally exhausting, but it also has physical effects: the shoulders rise, the neck drifts forward, the breathing becomes shallow.
Movement resets that.
I don't treat it as a nice-to-have. I treat it as insurance that lets me do this job for years to come.
Blogging
Daniel Bourke.***
If you've been reading ML content on Towards Data Science for a while (at least five or six years), that name might sound familiar. He published many articles on ML (back when TDS was still hosted on Medium), and his approachable writing style brought ML to a wider audience.
His example inspired me to start blogging too – also on TDS. I started in late 2019, early 2020.
In the beginning, writing these articles was simple: write an article, publish it, move on. But over time, it became something else: a habit. Writing forces you to be precise when putting your thoughts on paper. If you can't explain something coherently, you probably don't understand it as well as you think.
Over the years, I've put together a machine learning roadmap, written tutorials (like how to handle TFRecords), and, of course, kept circling back to deep work – because it keeps proving important for ML practitioners.
And blogging has been rewarding in two ways.
It has been rewarding monetarily (over the years, it helped finance the computer I'm writing this on). But more importantly, it has been rewarding as writing practice. I see blogging as a way to train translation skills: taking something technical and putting it into words a broader audience can work with.
In a fast-moving, innovation-driven field, that translation skill is strangely stable. Models change. Frameworks change (Theano, anyone?). But the ability to think clearly and write clearly keeps paying off.
Closing thoughts
Looking back after eight years of “doing ML”, none of these themes are about a specific model or technique.
They are about:
- Deep work, which makes progress possible.
- Not over-identifying, which makes setbacks survivable.
- Movement, which keeps your body from silently decaying.
- Blogging, which turns experience into something shared – and trains clarity along the way.
The funny thing is: these are all “boring” lessons.
But they are the ones that keep showing up.
References
* The Deep Learning review article in Nature by LeCun, Bengio, and Hinton; the annotated reference section is a must-read in itself.
** See the easily accessible digest by the American Psychological Association.
*** Daniel Bourke's home page and his machine learning articles.



