
Machine learning lessons I learned this month

Writing code, waiting for results, interpreting them, going back to the code. In machine learning work, one day can look much like the next. Add the occasional presentation, a bit of personal development, and some management.* But days that look very similar do not mean there is nothing to learn. Quite the opposite! Two to three years ago, I started a daily habit of documenting the lessons I learn from my ML work. To this day, each month still leaves me with a few small lessons. Here are three from this past month.

Connecting with people (no ML involved)

As the Christmas season approaches, the year-end gatherings begin. These gatherings mostly consist of informal conversation; not much “work” gets done – which is natural, as they usually happen after work. Usually, I skip such events. This Christmas, however, I didn't. A few weeks ago, I joined the others after work and just talked – nothing urgent, nothing serious. The socializing was great, and it was a lot of fun.

It reminded me that our work projects don't just run on code and compute. They run on working together with others over the long term. Here, small moments – a joke, a quick story, a shared complaint about flaky GPUs – refill the tank and make the collaboration run more smoothly when things get rough later on.

Think about it from another perspective: your colleagues have to work with you for years to come, and you with them. If that feels like “putting up with” each other – no, that's not good. If it feels like being “in it together” – yes, that's definitely good.

So, when an invitation to your company's or research institute's get-together lands in your mailbox: join.

Copilot did not make me faster

This past month, I've been setting up a new project and adapting a set of algorithms to a new problem.

One day, while browsing the web, I came across an MIT study** suggesting that (heavy) AI assistance – especially before doing the work yourself – can significantly reduce recall, lower engagement, and weaken one's sense of ownership of the result. Granted, the study used essay writing as its test task, but coding an algorithm is a similarly creative activity.

So I tried something simple: I completely disabled Copilot in VS Code.

After a few weeks, my (unscientific, very biased) finding is: there is no noticeable difference in my core tasks.

Writing training loops, data loaders, model anatomy – I know these well. In these cases, AI suggestions did not add speed; sometimes they even added friction: checking the AI output, adjusting it, or discarding it altogether.

This finding contrasts with how I felt a month or two ago, when I had the impression that Copilot made me more efficient.

Thinking about the difference between these two periods, it occurred to me that the obvious explanation is domain dependency. When I'm in a new area (say, deployment), the assistance helps me enter the field very quickly. In my own fields, the benefits are marginal – and can come with hidden downsides that take years to notice.

My current take on AI assistants (of which I've only used Copilot, for coding): great for getting up to speed in unfamiliar territory. For the core work that makes up the majority of your job, preferably off – at least for now.

So, going forward, here is what I would recommend:

  • Write the first draft yourself; use AI only for polish (restructuring, small tests, boilerplate).
  • Honestly test the claimed benefits of AI: five days with AI off, five days with it on. During both, track: tasks completed, bugs found, completion time, and whether you can still remember and explain the code the next day.
  • Make switching easy: bind a hotkey to enable/disable suggestions. If you're reaching for it every minute, you're probably relying on it too much.
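As a minimal sketch of the second point, here is how such a self-tracked comparison could be summarized in plain Python. All field names and example numbers are hypothetical illustrations, not data from my own experiment:

```python
# Minimal self-tracking summary for the "5 days off / 5 days on" experiment.
# Field names and numbers are hypothetical placeholders.

def summarize(days):
    """Average each tracked metric per condition ('ai_on' / 'ai_off')."""
    by_condition = {}
    for day in days:
        by_condition.setdefault(day["condition"], []).append(day)
    return {
        condition: {
            metric: sum(r[metric] for r in records) / len(records)
            for metric in ("tasks_done", "bugs_found", "hours_to_finish")
        }
        for condition, records in by_condition.items()
    }

log = [
    {"condition": "ai_off", "tasks_done": 3, "bugs_found": 1, "hours_to_finish": 6.0},
    {"condition": "ai_off", "tasks_done": 4, "bugs_found": 2, "hours_to_finish": 5.5},
    {"condition": "ai_on",  "tasks_done": 4, "bugs_found": 3, "hours_to_finish": 5.0},
    {"condition": "ai_on",  "tasks_done": 3, "bugs_found": 2, "hours_to_finish": 6.5},
]

print(summarize(log))
```

A spreadsheet works just as well; the point is to record the numbers before looking at them, so the comparison is honest.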

Balancing rigor with pragmatism

As ML practitioners, we can get lost in details. For example: which learning rate to use for training. Or whether to compare a fixed learning rate against decay at fixed milestones. Or whether to use an exponential or a cosine schedule.

You see, even for something as simple as the learning rate, the options multiply quickly; which should we choose? I recently went in circles on a version of this question.

In these moments, it helped me to step back and ask: what does the end user actually care about? Most of the time, it's speed, accuracy, robustness, and, often, cost. They don't care which LR schedule you chose – unless it affects those four. This suggests a boring but useful method: choose the simplest option, and stick to it.

A few defaults cover most cases. A standard optimizer. A vanilla LR with one decay milestone. Early stopping on obvious overfitting. If the metrics are bad, escalate to fancier choices. If they are good, move on. But don't throw everything at the problem at once.
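Such defaults can be sketched in a few lines of plain Python. The base LR, milestone, decay factor, and patience below are hypothetical placeholders, not recommendations from any particular source:

```python
# A minimal sketch of the "boring defaults": a vanilla LR with a single decay
# milestone, plus early stopping when a validation loss stops improving.
# All hyperparameter values are hypothetical placeholders.

def lr_at(epoch, base_lr=1e-3, milestone=30, decay=0.1):
    """Vanilla LR schedule: constant, then decayed once at a single milestone."""
    return base_lr if epoch < milestone else base_lr * decay

def should_stop(val_losses, patience=5):
    """Early stopping: no improvement over the best loss for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best_so_far = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_so_far

# Example: the LR before and after the milestone.
print(lr_at(10))  # 0.001
print(lr_at(40))  # 0.0001
```

Frameworks offer the same defaults off the shelf (e.g. step-milestone schedulers and early-stopping callbacks), but the logic really is this small – which is part of the argument for starting here.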


* It seems that even at DeepMind, perhaps the most successful institute for pure research (at least formerly), researchers cannot avoid such duties.

** The study is available on arXiv at:
