Work in Data Isn't Always a Straight Line, and That's OK

In the Author Spotlight series, TDS Editors talk to members of our community about their work in data science and AI, their writing, and their sources of inspiration. Today, we are excited to share our interview with Sabrine Bendimerad.

Sabrine is an applied mathematics engineer who has spent the past 10 years working as a Senior AI Engineer, managing projects from concept to production.

Her journey has taken her to very different countries, from analyzing satellite images for major European companies to her current role as a medical imaging researcher at Neurospin. Today, she works with brain imaging to help stroke patients recover.

Sabrine is also a consultant and co-founder of Daiilearn. She likes to write not only about code, but also about how to build real-world solutions and how to make sure that data science projects actually reach that final stage where they have real impact.


A few months ago, you tackled the pressing question facing data professionals today: “Is it still worth it?” Why did you decide to address it, and has your position changed since?

Actually, my article “Data Science in 2026: Is It Still Relevant?” triggered a flood of messages on LinkedIn. I expected younger people to worry about this question, but I was surprised to see that people with more years of experience were also asking about the future.

I've been in AI for 10 years now, and it's true that in the beginning, just knowing Python and math made you a unicorn. Today, the market is full of new data scientists, and new tools based on AI agents are taking over the simple, manual tasks we used to do.

So my position is still the same, or maybe even stronger today: AI and data science are still relevant, but the “typical data scientist” is a dying breed. To survive, you must evolve beyond textbook models. You need to master prompting, LLMs, RAG, and, most importantly, the domain knowledge that makes data interpretation possible. If all we build are textbook models, then of course our tasks can be performed by agents. Jobs don't disappear; they just change. You need to develop skills that adapt to this new market.

You have written extensively about data science and AI careers. How has your journey shaped the insights you share with your readers?

From the beginning, my journey was not just about code. I realized early on that solving real-world problems is something you don't learn at university or boot camp. You learn it by being in the trenches with real teams. In my years working with satellite imagery for energy and water companies, I learned that to create a real solution, you have to think “end-to-end.” If the model lives in a notebook, it has zero impact. That's why I write a lot about MLOps – how to manage, deploy, and monitor models in production.

Moving into the medical field added a new layer to my thinking. In the utilities sector, a mistake means financial losses. But in medicine, you are dealing with people's lives. This change taught me that AI can generate code, but it cannot understand the weight of a human decision. That's why I started writing about things like RAG, LLMs, and their impact. It's not just a trendy topic for me; it's about how hard it is to make these tools reliable enough for someone to trust them completely.

My perspective comes from this bridge: I have an industry background building for production, but I also have a research background where the methodology must be rigorous. I write to share these technical skills, but also to help people navigate their journey. I want to show them the opportunities they have in this field, how to chart their path, and how to handle complex projects. I want my students to see that data work isn't always a straight line, and that's okay.

What is the most noticeable difference you see between starting now compared to your early years in the field? How different is the playbook for new hires these days?

The game has been completely rewritten. When I started, we did everything by hand, spending weeks just cleaning data and setting up servers. Today, you must be an AI orchestrator: you can build in days a system that used to take months. I wouldn't say it's harder now, but it is hard if you're trying to start a career with skills that were in vogue 10 years ago.

Young people today have many resources to prepare for the market. There is a lot of information on YouTube and blogs. The real challenge now is to sift through the noise. The survivors are those who monitor and understand the market and adapt quickly. Of course, you need to understand the theoretical side of AI, but the real skill today is flexibility.

It is not a good idea to aim to be an expert in just one particular tool. Ten years ago, we were talking about switching from R to Python, or from statistics to deep learning. Today, we're talking about the transition to generative AI and agents. The basics remain the same, but you need the flexibility to quickly understand a new trend, implement it, and respond to stakeholder needs. Flexibility has always been the “secret” skill of the data scientist, whether 10 years ago or today.

Your articles generally balance big-picture insights with practical details. What do you hope your audience will gain from reading your work?

When I write, I always remember that I am sharing information to help people build their own expertise. For example, when I write about MLOps, I try to bridge the gap between the big picture of getting to production and the practical technical steps needed to get there. I still hesitate every time I start a new topic! Often, I discuss topics with my students or colleagues to see what interests them, and then connect that to what I see in the industry. My goal is for the reader to walk away with practical guidelines, not just an idea.

I try to reach different audiences depending on the topic. Sometimes it's a very technical article, like how to deploy a model to the cloud using Docker and FastAPI, and sometimes it's a “big picture” piece that explains what “production” really means for a business. I find it difficult nowadays to write only about specific tools, because they change so quickly. Instead, I try to share feedback on the things that held me back, or the real challenges I faced when starting a particular project (like my article about RAG projects). I want my readers to learn from my mistakes so they can move faster.

In your professional life, what impact has the rise of LLMs and agentic AI had? Do you see the trend as positive, negative, or mixed?

In my day-to-day work, I use LLMs as an experienced colleague: someone to talk to, or to quickly prototype and debug code. With the arrival of agents, I'm also starting to use vibe coding and to automate basic tasks, but for serious research I'm very cautious. I currently work with medical data, where there is literally no room for error. I might use AI to reshape an idea or improve my way of doing things, but for complex tasks, I have to be in full control of my code.

I'm not against the use of LLMs and agentic AI, but if you let AI do all the thinking, you lose your judgment. For example, when I'm working on brain imaging, I have to stay very hands-on, because the LLM doesn't understand the illness you're trying to predict. Every brain is different; the human body varies from one person to another. The AI agent sees the pattern, but it does not understand the “why” of the disease.

I also see the impact of AI agents in the work of my interns. AI agents are a huge boost to their productivity, but they can be a disaster for human learning. They can produce in an afternoon a mountain of code that used to take months, and it's hard to master a topic if you never make the mistakes that force you to understand the program. We have to keep the human at the center, or we create black boxes that we don't really control.

Finally, what developments in the field do you hope to see in the next year or so, and what topics do you hope to cover next in your writing?

I would really like to see the conversation move away from constantly chasing new tools, and toward better science and meaningful applications of AI.

We are in a phase where new tools, frameworks, and models are emerging very quickly. While that's exciting, I think what's often missing is transparency and a deeper focus on impact. I would like to see more work that not only increases human productivity, but also contributes to areas such as health care, education, and physical accessibility.

Of course, LLMs and agentic AI will continue to evolve, and I'm very interested in exploring what that means in practice. Beyond the hype, I would like to better understand and write about questions like these:

  • Are these tools really changing how we think, or just how fast we work?
  • Do they really improve the quality of our work?
  • What kind of impact do they have on different sectors?

In my upcoming writing, I would like to focus more on this kind of thinking: integrating technical ideas with a deeper look at how AI is shaping not just our tools, but our ways of working and thinking.

To learn more about Sabrine's work and stay up to date with her latest articles, you can follow her on TDS.


Portions of this Q&A have been edited for length and clarity.
