The Evolving Role of the ML Developer

In the Author Spotlight series, TDS Editors talk to members of our community about their work in data science and AI, their writing, and their sources of inspiration. Today, we are excited to share our interview with Stephanie Kirmer.

Stephanie is a Staff Machine Learning Engineer with nearly 10 years of experience in data science and ML. Previously, she worked in higher education, teaching sociology and health to undergraduate students. She writes monthly posts for TDS on the intersection of society and AI/ML, and lectures around the country on ML-related topics. She will be speaking on LLM customization techniques at ODSC East in Boston in April 2026.

You studied sociology and educational and cultural foundations. How has your background shaped your perspective on the social implications of AI?

I think my educational background has shaped my perspective on everything, including AI. I have learned to think socially through my academic work, and that means I look at events and phenomena and ask myself things like “what are the social inequalities at play here?”, “how do different types of people do this thing differently?”, and “how do institutions and groups of people influence how this thing happens?”. Those are the kinds of things a sociologist wants to know, and uses the answers to develop an understanding of what's going on around us. I form an opinion about what's going on and why, and then I honestly look for evidence to prove or disprove that opinion, and that's the sociological method, really.

You've worked as a machine learning engineer at DataGrail for over two years. How has your day-to-day work changed with the rise of LLMs?

I'm actually in the process of writing a new post about this. I think the phenomenon of coders using LLMs is really interesting, and it's changing the way many people work in ML and software engineering. I use these tools to bounce ideas around, to get a handle on my approach to a problem or gather ideas for it, and to offload rote work (unit tests or boilerplate code, for example). I think there's still a lot that people need to do in ML, especially applying the skills we've gained from experience to unusual or unique problems. And none of this is to minimize the downsides and dangers of LLMs in our society, of which there are many.

You've asked whether we can “save the AI economy.” Do you believe AI hype has created a bubble like the dot-com era, or are the underlying uses of the technology strong enough to support it?

I think it's a bubble, but the underlying technology is not to blame. People have created the bubble, and as I explained in that article, unimaginable amounts of money have been invested on the assumption that LLM technology will produce results that command a corresponding profit. I think this is foolish, not because LLM technology is useless (it's useful in some important ways), but because it is not $200-billion-plus useful. If Silicon Valley and the VC world were willing to accept a good return on a moderate investment, instead of seeking a massive return on a massive investment, I think this could be sustainable. But it hasn't been, and I just don't see a way out of this that doesn't involve the bubble eventually bursting.

Last year, you wrote about the “Cultural Backlash Against Generative AI.” What can AI companies do to rebuild trust with a skeptical public?

This is tough, because I think the hype set the tone for the backlash. AI companies make extraordinary promises because the next quarter's numbers always need to show something surprising to keep the wheel turning. People who watch that and feel lied to naturally have a sour taste about the whole enterprise. It's not going to happen, but if AI companies backed off the unrealistic promises and instead focused on finding meaningful, practical ways to apply their technology to real human problems, that would help a lot. It would also help if we had a broad public education campaign about what LLMs and “AI” really are, demystifying the technology as much as we can. But the more people learn about the technology, the more realistic they'll be about what it can and can't do, so I expect the big players in the space won't be inclined to do that either.

You've covered many different topics over the past few years. How do you decide what to write about next?

I often spend the month between topics thinking about how LLMs and AI are showing up in my life, in the lives of people around me, and in the news, and talking to people about what they see and experience. Sometimes I have a particular angle from sociology (power, race, class, gender, institutions, etc.) that I want to use as a framework to examine an issue, or sometimes a particular event gives me an idea to work with. I jot down notes throughout the month, and if I come across something I feel really interested in and want to research or think about, I'll pick that for the next month and dive deeper.

Are there any topics you haven't written about yet that you're excited to tackle in 2026?

Honestly, I don't plan that far ahead! When I started writing a few years ago I wrote down a huge list of ideas and topics, and I've completely exhausted it, so these days I'm only a month or two ahead at best. I'd love to hear ideas from readers about social issues or AI controversies they'd like me to dig into.

To learn more about Stephanie's work and stay up to date with her latest articles, you can follow her on TDS or LinkedIn.
