We are accepting abstracts for AI in Production until January 23. The conference takes place on June 4-5, 2026 in Newcastle, with talks on Friday the 5th across two streams: one focused on engineering and production systems, the other on machine learning and model development.
We often hear, “My work isn’t ready to talk about yet” or “I’m not sure anyone would be interested in that.” We want to address that hesitation immediately.
Speaking at a conference is not primarily about promoting yourself or your organization.
It is a practical tool that helps you do better work. Preparing and delivering a talk forces useful reflection, invites feedback from people facing similar challenges, and turns knowledge that lives only in your head into something your team can reuse.
If you’re wondering whether your work qualifies: internal systems count, work in progress counts, partial successes count.
Submit your abstract by January 23 on the AI in Production website.
Preparing a talk clarifies your decisions
When you sit down to explain an engineering choice to an audience, you have to answer questions you may have glossed over at the time: Why did we build it this way? What limitations shaped our approach? What would we do differently now?
This is not about justifying your decisions to others. It’s about understanding them yourself. Turning a production system into a coherent story forces you to see patterns you were too close to notice while building it. You identify what worked, what didn’t, and why. That clarity is valuable whether you give the talk or not.
Many practitioners find that writing an abstract or overview exposes gaps in their thinking. An implementation strategy that seemed obvious in context becomes harder to explain outside it. A monitoring approach that felt pragmatic turns out to rest on unstated assumptions. This friction is useful: it means you’re learning something about your own work.
Speaking invites useful feedback
The audience at AI in Production will broadly reflect the two streams: engineering (building, shipping, maintaining, and scaling systems) and machine learning (model development, evaluation, and applied ML).
Whether you work on infrastructure and deployment or training pipelines and model behavior, you’ll find yourself in a room with people facing similar constraints: limited resources, changing requirements, imperfect data, and operational pressures.
Sharing what you’ve tried gets you feedback from people who understand the context. Someone has solved a similar problem differently. Someone has hit the same failure mode. Someone asks a question that makes you reconsider an assumption.
This kind of peer feedback is otherwise difficult to obtain. Your team is too close to the work. Online discussions lack context. A conference talk gets your approach in front of people who can provide informed perspectives without first having to understand your entire stack or organizational structure.
Speaking distributes responsibility and knowledge
In many teams, knowledge about production systems is held by one or two people. They know why certain decisions were made, where the edge cases are, and how to interpret the monitoring dashboards. That concentration of knowledge is a risk.
Preparing a talk is a forcing function for documentation. Explaining your system to strangers requires you to articulate what is currently tacit. That articulation becomes something your team can reuse: onboarding materials, decision records, runbooks.
Speaking also distributes responsibility. When you present work publicly, it is no longer just yours. Your team shares ownership of the ideas. Others may criticize, expand, or maintain them. This is especially valuable for platform teams or infrastructure work, where the people who built something might not be the ones operating it six months later.
Converting tacit knowledge into reusable material
Much of what you know about your production systems is not written down. You understand the failure modes, the workarounds, and the operational quirks. You know which metrics matter and which are noise. You remember why you made certain decisions.
A conference talk is an excuse to capture that knowledge. The slides become a reference. The abstract becomes a design document. The Q&A session shows what wasn’t clear and what needs better documentation.
Even if the talk itself is ephemeral, the preparation leaves artifacts behind. You’ve already done the hard work of running the system. Talking about it turns that experience into something others can learn from and you can build on.
Your work is worth sharing
When you keep AI systems running in production, you solve problems worth talking about: making models reliable under load, keeping training pipelines maintainable, monitoring behavior when ground truth is delayed or absent, and managing technical debt while shipping features.
These are problems practitioners face every day. Your approach won’t be perfect, and that’s the point. Talks about work in progress, about things that haven’t worked, about trade-offs made under pressure, are often more useful than polished success stories.
We’re looking for honest stories about how people actually build and run AI systems. These can fit the engineering stream (implementation, infrastructure, monitoring, scaling) or the machine learning stream (training, evaluation, model behavior, responsible data use). If you work in either area, you have something to contribute.
Submit an abstract
The deadline is January 23. You need a title and an abstract of at most 250 words. You don’t need a perfect story or a completed project. You need a problem you’ve worked on, some approaches you’ve tried, and some lessons you’ve learned.
Think about what would be helpful to someone six months behind you who is on a similar path. Think about what you wish someone had told you before you started. Think about the conversation you would like to have with colleagues who understand the constraints you are working under.
If you’re not sure where to start, consider writing about one decision that shaped your system, one assumption that turned out to be wrong, or one limitation that changed your design. Good abstracts often start with a specific moment or choice rather than a broad overview.
Ready to submit? The deadline is January 23. Share one decision, one lesson, or one constraint from your production work:
https://jumpingrivers.com/ai-productie/
If you have any questions about whether your work fits the conference, please contact [email protected]. We’re happy to help.