In October, Skellig team members Chris Demers, Amy Williams, Purnendu Saptarshi, and Mario Robles II traveled to Somerville, MA, just outside Boston, to attend the biggest conference of the year for all things Tulip! The Tulip Operations Calling conference is a yearly event hosted by Tulip at its headquarters, covering upcoming features, practical solutions and use cases for Tulip, and bringing in industry leaders to share knowledge about best practices and challenges. We got the chance to sit down and chat with so many amazing engineers doing wonderful things with Tulip. We wanted to keep the spirit of collaboration going, so we organized our conference takeaways to hopefully help others build better apps and discover new use cases for Tulip.
The conference started off with a quick overview of the most exciting features coming to Tulip. These include Functions, the AI App Composer, Model Context Protocol, AI Agents, and OpsMoto.
Functions
Functions allow us to templatize logic and make updates to that logic in one central location. They work similarly to Automations, but rather than running server-side outside of an app, Functions are called within a trigger and run within an app. The best part is that, unlike Automations, there's no additional cost to run them since they operate within the context of an app. If that doesn't make a ton of sense, don't worry. I'll cover logic within Tulip and Automations vs. Functions in more detail further down.
AI App Composer
It seems like everyone is incorporating AI and LLMs into the tools we use to make our lives better. Tulip is no different. The AI App Composer is a tool that allows you to take a PDF of a paper SOP or Work Instruction, feed it to the tool, and automatically generate an app from it.
After getting some hands-on use with the tool, I’m not sure how much value it provides in its V1/Beta state. It seems limited to correctly placing the text/instructions for a given step and making sure widgets (like a Next button) are where they should be, but that’s about it. We asked it to generate a few simple steps like “Make a final step with a Boolean checkbox to determine if all steps were completed correctly” and it completely ignored the instructions. It doesn’t seem to be able to generate any kind of trigger logic beyond switching between steps.
In our experience, the vast majority of the work done when building an app is designing the trigger logic within and around a step. This is especially true if you have developed a Library App that defines your base layout and your most common step templates (which we highly recommend doing if you haven’t yet!), so the value seems limited currently. But everything related to LLMs seems to progress incredibly quickly, and I have no reason to think this will be any different. I’m very excited for future updates to the tool that will undoubtedly expand its capabilities.
Model Context Protocol (MCP)
MCP is an open-source standard that allows an AI to connect to an external system/service. Think of it as an API, but for AI. In the context of Tulip, this is exciting because it allows AI to interact with the data within Tulip, the Knowledge Base, the Library, and other data repositories. A great overview of MCP by HighByte can be found here.
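For a sense of what this looks like under the hood, here is a minimal sketch of an MCP tool-call request. MCP messages use JSON-RPC 2.0; the tool name and arguments below are hypothetical, purely to illustrate the shape of the exchange between an AI client and an MCP server.

```python
import json

# MCP messages are JSON-RPC 2.0. A client asking an MCP server to run a tool
# sends a "tools/call" request; the tool name and arguments here are
# hypothetical, just to show the shape of the message.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_table_records",  # hypothetical tool exposed by a server
        "arguments": {"table": "WorkOrders", "status": "In Progress"},
    },
}

print(json.dumps(tool_call_request, indent=2))
```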
AI Agents
An AI agent is a software system based on an LLM. I personally think of it as a highly specific instantiation of an LLM that is designed to pursue an objective. Agents work best when given as much instruction/context as possible, so it’s best practice to use many AI Agents, one for each specific objective, as needed. The customization of an agent is done in plain conversational English. Whereas someone may typically feed an LLM a paragraph or two as a prompt, these agents have hundreds of lines of text to contextualize what data they should be looking at, how they should present data, etc., so that when an end user asks a simple question like “What major events happened during 2nd shift?” the agent can return highly specific, relevant data and context.
Tulip will be launching 10 AI Agents in the Library. One example is an agent that gives a list of potential Library apps and relevant Knowledge Base articles when a user describes an app they are trying to develop. Another agent generates shift reports to aid in the handover between shifts. Users are also able to use these agents as starting points to create more agents relevant to their operations.
OpsMoto
OpsMoto gives large manufacturers a way to get data from all their various Tulip instances and workspaces in one place. This data can then be visualized in Dashboards and used for improved decision making.
After covering the roadmap, we heard some lessons learned from Stanley Black & Decker as they’ve expanded their use of Tulip within their manufacturing operations. They found it best to take a three-step approach to improving their operations:
- Standardize – Align standards and behaviors globally
- Digitize – Capture and contextualize data
- Optimize – Visualize and empower decisions
To standardize, they made sure the way they measure performance is consistent across all lines and sites. They found that even within the same site, different lines were measured differently. However, they recognize a one-size-fits-all approach doesn’t work. Instead of framing it as standardizing and governing from the top down, they framed it as making their operations composable. While core methods are standardized, other methods are customizable for individual sites, and sites are free to operate differently for everything outside of the defined methods.
The purpose of digitization is to ensure consistent data. To facilitate future digital projects, they created the SPX Digital Toolbox: a single repository for all digital trainings, standards, and app templates. The crucial aspect is that the core standardized methods are built into the apps, so sites aren’t left guessing what is required and what can be customized.
Finally, they don’t just publish an app template and call it a day. They continue to iterate and refine what they’ve built. This reinforces what we’ve found as well: Tulip apps are made best with frequent touchpoints with users and stakeholders so updates and improvements can be made incrementally and quickly.
AstraZeneca’s Lean Digital Transformation
On day 1 we also heard how Tulip is enabling AstraZeneca’s Lean Digital Transformation. Their award-winning achievement was a suite of apps for digital changeover that they were able to scale across 17 sites and over 120 packing lines in just 3 months. Other key use cases for Tulip focused on cleaning logbooks for execution, scheduling, and analytics. Integration with enterprise systems remained a key focus. The AZ team tackled this by establishing an “External Data Management” system, which takes data from SAP and moves it into Tulip using SnapLogic, enabling users to easily obtain master data. They’ve also integrated Tulip with their Manufacturing Data Hub (Snowflake) to provide a single data plane for operations.
Like most digital transformation initiatives, their rollout wasn’t without its challenges. Site-specific requirements remained a key challenge, one we also heard about from Stanley Black & Decker.
They also faced challenges with the operator experience, from initial buy-in to overall user experience, reinforcing the need to include end users early in app development.
Looking forward, the team plans to implement hands-free interfaces for complex operations where operators are unable to access a tablet. Interestingly, despite trying multiple expensive and complex solutions such as RealWear headsets and wrist-mounted phones, they found a simple earpiece provided the best overall user experience. Really looking forward to seeing what this team does in the coming year!
Business System (ERP) Integration via API
Day 2 of the conference was heavy on the meat and potatoes of working with Tulip and was significantly more technical. The day started off with a discussion around different methods to integrate business systems with Tulip via APIs.
An ideal integration architecture starts with ensuring there is a single source of truth that contains the latest content and can be accessed via HTTP REST APIs. The best practice is to integrate transactional sources of truth in real time using HTTP connector functions instead of SQL connectors. While HTTP connectors can connect to any HTTP service and retrieve JSON or XML, SQL connectors are limited to specific databases (Oracle DB, MySQL, Microsoft SQL Server, and PostgreSQL). In terms of responsibilities, the ERP should handle high-level order management, the general ledger, and inventory/material management. Tulip should manage things like shop floor operations, including work instructions, real-time manufacturing status, and material traceability.
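To make the real-time pattern concrete, here is a rough sketch of what an HTTP connector function effectively does: call a REST endpoint on the ERP for master data and return only the fields the app needs. The base URL, endpoint path, authentication scheme, and field names below are all hypothetical; substitute your ERP's actual API.

```python
import requests

# Rough equivalent of an HTTP connector function pulling order master data
# from an ERP in real time. The URL, path, auth, and field names are
# hypothetical placeholders for illustration only.
ERP_BASE_URL = "https://erp.example.com/api/v1"
API_TOKEN = "replace-with-a-real-token"

def get_work_order(order_number: str) -> dict:
    """Fetch a single work order and keep only the fields the app needs."""
    resp = requests.get(
        f"{ERP_BASE_URL}/work-orders/{order_number}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    order = resp.json()
    return {
        "order_number": order["orderNumber"],
        "material": order["materialId"],
        "quantity": order["plannedQty"],
    }

if __name__ == "__main__":
    print(get_work_order("WO-100045"))
```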
However, there are various reasons why the ideal method can be impractical. The ERP could have regular long maintenance downtime, there could be very large BoMs that are impractical to fetch via REST API, or it could be something as simple as the system owner not allowing integration via the “ideal” method. The other option is to have Tulip integrate with sources of truth asynchronously via content stored in Tulip Tables or elsewhere. In this case, the source of truth synchronizes data as close to real time as possible.
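The asynchronous pattern usually looks like a scheduled job pushing records into a Tulip Table. Below is a minimal sketch of that idea; the endpoint path, authentication style, table ID, and field names are assumptions for illustration, so check your instance's Table API documentation before using anything like this.

```python
import requests

# Asynchronous pattern: a scheduled job keeps a Tulip Table in sync with the
# source of truth. Endpoint path, auth, table ID, and field names below are
# assumptions for illustration only.
TULIP_BASE_URL = "https://your-instance.tulip.co/api/v3"
TABLE_ID = "bom_items_table_id"            # hypothetical table ID
AUTH = ("api_key_name", "api_key_secret")  # hypothetical bot credentials

def upsert_bom_item(item: dict) -> None:
    """Push one BoM line from the ERP extract into a Tulip Table record."""
    record = {
        "id": item["bom_line_id"],  # record ID = stable key from the ERP
        "material": item["material"],
        "quantity": item["quantity"],
    }
    resp = requests.post(
        f"{TULIP_BASE_URL}/tables/{TABLE_ID}/records",
        json=record,
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
```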
In some cases, using middleware (like Mulesoft, Boomi, etc.) is the best option to exchange data between the business system and Tulip. These include situations where the business system lacks modern API endpoints, the business needs automated retries on API timeouts, or the business needs API logging for easier/better debugging.
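One thing middleware buys you out of the box is automated retries on timeouts. The sketch below shows the same idea in plain Python against a hypothetical business-system endpoint; real middleware layers on logging, alerting, dead-lettering, and more.

```python
import time
import requests

# Minimal retry-with-backoff sketch for a flaky business-system endpoint.
# The URL and payload are whatever your integration needs; nothing here is
# specific to any particular middleware product.
def call_with_retries(url: str, payload: dict, attempts: int = 3) -> dict:
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.ConnectionError):
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(2 ** attempt)  # exponential backoff before retrying
```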
There are also some “worst practices” that should be avoided. The first is using Tulip Automations for high-frequency integrations; middleware is better suited for this and will most likely be much cheaper. The second is using MQTT to integrate Tulip with business systems. MQTT in the Tulip platform is built largely with machine/sensor integration in mind and does not lend itself well to integrating with business systems. The last is using SQL connectors when HTTP connectors are available; HTTP APIs are significantly easier to work with.
Automations and Functions
The final talk dealt with Automations and Functions in detail. Before jumping into the intricacies of either, I think it’s important to discuss logic within Tulip as a whole. Prior to Automations and Functions, the only way to write logic in Tulip was within Triggers. Triggers exist at the App, Step, and Widget level and allow users to “write” logical If/Then statements through a no-code interface. The benefit is they are incredibly straightforward and simple to deploy. The biggest drawback is scale: as apps grow in complexity, the number of triggers grows to a point where it becomes difficult to manage. In the project I’m currently working on, for example, we have 50+ apps with around 30-50 steps each. Assuming each step has 5 triggers, the best-case scenario is 50 apps × 30 steps × 5 triggers = 7,500 triggers, each with its own logic, that need to be managed individually by clicking into each widget/step. Not the most fun task.
The first change to logic came with Automations. Automations exist outside of an app and allow the creation of logic that runs without operator or app input. These logic flows work off of events (e.g., on a schedule, when a table record is added/changed, when a machine attribute changes). They then execute defined actions within configured control parameters. Some Automation use cases include alerting and monitoring, scheduling repeat tasks, and defect detection. The main drawback of Automations is cost: as they are constantly running, there is an associated extra cost for each Automation.
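For readers who haven't used them, the pattern an Automation expresses is essentially event, condition, action. The sketch below shows that pattern in plain Python for an alerting use case; Automations themselves are configured in Tulip's no-code editor, so nothing here is Tulip's actual runtime, and the machine, attribute, and threshold are hypothetical.

```python
# Conceptual event -> condition -> action sketch of an alerting Automation.
# All names and thresholds are hypothetical, purely for illustration.
def on_machine_attribute_changed(machine: str, attribute: str, value: float) -> None:
    """Event handler: fires whenever a monitored machine attribute changes."""
    if attribute == "temperature" and value > 80.0:           # condition
        send_alert(f"{machine} temperature high: {value} C")  # action

def send_alert(message: str) -> None:
    """Stand-in for an email/SMS/Teams notification action."""
    print(f"ALERT: {message}")
```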
Finally, we arrive at Functions. They allow for the templatization of logic that can be reused in any number of triggers in any number of apps. When a logic update is made to a Function, there is no need to go into each individual trigger to update the logic; the update is applied everywhere automatically. Also, there is no additional cost with Functions because they are called within triggers that exist within apps. Using my project example above, managing repeated logic amongst those 7,500 triggers becomes a breeze. Recently, I needed to update the logic we use to log exceptions within process apps. It took me over a day to sort through apps, look for these triggers, and update the logic, and there is a risk I missed a step and failed to make that update somewhere. With Functions this would take me approximately 5 minutes, and the risk of missing a step is eliminated. Some other use cases for Functions are bundling connector calls to streamline ERP integrations, native looping logic for repeat tasks, and standardizing common logic like barcode scans.
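To illustrate why that matters, here is a conceptual sketch of the exception-logging example: one shared definition, called from every trigger that needs it, updated in exactly one place. This is plain Python rather than Tulip's Function syntax, and the field names are hypothetical.

```python
from datetime import datetime, timezone

# Conceptual illustration of templatized logic: one shared definition used
# everywhere. Field names and example values are hypothetical.
def log_exception(app_name: str, step_name: str, operator: str, reason: str) -> dict:
    """Build one consistently shaped exception record for every app to use."""
    return {
        "app": app_name,
        "step": step_name,
        "operator": operator,
        "reason": reason,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# Every trigger that needs exception logging calls the same definition, so
# changing the record format later means editing exactly one place.
record = log_exception("Line 3 Assembly", "Torque Check", "jsmith", "Torque out of range")
print(record)
```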
Attending Operations Calling 2025 reinforced just how much Tulip has evolved and the impact it’s having on manufacturing. Interest in Tulip amongst Life Sciences manufacturers has grown so much since I first started working with the platform four years ago. This is a testament to Tulip’s rapid improvements and the value it delivers to the business, quality, and operations personnel who are the backbone of drug manufacturing. I’ve never been more excited about the future of Tulip, and I can’t wait to see how it will continue to improve manufacturing in the years ahead.

