When starting their AI initiatives, many companies are trapped in silos and treat AI as a purely technical endeavour, sidelining domain experts or involving them too late. They end up with generic AI applications that miss industry nuances, produce poor recommendations, and quickly become unpopular with users. By contrast, AI systems that deeply understand industry-specific processes, constraints, and decision logic have the following advantages:
- Increased efficiency — The more domain knowledge AI incorporates, the less manual effort is required from human experts.
- Improved adoption — Experts disengage from AI systems that feel too generic. AI must speak their language and align with real workflows to earn their trust.
- A sustainable competitive moat — As AI becomes a commodity, embedding proprietary expertise is the most effective way to build defensible AI systems (cf. this article to learn about the building blocks of AI's competitive advantage).
Domain experts can help you connect the dots between the technicalities of an AI system and its real-life usage and value. Thus, they should be key stakeholders and co-creators of your AI applications. This guide is the first part of my series on expertise-driven AI. Following my mental model of AI systems, it provides a structured approach to embedding deep domain expertise into your AI.
Throughout the article, we will use the case of supply chain optimisation (SCO) to illustrate these different methods. Modern supply chains are under constant strain from geopolitical tensions, climate disruptions, and volatile demand shifts, and AI can provide the kind of dynamic, high-coverage intelligence needed to anticipate delays, manage risks, and optimise logistics. However, without domain expertise, these systems are often disconnected from the realities of life. Let's see how we can solve this by integrating domain expertise across the different components of the AI application.
AI is only as domain-aware as the data it learns from. Raw data isn't enough — it must be curated, refined, and contextualised by experts who understand its meaning in the real world.
Data understanding: Teaching AI what matters
While data scientists can build sophisticated models to analyse patterns and distributions, these analyses often remain at a theoretical, abstract level. Only domain experts can validate whether the data is complete, accurate, and representative of real-world conditions.
In supply chain optimisation, for example, shipment records may contain missing delivery timestamps, inconsistent route details, or unexplained fluctuations in transit times. A data scientist might discard these as noise, but a logistics expert may have real-world explanations for these inconsistencies. For instance, they might be caused by weather-related delays, seasonal port congestion, or carrier reliability issues. If these nuances aren't accounted for, the AI might learn an overly simplified view of supply chain dynamics, resulting in misleading risk assessments and poor recommendations.
Experts also play a critical role in assessing the completeness of data. AI models work with what they have, assuming that all key factors are already present. It takes human expertise and judgment to identify blind spots. For example, if your supply chain AI isn't trained on customs clearance times or factory shutdown histories, it won't be able to predict disruptions caused by regulatory issues or production bottlenecks.
✅ Implementation tip: Run joint Exploratory Data Analysis (EDA) sessions with data scientists and domain experts to identify missing business-critical information, ensuring AI models work with a complete and meaningful dataset, not just statistically clean data.
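Such a joint EDA session can be as lightweight as surfacing gaps and implausible values for the experts to explain. Here is a minimal sketch in pandas — the column names and the plausibility threshold are illustrative assumptions, not part of any real dataset:

```python
import pandas as pd

# Hypothetical shipment records; column names are illustrative
shipments = pd.DataFrame({
    "shipment_id": [1, 2, 3, 4],
    "delivery_ts": ["2024-01-05", None, "2024-01-09", None],
    "transit_days": [4, 12, 5, 30],
    "route": ["CN-DE", "CN-DE", "VN-DE", None],
})

# Surface gaps and outliers for domain experts to explain,
# rather than silently dropping them as noise
MAX_PLAUSIBLE_TRANSIT = 14  # assumed expert-provided threshold
report = {
    "missing_rates": shipments.isna().mean().round(2).to_dict(),
    "transit_outliers": shipments.loc[
        shipments["transit_days"] > MAX_PLAUSIBLE_TRANSIT, "shipment_id"
    ].tolist(),
}
print(report)
```

The point is that the output of such a script is a discussion agenda, not a cleaning step: each flagged record gets a real-world explanation (or a fix) from the expert before modelling starts.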
One common pitfall when starting with AI is integrating too much data too soon, leading to complexity, congested data pipelines, and blurred or noisy insights. Instead, start with a couple of high-impact data sources and expand incrementally based on AI performance and user needs. For instance, an SCO system may initially use historical shipment data and supplier reliability scores. Over time, domain experts may identify missing information — such as port congestion data or real-time weather forecasts — and point engineers to the sources where it can be found.
✅ Implementation tip: Start with a minimal, high-value dataset (typically 3–5 data sources), then expand incrementally based on expert feedback and real-world AI performance.
AI models learn by detecting patterns in data, but often, the right learning signals aren't yet present in raw data. This is where data annotation comes in — by labelling key attributes, domain experts help the AI understand what matters and make better predictions. Consider an AI model built to predict supplier reliability. The model is trained on shipment records, which contain delivery times, delays, and transit routes. However, raw delivery data alone doesn't capture the full picture of supplier risk — there are no direct labels indicating whether a supplier is "high risk" or "low risk."
Without additional explicit learning signals, the AI might draw the wrong conclusions. It could conclude that all delays are equally bad, even when some are caused by predictable seasonal fluctuations. Or it might overlook early warning signs of supplier instability, such as frequent last-minute order changes or inconsistent inventory levels.
Domain experts can enrich the data with more nuanced labels, such as supplier risk categories, disruption causes, and exception-handling rules. By introducing these curated learning signals, you can ensure that AI doesn't just memorise past trends but learns meaningful, decision-ready insights.
You shouldn't rush your annotation efforts — instead, set up a structured annotation process that includes the following components:
- Annotation guidelines: Establish clear, standardised rules for labelling data to ensure consistency. For example, supplier risk categories should be based on defined thresholds (e.g., delivery delays over 5 days + financial instability = high risk).
- Multiple expert review: Involve several domain experts to reduce bias and ensure objectivity, particularly for subjective classifications like risk levels or disruption impact.
- Granular labelling: Capture both direct and contextual factors, such as annotating not just shipment delays but also the cause (customs, weather, supplier fault).
- Continuous refinement: Regularly audit and refine annotations based on AI performance — if predictions consistently miss key risks, experts should adjust labelling strategies accordingly.
✅ Implementation tip: Define an annotation playbook with clear labelling criteria, involve at least two domain experts per critical label for objectivity, and run regular annotation review cycles to ensure AI is learning from accurate, business-relevant insights.
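Guidelines like these can be encoded directly, so annotators and the AI team share one definition. A minimal sketch with illustrative thresholds, plus a disagreement check for the review cycles:

```python
from dataclasses import dataclass

@dataclass
class SupplierRecord:
    avg_delay_days: float
    financially_unstable: bool

def risk_label(rec: SupplierRecord) -> str:
    """Illustrative annotation guideline: delivery delays over 5 days
    combined with financial instability mean high risk."""
    if rec.avg_delay_days > 5 and rec.financially_unstable:
        return "high"
    if rec.avg_delay_days > 5 or rec.financially_unstable:
        return "medium"
    return "low"

def needs_review(labels: list[str]) -> bool:
    """Flag records where annotators disagree for the next review cycle."""
    return len(set(labels)) > 1
```

Encoding the guideline as code also makes refinement cheap: when experts adjust a threshold, the whole corpus can be relabelled consistently.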
So far, our AI models learn from real-life historical data. However, rare, high-impact events — like factory shutdowns, port closures, or regulatory shifts in our supply chain scenario — may be underrepresented. Without exposure to these scenarios, AI can fail to anticipate major risks, leading to overconfidence in supplier stability and poor contingency planning. Synthetic data solves this by creating additional datapoints for rare events, but expert oversight is crucial to ensure that it reflects plausible risks rather than unrealistic patterns.
Let's say we want to predict supplier reliability in our supply chain system. The historical data may contain few recorded supplier failures — but that's not because failures don't happen. Rather, many companies proactively mitigate risks before they escalate. Without synthetic examples, AI might deduce that supplier defaults are extremely rare, leading to misguided risk assessments.
Experts can help generate synthetic failure scenarios based on:
- Historical patterns — Simulating supplier collapses triggered by economic downturns, regulatory shifts, or geopolitical tensions.
- Hidden risk indicators — Training AI on unrecorded early warning signs, like financial instability or leadership changes.
- Counterfactuals — Creating "what-if" events, such as a semiconductor supplier suddenly halting production or a prolonged port strike.
✅ Actionable step: Work with domain experts to define high-impact but low-frequency events and scenarios that should be in focus when you generate synthetic data.
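One lightweight way to operationalise this is to let experts express rare events as scenario templates — event name, assumed frequency, plausible delay range — and sample from them. A sketch where all probabilities and ranges are invented for illustration:

```python
import random

# Expert-defined rare-event templates; probabilities and delay ranges
# are illustrative assumptions, not estimates from real data
SCENARIOS = [
    {"event": "port_strike", "prob": 0.02, "delay_range": (10, 30)},
    {"event": "supplier_default", "prob": 0.01, "delay_range": (20, 60)},
]

def synthesize_shipments(n: int, seed: int = 42) -> list[dict]:
    """Generate n synthetic shipment records, injecting rare events
    at the expert-specified frequencies."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        row = {"event": "none", "delay_days": rng.randint(0, 5)}
        for s in SCENARIOS:
            if rng.random() < s["prob"]:
                lo, hi = s["delay_range"]
                row = {"event": s["event"], "delay_days": rng.randint(lo, hi)}
        rows.append(row)
    return rows
```

The expert's job is then to review samples from this generator and veto implausible combinations, rather than to hand-craft every record.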
Data makes domain expertise shine. An AI initiative that relies on clean, relevant, and enriched domain data will have an obvious competitive advantage over one that takes the "quick-and-dirty" shortcut to data. However, keep in mind that working with data can be tedious, and experts need to see the results of their efforts — whether it's improving AI-driven risk assessments, optimising supply chain resilience, or enabling smarter decision-making. The key is to make data collaboration intuitive, purpose-driven, and directly tied to business outcomes, so experts stay engaged and motivated.
Once AI has access to high-quality data, the next challenge is ensuring it generates useful and accurate outputs. Domain expertise is needed to:
- Define clear AI objectives aligned with business priorities
- Ensure AI correctly interprets industry-specific data
- Continuously validate AI's outputs and recommendations
Let's look at some common AI approaches and see how they can benefit from an extra shot of domain knowledge.
Training predictive models from scratch
For structured problems like supply chain forecasting, predictive models such as classification and regression can help anticipate delays and suggest optimisations. However, to ensure these models are aligned with business goals, data scientists and domain experts need to work together. For example, an AI model might try to minimise shipment delays at all costs, but a supply chain expert knows that fast-tracking every shipment via air freight is financially unsustainable. They can formulate additional constraints on the model, making it prioritise critical shipments while balancing cost, risk, and lead times.
✅ Implementation tip: Define clear objectives and constraints with domain experts before training AI models, ensuring alignment with real business priorities.
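Such constraints don't have to live inside the model itself — they can wrap its predictions. A sketch of a greedy post-processing step that expedites the most critical at-risk shipments under a finance-approved air-freight budget (field names and the scoring rule are illustrative):

```python
def select_for_air_freight(shipments: list[dict], budget: float) -> list[str]:
    """Greedy sketch: spend the expedite budget on the shipments where
    criticality x predicted delay risk is highest."""
    ranked = sorted(
        shipments,
        key=lambda s: s["criticality"] * s["delay_risk"],
        reverse=True,
    )
    chosen, spent = [], 0.0
    for s in ranked:
        if spent + s["air_cost"] <= budget:
            chosen.append(s["id"])
            spent += s["air_cost"]
    return chosen
```

Here the model supplies `delay_risk`, while `criticality` and `budget` come from the business — exactly the kind of split a domain expert can help define.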
For a detailed overview of predictive AI techniques, please refer to Chapter 4 of my book The Art of AI Product Management.
Navigating the LLM triad
While predictive models trained from scratch can excel at very specific tasks, they are also rigid and will "refuse" to perform any other task. GenAI models are more open-minded and can be used for highly diverse requests. For example, an LLM-based conversational widget in an SCO system can allow users to interact with real-time insights using natural language. Instead of sifting through rigid dashboards, users can ask, "Which suppliers are at risk of delays?" or "What alternative routes are available?" The AI pulls from historical data, live logistics feeds, and external risk factors to provide actionable answers, suggest mitigations, and even automate workflows like rerouting shipments.
But how can you make sure that a huge, out-of-the-box model like ChatGPT or Llama understands the nuances of your domain? Let's walk through the LLM triad — a progression of techniques to incorporate domain knowledge into your LLM system.
As you move from left to right, you can ingrain more domain knowledge into the LLM — however, each stage also adds new technical challenges (if you are interested in a systematic deep-dive into the LLM triad, please check out chapters 5–8 of my book The Art of AI Product Management). Here, let's focus on how domain experts can jump in at each of the stages:
1. Prompting out-of-the-box LLMs might seem like a generic approach, but with the right intuition and skill, domain experts can fine-tune prompts to extract the extra bit of domain knowledge out of the LLM. Personally, I think this is a big part of the fascination around prompting — it puts the most powerful AI models directly into the hands of domain experts without any technical expertise required. Some key prompting techniques include:
- Few-shot prompting: Incorporate examples to guide the model's responses. Instead of just asking "What are alternative shipping routes?", a well-crafted prompt includes sample scenarios, such as "Example of a past scenario: A previous delay at the Port of Shenzhen was mitigated by rerouting through Ho Chi Minh City, reducing transit time by 3 days."
- Chain-of-thought prompting: Encourage step-by-step reasoning for complex logistics queries. Instead of "Why is my shipment delayed?", a structured prompt might be "Analyse historical delivery data, weather reports, and customs processing times to determine why shipment #12345 is delayed."
- Providing additional background information: Attach external documents to improve domain-specific responses. For example, prompts could reference real-time port congestion reports, supplier contracts, or risk assessments to generate data-backed recommendations. Most LLM interfaces already allow you to conveniently attach additional files to your prompt.
2. RAG (Retrieval-Augmented Generation): While prompting helps guide AI, it still relies on pre-trained knowledge, which may be outdated or incomplete. RAG allows AI to retrieve real-time, company-specific data, ensuring that its responses are grounded in current logistics reports, supplier performance records, and risk assessments. For example, instead of generating generic supplier risk analyses, a RAG-powered AI system would pull real-time shipment data, supplier credit ratings, and port congestion reports before making recommendations. Domain experts can help select and structure these data sources and are also needed when it comes to testing and evaluating RAG systems.
✅ Implementation tip: Work with domain experts to curate and structure knowledge sources — ensuring AI retrieves and applies only the most relevant and high-quality business information.
3. Fine-tuning: While prompting and RAG inject domain knowledge on the fly, they don't inherently embed domain-specific workflows, terminology, or decision logic into your LLM. Fine-tuning adapts the LLM to think like a logistics expert. Domain experts can guide this process by creating high-quality training data, ensuring AI learns from real supplier assessments, risk evaluations, and procurement decisions. They can refine industry terminology to prevent misinterpretations (e.g., AI distinguishing between "buffer stock" and "safety stock"). They also align AI's reasoning with business logic, ensuring it considers cost, risk, and compliance — not just efficiency. Finally, they evaluate fine-tuned models, testing AI against real-world decisions to catch biases or blind spots.
✅ Implementation tip: In LLM fine-tuning, data is the crucial success factor. Quality goes over quantity, and fine-tuning on a small, high-quality dataset can give you excellent results. Thus, give your experts enough time to figure out the right structure and content of the fine-tuning data, and plan for plenty of end-to-end iterations of your fine-tuning process.
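To make the first two stages of the triad tangible, here is a minimal sketch of how few-shot examples and retrieved, company-specific context might be assembled into one prompt. The retrieval step itself is mocked, and all strings are illustrative:

```python
def build_prompt(question: str, retrieved_docs: list[str],
                 examples: list[tuple[str, str]]) -> str:
    """Combine retrieved context (RAG) with few-shot Q/A examples."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return (
        "You are a supply chain assistant. Use only the context below.\n\n"
        f"Context:\n{context}\n\n{shots}\n\nQ: {question}\nA:"
    )

prompt = build_prompt(
    question="Which suppliers are at risk of delays?",
    retrieved_docs=["Port of Shenzhen congestion: high (live feed)"],
    examples=[("Why is shipment #12345 delayed?",
               "Customs processing at Rotterdam added 2 days.")],
)
print(prompt)
```

In a real system, `retrieved_docs` would come from a vector or keyword search over the expert-curated knowledge sources, and the examples would be authored by domain experts.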
Encoding expert knowledge with neuro-symbolic AI
Every machine learning algorithm gets it wrong from time to time. To mitigate errors, it helps to set the "hard facts" of your domain in stone, making your AI system more reliable and controllable. This combination of machine learning and deterministic rules is called neuro-symbolic AI.
For example, an explicit knowledge graph can encode supplier relationships, regulatory constraints, transportation networks, and risk dependencies in a structured, interconnected format.
Instead of relying purely on statistical correlations, an AI system enriched with knowledge graphs can:
- Validate predictions against domain-specific rules (e.g., ensuring that AI-generated supplier recommendations comply with regulatory requirements).
- Infer missing information (e.g., if a supplier has no historical delays but shares dependencies with high-risk suppliers, AI can assess its potential risk).
- Improve explainability by allowing AI decisions to be traced back to logical, rule-based reasoning rather than black-box statistical outputs.
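The first point can be as simple as a deterministic guardrail that checks each statistical recommendation against expert-maintained hard facts before it reaches the user. A sketch — the rule contents are invented for illustration:

```python
# Expert-maintained hard facts; contents are illustrative
EMBARGOED_SUPPLIERS = {"S-104"}
CATEGORIES_REQUIRING_CERT = {"aerospace", "medical"}

def validate_recommendation(rec: dict) -> tuple[bool, str]:
    """Symbolic guardrail layered on top of a model's recommendation."""
    if rec["supplier_id"] in EMBARGOED_SUPPLIERS:
        return False, "supplier is embargoed"
    if rec["category"] in CATEGORIES_REQUIRING_CERT and not rec.get("certified"):
        return False, "missing required certification"
    return True, "ok"
```

Because the rejection reason is an explicit string, every blocked recommendation is also explainable by construction.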
How can you decide which knowledge should be encoded with rules (symbolic AI), and which should be learned dynamically from the data (neural AI)? Domain experts can help you pick those bits of knowledge where hard-coding makes the most sense:
- Knowledge that is relatively stable over time
- Knowledge that is hard to infer from the data, for example because it is not well-represented
- Knowledge that is critical for high-impact decisions in your domain, so you can't afford to get it wrong
In most cases, this knowledge will be stored in separate components of your AI system, like decision trees, knowledge graphs, and ontologies. There are also some techniques to integrate it directly into LLMs and other statistical models, such as Lamini's memory fine-tuning.
Compound AI and workflow engineering
Producing insights and turning them into actions is a multi-step process. Experts can help you model workflows and decision-making pipelines, ensuring that the process followed by your AI system aligns with their tasks. For example, the following pipeline shows how the AI components we have considered so far can be combined into a workflow for the mitigation of shipment risks:
Experts are also needed to calibrate the "labour distribution" between humans and AI. For example, when modelling decision logic, they can set thresholds for automation, deciding when AI can trigger workflows versus when human approval is required.
✅ Implementation tip: Involve your domain experts in mapping your processes to AI models and assets, identifying gaps vs. steps that can already be automated.
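The thresholds the experts calibrate can then drive a simple routing rule in the workflow engine. A sketch where all numbers are illustrative placeholders for expert-set values:

```python
def route_decision(ai_confidence: float, impact_cost: float) -> str:
    """Expert-calibrated hand-off rule (thresholds are illustrative):
    automate only high-confidence, low-impact actions."""
    if ai_confidence >= 0.9 and impact_cost <= 10_000:
        return "auto_execute"
    if ai_confidence >= 0.6:
        return "human_review"
    return "escalate_to_expert"
```

Keeping this rule outside the model means experts can retune the thresholds as trust in the system grows, without retraining anything.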
Especially in B2B environments, where employees are deeply embedded in their daily workflows, the user experience must integrate seamlessly with existing processes and task structures to ensure efficiency and adoption. For example, an AI-powered supply chain tool must align with how logistics professionals think, work, and make decisions. In the development phase, domain experts are the closest "peers" to your real users, and picking their brains is one of the fastest ways to bridge the gap between AI capabilities and real-world usability.
✅ Implementation tip: Involve domain experts early in UX design to ensure AI interfaces are intuitive, relevant, and tailored to real decision-making workflows.
Ensuring transparency and trust in AI decisions
AI thinks differently from humans, which makes us humans sceptical. Often, that's a good thing since it helps us stay alert to potential errors. But mistrust is also one of the biggest barriers to AI adoption. When users don't understand why a system makes a particular recommendation, they are less likely to work with it. Domain experts can define how AI should explain itself — ensuring users have visibility into confidence scores, decision logic, and key influencing factors.
For example, if an SCO system recommends rerouting a shipment, it would be irresponsible of a logistics planner to just accept it. She needs to see the "why" behind the recommendation — is it due to supplier risk, port congestion, or fuel cost spikes? The UX should provide a breakdown of the decision, backed by additional information like historical data, risk factors, and a cost-benefit analysis.
⚠️ Mitigate overreliance on AI: Excessive dependence of your users on AI can introduce bias, errors, and unforeseen failures. Experts should find ways to balance AI-driven insights with human expertise, ethical oversight, and strategic safeguards to ensure resilience, adaptability, and trust in decision-making.
✅ Implementation tip: Work with domain experts to define key explainability features — such as confidence scores, data sources, and impact summaries — so users can quickly assess AI-driven recommendations.
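In practice, these explainability features can be served as a small, structured payload alongside each recommendation, which the UI then renders. A sketch — factor names and weights are invented:

```python
def explain(recommendation: str, factors: dict[str, float],
            confidence: float) -> dict:
    """Bundle a recommendation with its confidence score and the
    top contributing factors, sorted by weight."""
    top = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)[:3]
    return {
        "recommendation": recommendation,
        "confidence": round(confidence, 2),
        "key_factors": [name for name, _ in top],
    }

payload = explain(
    "Reroute shipment #12345 via Ho Chi Minh City",
    {"port_congestion": 0.6, "supplier_risk": 0.25,
     "fuel_cost": 0.1, "weather": 0.05},
    confidence=0.874,
)
```

Which factors make the shortlist, and how they are named for the user, is exactly where domain experts should have the final say.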
Simplifying AI interactions without losing depth
AI tools should make complex decisions easier, not harder. If users need deep technical knowledge to extract insights from AI, the system has failed from a UX perspective. Domain experts can help strike a balance between simplicity and depth, ensuring the interface provides actionable, context-aware recommendations while allowing deeper analysis when needed.
For instance, instead of forcing users to manually sift through data tables, AI could provide pre-configured reports based on common logistics challenges. However, expert users should also have on-demand access to raw data and advanced settings when necessary. The key is to design AI interactions that are efficient for everyday use but flexible for deep analysis when required.
✅ Implementation tip: Use domain expert feedback to define default views, priority alerts, and user-configurable settings, ensuring AI interfaces provide both efficiency for routine tasks and depth for deeper analysis and strategic decisions.
Continuous UX testing and iteration with experts
AI UX isn't a one-and-done process — it needs to evolve with real-world user feedback. Domain experts play a key role in UX testing, refinement, and iteration, ensuring that AI-driven workflows stay aligned with business needs and user expectations.
For example, your initial interface may surface too many low-priority alerts, leading to alert fatigue where users start ignoring AI recommendations. Supply chain experts can identify which alerts are most valuable, allowing UX designers to prioritise high-impact insights while reducing noise.
✅ Implementation tip: Conduct think-aloud sessions and have domain experts verbalise their thought process when interacting with your AI interface. This helps AI teams uncover hidden assumptions and refine AI based on how experts actually think and make decisions.
Vertical AI systems must integrate domain knowledge at every stage, and experts should become key stakeholders in your AI development:
- They refine data selection, annotation, and synthetic data.
- They guide AI learning through prompting, RAG, and fine-tuning.
- They support the design of seamless user experiences that integrate with daily workflows in a transparent and trustworthy way.
An AI system that "gets" the domain of your users will not only be useful and adopted in the short and middle term, but will also contribute to the competitive advantage of your business.
Now that you have learned a range of techniques for incorporating domain-specific knowledge, you might be wondering how to approach this in your organisational context. Stay tuned for my next article, where we will consider the practical challenges and strategies for implementing an expertise-driven AI strategy!
Note: Unless noted otherwise, all images are by the author.