We often need to investigate what's going on with KPIs: whether we're reacting to anomalies on our dashboards or simply doing a routine numbers update. Based on my years of experience as a KPI analyst, I'd estimate that more than 80% of these tasks are fairly standard and can be solved simply by following a straightforward checklist.
Here's a high-level plan for investigating a KPI change (you can find more details in the article "Anomaly Root Cause Analysis 101"):
- Estimate the top-line change in the metric to understand the magnitude of the shift.
- Check data quality to make sure that the numbers are accurate and reliable.
- Gather context about internal and external events that might have influenced the change.
- Slice and dice the metric to identify which segments are contributing to the shift.
- Consolidate your findings in an executive summary that includes hypotheses and estimates of their impacts on the main KPI.
Since we have a clear plan to execute, such tasks can potentially be automated using AI agents. The code agents we recently discussed could be a good fit here, as their ability to write and execute code helps them analyse data efficiently, with minimal back-and-forth. So, let's try building such an agent using the HuggingFace smolagents framework.
While working on our task, we'll discuss more advanced features of the smolagents framework:
- Techniques for tweaking all kinds of prompts to ensure the desired behaviour.
- Building a multi-agent system that can explain KPI changes and link them to root causes.
- Adding reflection to the flow with supplementary planning steps.
MVP for explaining KPI changes
As usual, we'll take an iterative approach and start with a simple MVP, focusing on the slicing and dicing step of the analysis. We will analyse the changes of a simple metric (revenue) split by one dimension (country). We will use the dataset from my earlier article, "Making sense of KPI changes".
Let's load the data first.
import pandas as pd

raw_df = pd.read_csv('absolute_metrics_example.csv', sep = '\t')
df = raw_df.groupby('country')[['revenue_before', 'revenue_after_scenario_2']].sum()\
    .sort_values('revenue_before', ascending = False).rename(
        columns = {'revenue_after_scenario_2': 'after',
            'revenue_before': 'before'})
Subsequent, let’s initialise the mannequin. I’ve chosen the OpenAI GPT-4o-mini as my most popular possibility for easy duties. Nonetheless, the smolagents framework supports every kind of fashions, so you need to use the mannequin you like. Then, we simply have to create an agent and provides it the duty and the dataset.
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="openai/gpt-4o-mini",
    api_key=config['OPENAI_API_KEY'])

agent = CodeAgent(
    model=model, tools=[], max_steps=10,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*",
        "plotly.*"], verbosity_level=1
)
task = """
Here is a dataframe showing revenue by segment, comparing values
before and after.
Could you please help me understand the changes? Specifically:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total
change in revenue.

Please round all floating-point numbers in the output
to two decimal places.
"""

agent.run(
    task,
    additional_args={"data": df},
)
The agent returned quite a plausible result. We got detailed statistics on the metric changes in each segment and their impact on the top-line KPI.
{'total_before': 1731985.21, 'total_after':
1599065.55, 'total_change': -132919.66, 'segment_changes':
{'absolute_change': {'other': 4233.09, 'UK': -4376.25, 'France':
-132847.57, 'Germany': -690.99, 'Italy': 979.15, 'Spain':
-217.09}, 'percentage_change': {'other': 0.67, 'UK': -0.91,
'France': -55.19, 'Germany': -0.43, 'Italy': 0.81, 'Spain':
-0.23}, 'contribution_to_change': {'other': -3.18, 'UK': 3.29,
'France': 99.95, 'Germany': 0.52, 'Italy': -0.74, 'Spain': 0.16}}}
Let’s check out the code generated by the agent. It’s positive, however there’s one potential challenge. The Llm recreated the dataframe based mostly on the enter knowledge as a substitute of referencing it straight. This method just isn’t excellent (particularly when working with large datasets), as it could possibly result in errors and better token utilization.
import pandas as pd

# Creating the DataFrame from the provided data
data = {
    'before': [632767.39, 481409.27, 240704.63, 160469.75,
        120352.31, 96281.86],
    'after': [637000.48, 477033.02, 107857.06, 159778.76,
        121331.46, 96064.77]
}
index = ['other', 'UK', 'France', 'Germany', 'Italy', 'Spain']
df = pd.DataFrame(data, index=index)

# Calculating total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()

# Calculating absolute and percentage change for each segment
df['absolute_change'] = df['after'] - df['before']
df['percentage_change'] = (df['absolute_change'] /
    df['before']) * 100

# Calculating total revenue change
total_change = total_after - total_before

# Calculating contribution of each segment to the total change
df['contribution_to_change'] = (df['absolute_change'] /
    total_change) * 100

# Rounding results
df = df.round(2)

# Printing the calculated results
print("Total revenue before:", total_before)
print("Total revenue after:", total_after)
print("Total change in revenue:", total_change)
print(df)
It's worth fixing this problem before moving on to building a more complex system.
Tweaking prompts
Since the LLM is simply following the instructions given to it, we'll address this issue by tweaking the prompt.
Initially, I attempted to make the task prompt more explicit, clearly instructing the LLM to use the provided variable.
task = """Here is a dataframe showing revenue by segment, comparing
values before and after. The data is stored in the df variable.
Please, use it and don't try to parse the data yourself.

Could you please help me understand the changes?
Specifically:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total change in revenue.

Please round all floating-point numbers in the output to two decimal places.
"""
It didn't work. So, the next step is to examine the system prompt and see why it works this way.
print(agent.prompt_templates['system_prompt'])

#...
# Here are the rules you should always follow to solve your task:
# 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```' sequence, else you will fail.
# 2. Use only variables that you have defined!
# 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
# 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
# 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
# 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
# 7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
# 8. You can use imports in your code, but only from the following list of modules: ['collections', 'datetime', 'itertools', 'math', 'numpy', 'pandas', 'queue', 'random', 're', 'stat', 'statistics', 'time', 'unicodedata']
# 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
# 10. Don't give up! You're in charge of solving the task, not providing directions to solve it.

# Now Begin!
At the end of the prompt, we have the instruction "# 2. Use only variables that you have defined!". This might be interpreted as a strict rule not to use any other variables. So, I changed it to "# 2. Use only variables that you have defined or ones provided in additional arguments! Never try to copy and parse additional arguments."
modified_system_prompt = agent.prompt_templates['system_prompt']\
    .replace(
        '2. Use only variables that you have defined!',
        '2. Use only variables that you have defined or ones provided in additional arguments! Never try to copy and parse additional arguments.'
    )

agent.prompt_templates['system_prompt'] = modified_system_prompt
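A quick optional sanity check can help here, since str.replace silently returns the original string when the target doesn't match exactly:

# Optional sanity check: str.replace does nothing (without raising)
# when the target string isn't found, so confirm the new wording landed.
assert 'additional arguments' in agent.prompt_templates['system_prompt']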
This change alone didn't help either. Then, I examined the task message.
╭─────────────────────────── New run ────────────────────────────╮
│                                                                 │
│ Here is a pandas dataframe showing revenue by segment,         │
│ comparing values before and after.                             │
│ Could you please help me understand the changes?               │
│ Specifically:                                                  │
│ 1. Estimate how the total revenue and the revenue for each     │
│ segment have changed, both in absolute terms and as a          │
│ percentage.                                                    │
│ 2. Calculate the contribution of each segment to the total     │
│ change in revenue.                                             │
│                                                                 │
│ Please round all floating-point numbers in the output to two   │
│ decimal places.                                                 │
│                                                                 │
│ You have been provided with these additional arguments, that   │
│ you can access using the keys as variables in your python      │
│ code:                                                          │
│ {'df':            before      after                            │
│ country                                                        │
│ other     632767.39  637000.48                                 │
│ UK        481409.27  477033.02                                 │
│ France    240704.63  107857.06                                 │
│ Germany   160469.75  159778.76                                 │
│ Italy     120352.31  121331.46                                 │
│ Spain      96281.86   96064.77}.                               │
│                                                                 │
╰─ LiteLLMModel - openai/gpt-4o-mini ────────────────────────────╯
It has an instruction related to the usage of additional arguments: "You have been provided with these additional arguments, that you can access using the keys as variables in your python code". We can try to make it more specific and clear. Unfortunately, this parameter is not exposed externally, so I had to locate it in the source code. To find the path of a Python package, we can use the following code.
import smolagents
print(smolagents.__path__)
Then, I found the agents.py file and modified this line to include a more specific instruction.
self.task += f"""
You have been provided with these additional arguments available as variables
with names {",".join(additional_args.keys())}. You can access them directly.
Here is what they contain (just for informational purposes):
{str(additional_args)}."""
It was a bit of a hack, but that's sometimes what happens with LLM frameworks. Don't forget to reload the package afterwards, and we're good to go. Let's test whether it works now.
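If you'd rather not restart the kernel, a minimal sketch of reloading the edited module could look like this (restarting the session is still the most reliable option):

import importlib
import smolagents
import smolagents.agents

# Re-execute the edited agents.py and refresh the top-level package exports
importlib.reload(smolagents.agents)
importlib.reload(smolagents)

# Re-import the classes we use so they point at the reloaded definitions
from smolagents import CodeAgent, LiteLLMModel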
task = """
Here is a pandas dataframe showing revenue by segment, comparing values
before and after.

Your task will be to understand the changes to the revenue (after vs before)
in different segments and provide an executive summary.

Please, follow these steps:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total change
in revenue.

Round all floating-point numbers in the output to two decimal places.
"""
agent.logger.level = 1 # Lower the verbosity level

agent.run(
    task,
    additional_args={"df": df},
)
Hooray! The problem has been fixed. The agent no longer copies the input variables and references the df variable directly instead. Here's the newly generated code.
import pandas as pd

# Calculate total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()
total_change = total_after - total_before
percentage_change_total = (total_change / total_before * 100) if total_before != 0 else 0

# Round values
total_before = round(total_before, 2)
total_after = round(total_after, 2)
total_change = round(total_change, 2)
percentage_change_total = round(percentage_change_total, 2)

# Display results
print(f"Total Revenue Before: {total_before}")
print(f"Total Revenue After: {total_after}")
print(f"Total Change: {total_change}")
print(f"Percentage Change: {percentage_change_total}%")
Now, we're ready to move on to building the actual agent that will solve our task.
AI agent for KPI narratives
Finally, it's time to work on the AI agent that will help us explain KPI changes and create an executive summary.
Our agent will follow this plan for the root cause analysis:
- Estimate the top-line KPI change.
- Slice and dice the metric to understand which segments are driving the shift.
- Look up events in the change log to see whether they can explain the metric changes.
- Consolidate all the findings into a comprehensive executive summary.
After a lot of experimentation and several tweaks, I've arrived at a promising result. Here are the key adjustments I made (we'll discuss them in detail later):
- I leveraged the multi-agent setup by adding another team member: the change log agent, which can access the change log and help explain KPI changes.
- I experimented with more powerful models like gpt-4o and gpt-4.1-mini, since gpt-4o-mini wasn't sufficient. Using stronger models not only improved the results, but also significantly reduced the number of steps: with gpt-4.1-mini I got the final result after just six steps, compared to 14–16 steps with gpt-4o-mini. This suggests that investing in more expensive models might be worthwhile for agentic workflows.
- I provided the agent with a complex tool to analyse KPI changes for simple metrics. The tool performs all the calculations, while the LLM simply interprets the results. I discussed the approach to analysing KPI changes in detail in my previous article.
- I reformulated the prompt into a very clear step-by-step guide to help the agent stay on track.
- I added planning steps that encourage the LLM agent to think through its approach first and revisit the plan every three iterations.
After all the adjustments, I got the following summary from the agent, which is pretty good.
Executive Summary:

Between April 2025 and May 2025, total revenue declined sharply by
approximately 36.03%, falling from 1,731,985.21 to 1,107,924.43, a
drop of -624,060.78 in absolute terms.
This decline was primarily driven by significant revenue
reductions in the 'new' customer segments across multiple
countries, with declines of approximately 70% in these segments.

The most impacted segments include:
- other_new: before=233,958.42, after=72,666.89,
abs_change=-161,291.53, rel_change=-68.94%, share_before=13.51%,
impact=25.85, impact_norm=1.91
- UK_new: before=128,324.22, after=34,838.87,
abs_change=-93,485.35, rel_change=-72.85%, share_before=7.41%,
impact=14.98, impact_norm=2.02
- France_new: before=57,901.91, after=17,443.06,
abs_change=-40,458.85, rel_change=-69.87%, share_before=3.34%,
impact=6.48, impact_norm=1.94
- Germany_new: before=48,105.83, after=13,678.94,
abs_change=-34,426.89, rel_change=-71.56%, share_before=2.78%,
impact=5.52, impact_norm=1.99
- Italy_new: before=36,941.57, after=11,615.29,
abs_change=-25,326.28, rel_change=-68.56%, share_before=2.13%,
impact=4.06, impact_norm=1.91
- Spain_new: before=32,394.10, after=7,758.90,
abs_change=-24,635.20, rel_change=-76.05%, share_before=1.87%,
impact=3.95, impact_norm=2.11

Based on analysis of the change log, the main causes for this
trend are:
1. The introduction of new onboarding controls implemented on May
8, 2025, which reduced new customer acquisition by about 70% to
prevent fraud.
2. A postal service strike in the UK starting April 5, 2025,
causing order delivery delays and increased cancellations
impacting the UK new segment.
3. An increase in VAT by 2% in Spain as of April 22, 2025,
affecting new customer pricing and causing higher cart
abandonment.

These factors combined explain the outsized negative impacts
observed in the new customer segments and the overall revenue decline.
The LLM agent also generated a bunch of illustrative charts (they were part of our growth explaining tool). For example, this one shows the impacts across the combination of country and maturity.

The results look really exciting. Now let's dive deeper into the actual implementation to understand how it works under the hood.
Multi-AI agent setup
We will start with our change log agent. This agent will query the change log and try to identify potential root causes for the metric changes we observe. Since this agent doesn't need to perform complex operations, we can implement it as a ToolCallingAgent. Because this agent will be called by another agent, we need to define its name and description attributes.
from smolagents import ToolCallingAgent, tool

@tool
def get_change_log(month: str) -> str:
    """
    Returns the change log (list of internal and external events that might have affected our KPIs) for the given month

    Args:
        month: month in the format %Y-%m-01, for example, 2025-04-01
    """
    return events_df[events_df.month == month].drop('month', axis = 1).to_dict('records')
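The events_df change log itself isn't shown here; below is a minimal sketch of what it might look like, with placeholder rows based on the events that surface later in the summary (the real table ships with the article's dataset).

import pandas as pd

# Hypothetical change log; treat the rows and the schema (month, event, description)
# as placeholders for the dataset used in the article.
events_df = pd.DataFrame([
    {'month': '2025-05-01', 'event': 'New onboarding controls',
     'description': 'Stricter anti-fraud checks reduced new customer acquisition by ~70%.'},
    {'month': '2025-04-01', 'event': 'Postal service strike in the UK',
     'description': 'Delivery delays led to increased cancellations for UK orders.'},
    {'month': '2025-04-01', 'event': 'VAT increase in Spain',
     'description': 'A 2% VAT increase caused higher cart abandonment among new customers.'},
])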
model = LiteLLMModel(model_id="openai/gpt-4.1-mini", api_key=config['OPENAI_API_KEY'])

change_log_agent = ToolCallingAgent(
    tools=[get_change_log],
    model=model,
    max_steps=10,
    name="change_log_agent",
    description="Helps you find the relevant information in the change log that can explain changes in metrics. Provide the agent with all the context to receive data",
)
Since the manager agent will be calling this agent, we won't have any control over the query it receives. Therefore, I decided to modify the system prompt to include additional context.
change_log_system_prompt = '''
You're a master of the change log and you help others explain
changes to metrics. When you receive a request, look up the list of events
that happened in the requested month, then filter the relevant information based
on the provided context and return it. Prioritise the most probable factors
affecting the KPI and limit your answer to them only.
'''

modified_system_prompt = change_log_agent.prompt_templates['system_prompt'] \
    + '\n\n\n' + change_log_system_prompt

change_log_agent.prompt_templates['system_prompt'] = modified_system_prompt
To enable the primary agent to delegate tasks to the change log agent, we simply need to specify it in the managed_agents field.
agent = CodeAgent(
    model=model,
    tools=[calculate_simple_growth_metrics],
    max_steps=20,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*", "plotly.*"],
    verbosity_level = 2,
    planning_interval = 3,
    managed_agents = [change_log_agent]
)
Let's see how it works. First, we can look at the new system prompt for the primary agent. It now includes information about team members and instructions on how to ask them for help.
You can also give tasks to team members.
Calling a team member works the same as calling a tool: simply,
the only argument you can give in the call is 'task'.
Given that this team member is a real human, you should be very verbose
in your task, it should be a long string providing informations
as detailed as necessary.
Here is a list of the team members that you can call:
```python
def change_log_agent("Your query goes here.") -> str:
    """Helps you find the relevant information in the change log that
can explain changes in metrics. Provide the agent with all the context
to receive data"""
```
The execution log shows that the primary agent successfully delegated the task to the second agent and received the following response.
─ Executing parsed code: ───────────────────────────────────────
  # Query change_log_agent with the detailed task description prepared
  context_for_change_log = (
      "We analyzed changes in revenue from April 2025 to May
  2025. We found large decreases "
      "mainly in the 'new' maturity segments across countries:
  Spain_new, UK_new, Germany_new, France_new, Italy_new, and
  other_new. "
      "The revenue fell by around 70% in these segments, which
  have an outsized negative impact on the total revenue change. "
      "We want to know the 1-3 most probable reasons for this
  significant drop in revenue in the 'new' customer segments
  during this period."
  )

  explanation = change_log_agent(task=context_for_change_log)
  print("Change log agent explanation:")
  print(explanation)
 ────────────────────────────────────────────────────────────────
╭──────────────────── New run - change_log_agent ─────────────────────╮
│                                                                      │
│ You're a helpful agent named 'change_log_agent'.                     │
│ You have been submitted this task by your manager.                   │
│ ---                                                                  │
│ Task:                                                                │
│ We analyzed changes in revenue from April 2025 to May 2025.          │
│ We found large decreases mainly in the 'new' maturity segments       │
│ across countries: Spain_new, UK_new, Germany_new, France_new,        │
│ Italy_new, and other_new. The revenue fell by around 70% in these    │
│ segments, which have an outsized negative impact on the total        │
│ revenue change. We want to know the 1-3 most probable reasons for    │
│ this significant drop in revenue in the 'new' customer segments      │
│ during this period.                                                  │
│ ---                                                                  │
│ You're helping your manager solve a wider task: so make sure to      │
│ not provide a one-line answer, but give as much information as       │
│ possible to give them a clear understanding of the answer.           │
│                                                                      │
│ Your final_answer WILL HAVE to contain these parts:                  │
│ ### 1. Task outcome (short version):                                 │
│ ### 2. Task outcome (extremely detailed version):                    │
│ ### 3. Additional context (if relevant):                             │
│                                                                      │
│ Put all these in your final_answer tool, everything that you do      │
│ not pass as an argument to final_answer will be lost.                │
│ And even if your task resolution is not successful, please return    │
│ as much context as possible, so that your manager can act upon       │
│ this feedback.                                                       │
│                                                                      │
╰─ LiteLLMModel - openai/gpt-4.1-mini ────────────────────────────────╯
Using the smolagents framework, we can easily set up a simple multi-agent system, where a manager agent coordinates and delegates tasks to team members with specific skills.
Iterating on the prompt
We started with a very high-level prompt outlining the goal and a vague direction, but unfortunately, it didn't work consistently. LLMs are not smart enough yet to figure out the approach on their own. So, I created a detailed step-by-step prompt describing the whole plan and including the detailed specification of the growth narrative tool we're using. The prompt references a tools_description string; a sketch of how it might be built is shown below.
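The exact wording I used for tools_description isn't shown here, but a minimal sketch, assuming it's simply assembled from the tool metadata exposed by smolagents, could look like this:

# Hypothetical helper: build a short, human-readable listing of the available
# tools from their metadata, so the prompt can reference them by name.
tools_description = "\n".join(
    f"- {t.name}: {t.description}"
    for t in [calculate_simple_growth_metrics]
)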
task = """
Here is a pandas dataframe showing the revenue by segment, comparing values
before (April 2025) and after (May 2025).

You're a senior and experienced data analyst. Your task will be to understand
the changes to the revenue (after vs before) in different segments
and provide an executive summary.

## Follow the plan:
1. Start by identifying the list of dimensions (columns in the dataframe that
are not "before" and "after").
2. There might be multiple dimensions in the dataframe. Start high-level
by looking at each dimension in isolation, and combine all results
together into the list of segments analysed (don't forget to save
the dimension used for each segment).
Use the provided tools to analyse the changes of metrics: {tools_description}.
3. Analyse the results from the previous step and keep only the segments
that have an outsized impact on the KPI change (the absolute value of impact_norm
is above 1.25).
4. Check which dimensions are present in the list of significant segments;
if there are multiple ones - execute the tool on their combinations
and add them to the analysed segments. If, after adding an additional dimension,
all subsegments show close difference_rate and impact_norm values,
then we can exclude this split (even though impact_norm is above 1.25),
since it doesn't explain anything.
5. Summarise the significant changes you identified.
6. Try to explain what is going on with the metrics by getting information
from the change_log_agent. Please, provide the agent with the full context
(which segments have an outsized impact, what the relative change is and
what period we are looking at).
Summarise the information from the change log and mention
only the 1-3 most probable causes of the KPI change
(starting from the most impactful one).
7. Put together a 3-5 sentence commentary on what happened high-level
and why (based on the information received from the change log).
Then follow it up with a more detailed summary:
- Top-line total value of the metric before and after in human-readable format,
absolute and relative change
- List of segments that meaningfully influenced the metric positively
or negatively with the following numbers: values before and after,
absolute and relative change, share of segment before, impact
and normed impact. Order the segments by the absolute value
of the absolute change, since it represents the power of the impact.

## Instruction on the calculate_simple_growth_metrics tool:
By default, you should use the tool on the whole dataset, not a segment,
since it will give you the full information about the changes.

Here is the guidance on how to interpret the output of the tool:
- difference - the absolute difference between the after and before values
- difference_rate - the relative difference (if it's close for
all segments, then the dimension is not informative)
- impact - the share of the KPI difference explained by this segment
- segment_share_before - the share of the segment before
- impact_norm - impact normed by the share of the segment, we are interested
in very high or very low numbers since they show an outsized impact,
rule of thumb - impact_norm between -1.25 and 1.25 is not informative

If you're using the tool on a subset of the dataframe, keep in mind
that the results won't be applicable to the full dataset, so avoid doing this
unless you explicitly want to look at a subset (i.e. the change in France).
If you decide to use the tool on a particular segment
and share these results in the executive summary, explicitly outline
that we are diving deeper into a particular segment.
""".format(tools_description = tools_description)
agent.run(
    task,
    additional_args={"df": df},
)
Explaining everything in such detail was quite a daunting task, but it's necessary if we want consistent results.
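For reference, the calculate_simple_growth_metrics tool itself isn't listed in this article (the underlying approach is covered in my previous article). Here's a rough, simplified sketch of the core calculation such a tool could perform, matching the output fields described in the prompt above; the actual tool also builds the illustrative charts and handles more edge cases.

import pandas as pd

# Simplified sketch of a growth-metrics helper for an additive KPI;
# in the real setup it is registered as a smolagents tool.
def calculate_simple_growth_metrics(df: pd.DataFrame, dimension: str) -> pd.DataFrame:
    stats = df.groupby(dimension)[['before', 'after']].sum()
    total_change = stats['after'].sum() - stats['before'].sum()
    stats['difference'] = stats['after'] - stats['before']
    # relative change of each segment, in %
    stats['difference_rate'] = (100 * stats['difference'] / stats['before']).round(2)
    # share of the total KPI change explained by the segment, in %
    stats['impact'] = (100 * stats['difference'] / total_change).round(2)
    # segment size before the change, in %
    stats['segment_share_before'] = (100 * stats['before'] / stats['before'].sum()).round(2)
    # impact normed by segment size: values outside [-1.25, 1.25] signal an outsized impact
    stats['impact_norm'] = (stats['impact'] / stats['segment_share_before']).round(2)
    return stats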
Planning steps
The smolagents framework lets you add planning steps to your agentic flow. This encourages the agent to start with a plan and update it after the specified number of steps. From my experience, this reflection is very helpful for maintaining focus on the problem and adjusting actions to stay aligned with the initial plan and goal. I definitely recommend using it in cases where complex reasoning is required.
Setting it up is as easy as specifying planning_interval = 3 for the code agent.
agent = CodeAgent(
    model=model,
    tools=[calculate_simple_growth_metrics],
    max_steps=20,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*", "plotly.*"],
    verbosity_level = 2,
    planning_interval = 3,
    managed_agents = [change_log_agent]
)
That's it. Then, the agent provides reflections, starting with thinking through the initial plan.
────────────────────────── Initial plan ──────────────────────────
Here are the facts I know and the plan of action that I will
follow to solve the task:
```
## 1. Facts survey

### 1.1. Facts given in the task
- We have a pandas dataframe `df` showing revenue by segment, for
two time points: before (April 2025) and after (May 2025).
- The dataframe columns include:
  - Dimensions: `country`, `maturity`, `country_maturity`,
`country_maturity_combined`
  - Metrics: `before` (revenue in April 2025), `after` (revenue in
May 2025)
- The task is to understand the changes in revenue (after vs
before) across different segments.
- Key instructions and tools provided:
  - Identify all dimensions except before/after for segmentation.
  - Analyze each dimension independently using
`calculate_simple_growth_metrics`.
  - Filter segments with outsized impact on KPI change (absolute
normed impact > 1.25).
  - Examine combinations of dimensions if multiple dimensions have
significant segments.
  - Summarize significant changes and engage `change_log_agent`
for contextual reasons.
  - Provide a final executive summary including top-line changes
and segment-level detailed impacts.
- Dataset snippet shows segments combining countries (`France`,
`UK`, `Germany`, `Italy`, `Spain`, `other`) and maturity status
(`new`, `existing`).
- The combined segments are uniquely identified in the columns
`country_maturity` and `country_maturity_combined`.

### 1.2. Facts to look up
- Definitions or descriptions of the segments if unclear (e.g.,
what defines `new` vs `existing` maturity).
  - Likely not required to proceed, but could be requested from
business documentation or the change log.
- Additional details on the change log (accessible via
`change_log_agent`) that could provide probable causes for revenue
changes.
- Confirmation on handling combined dimension splits - how exactly
`country_maturity_combined` is formed and should be interpreted in
combined dimension analysis.
- Data dictionary or description of metrics if any additional KPI
besides revenue is relevant (unlikely given the data).
- Dates confirm the period of analysis: April 2025 (before) and
May 2025 (after). No need to look these up since given.

### 1.3. Facts to derive
- Identify all dimension columns available for segmentation:
  - By excluding 'before' and 'after', likely candidates are
`country`, `maturity`, `country_maturity`, and
`country_maturity_combined`.
- For each dimension, calculate change metrics using the given
tool:
  - Absolute and relative difference in revenue per segment.
  - Impact, segment share before, and normed impact for each
segment.
- Identify which segments have an outsized impact on the KPI
change (|impact_norm| > 1.25).
- If multiple dimensions have significant segments, combine
dimensions (e.g., country + maturity) and reanalyze.
- Determine if combined dimension splits provide meaningful
differentiation or not, based on difference rate and impact_norm
consistency.
- Summarize the direction and magnitude of KPI changes at top-line
level (aggregate revenue before and after).
- Identify the top segments driving positive and negative changes,
ordered by the absolute value of absolute_change.
- Gather contextual insights from the change log agent regarding
probable causes tied to significant segments and the May 2025 vs
April 2025 period.

## 2. Plan
1. Identify all dimension columns present in the dataframe by
listing columns and excluding 'before' and 'after'.
2. For each dimension identified (`country`, `maturity`,
`country_maturity`, `country_maturity_combined`):
   - Use `calculate_simple_growth_metrics` on the full dataframe
grouped by that dimension.
   - Extract segments with calculated metrics including
impact_norm.
3. Aggregate results from all single-dimension analyses and filter
segments where |impact_norm| > 1.25.
4. Determine which dimensions these significant segments belong
to.
5. If more than one dimension is represented in these significant
segments, analyze the combined dimension formed by these
dimensions (for example, the combination of `country` and
`maturity`, or use the existing combined dimension columns).
6. Repeat the metric calculation using
`calculate_simple_growth_metrics` on the combined dimension.
7. Examine whether the combined dimension splits create meaningful
differentiation - if all subsegments show close difference_rate
and impact_norm, exclude the split.
8. Prepare a summary of significant changes:
   - Top-line KPIs before and after (absolute and relative
changes).
   - List of impactful segments sorted by absolute absolute_change
that influenced overall revenue.
9. Provide the list of segments with details (values before,
after, absolute and relative change, share before, impact,
impact_norm).
10. Using this summarized information, query `change_log_agent`
with the full context:
    - Include significant segments, their relative changes, and
the period (April to May 2025).
11. Process the agent's response to identify the 1-3 most probable
causes of the KPI changes.
12. Draft the executive summary commentary:
    - High-level overview of what happened and why, based on the
change log facts.
    - Detailed summary including top-line changes and
segment-level metric impacts.
13. Deliver the final answer using the `final_answer` tool,
containing the above executive summary and data-driven insights.
```
Then, after every three steps, the agent revisits and updates the plan.
────────────────────────── Updated plan ──────────────────────────
I still need to solve the task I was given:
```
Here is a pandas dataframe showing the revenue by segment,
comparing values before (April 2025) and after (May 2025).

You're a senior and experienced data analyst. Your task will be
to understand the changes to the revenue (after vs before) in
different segments
and provide an executive summary.
<... repeating the full initial task ...>
```
Here are the facts I know and my new/updated plan of action to
solve the task:
```
## 1. Updated facts survey

### 1.1. Facts given in the task
- We have a pandas dataframe with revenue by segment, showing
values "before" (April 2025) and "after" (May 2025).
- Columns in the dataframe include several dimensions and the
"before" and "after" revenue values.
- The goal is to understand revenue changes by segment and provide
an executive summary.
- Guidance and rules about how to analyze and interpret results
from the `calculate_simple_growth_metrics` tool are provided.
- The dataframe contains columns: country, maturity,
country_maturity, country_maturity_combined, before, after.

### 1.2. Facts that we have learned
- The dimensions to analyze are: country, maturity,
country_maturity, and country_maturity_combined.
- Analyzed revenue changes by dimension.
- Only the "new" maturity segment has significant impact
(impact_norm=1.96 > 1.25), with a large negative revenue change (~
-70.6%).
- In the combined segment "country_maturity," the "new" segments
across countries (Spain_new, UK_new, Germany_new, France_new,
Italy_new, other_new) all have outsized negative impacts with
impact_norm values all above 1.9.
- The mature/existing segments in these countries have smaller
normed impacts below 1.25.
- Country-level and maturity-level segment dimensions alone are
less revealing than the combined country+maturity segment
dimension, which highlights the new segments as strongly impactful.
- Total revenue dropped significantly from before to after, largely
driven by new segments shrinking drastically.

### 1.3. Facts still to look up
- Whether splitting the data by additional dimensions beyond
country and maturity (e.g., country_maturity_combined) explains
further heterogeneous impacts or if the pattern is uniform.
- Explanation/context from the change log about what caused the
major drop predominantly in new segments in all countries.
- Confirming whether any country within the new segment behaved
differently or mitigated losses.

### 1.4. Facts still to derive
- A concise executive summary describing the top-level revenue
change and identifying which segments explain the declines.
- An explanation involving the change log agent with a summary of
probable causes for these outsized reductions in revenue in the
new segments across countries for April-May 2025.

## 2. Plan
### 2.1. Verify whether adding the additional dimension
'country_maturity_combined' splits the impactful "new" segments
into subsegments with significantly different impacts or if the
change rates and normed impacts are relatively homogeneous. If
homogeneous, we don't gain deeper insight and can disregard
further splitting.
### 2.2. Summarize all significant segments identified with
outsized impact_norm ≥ 1.25, including their before and after
values, absolute and relative changes, segment shares before,
impact, and normalized impact, ordered by absolute value of the
change.
### 2.3. Query the change_log_agent with the full context:
significant segments are the new country_maturity segments with
large negative changes (~ -70%), timeframe April 2025 to May 2025,
and request the top 1-3 most probable causes for the KPI revenue
drop in these segments.
### 2.4. Based on the change log agent's response, synthesize a
3-5 sentence high-level commentary explaining what happened
broadly and why.
### 2.5. Draft a detailed executive summary including:
- Total revenue before and after in human-readable format with
absolute and relative change.
- A list of significant segments driving these changes, ordered
by absolute impact, with detailed numbers (before, after, absolute
and relative change, segment share before, impact, normed impact).
### 2.6. Use the `final_answer` tool to provide the finalized
executive summary report.
```
I really like how the agent is encouraged to reiterate on the initial task and stay focused on the main problem. Regular reflection like this is helpful in real life as well, as teams often get bogged down in the process and lose sight of the why behind what they're doing. It's quite cool to see managerial best practices being integrated into agentic frameworks.
That's it! We've built a code agent capable of analysing KPI changes for simple metrics, and explored all the key nuances of the process along the way.
You can find the complete code and execution logs on GitHub.
Summary
We've experimented a lot with code agents and are now ready to draw conclusions. For our experiments, we used the HuggingFace smolagents framework for code agents, a very handy toolset that provides:
- easy integration with different LLMs, from local models via Ollama to public providers like Anthropic or OpenAI (a quick sketch of a local setup follows this list),
- perfect logging that makes it easy to understand the agent's whole thought process and debug issues,
- the ability to build complex systems leveraging multi-AI agent setups or planning features without much effort.
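As an illustration of that flexibility, here's a minimal sketch of swapping in a locally served model; the model_id and api_base are assumptions that depend on your Ollama setup.

from smolagents import CodeAgent, LiteLLMModel

# Hypothetical local setup: assumes an Ollama server on the default port
# with a llama3.1 model pulled; adjust model_id and api_base to your environment.
local_model = LiteLLMModel(
    model_id="ollama_chat/llama3.1",
    api_base="http://localhost:11434",
)

local_agent = CodeAgent(model=local_model, tools=[], max_steps=10)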
While smolagents is currently my favourite agentic framework, it has its limitations:
- It can lack flexibility at times. For example, I had to modify the prompt directly in the source code to get the behaviour I wanted.
- It only supports a hierarchical multi-agent setup (where one manager can delegate tasks to other agents), but doesn't cover sequential workflows or consensus-based decision-making processes.
- There's no support for long-term memory out of the box, meaning you're starting from scratch with every task.
Thank you a lot for reading this article. I hope it was insightful for you.
Reference
This article is inspired by the "Building Code Agents with Hugging Face smolagents" short course by DeepLearning.AI.