    Time Series Forecasting Made Simple (Part 2): Customizing Baseline Models

By FinanceStarGate | May 9, 2025


Thank you for the kind response to Part 1; it's been encouraging to see so many readers interested in time series forecasting.

In Part 1 of this series, we broke down time series data into trend, seasonality, and noise, discussed when to use additive versus multiplicative models, and built a Seasonal Naive baseline forecast using the Daily Minimum Temperatures data. We evaluated its performance using MAPE (Mean Absolute Percentage Error), which came out to 28.23%.

While the Seasonal Naive model captured the broad seasonal pattern, we also saw that it may not be the best fit for this dataset, since it doesn't account for subtle shifts in seasonality or long-term trends. This highlights the need to go beyond basic baselines and customize forecasting models to better reflect the underlying data for improved accuracy.

When we applied the Seasonal Naive baseline model, we didn't account for the trend or use any mathematical formula; we simply predicted each value based on the same day from the previous year.

First, let's take a look at the table below, which outlines some common baseline models and when to use each.

Table: Common baseline forecasting models, their descriptions, and when to use each based on data patterns.
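As a rough guide, the commonly used options include:
• Naive: repeat the last observed value; a sensible default when the data shows no clear trend or seasonality.
• Seasonal Naive: repeat the value from the same point in the previous season; suited to data with strong, stable seasonality.
• Mean (Average): forecast the historical average; suited to series that fluctuate around a fairly constant level.
• Drift: extend the line connecting the first and last observations; suited to data with a steady trend but little seasonality.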

These are some of the most commonly used baseline models across various industries.

But what if the data shows both trend and seasonality? In such cases, these simple baseline models might not be enough. As we saw in Part 1, the Seasonal Naive model struggled to fully capture the patterns in the data, resulting in a MAPE of 28.23%.

So, should we jump straight to ARIMA or another complex forecasting model?

Not necessarily.

Before reaching for advanced tools, we can first build our own baseline model based on the structure of the data. This gives us a stronger benchmark, and often it's enough to decide whether a more sophisticated model is even needed.

Now that we've examined the structure of the data, which clearly includes both trend and seasonality, we can build a baseline model that takes both components into account.

In Part 1, we used the seasonal decompose method in Python to visualize the trend and seasonality in our data. Now, we'll take this a step further by actually extracting the trend and seasonal components from that decomposition and using them to build a baseline forecast.

Decomposition of daily temperatures showing trend, seasonal cycles and random fluctuations.

But before we get started, let's see how the seasonal decompose method figures out the trend and seasonality in our data.

Before using the built-in function, let's take a small sample from our temperature data and manually work through how the seasonal_decompose method separates trend, seasonality and residuals.

This will help us understand what's really happening behind the scenes.

Sample from the Temperature Data

Here, we consider a 14-day sample from the temperature dataset to better understand how decomposition works step by step.

We already know that this dataset follows an additive structure, which means each observed value is made up of three parts:

Observed Value = Trend + Seasonality + Residual

First, let's look at how the trend is calculated for this sample.
We'll use a 3-day centered moving average, which means each value is averaged with its immediate neighbor on either side. This helps smooth out day-to-day variations in the data.

For example, to calculate the trend for January 2, 1981:
Trend = (20.7 + 17.9 + 18.8) / 3
= 19.13

This way, we calculate the trend component for all 14 days in the sample.

Here's the table showing the 3-day centered moving average trend values for each day in our 14-day sample.

As we can see, the trend values for the first and last dates are 'NaN' because there aren't enough neighboring values to calculate a centered average at those points.

We'll revisit these missing values once we finish computing the seasonality and residual components.
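To make this concrete, here is a minimal pandas sketch of the same calculation. Only the first three temperatures (20.7, 17.9, 18.8) are quoted above; the remaining values are illustrative placeholders standing in for the rest of the 14-day sample.

import pandas as pd

# 14-day sample (first three values from the text, the rest illustrative)
sample = pd.Series(
    [20.7, 17.9, 18.8, 14.6, 15.8, 15.8, 15.8, 17.4, 21.8, 20.0, 16.2, 13.3, 16.7, 21.5],
    index=pd.date_range("1981-01-01", periods=14, freq="D"),
    name="Temp",
)

# 3-day centered moving average: each value averaged with its immediate neighbors.
# The first and last days have no neighbor on one side, so their trend is NaN.
trend = sample.rolling(window=3, center=True).mean()
print(trend.round(2))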

Before we dive into seasonality, there's something we mentioned earlier that we should come back to. We said that a 3-day centered moving average helps smooth out day-to-day variations in the data, but what does that really mean?
Let's look at a quick example to make it clearer.

We've already discussed that the trend reflects the overall direction the data is moving in.

Temperatures are generally higher in summer and lower in winter; that's the broad seasonal pattern we expect.

But even within summer, temperatures don't stay exactly the same every day. Some days might be slightly cooler or warmer than others. These are natural daily fluctuations, not signs of sudden climate shifts.

The moving average smooths out these short-term ups and downs so we can focus on the bigger picture: the underlying trend across time.

Since we're working with a small sample here, the trend may not stand out clearly just yet.

But if you look at the full decomposition plot above, you can see how the trend captures the overall direction the data is moving in, gradually rising, falling or staying steady over time.

Now that we've calculated the trend, it's time to move on to the next component: seasonality.

We know that in an additive model:
Observed Value = Trend + Seasonality + Residual

To isolate seasonality, we start by subtracting the trend from the observed values:
Observed Value – Trend = Seasonality + Residual

The result is called the detrended series: a mixture of the seasonal pattern and any remaining random noise.

Let's take January 2, 1981 as an example.

Observed temperature: 17.9°C

Trend: 19.13°C

So, the detrended value is:

Detrended = 17.9 – 19.13 = –1.23

In the same way, we calculate the detrended values for all the dates in our sample.

The table above shows the detrended values for each date in our 14-day sample.

Since we're working with 14 consecutive days, we'll assume a weekly seasonality and assign a Day Index (from 1 to 7) to each date based on its position in that 7-day cycle.

Now, to estimate seasonality, we take the average of the detrended values that share the same Day Index.

Let's calculate the seasonality for January 2, 1981. The Day Index for this date is 2, and the other date in our sample with the same index is January 9, 1981. To estimate the seasonal effect for this index, we take the average of the detrended values from both days. This seasonal effect will then be assigned to every date with Index 2 in our cycle.

For January 2, 1981: Detrended value = –1.2, and
for January 9, 1981: Detrended value = 2.1

Average of both values = (–1.2 + 2.1) / 2
= 0.45

So, 0.45 is the estimated seasonality for all dates with Index 2.
We repeat this process for each index to calculate the full set of seasonality components.
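Continuing the sketch from above (same illustrative 14-day sample, assuming a 7-day cycle), the detrending and Day Index averaging look like this:

# Detrended series: observed minus trend, leaving seasonality plus residual
detrended = sample - trend

# Day Index 1..7 based on each date's position in the 7-day cycle
day_index = pd.Series(range(len(sample)), index=sample.index) % 7 + 1

# Seasonal effect per index: average the detrended values sharing the same Day Index
# (NaN detrended values, like the first and last day, are skipped automatically)
seasonal_by_index = detrended.groupby(day_index).mean()

# Map the per-index effect back onto every date, so even days with a missing trend get a seasonal value
seasonal = day_index.map(seasonal_by_index)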

Here are the seasonality values for all the dates. These seasonal values reflect the recurring pattern across the week: for example, days with Index 2 tend to be around 0.45°C warmer than the trend on average, while days with Index 4 tend to be about 1.05°C cooler.

Note: When we say that days with Index 2 tend to be around +0.45°C warmer than the trend on average, we mean that dates like Jan 2 and Jan 9 sit about 0.45°C above their own trend value; the comparison is not to the overall dataset trend, but to the local trend specific to each day.

Now that we've calculated the seasonal components for each day, you might notice something interesting: even the dates where the trend (and therefore the detrended value) was missing, like the first and last dates in our sample, still received a seasonality value.

This is because seasonality is assigned based on the Day Index, which follows a repeating cycle (like 1 to 7 in our weekly example).
So, if January 1 has a missing trend but shares the same index as, say, January 8, it inherits the seasonal effect that was calculated using valid data from that index group.

In other words, seasonality doesn't depend on the availability of the trend for that specific day, but rather on the pattern observed across all days with the same position in the cycle.

Now we calculate the residual. From the additive decomposition structure we know that:
Observed Value = Trend + Seasonality + Residual
…which means:
Residual = Observed Value – Trend – Seasonality

You might be wondering: if the detrended values we used to calculate seasonality already had residuals in them, how can we separate them now? The answer comes from averaging. When we group the detrended values by their seasonal position, like the Day Index, the random noise tends to cancel itself out, and what we're left with is the repeating seasonal signal. In small datasets this might not be very noticeable, but in larger datasets the effect is much clearer. And now, with both trend and seasonality removed, what remains is the residual.

Note that residuals are not calculated for the first and last dates, since the trend wasn't available there due to the centered moving average.

Let's take a look at the final decomposition table for our 14-day sample. It brings together the observed temperatures, the extracted trend and seasonality components, and the resulting residuals.
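To round off the sketch, the residual and the combined table (still using the illustrative sample values) can be produced like this:

# Residual: whatever the trend and seasonal components don't explain
residual = sample - trend - seasonal

# Final decomposition table for the 14-day sample
decomposition_table = pd.DataFrame({
    "Observed": sample,
    "Trend": trend.round(2),
    "Seasonality": seasonal.round(2),
    "Residual": residual.round(2),
})
print(decomposition_table)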

Now that we've calculated the trend, seasonality, and residuals for our sample, let's come back to the missing values we mentioned earlier. If you look at the decomposition plot for the full dataset, titled "Decomposition of daily temperatures showing trend, seasonal cycles, and random fluctuations", you'll notice that the trend line doesn't appear right at the beginning of the series. The same applies to residuals. This happens because calculating the trend requires enough data before and after each point, so the first few and last few values don't have a defined trend, which is also why residuals are missing at the edges. In large datasets these missing values make up only a small portion and don't affect the overall interpretation; you can still clearly see the trend and patterns over time. In our small 14-day sample the gaps feel more noticeable, but in real-world time series data this is completely normal and expected.

Now that we understand how seasonal_decompose works, let's take a quick look at the code we used to apply it to the temperature data and extract the trend and seasonality components.

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Load the dataset
df = pd.read_csv("minimum daily temperatures data.csv")

# Convert 'Date' to datetime and set as index
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df.set_index('Date', inplace=True)

# Set a regular daily frequency and fill missing values using forward fill
df = df.asfreq('D')
df['Temp'].fillna(method='ffill', inplace=True)

# Decompose the daily series (365-day seasonality for yearly patterns)
decomposition = seasonal_decompose(df['Temp'], model='additive', period=365)

# Plot the decomposed components
decomposition.plot()
plt.suptitle('Decomposition of Daily Minimum Temperatures (Daily)', fontsize=14)
plt.tight_layout()
plt.show()

Let's focus on this part of the code:

decomposition = seasonal_decompose(df['Temp'], model='additive', period=365)

In this line, we're telling the function what data to use (df['Temp']), which model to apply (additive), and the seasonal period to consider (365), which matches the yearly cycle in our daily temperature data.

Here, we set period=365 based on the structure of the data. This means the trend is calculated using a 365-day centered moving average, which takes 182 values before and after each point. The seasonality is calculated using a 365-day seasonal index, where all January 1st values across years are grouped and averaged, all January 2nd values are grouped, and so on.

When using seasonal_decompose in Python, we simply provide the period, and the function uses that value to determine how both the trend and seasonality should be calculated.

In our earlier 14-day sample, we used a 3-day centered average just to make the math easier to follow, but the underlying logic stays the same.
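If you want to inspect these pieces directly rather than just plotting them, the result returned by seasonal_decompose exposes each component as a pandas Series:

# Each component of the decomposition is available as an attribute
trend_component = decomposition.trend        # 365-day centered moving average (NaN at both ends)
seasonal_component = decomposition.seasonal  # repeating yearly pattern
residual_component = decomposition.resid     # what's left after removing trend and seasonality

print(trend_component.dropna().head())
print(seasonal_component.head())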

Now that we've explored how seasonal_decompose works and understood how it separates a time series into trend, seasonality, and residuals, we're ready to build a baseline forecasting model.
This model is built by simply adding the extracted trend and seasonality components, essentially assuming that the residual (or noise) is zero.

Once we generate these baseline forecasts, we'll evaluate how well they perform by comparing them to the actual observed values using MAPE (Mean Absolute Percentage Error).

Here, we ignore the residuals because we're building a simple baseline model that serves as a benchmark. The goal is to test whether more advanced algorithms are really necessary.
We're mainly interested in seeing how much of the variation in the data can be explained using just the trend and seasonality components.

Now we'll build a baseline forecast by extracting the trend and seasonality components using Python's seasonal_decompose.

Code:

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose
from sklearn.metrics import mean_absolute_percentage_error

# Load the dataset
df = pd.read_csv("minimum daily temperatures data.csv")

# Convert 'Date' to datetime and set as index
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df.set_index('Date', inplace=True)

# Set a regular daily frequency and fill missing values using forward fill
df = df.asfreq('D')
df['Temp'].fillna(method='ffill', inplace=True)

# Split into training (all years except the final one) and testing (final year)
last_year = df.index.year.max()
train = df[df.index.year < last_year]
test = df[df.index.year == last_year]

# Decompose the training series (365-day seasonality for yearly patterns)
decomposition = seasonal_decompose(train['Temp'], model='additive', period=365)

# Seasonality repeats each year, so reuse the last 365 seasonal values for the test period
seasonal_values = decomposition.seasonal.iloc[-365:].values[:len(test)]

# Assume the trend stays constant at its last observed (non-NaN) value
last_trend = decomposition.trend.dropna().iloc[-1]

# Baseline forecast = constant trend + repeating seasonality (residual assumed to be zero)
baseline_forecast = pd.Series(last_trend + seasonal_values, index=test.index)

# Evaluate with MAPE, ignoring near-zero actual values
actual = test['Temp']
mask = actual.abs() > 1e-3  # avoid division errors on near-zero values
mape = mean_absolute_percentage_error(actual[mask], baseline_forecast[mask])
print(f"MAPE for Baseline Model on Final Year: {mape:.2%}")

# Plot actual vs. forecast
plt.figure(figsize=(12, 5))
plt.plot(actual.index, actual, label='Actual', linewidth=2)
plt.plot(actual.index, baseline_forecast, label='Baseline Forecast', linestyle='--')
plt.title('Baseline Forecast vs. Actual (Final Year)')
plt.xlabel('Date')
plt.ylabel('Temperature (°C)')
plt.legend()
plt.tight_layout()
plt.show()


MAPE for Baseline Model on Final Year: 21.21%

In the code above, we first split the data by using the first nine years as the training set and the final year as the test set.

We then applied seasonal_decompose to the training data to extract the trend and seasonality components.

Since the seasonal pattern repeats yearly, we took the last 365 seasonal values and applied them to the test period.

For the trend, we assumed it stays constant and used the last observed trend value from the training set across all dates in the test year.

Finally, we added the trend and seasonality components to build the baseline forecast, compared it with the actual values from the test set, and evaluated the model using Mean Absolute Percentage Error (MAPE).

We got a MAPE of 21.21% with our baseline model. In Part 1, the seasonal naive approach gave us 28.23%, so we've improved by about 7 percentage points.

What we've built here is not yet a custom baseline model; it's a standard decomposition-based baseline.

Let's now see how we can come up with our own custom baseline for this temperature data.

Now let's take the average of temperatures grouped by each day of the year and use those averages to forecast the temperatures for the final year.

You might be wondering how we even come up with that idea for a custom baseline in the first place. Honestly, it starts by simply looking at the data. If we can spot a pattern, like a seasonal trend or something that repeats over time, we can build a simple rule around it.

That's really what a custom baseline is about: using what we understand from the data to make a reasonable prediction. And often, even small, intuitive ideas work surprisingly well.

Now let's use Python to calculate the average temperature for each day of the year.

    Code:

# Create a new column 'day_of_year' representing which day (1 to 365) each date falls on
train["day_of_year"] = train.index.dayofyear
test["day_of_year"] = test.index.dayofyear

# Group the training data by 'day_of_year' and calculate the mean temperature for each day (averaged across all years)
daily_avg = train.groupby("day_of_year")["Temp"].mean()

# Use the learned seasonal pattern to forecast test data by mapping test days to the corresponding daily average
day_avg_forecast = test["day_of_year"].map(daily_avg)

# Evaluate the performance of this seasonal baseline forecast using Mean Absolute Percentage Error (MAPE)
mape_day_avg = mean_absolute_percentage_error(test["Temp"], day_avg_forecast)
round(mape_day_avg * 100, 2)

To build this custom baseline, we looked at how the temperature typically behaves on each day of the year, averaging across all the training years. Then we used those daily averages to make predictions for the test set. It's a simple way to capture the seasonal pattern that tends to repeat every year.

This custom baseline gave us a MAPE of 21.17%, which shows how well it captures the seasonal pattern in the data.

Now, let's see if we can build another custom baseline that captures patterns in the data more effectively and serves as a stronger benchmark.

Now that we've used the day-of-year average method for our first custom baseline, you might start wondering what happens in leap years. If we simply number the days from 1 to 365 and take the average, we can end up misled, especially around February 29.

You might be wondering whether a single date really matters. In time series analysis, every moment counts. It may not feel that important right now, since we're working with a simple dataset, but in real-world situations small details like this can have a big impact. Many industries pay close attention to these patterns, and even a one-day difference can affect decisions. That's why we're starting with a simple dataset: to understand these ideas clearly before applying them to more complex problems.
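As a quick check of that shift: in a leap year, every date after February 29 gets a day-of-year number one higher than usual, so a plain 1-to-365 index no longer lines up with the calendar (1984 and 1988 in our dataset are leap years).

# The same calendar date maps to different day-of-year numbers in leap and non-leap years
print(pd.Timestamp("1984-03-01").dayofyear)  # 61 (1984 is a leap year)
print(pd.Timestamp("1985-03-01").dayofyear)  # 60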

Now let's build a custom baseline using calendar-day averages, looking at how the temperature usually behaves on each (month, day) combination across years.

It's a simple way to capture the seasonal rhythm of the year based on the actual calendar.

    Code:

import numpy as np

# Extract the 'month' and 'day' from the datetime index in both training and test sets
train["month"] = train.index.month
train["day"] = train.index.day
test["month"] = test.index.month
test["day"] = test.index.day

# Group the training data by each (month, day) pair and calculate the average temperature for each calendar day
calendar_day_avg = train.groupby(["month", "day"])["Temp"].mean()

# Forecast test values by mapping each test row's (month, day) to the average from the training data
calendar_day_forecast = test.apply(
    lambda row: calendar_day_avg.get((row["month"], row["day"]), np.nan), axis=1
)

# Evaluate the forecast using Mean Absolute Percentage Error (MAPE)
mape_calendar_day = mean_absolute_percentage_error(test["Temp"], calendar_day_forecast)

Using this method, we achieved a MAPE of 21.09%.

Now let's see if we can combine two approaches to build a more refined custom baseline. We have already created a calendar-based month-day average baseline. This time we will combine it with the previous day's actual temperature: the forecasted value will be based 70% on the calendar-day average and 30% on the previous day's temperature, creating a more balanced and adaptive prediction.

# Create a column with the previous day's temperature
df["Prev_Temp"] = df["Temp"].shift(1)

# Add the previous day's temperature to the test set
test["Prev_Temp"] = df.loc[test.index, "Prev_Temp"]

# Create a blended forecast by combining the calendar-day average and the previous day's temperature:
# 70% weight to the seasonal calendar-day average, 30% to the previous day's temperature
blended_forecast = 0.7 * calendar_day_forecast.values + 0.3 * test["Prev_Temp"].values

# Handle missing values by replacing NaNs with the average of the calendar-day forecasts
blended_forecast = np.nan_to_num(blended_forecast, nan=np.nanmean(calendar_day_forecast))

# Evaluate the forecast using MAPE
mape_blended = mean_absolute_percentage_error(test["Temp"], blended_forecast)
    

We can call this a blended custom baseline model. Using this approach, we achieved a MAPE of 18.73%.

Let's take a moment to summarize what we've applied to this dataset so far using a simple table.
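Model and MAPE on the final year:
• Seasonal Naive (Part 1): 28.23%
• Decomposition-based baseline (trend + seasonality): 21.21%
• Day-of-year average: 21.17%
• Calendar-day (month, day) average: 21.09%
• Blended (70% calendar-day average + 30% previous day): 18.73%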

In Part 1, we used the seasonal naive method as our baseline. In this blog, we explored how the seasonal_decompose function in Python works and built a baseline model by extracting its trend and seasonality components. We then created our first custom baseline using a simple idea based on the day of the year, and later improved it by using calendar-day averages. Finally, we built a blended custom baseline by combining the calendar-day average with the previous day's temperature, which led to even better forecasting results.

In this blog, we used a simple daily temperature dataset to understand how custom baseline models work. Since it's a univariate dataset, it contains only a time column and a target variable. However, real-world time series data is often far more complex and frequently multivariate, with multiple influencing factors. Before we explore how to build custom baselines for such complex datasets, we need to understand another important decomposition method called STL decomposition. We also need a solid grasp of univariate forecasting models like ARIMA and SARIMA, since they form the foundation for understanding and building more advanced multivariate time series models.

In Part 1, I mentioned that we'd explore the foundations of ARIMA in this part as well. However, as I'm also learning and wanted to keep things focused and digestible, I wasn't able to fit everything into one blog. To make the learning process smoother, we'll take it one topic at a time.

In Part 3, we'll explore STL decomposition and continue building on what we've learned so far.

Dataset and License
The dataset used in this article, "Daily Minimum Temperatures in Melbourne", is available on Kaggle and is shared under the Community Data License Agreement – Permissive, Version 1.0 (CDLA-Permissive 1.0).
This is an open license that permits commercial use with proper attribution. You can read the full license here.

I hope you found this part helpful and easy to follow.
Thank you for reading, and see you in Part 3!


