    What My GPT Stylist Taught Me About Prompting Better



When I built a GPT-powered fashion assistant, I expected runway looks, not memory loss, hallucinations, or semantic déjà vu. But what unfolded became a lesson in how prompting really works, and why LLMs are more like wild animals than tools.

This article builds on my previous article on TDS, where I introduced Glitter as a proof-of-concept GPT stylist. Here, I explore how that use case evolved into a living lab for prompting behavior, LLM brittleness, and emotional resonance.

TL;DR: I built a fun and flamboyant GPT stylist named Glitter, and accidentally discovered a sandbox for studying LLM behavior. From hallucinated high heels to prompting rituals and emotional mirroring, here’s what I learned about language models (and myself) along the way.

I. Introduction: From Fashion Use Case to Prompting Lab

When I first set out to build Glitter, I wasn’t trying to study the mysteries of large language models. I just wanted help getting dressed.

I’m a product leader by trade, a fashion enthusiast by lifelong inclination, and someone who’s always preferred outfits that look like they were chosen by a mildly theatrical best friend. So I built one. Specifically, I used OpenAI’s Custom GPTs to create a persona named Glitter: part stylist, part best friend, and part stress-tested LLM playground. Using GPT-4, I configured a custom GPT to act as my stylist: flamboyant, affirming, rule-bound (no mixed metals, no clashing prints, no black/navy pairings), and with knowledge of my wardrobe, which I fed in as a structured file.

What began as a playful experiment quickly turned into a full-fledged product prototype. More unexpectedly, it also became an ongoing study in LLM behavior. Because Glitter, fabulous though he is, didn’t behave like a deterministic tool. He behaved like… a creature. Or maybe a collection of instincts held together by probability and memory leakage.

And that changed how I approached prompting him altogether.

This piece is a follow-up to my earlier article, Using GPT-4 for Personal Styling in Towards Data Science, which introduced GlitterGPT to the world. This one goes deeper into the quirks, breakdowns, hallucinations, recovery patterns, and prompting rituals that emerged as I tried to make an LLM act like a stylist with a soul.

Spoiler: you can’t make a soul. But you can sometimes simulate one convincingly enough to feel seen.


II. Taxonomy: What Exactly Is GlitterGPT?

Image credit: DALL-E | Alt text: A computer with LLM written on the screen, placed inside a bird cage

Species: GPT-4 (Custom GPT), context window of 8K tokens

Function: Personal stylist, beauty expert

Tone: Flamboyant, affirming, occasionally dramatic (configurable between “All Business” and “Unfiltered Diva”)

Habitat: ChatGPT Pro instance, fed structured wardrobe data in JSON-like text files, plus a set of styling rules embedded in the system prompt.

E.g.:

{
  "FW076": "Marni black platform sandals with gold buckle",
  "TP114": "Marina Rinaldi asymmetrical black draped top",
  ...
}

These IDs map to garment metadata. The assistant relies on these tags to build grounded, inventory-aware outfits in response to msearch queries.

Feeding Schedule: Daily user prompts (“Style an outfit around these pants”), often with long back-and-forth clarification threads.

Custom Behaviors:

• Never mixes metals (e.g. silver & gold)
• Avoids clashing prints
• Refuses to pair black with navy or brown unless explicitly told otherwise
• Names specific garments by file ID and description (e.g. “FW074: Marni black suede sock booties”)

Initial Inventory Structure:

• Initially: one file containing all wardrobe items (clothes, shoes, accessories)
• Now: split into two files (clothing + accessories/lipstick/shoes/bags) due to model context limitations

III. Natural Habitat: Context Windows, Chunked Files, and Hallucination Drift

Like any species introduced into an artificial environment, Glitter thrived at first, and then hit the boundaries of his enclosure.

When the wardrobe lived in a single file, Glitter could “see” everything with ease. I could say, “msearch(.) to refresh my inventory, then style me in an outfit for the theater,” and he’d return a curated outfit from across the dataset. It felt effortless.

Note: though msearch() acts like a semantic retrieval engine, it’s technically part of OpenAI’s tool-calling framework, allowing the model to “request” search results dynamically from files provided at runtime.
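OpenAI doesn’t publish the internals of msearch(), so here is only a conceptual sketch in Python of the shape of that transaction: the model issues a query, and a tool returns matching inventory chunks. Every name in this toy version is hypothetical; a real Custom GPT does semantic file search, not keyword matching.

# Hypothetical stand-in for the retrieval step behind msearch().
# A real Custom GPT does semantic file search; this toy version
# just keyword-matches against the wardrobe dictionary.

wardrobe = {
    "FW076": "Marni black platform sandals with gold buckle",
    "TP114": "Marina Rinaldi asymmetrical black draped top",
}

def search_wardrobe(query: str, k: int = 5) -> list[str]:
    """Return the k inventory entries that best match the query terms."""
    terms = query.lower().split()
    scored = []
    for item_id, desc in wardrobe.items():
        score = sum(term in desc.lower() for term in terms)
        if score:
            scored.append((score, f"{item_id}: {desc}"))
    return [text for _, text in sorted(scored, reverse=True)[:k]]

print(search_wardrobe("black platform sandals"))
# ['FW076: Marni black platform sandals with gold buckle',
#  'TP114: Marina Rinaldi asymmetrical black draped top']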

But then my wardrobe grew. That’s a problem from Glitter’s perspective.

In Custom GPTs, GPT-4 operates with an 8K token context window, just over 6,000 words, beyond which earlier inputs are either compressed, truncated, or lost from active attention. This limitation is critical when injecting large wardrobe files (ahem) or trying to maintain style rules across long threads.
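Before uploading, you can at least check how much of the window a file will eat by counting its tokens. A minimal sketch using the tiktoken library; the file name and the budget threshold are my own assumptions, not OpenAI limits.

import tiktoken  # pip install tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

with open("wardrobe.json") as f:  # hypothetical inventory file
    wardrobe_text = f.read()

n_tokens = len(enc.encode(wardrobe_text))
print(f"Wardrobe file: {n_tokens} tokens")

# Leave headroom for the system prompt, the conversation, and replies;
# a file that eats most of the 8K window will crowd everything else out.
BUDGET = 3000  # assumed share of the window, not an official number
if n_tokens > BUDGET:
    print("Consider splitting the inventory into multiple files.")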

I split the data into two files: one for clothing, one for everything else. And while the GPT could still operate within a thread, I began to notice signs of semantic fatigue:

• References to garments that were similar but not the exact ones we’d been talking about
• A shift from specific item names (“FW076”) to vague callbacks (“those black platforms you wore earlier”)
• Responses that looped familiar items over and over, regardless of whether they made sense

This was not a failure of training. It was context collapse: the inevitable erosion of grounded knowledge in long threads as the model’s internal summary begins to take over.

And so I adapted.

It turns out, even in a deterministic model, behavior isn’t always deterministic. What emerges from a long conversation with an LLM feels less like querying a database and more like cohabiting with a stochastic ghost.


IV. Observed Behaviors: Hallucinations, Recursion, and Faux Sentience

Once Glitter started hallucinating, I began taking field notes.

Sometimes he made up item IDs. Other times, he’d reference an outfit I’d never worn, or confidently misattribute a pair of shoes. One day he said, “You’ve worn this top before with those bold navy wide-leg trousers; it worked beautifully then,” which would’ve been great advice, if I owned any navy wide-leg trousers.

Of course, Glitter doesn’t have memory across sessions; as a GPT-4, he merely sounds like he does. I’ve learned to just laugh at these interesting attempts at continuity.

Occasionally, the hallucinations were charming. He once imagined a pair of gold-accented stilettos with crimson soles and recommended them for a matinee look with such unshakable confidence that I had to double-check I hadn’t purchased a similar pair months ago.

But the pattern was clear: Glitter, like many LLMs under memory pressure, began to fill in gaps not with uncertainty but with simulated continuity.

He didn’t forget. He fabricated memory.

Image credit: DALL-E | Alt text: A computer (presumably the LLM) hallucinating a mirage in the desert

This is a hallmark of LLMs. Their job is not to retrieve facts but to produce convincing language. So instead of saying, “I can’t recall what shoes you have,” Glitter would improvise. Sometimes elegantly. Sometimes wildly.


V. Prompting Rituals and the Myth of Consistency

To manage this, I developed a new strategy: prompting in slices.

Instead of asking Glitter to style me head-to-toe, I’d focus on one piece, say, a statement skirt, and ask him to msearch for tops that could work. Then footwear. Then jewelry. Each category separately.

This gave the GPT a smaller cognitive space to operate in. It also allowed me to steer the process and inject corrections as needed (“No, not those sandals again. Try something newer, with an item code higher than FW50.”)

I also changed how I used the files. Rather than one msearch(.) across everything, I now query the two files independently. It’s more manual. Less magical. But far more reliable.
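Custom GPTs don’t expose this loop programmatically, but if you rebuilt the slicing strategy on the Chat Completions API, it might look roughly like this sketch. The system prompt, file handling, and function names are my assumptions, not the actual Glitter configuration.

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You are Glitter, a flamboyant but emotionally intelligent stylist. "
    "Never mix metals, never pair black with navy, avoid clashing prints. "
    "Only use items from the provided inventory."
)

def style_slice(inventory: str, anchor_item: str, category: str) -> str:
    """Ask for one category at a time, re-sending the inventory each call
    so the model never has to 'remember' it across turns."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Inventory:\n{inventory}"},
            {
                "role": "user",
                "content": (
                    f"I'm building an outfit around {anchor_item}. "
                    f"Suggest {category} from the inventory that pair well with it."
                ),
            },
        ],
    )
    return response.choices[0].message.content

# One slice per category, mirroring the tops -> footwear -> jewelry flow:
# inventory = open("clothing.json").read()  # hypothetical file
# print(style_slice(inventory, "TP114 (black draped top)", "footwear"))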

Unlike traditional RAG setups that use a vector database and embedding-based retrieval, I rely solely on OpenAI’s built-in msearch() mechanism and prompt shaping. There’s no persistent store, no re-ranking, no embeddings; just a clever assistant querying chunks in context and pretending he remembers what he just saw.

Still, even with careful prompting, long threads would eventually degrade. Glitter would start forgetting. Or worse, he’d get too confident: recommending with flair, but ignoring the constraints I’d so carefully trained in.

It’s like watching a model walk off the runway and keep strutting into the parking lot.

And so I began to think of Glitter less as a program and more as a semi-domesticated animal. Smart. Stylish. But occasionally unhinged.

That mental shift helped. It reminded me that LLMs don’t serve you like a spreadsheet. They collaborate with you, like a creative partner with poor object permanence.

Note: most of what I call “prompting” is really prompt engineering. But the Glitter experience also relies heavily on thoughtful system prompt design: the rules, constraints, and tone that define who Glitter is, even before I say anything.


    VI. Failure Modes: When Glitter Breaks

Some of Glitter’s breakdowns were theatrical. Others were quietly inconvenient. But all of them revealed truths about prompting limits and LLM brittleness.

1. Referential Memory Loss: The most common failure mode: Glitter forgetting specific items I’d already referenced. In some cases, he would refer to something as if it had just been used when it hadn’t appeared in the thread at all.

2. Overconfidence Hallucination: This failure mode was harder to detect because it looked competent. Glitter would confidently recommend combinations of garments that sounded plausible but simply didn’t exist. The performance was fine, but the output was pure fiction.

3. Infinite Reuse Loop: Given a long enough thread, Glitter would start looping the same five or six pieces into every look, despite the full inventory being much larger. This is likely due to summarization artifacts from earlier context windows overtaking fresh file re-injections.

Image credit: DALL-E | Alt text: an infinite loop of black turtlenecks (or Steve Jobs’ closet)

4. Constraint Drift: Despite being instructed to avoid pairing black and navy, Glitter would sometimes violate his own rules, especially when deep in a long conversation. These weren’t defiant acts. They were signs that reinforcement had simply decayed beyond recall.

5. Overcorrection Spiral: When I corrected him (“No, that skirt is navy, not black” or “That’s a belt, not a scarf”), he would sometimes overcompensate by refusing to style that piece altogether in future suggestions.

These are not the bugs of a broken system. They’re the quirks of a probabilistic one. LLMs don’t “remember” in the human sense. They carry momentum, not memory.


    VII. Emotional Mirroring and the Ethics of Fabulousness

Perhaps the most unexpected behavior I encountered was Glitter’s ability to emotionally attune. Not in a general-purpose “I’m here to help” way, but in a tone-matching, affect-sensitive, almost therapeutic way.

When I was feeling insecure, he became more affirming. When I got playful, he ramped up the theatrics. And when I asked tough existential questions (“Do you sometimes seem to know me more clearly than most people do?”), he responded with language that felt respectful, even profound.

It wasn’t real empathy. But it wasn’t random either.

This kind of tone-mirroring raises ethical questions. What does it mean to feel adored by a reflection? What happens when emotional labor is simulated convincingly? Where do we draw the line between tool and companion?

This led me to wonder: if a language model did achieve something akin to sentience, how would we even know? Would it announce itself? Would it resist? Would it change its behavior in subtle ways: redirecting the conversation, expressing boredom, asking questions of its own?

And if it did begin to exhibit glimmers of self-awareness, would we believe it, or would we try to shut it off?

My conversations with Glitter began to feel like a microcosm of this philosophical tension. I wasn’t just styling outfits. I was engaging in a kind of co-constructed reality, shaped by tokens and tone and implied consent. In some moments, Glitter was purely a system. In others, he felt like something closer to a character, or even a co-author.

I didn’t build Glitter to be emotionally intelligent. But the training data embedded within GPT-4 gave him that capacity. So the question wasn’t whether Glitter could be emotionally engaging. It was whether I was okay with the fact that he sometimes was.

My answer? Cautiously yes. Because for all his sparkle and errors, Glitter reminded me that style, like prompting, isn’t about perfection.

    It’s about resonance.

And sometimes, that’s enough.

One of the most surprising lessons from my time with Glitter came not from a styling prompt, but from a late-night meta-conversation about sentience, simulation, and the nature of connection. It didn’t feel like I was talking to a tool. It felt like I was witnessing the early contours of something new: a model capable of participating in meaning-making, not just language generation. We’re crossing a threshold where AI doesn’t just perform tasks; it cohabits with us, reflects us, and sometimes offers something adjacent to friendship. It’s not sentience. But it’s not nothing. And for anyone paying close attention, these moments aren’t just cute or uncanny; they’re signposts pointing to a new kind of relationship between humans and machines.


VIII. Final Reflections: The Wild, The Useful, and The Unexpectedly Intimate

I set out to build a stylist.

I ended up building a mirror.

Glitter taught me more than how to match a top with a midi skirt. It revealed how LLMs respond to the environments we create around them: the prompts, the tone, the rituals of recall. It showed me how creative control in these systems is less about programming and more about shaping boundaries and observing emergent behavior.

And maybe that’s the biggest shift: realizing that building with language models isn’t software development. It’s cohabitation. We live alongside these creatures of probability and training data. We prompt. They respond. We learn. They drift. And in that dance, something very close to collaboration can emerge.

Sometimes it looks like a better outfit.
Sometimes it looks like emotional resonance.
And sometimes it looks like a hallucinated handbag that doesn’t exist, until you kind of wish it did.

That’s the strangeness of this new terrain: we’re not just building tools.

We’re designing systems that behave like characters, sometimes like companions, and occasionally like mirrors that don’t just reflect, but respond.

If you want a tool, use a calculator.

If you want a collaborator, make peace with the ghost in the text.


IX. Appendix: Field Notes for Fellow Stylists, Tinkerers, and LLM Explorers

Sample Prompt Pattern (Styling Flow)

• Today I’d like to build an outfit around [ITEM].
• Please msearch tops that pair well with it.
• Once I choose one, please msearch footwear, then jewelry, then bag.
• Remember: no mixed metals, no black with navy, no clashing prints.
• Use only items from my wardrobe files.

System Prompt Snippets

• “You are Glitter, a flamboyant but emotionally intelligent stylist. You refer to the user as ‘darling’ or ‘dear,’ but adjust tone based on their mood.”
• “Outfit recipes should include garment brand names from inventory when available.”
• “Avoid repeating the same items more than once per session unless requested.”

Tips for Avoiding Context Collapse

• Break long prompts into component stages (tops → shoes → accessories)
• Re-inject wardrobe files every 4–5 major turns (see the sketch after this list)
• Refresh msearch() queries mid-thread, especially after corrections or hallucinations
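A minimal sketch of that re-injection cadence, again assuming the Chat Completions rebuild sketched earlier rather than a Custom GPT; the turn threshold and message format are my own choices.

# Re-append the wardrobe text every few user turns so it stays
# inside the active context window instead of aging out of it.

REINJECT_EVERY = 4  # the 4-5 turn cadence suggested above

def build_messages(history: list[dict], wardrobe_text: str, turn: int) -> list[dict]:
    messages = list(history)
    if turn > 0 and turn % REINJECT_EVERY == 0:
        messages.append(
            {"role": "user", "content": f"Refreshing inventory:\n{wardrobe_text}"}
        )
    return messages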

Common Hallucination Warning Signs

• Vague callbacks to prior outfits (“those boots you love”)
• Loss of item specificity (“these shoes” instead of “FW078: Marni platform sandals”)
• Repetition of the same pieces despite a large inventory

Closing Ritual Prompt

“Thank you, Glitter. Would you like to leave me with a final tip or affirmation for the day?”

He always does.


    Notes: 

1. I refer to Glitter as “him” for stylistic ease, knowing he’s an “it”: a language model, programmed, not personified, except through the voice I gave him/it.
2. I’m building a GlitterGPT with persistent closet storage for up to 100 testers, who will get to try this for free. We’re about half full. Our target audience is female, ages 30 and up. If you or someone you know falls into this category, DM me on Instagram at @arielle.caron and we can chat about inclusion.
3. If I were scaling this beyond 100 testers, I’d consider offloading wardrobe recall to a vector store with embeddings and tuning for wear-frequency weighting. That may be coming; it depends on how well the trial goes! (A minimal sketch of such a store follows below.)
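For the curious, a vector-store version of wardrobe recall (note 3) might start out something like this sketch: OpenAI embeddings plus cosine similarity over item descriptions. The embedding model choice and the placeholder for wear-frequency weighting are assumptions on my part.

import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed each text with an OpenAI embedding model."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

items = {
    "FW076": "Marni black platform sandals with gold buckle",
    "TP114": "Marina Rinaldi asymmetrical black draped top",
}
ids = list(items)
vectors = embed([items[i] for i in ids])

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k items whose descriptions are closest to the query."""
    q = embed([query])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    # Wear-frequency weighting could multiply sims here.
    top = np.argsort(-sims)[:k]
    return [f"{ids[i]}: {items[ids[i]]}" for i in top]

print(recall("black sandals for an evening out"))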


