AcmeTech AI Support Chatbot Revamp

Enhancing customer support with structured LLM prompts and chain-of-thought reasoning

Modular Prompt Design Approach

Project Overview:

Objective: Overhaul AcmeTech’s customer support chatbot to boost answer accuracy, completeness and user satisfaction.

Approach: Designed a modular prompt stack (meta-prompt, task prompt, chain-of-thought, output schema) to guide the LLM’s reasoning and output format.

Experimental Outcomes: Reduced hallucinations and confusion in answers by enforcing structured, step-by-step reasoning; increased answer-quality metrics.

Impact: Enabled 24/7 automated support that handles ~85% of routine queries end-to-end, cutting manual workload and speeding up resolution.

Note: This project does not rely on proprietary production prompts. It evaluates representative baseline configurations commonly used in early-stage chatbot deployments. The goal was to isolate the impact of prompt architecture, not to audit a specific company’s internal system.

Problem Framing:

Inconsistent Answers: The existing chatbot gave generic or incomplete responses, causing customer frustration. Only about 60–70% of routine questions were resolved automatically; the remaining 30–40% escalated to human agents.

Ambiguous Requirements: Support staff wanted the AI to be friendly yet concise, and to strictly follow company policy, but these goals conflicted in the original prompts. We needed a clear way to operationalize fuzzy requirements like “be more helpful” into concrete prompt instructions.

Hallucinations & Errors: The LLM tended to “hallucinate” details or omit important steps. For example, a simple password reset query yielded a vague reply, leaving customers stuck.

The Approach (Modular Prompt Stack)

  1. Meta-Prompt: I began by setting the assistant’s persona and rules. For example: “You are a helpful, professional AcmeTech product specialist. Always verify facts against company policy and be concise.” This higher-level framing focuses the AI’s behavior.

  2. Task Prompt: Then I embedded the user’s query plus relevant context (e.g., product name, support tickets, or knowledge-base snippets). This guides the AI on exactly what to answer.

  3. Reasoning Chain: Next, I instructed the model to “explain step by step” or “show your reasoning.” This chain-of-thought prompting coaxed the model to break problems into sub-steps.

    In my research, I found that chain-of-thought prompting substantially boosted performance on complex tasks.

  4. Output Schema: Finally, I enforced a strict output format. For example, defining a JSON-style schema with fields such as a list of steps let me structurally verify that every answer followed the expected shape (see the sketch after this list).
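To make the stack concrete, here is a minimal Python sketch of how the four layers could be assembled and validated. The field names `answer` and `steps`, and all prompt wording beyond the meta-prompt quoted above, are illustrative assumptions rather than the production schema:

```python
import json
import textwrap

# Layer 1: meta-prompt -- persona and global rules (quoted from the project).
META_PROMPT = textwrap.dedent("""\
    You are a helpful, professional AcmeTech product specialist.
    Always verify facts against company policy and be concise.""")

# Layer 3: chain-of-thought instruction.
REASONING_PROMPT = "Explain step by step and show your reasoning before answering."

# Layer 4: output schema the model must follow (field names are assumptions).
OUTPUT_SCHEMA = (
    'Respond only with JSON of the form '
    '{"answer": "<summary>", "steps": ["<step 1>", "<step 2>", "..."]}'
)

def build_prompt(user_query: str, context: str = "") -> str:
    """Stack the four layers into one prompt string (layer 2 is the task)."""
    task_prompt = f"Context:\n{context}\n\nCustomer question: {user_query}"
    return "\n\n".join([META_PROMPT, task_prompt, REASONING_PROMPT, OUTPUT_SCHEMA])

def validate_reply(raw_reply: str) -> dict:
    """Reject any model reply that does not match the expected JSON shape."""
    reply = json.loads(raw_reply)  # raises json.JSONDecodeError on non-JSON output
    if not isinstance(reply.get("answer"), str):
        raise ValueError("missing or malformed 'answer' field")
    steps = reply.get("steps")
    if not isinstance(steps, list) or not all(isinstance(s, str) for s in steps):
        raise ValueError("missing or malformed 'steps' field")
    return reply

print(build_prompt("I forgot my password and cannot log in.",
                   context="Product: AcmeTech Cloud Console"))
```

Keeping each layer as a separate string makes the stack modular: any one layer can be swapped or A/B-tested without touching the others.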

Before / After Comparisons

Scenario 1: User forgot their password and cannot log in.

  • Before: The bot replied: “Please use the ‘Forgot Password’ link or contact support.” (Generic answer, no detail.)

  • After: The bot gave a clear, step-by-step guide: “Certainly! Here’s how to reset your password:

    1. Click Forgot Password on the login page.

    2. Enter your registered email and submit.

    3. Check your inbox for a reset link.

    4. Follow the link and set a new password. After these steps, your account will be accessible. If you run into any issues, let me know!” (Structured list, friendly tone, no missing info.)

Scenario 2: User asks about the product warranty period.

    • Before: “Yes, there is a 1-year warranty.” (Too brief; customer might still be confused about details.)

    • After: “Our products include a 1-year warranty. To explain:

      Step 1: The warranty covers any manufacturing defects.

      Step 2: It starts from the purchase date.

      Step 3: If you need to make a claim, email support with proof of purchase. I hope that clarifies your question!” (Explanation broken into points, helpful tone.)

Reasoning Chain Design

I explicitly guided the AI’s thinking process by layering prompts that mirror a human engineer’s approach:

Step 1: Interpret the query. The model first restates the user’s issue in its own words to ensure understanding.

Step 2: Gather facts. The model pulls any relevant product details or policies (either from the prompt or embedded knowledge).

Step 3: Plan the answer. The model lists sub-steps needed to solve the problem. For example, for a password issue: “Identify account, send reset link, confirm reset.”

Step 4: Assemble the final answer. Based on the plan, the model writes out the solution clearly.
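Below is a minimal sketch of how these four steps could be chained. The `complete` callable stands in for whatever text-in/text-out LLM client a deployment uses; its name, signature, and the prompt wording are assumptions for illustration:

```python
from typing import Callable

def answer_query(user_query: str, knowledge: str,
                 complete: Callable[[str], str]) -> str:
    """Run the four-step reasoning chain over a generic LLM call."""
    # Step 1: Interpret -- restate the issue to confirm understanding.
    interpretation = complete(
        f"Restate this customer issue in one sentence: {user_query}"
    )
    # Step 2: Gather facts -- pull relevant product details and policies.
    facts = complete(
        f"From the knowledge below, list details relevant to: {interpretation}\n"
        f"Knowledge:\n{knowledge}"
    )
    # Step 3: Plan -- enumerate the sub-steps needed to resolve the issue.
    plan = complete(
        f"List the sub-steps needed to resolve: {interpretation}\nFacts:\n{facts}"
    )
    # Step 4: Assemble -- write the final customer-facing answer from the plan.
    return complete(
        f"Write a clear, friendly support answer following this plan:\n{plan}\n"
        f"Grounded only in these facts:\n{facts}"
    )
```

Separating the steps this way also makes failures easier to debug: each intermediate output (interpretation, facts, plan) can be logged and inspected on its own.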

Potential Business Outcomes and Metrics

Improved Resolution Rate: The chatbot should resolve common queries end-to-end, reducing the need for live agents on routine issues. This is consistent with industry data suggesting a well-tuned bot can handle 60–90% of queries.

Support Cost Savings: Automating repeat questions can save an estimated 300+ agent-hours per month. (For context, a similar deployment at Varma Insurance saved 330 hours monthly.) Freed from repetitive tasks, agents can focus on complex cases, improving overall support quality.
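As a back-of-envelope check on that estimate (the query volume, automation rate, and handling time below are illustrative assumptions, not measured AcmeTech figures):

```python
# Back-of-envelope estimate; all inputs are illustrative assumptions.
routine_queries_per_month = 3000   # assumed monthly routine-query volume
automation_rate = 0.85             # share handled end-to-end by the bot (~85%)
minutes_per_query = 8              # assumed average agent handling time

hours_saved = routine_queries_per_month * automation_rate * minutes_per_query / 60
print(f"~{hours_saved:.0f} agent-hours saved per month")  # ~340
```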

Efficiency Metrics: Average ticket-handling time should drop, since the AI provides all needed details upfront. The escalation rate to human agents will also potentially decrease.