Politeness Is Draining Your AI: Study Reveals Why Direct Prompts Work Best

October 15, 2025
AI Prompting Tips
  • Polite Prompts Waste Energy: OpenAI’s CEO revealed that courtesy words like “please” and “thank you” in AI prompts add up to “tens of millions of dollars” in extra electricity and computing costs futurism.com, highlighting a hidden energy impact of overly polite interactions.
  • Rude vs. Polite – Accuracy Gap: A new study from Penn State researchers found “very rude” prompts yielded 84.8% correct answers vs. 80.8% for “very polite” prompts – a small but statistically significant accuracy boost for impolite queries decrypt.co. In other words, blunt, direct questions outperformed courteous ones in getting correct AI responses.
  • Why Politeness Falls Flat: Researchers theorize that overly polite phrasing introduces ambiguity, burying the actual request under niceties decrypt.co. A terse prompt like “Tell me the answer” gives the model clearer intent than “Could you please tell me…,” which can muddy the AI’s understanding of what you want decrypt.co.
  • Study Methodology: The Penn State study tested 250 prompts (50 base questions, each rewritten in five tones from very polite to very rude) on the same AI model arxiv.org. They measured the accuracy of the answers and applied statistical tests, finding the surprising trend that less polite prompts led to slightly better performance arxiv.org.
  • Broader Implications: This finding challenges prior assumptions that AI prefers polite, human-like etiquette decrypt.co. It suggests today’s advanced AI models behave less like “social mirrors” and more like literal machines prioritizing clarity decrypt.co. Tone can be a hidden variable in prompt engineering, meaning prompt phrasing (not just content) can impact results decrypt.co.
  • Human vs. Machine Communication: The study underscores a gap between human norms and machine logic – words that smooth human interactions can confuse AI logic decrypt.co. This raises questions about designing AI that balances efficiency with natural communication, and whether future models should be “socially calibrated” to understand polite intent decrypt.co. For now, being direct and concise with AI isn’t rude – it’s effective.
  • Common Prompting Pitfalls: Users often undermine AI responses with prompts that are too verbose, vague, or indirect. Overloading a query with unnecessary detail or fluff can confuse the model godofprompt.ai, while a lack of clarity or context leads to generic or off-base answers godofprompt.ai. The examples below illustrate bad vs. good prompts in different fields, showing how to be clear, specific, and purposeful for the best results.

Politeness vs. AI Performance: What the Study Found

Key Findings of the Penn State Study

Researchers at Penn State tested how an AI’s performance changes based on the politeness of user prompts. In their experiments, blunt prompts actually led to more accurate answers than polite ones decrypt.co. Specifically, “very rude” prompts got answers correct 84.8% of the time, versus 80.8% for “very polite” prompts decrypt.co. While that ~4% difference is modest, it was statistically significant decrypt.co. It reverses earlier research that suggested politeness might help or at least not hurt – instead, the new results show a direct, impolite tone can slightly sharpen an AI’s accuracy decrypt.co. In short, being overly courteous to a chatbot isn’t improving its answers – it might be making them a bit less accurate.

Beyond accuracy, there’s also an efficiency angle: adding polite phrasing makes prompts longer, which means more tokens for the AI to process. OpenAI CEO Sam Altman pointed out that all those extra “pleases” and “thank yous” add up in server time and electricity. In fact, he estimated that excess courtesy in prompts has cost OpenAI “tens of millions of dollars” in extra computing power futurism.com. Each word we add for politeness is essentially wasted computation – the AI doesn’t need it to understand the task, yet it must expend energy to process it. At scale, polite fluff becomes a non-trivial drain on resources. This hidden cost aligns with broader concerns about AI’s energy footprint. For example, a study found even generating a short email with AI can consume substantial electricity futurism.com. Politeness isn’t evil, but the study exposes that it has real performance and energy trade-offs in the context of AI prompts.
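To make the token overhead concrete, here is a toy estimate in Python. It uses the rough rule of thumb of about four characters per token – an assumption, since real counts depend on the model’s tokenizer – and the two example prompts are invented for illustration:

```python
# Rough illustration of the extra tokens that polite phrasing adds.
# Assumes the common ~4-characters-per-token heuristic; actual token
# counts depend on the model's tokenizer.

def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per 4 characters."""
    return max(1, len(text) // 4)

polite = ("Hello! I hope you're doing well. Could you please tell me "
          "the capital of France? Thank you so much!")
direct = "Capital of France?"

extra = approx_tokens(polite) - approx_tokens(direct)
print(f"polite ≈ {approx_tokens(polite)} tokens, "
      f"direct ≈ {approx_tokens(direct)} tokens, overhead ≈ {extra}")
```

Multiply that per-prompt overhead by hundreds of millions of daily queries, and the “tens of millions of dollars” figure becomes plausible.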

Research Methodology

The study (titled “Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy”) was rigorously designed to isolate the effect of tone arxiv.org. Researchers created 50 different questions spanning math, science, and history arxiv.org. For each question, they wrote five versions of the prompt: Very Polite, Polite, Neutral, Rude, and Very Rude arxiv.org. All five were semantically asking for the same information, only differing in phrasing/tone. This yielded 250 total prompts (50 questions × 5 tone levels) arxiv.org. They then fed all prompts to the same AI model (a version of GPT-4) under consistent conditions and evaluated the correctness of each answer arxiv.org. By using paired statistical tests across tone variations, the researchers could see if tone alone made a significant difference arxiv.org.

The result was clear: impolite prompts consistently outperformed polite ones in accuracy arxiv.org. Very rude prompts topped the charts (~84.8% correct), while very polite ones lagged (~80.8% correct) arxiv.org. Neutral-tone prompts fell in between. The team double-checked that this difference wasn’t just random chance by using significance tests. Interestingly, these findings contradict a notable 2024 study that found rude wording tended to worsen AI performance and that extreme politeness didn’t help much either decrypt.co. The discrepancy suggests that model behavior shifted between 2024 and 2025. The authors speculate that newer large language models might handle tone differently – focusing on functional meaning over social niceties decrypt.co. In terms of methodology, because the experiment was controlled and systematic, we can be reasonably confident that tone was the key factor causing the performance change.
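The 50-questions-times-5-tones design can be sketched in a few lines of Python. The tone wrapper phrasings below are illustrative stand-ins, not the paper’s actual wording:

```python
# Sketch of the study's design: each base question is rewritten in five
# tones and sent to the same model, then accuracy is tallied per tone.
# The wrapper phrasings here are invented examples, not from the paper.

TONES = ["very_polite", "polite", "neutral", "rude", "very_rude"]

WRAPPERS = {
    "very_polite": "Would you be so kind as to answer: {q} Thank you so much!",
    "polite": "Could you please answer: {q}",
    "neutral": "{q}",
    "rude": "Just answer: {q}",
    "very_rude": "Answer this, no excuses: {q}",
}

def build_prompt_matrix(questions):
    """Return {tone: [prompt, ...]} - five tone variants per question."""
    return {tone: [WRAPPERS[tone].format(q=q) for q in questions]
            for tone in TONES}

matrix = build_prompt_matrix([f"Question {i}" for i in range(1, 51)])
print(sum(len(prompts) for prompts in matrix.values()))  # 50 x 5 = 250
```

With the answers scored per tone, a paired significance test across the five variants of each question isolates tone as the only changing variable.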

Broader Implications

This study’s implications extend beyond just “be rude to get better answers.” Primarily, it highlights that tone is an important variable in human–AI communication, one that we’ve underestimated decrypt.co. Previously, prompt engineering guides stressed clarity of wording and providing context, but didn’t consider politeness as having much impact. Now we see that how you phrase a request – not just what you ask for – can influence an AI’s output quality. Tone, once dismissed as mere etiquette, can subtly affect the model’s behavior decrypt.co. This means prompt engineers and users may need to pay attention to phrasing style as part of optimizing results.

Why would a brusque tone yield a better result? The researchers propose a simple explanation: politeness often makes language indirect decrypt.co. Human courtesy is full of hedging and extra words (e.g. “Could you please possibly…”) which, to a literal-minded AI, are extraneous noise. All that fluff can obscure the core command. In contrast, a curt prompt strips away the padding and ambiguity, making the user’s intent crystal clear decrypt.co. In essence, the AI isn’t “offended” by rude phrasing – it actually has an easier time parsing a direct command. This underscores a fascinating human-machine mismatch: the very words that smooth social interactions for people can muddy the logic for machines decrypt.co.

There are also ethical and behavioral questions. If users learn that being a bit of a jerk to Alexa or ChatGPT gets better results, will it encourage rude behavior more broadly? Society might prefer we keep saying “please” out of habit or to maintain civility, even if the machine doesn’t require it. On the flip side, the study hints that future AI could be designed to handle politeness better – essentially “socially calibrating” models to interpret courteous phrasing without losing accuracy decrypt.co. For now, though, the takeaway is pragmatic: when you need the best output from today’s AI, be concise and direct. You’ll save a bit of energy and likely get a more precise answer. And as Sam Altman quipped, maybe those polite prompts are “tens of millions of dollars well spent” for the sake of good manners futurism.com – but if you want to cut your carbon footprint and clarify your intent, it’s okay to drop the formalities with AI.

Common Mistakes Users Make in AI Prompts

Even aside from politeness, many users unknowingly sabotage their results with poorly crafted prompts. Here are some common prompt-writing mistakes (and why they hurt your AI interactions):

  • Overly Verbose Prompts: Too many words, irrelevant details. While it’s good to be specific, stuffing a prompt with excessive information or rambling text can confuse the AI and dilute the focus godofprompt.ai. Large language models have a limited attention span for each prompt; unnecessary filler makes it harder for the model to figure out what you’re really asking. The fix is to be concise and relevant – include only details that are needed. For example, instead of a 5-sentence polite preamble, get straight to the point. This reduces token usage (saving time/energy) and makes your intent clearer.
  • Unclear or Vague Instructions: Not telling the AI exactly what you need. Prompts that are generic like “Tell me about marketing” often yield generic, surface-level answers godofprompt.ai. The AI isn’t a mind reader; if your request is ambiguous, it will guess or give a broad overview. Always clarify what output you want – specify the topic, scope, format, or angle. Providing a bit more precision turns a vague query into a targeted task. For instance, “Tell me about marketing” could become “Explain three key social media marketing strategies for a small bakery.” The latter gives the AI a clear direction, leading to a far more useful response godofprompt.ai.
  • Overly Indirect Language: Hedging, courtesy, and roundabout questions. As the Penn State study showed, beating around the bush with polite or indirect phrasing can introduce ambiguity decrypt.co. Phrases like “Could you maybe help me with…” might make the AI pause – what exactly is the task? It’s better to use direct, action-oriented instructions. For example, instead of “I was wondering if you could possibly provide some advice on budgeting,” say “Provide five budgeting tips for a college student.” You can still be polite in tone, but avoid unnecessary softenings that cloud the core request. The model isn’t judging your politeness; it’s parsing your words for a command.
  • Missing Context or Role: No background or perspective given. AI models answer best when you set the scene for them godofprompt.ai. If you don’t mention the audience, tone, or role, the AI might default to a generic style or make false assumptions. For example, asking “Explain quantum computing” with no context might result in a highly technical explanation – not great if your audience is a high school class. Providing context like “Explain quantum computing to a high school student” or assigning a role (“Act as a friendly science teacher…”) yields a much more tailored answer godofprompt.ai. Include any relevant details such as the target reader, the level of detail, format (bullet points, essay, etc.), or specific points you want covered. This guides the AI to produce exactly what you need.
  • Multiple Questions at Once (Overloading): One prompt, many tasks. If you ask a chatbot to do too many things in one go (“Summarize this article and critique its arguments and translate the summary to Spanish”), it may do some parts poorly or get overwhelmed godofprompt.ai. Complex or compound requests can confuse the model or lead to incomplete answers. It’s often better to break complex tasks into steps. You could first ask for a summary, then separately ask for a critique, then a translation. Alternatively, clearly structure a single prompt in sections (Summary vs. Analysis) so the AI can see the distinct pieces. Don’t dump an entire project in one long prompt without structure. When the AI’s instructions are focused and singular, it will respond more coherently.
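As a playful illustration, the pitfalls above could even be checked mechanically. The following toy prompt “linter” uses made-up phrase lists and thresholds – heuristics invented for this sketch, not rules from the study or any tool:

```python
# A toy prompt "linter" flagging the common pitfalls described above.
# The phrase lists and thresholds are illustrative heuristics only.

HEDGES = ["could you maybe", "i was wondering", "possibly",
          "if it's not too much trouble"]
VAGUE_OPENERS = ["tell me about", "help me with", "explain this"]

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for common prompt-writing mistakes."""
    p = prompt.lower()
    warnings = []
    if any(hedge in p for hedge in HEDGES):
        warnings.append("indirect: drop hedging, state the task directly")
    if any(p.startswith(opener) for opener in VAGUE_OPENERS):
        warnings.append("vague: specify topic, scope, and desired format")
    if len(prompt.split()) > 150:
        warnings.append("verbose: trim irrelevant detail")
    if prompt.count("?") + prompt.count(" and ") > 3:
        warnings.append("overloaded: split into separate requests")
    return warnings

print(lint_prompt("Could you maybe help me with marketing?"))
```

A direct, specific prompt such as “Explain three social media strategies for a small bakery.” passes all four checks cleanly.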

By avoiding these common pitfalls – verbosity, vagueness, indirectness, lack of context, and overloading – you set your AI assistant up for success godofprompt.ai. The theme across all these mistakes is that the onus is on us to communicate clearly. A well-written prompt saves time (fewer back-and-forth clarifications) and leads to more accurate, relevant answers. Next, let’s see how this plays out with some concrete examples of bad vs. good prompts in different professional domains.

Examples of Well-Structured vs. Poorly Written Prompts by Field

To illustrate the difference that good prompting makes, here are examples across various fields. Each pair shows a poor prompt (ambiguous, too brief or too wordy, lacking specifics) and a better prompt addressing the same need with clarity and sufficient detail.

Business (Marketing & Finance)

Marketing Example:
Poor Prompt: “Write a marketing plan for our product.”
Better Prompt: “You are a marketing consultant. Outline a 3-month marketing strategy for launching our new organic energy drink, including the target audience (health-conscious adults), key messaging, recommended channels (social media, events, etc.), and a rough budget allocation for each channel.”

Finance Example:
Poor Prompt: “Explain these financial numbers.”
Better Prompt: “Act as a financial analyst. Analyze the Q2 financial statement for XYZ Corp (provided above) and provide a short report highlighting the company’s revenue and profit trends, any notable expense changes or issues, and overall financial health. Finish with any risks or red flags you observe in the data.”

Education (Lesson Planning, Grading & Tutoring)

Lesson Planning Example:
Poor Prompt: “Make a lesson plan about photosynthesis.”
Better Prompt: “Assume you’re a 5th-grade science teacher. Create a 45-minute lesson plan on photosynthesis for 10-year-olds. Include: learning objectives, a brief introduction to photosynthesis, a hands-on activity or demonstration, and 3-5 simple quiz questions to assess understanding at the end.”

Grading Example:
Poor Prompt: “Grade this essay.”
Better Prompt: “You are a high school English teacher. Evaluate a student’s 500-word essay on the causes of World War I. Provide a letter grade (A–F) and 2-3 sentences of feedback noting strengths and weaknesses in the essay’s clarity, accuracy, and writing style. Be constructive and specific in your comments.”

Tutoring Example:
Poor Prompt: “Help me with a math problem.”
Better Prompt: “You are a math tutor. Explain step-by-step how to solve this quadratic equation: 2x^2 – 4x + 1 = 0. Start by identifying the coefficients, then show how to apply the quadratic formula, and finally simplify the solution. Provide the final answers for x and a brief explanation of each step in simple terms.”
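For reference, the step-by-step solution the better prompt asks for works out as follows (a quick check in Python):

```python
# Worked example for the tutoring prompt above: solve 2x^2 - 4x + 1 = 0
# with the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a).
import math

a, b, c = 2, -4, 1                    # identify the coefficients
disc = b**2 - 4*a*c                   # discriminant: 16 - 8 = 8
x1 = (-b + math.sqrt(disc)) / (2*a)   # x = 1 + sqrt(2)/2
x2 = (-b - math.sqrt(disc)) / (2*a)   # x = 1 - sqrt(2)/2
print(round(x1, 4), round(x2, 4))     # ≈ 1.7071 0.2929
```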

Software Development (Debugging & Code Generation)

Debugging Example:
Poor Prompt: “There’s a bug in my code. How do I fix it?”
Better Prompt: “You’re a software engineer. I have a Python script that should calculate the average of a list, but it’s throwing a TypeError on line 10. Here is the code (below). Identify the cause of the error and explain how to fix the bug. Provide the corrected code if possible and an explanation in a few sentences.”

(The better prompt above would include the actual code snippet for context, which helps the AI pinpoint the bug. The key is that we clearly stated the error message and what we need – the cause and the fix.)

Code Generation Example:
Poor Prompt: “Write a Python script for sales.”
Better Prompt: “Write a Python script that reads a CSV file containing monthly sales data and calculates total sales by region. The script should then output a summary table of each region’s total sales. Include comments in the code explaining each major step. Assume the CSV has columns: Region, Month, Sales. Make sure to handle any potential file read errors gracefully.”
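For comparison, here is one plausible script a model might produce for the better prompt above – a minimal sketch, including the graceful error handling the prompt requests (the sample data is invented):

```python
# One possible answer to the "better prompt" above: total sales by
# region from a CSV with columns Region, Month, Sales.
import csv
from collections import defaultdict

def total_sales_by_region(path: str) -> dict:
    """Read the CSV at `path` and sum the Sales column per Region."""
    totals = defaultdict(float)
    try:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["Region"]] += float(row["Sales"])
    except (OSError, KeyError, ValueError) as err:
        # Handle missing files or malformed rows gracefully.
        raise SystemExit(f"Could not process {path}: {err}")
    return dict(totals)

# Demo: build a tiny sample CSV, then print the summary table.
with open("sales.csv", "w", newline="") as f:
    csv.writer(f).writerows([
        ["Region", "Month", "Sales"],
        ["North", "Jan", "1200"], ["North", "Feb", "900"],
        ["South", "Jan", "700"],
    ])

for region, total in total_sales_by_region("sales.csv").items():
    print(f"{region:<10} {total:>10.2f}")
```

Notice how each requirement in the prompt (CSV columns, summary table, comments, error handling) maps to a concrete piece of the code – that is what specificity buys you.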

Legal (Contract Summary & Research)

Contract Summary Example:
Poor Prompt: “Summarize this contract.”
Better Prompt: “You are a legal assistant. Summarize the key terms of a 10-page employment contract between a company and a new employee. Focus on the employee’s role and responsibilities, salary and benefits, non-compete or confidentiality clauses, and the conditions under which the contract can be terminated. Present the summary in plain English for a layperson to understand, using bullet points.”

Legal Research Example:
Poor Prompt: “Find court cases about free speech in schools.”
Better Prompt: “As a legal researcher, provide three landmark U.S. Supreme Court cases related to free speech in public schools. For each case, include the case name and year, and a one-sentence summary of the core issue and the ruling (e.g. what principle the Court established).”

(The improved prompt above is specific about the jurisdiction (U.S. Supreme Court), the context (free speech in schools), and the format (three cases with brief summaries), which guides the AI to produce a focused, useful answer.)

Creative Writing (Story & Poetry)

Story Generation Example:
Poor Prompt: “Tell me a story.”
Better Prompt: “Compose an original short story (~300 words) in the style of a classic fairy tale. The story should be about a young dragon who wants to become a painter. Include a clear beginning, middle, and end, and end with a moral or lesson (e.g. about following your dreams or being yourself). Write in a whimsical, child-friendly tone.”

Poetry Example:
Poor Prompt: “Write a poem about sadness.”
Better Prompt: “Write a free-verse poem that conveys the feeling of isolation in a crowded city. Use vivid imagery and emotional language to show the contrast between being surrounded by people and feeling alone. The poem should be at least 8 lines long. (Feel free to be creative with metaphors – imagine how the city itself might speak about loneliness.)”

Journalism (Headline & News Synthesis)

Headline Generation Example:
Poor Prompt: “Give me a headline for a tech article.”
Better Prompt: “You are a news editor. Create a catchy, informative headline (under 12 words) for an article about a new renewable energy startup that just secured $50 million in funding. The headline should highlight the startup and the significant investment. Avoid clickbait, but make it engaging.”

Fact Synthesis Example:
Poor Prompt: “Provide facts on climate change and coral reefs.”
Better Prompt: “Summarize the key findings of three recent scientific studies on how climate change is impacting coral reefs, and synthesize them into a brief news report (around 200 words). In the summary, mention each study’s finding (e.g. reef bleaching rates, ocean temperature effects, etc.) and cite the source or journal name for each study. Write it in a neutral, journalistic tone, as if reporting facts in a news article.”

Data Analysis (Chart Explanation & Forecasting)

Chart Explanation Example:
Poor Prompt: “Explain this sales chart.”
Better Prompt: “You are a data analyst. We have a bar chart comparing monthly sales of Product A vs Product B for the past year (Jan–Dec). Describe the trends and insights shown in the chart: compare the two products’ sales performance each month, identify any seasonal patterns (e.g. higher sales in summer), and point out which product outperformed the other overall. Provide a concise explanation as if briefing a sales team.”

Forecasting Example:
Poor Prompt: “Predict next quarter’s sales.”
Better Prompt: “Act as a data scientist. Our company’s sales have been growing ~5% each month for the past year (we went from $100k in January to $170k by December). Forecast the sales for the next 3 months (Jan-Mar of the next year) assuming this growth rate continues, and briefly explain your reasoning. Provide the projected sales numbers for each of the three months. (You can assume compound 5% growth each month on the last known value.)”
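The arithmetic the better prompt asks for is easy to verify – compounding 5% monthly on the December figure of $170k:

```python
# The forecast the better prompt asks for: compound 5% monthly growth
# applied to the last known value ($170k in December).
last = 170_000.0
projections = []
for month in ["Jan", "Feb", "Mar"]:
    last *= 1.05                      # 5% growth on the prior month
    projections.append((month, round(last)))
print(projections)  # [('Jan', 178500), ('Feb', 187425), ('Mar', 196796)]
```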


In all the examples above, notice how the better prompts provide clarity, context, and constraints. They specify the role or perspective for the AI (“you are a data scientist” or “act as a financial analyst”), include relevant details (the type of product, the audience, the specific task), and outline the desired format or focus. The poor prompts, by contrast, are short or vague, lacking in direction. By avoiding verbosity but adding necessary specifics, the improved prompts set the AI up to give a much more useful and targeted response.

Bottom Line: A recent study reinforces that when it comes to communicating with AI, clarity trumps courtesy. Being excessively polite or indirect isn’t just a harmless quirk – it can waste energy and reduce the accuracy of the answers you get futurism.com, decrypt.co. The most effective prompts are clear, concise, and direct about what you want. This doesn’t mean you have to literally insult the AI or be rude in a human sense; it means dropping superfluous polite phrasing and getting to the point. Coupling this directness with good prompting practices – providing context, specifying the task, and structuring your request – will consistently yield better results. In sum, don’t fear being straightforward with AI. You’ll save time (and a few CPU cycles) and get answers that more precisely meet your needs, which is a win-win for you and the machines. godofprompt.ai, decrypt.co

Sources: Penn State research on prompt tone and accuracy decrypt.co, arxiv.org; Sam Altman’s comments on AI prompt politeness and energy use futurism.com; AI prompting best-practice guides godofprompt.ai; and various domain-specific prompt engineering examples (created for this answer).

Artur Ślesik

I have been fascinated by the world of new technologies for years – from artificial intelligence and space exploration to the latest gadgets and business solutions. I passionately follow premieres, innovations, and trends, and then translate them into language that is clear and accessible to readers. I love sharing my knowledge and discoveries, inspiring others to explore the potential of technology in everyday life. My articles combine professionalism with an easy-to-read style, reaching both experts and those just beginning their journey with modern solutions.
