The first few weeks I used ChatGPT, I thought it was overrated. I’d ask something, get a generic response, think “that’s it?” and close the tab. Then I saw a friend use it differently — he gave it context, constraints, a specific format — and got something genuinely useful in one shot. I’d been using it like a bad search engine. He was using it like a collaborator.
That’s what prompt engineering actually is. Not some mystical skill. Just knowing how to communicate with an AI clearly enough that it gives you what you actually want.
Why your prompts are probably failing
Most bad AI outputs are bad prompts in disguise. When you get something generic, off-topic, or far too long, it's usually because the prompt was vague. AI language models work by completion: they predict the most likely continuation of whatever text you give them. If your prompt is vague, the most likely continuation is… generic.
Specific inputs produce specific outputs. It’s really that simple, even if applying it takes some practice.
The five things every good prompt should have
I’ve broken this down over time into five elements. Not every prompt needs all five, but knowing when to include each one is the core skill.
1. Role
Tell the AI who to be. “You are an experienced tax consultant in India.” “You are a friendly high school science teacher.” “You are a no-nonsense editor who gives direct feedback.” This sets the tone and expertise level for everything that follows.
This might sound silly — like telling a calculator it’s an accountant — but it genuinely changes the output. The AI’s “character” shifts based on the role you assign.
2. Task
What do you actually want? Be specific. Not “write something about AI” but “write a 300-word introduction to a blog post about AI image generators, targeted at Indian content creators who’ve never used AI tools before.”
The task should answer three questions: what kind of output, how long, and for what purpose.
3. Context
Background information the AI needs to give you a relevant answer. If you’re asking for help writing an email to your boss, tell it what the email is about, your relationship with your boss, what outcome you want. If you’re asking for business advice, give it your industry, location, resources.
The more relevant context you give, the less generic the response.
4. Format
How should the response be structured? “In bullet points.” “As a numbered step-by-step guide.” “As a short paragraph with no more than 5 sentences.” “In a table with three columns.” The AI will follow formatting instructions if you give them.
5. Constraints
What to include and what to avoid. “Don’t use technical jargon.” “Avoid American cultural references — the audience is Indian.” “Keep it under 200 words.” “Include at least three specific examples.” Constraints sharpen the output dramatically.
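If you like thinking in code, the five elements can be sketched as a simple template. This is just my own illustration, not any official API: the function name and field labels (role, task, context, fmt, constraints) are invented for this sketch.

```python
def build_prompt(role, task, context=None, fmt=None, constraints=None):
    """Assemble the five elements into a single prompt string.

    Only role and task are required; the optional elements are
    included when you have something useful to put in them.
    """
    parts = [f"You are {role}.", task]
    if context:
        parts.append(f"Context: {context}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)


# Example: the tax-consultant prompt from the Role section, filled out.
prompt = build_prompt(
    role="an experienced tax consultant in India",
    task="Explain the new tax regime to a first-time salaried filer.",
    context="The reader is 24 and has never filed a return before.",
    fmt="A numbered step-by-step guide.",
    constraints="No jargon. Keep it under 300 words.",
)
print(prompt)
```

The point of writing it out like this is that it makes the optional elements visible: you can see at a glance which ones you've left empty, which is usually where a vague prompt went wrong.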
Real examples: bad prompts vs good prompts
Let me show you what this looks like in practice.
For writing help:
Bad: “Write a blog post about productivity.”
Good: “You’re an experienced writer with a casual, conversational style. Write a 600-word blog introduction for an article about productivity tools for Indian freelancers in 2026. The tone should be friendly and relatable — like advice from a friend, not a business textbook. Start with a personal story or specific situation. The audience is 25-35 year olds, mostly in tier 1 and tier 2 Indian cities.”
For coding help:
Bad: “Help me with Python.”
Good: “I’m getting a TypeError in my Python script. I’m a beginner, about 3 months into learning Python. Here’s the code: [paste code]. Here’s the error message: [paste error]. Please explain what’s causing the error in simple terms, then give me the corrected code with comments explaining what changed.”
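To make that concrete, here's a hypothetical example of the kind of snippet and error a beginner might paste into that prompt, along with the fix they'd hope to get back (the variable names are invented for illustration):

```python
# A classic beginner mistake: concatenating a string with an int raises
# TypeError: can only concatenate str (not "int") to str.
age = 25
# message = "I am " + age + " years old"   # <- this line fails

# The fix the AI should explain: convert the number to a string first.
message = "I am " + str(age) + " years old"
print(message)  # I am 25 years old
```

Notice how much the prompt benefits from including both the code and the exact error message: the AI doesn't have to guess which line failed or why.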
For research:
Bad: “Tell me about climate change.”
Good: “Explain the economic impact of climate change on Indian agriculture over the next 20 years. Focus specifically on water scarcity and crop yield projections. I need this for a Class 12 geography project. Summarise in 400 words with clear headings. Include two or three specific statistics.”
Useful prompt patterns to memorise
These are patterns I reuse constantly. Once you learn them, they become second nature.
“Act as [role]. I need you to [task]. Context: [background]. Output should be [format]. Avoid [constraints].”
That structure handles about 70% of prompting needs.
The “step by step” trick: Add “think step by step” or “explain your reasoning before answering” to complex questions. This forces the AI to work through the problem more carefully and usually produces more accurate results.
The “improve this” pattern: Write your own draft first, then paste it with “improve this text. Keep the core ideas but make it more [adjective]. Don’t change the main structure.” This is often better than asking the AI to write from scratch.
The “devil’s advocate” pattern: “I’m thinking about [decision]. Give me the strongest arguments against doing this.” Forces the AI to surface the counterarguments you might be missing.
Common mistakes people keep making
One-shot thinking. People ask one question, get one response, and if it’s not perfect, they give up. Prompting is a conversation. The first response is a starting point, not a final answer. Ask follow-ups. Say “now make it shorter” or “rewrite the second paragraph to be more specific.”
No context. “Write a professional email” gives the AI nothing to work with. Who is this email to? What’s it about? What outcome do you want? What’s your relationship with the recipient? Context is not optional.
Too much in one prompt. “Write me a complete business plan for a tech startup including market analysis, financial projections, marketing strategy, and team structure” is asking for too much at once. Break it down. Do one section at a time.
Not correcting the AI. If the response is wrong or goes in the wrong direction, say so. “That’s not what I meant. I want X, not Y.” AI models respond to correction — they’re not offended and they don’t need you to be polite about it.
Does prompt engineering work differently across AI tools?
Somewhat. ChatGPT, Claude, and Gemini all respond to the principles above, but they have different defaults. Claude tends to be more literal and careful — if you give it constraints, it follows them closely. ChatGPT sometimes adds extra content you didn’t ask for. Gemini can be more conservative.
The fundamentals — specificity, context, format instructions — work across all of them. If you master these on ChatGPT, you’ll be 80% of the way to using them effectively on any AI tool.
Start practising right now
Here’s a simple exercise. Take something you’ve been putting off — an email you need to write, a document to summarise, a concept you need explained. Write a bad prompt first, see what you get. Then rewrite the prompt using the five-element framework. Compare the two outputs.
That’s how you learn this. Not by reading about it, but by doing it and seeing the difference. Half an hour of practice beats any amount of theory.