What Is a Prompt?
A practical, slightly nerdy guide to prompts: what they are, types and formats (including JSON), how to write great ones, and why good prompting works with LLMs.

A prompt is the message you give an AI model to make it do something specific. It can be a single sentence, a carefully structured instruction, or a rich bundle of context, examples, and constraints. If AI is software that writes software on demand, then the prompt is your specification - the tighter the spec, the better the output.
This article unpacks prompts from first principles: what they are, the main types (including structured formats like JSON), why good prompts work at a technical level, and how to write them so your model acts like a focused collaborator instead of a distracted intern.
The Short Answer
A prompt is input text plus intent. It tells the model:
- What you want (task + desired form),
- How you want it (tone, style, length, formatting),
- With what context (data, constraints, examples),
- For whom (audience, use case, reading level).
Everything else in this guide turns those four pillars into reliable habits.
Anatomy of a Prompt
Even informal prompts hide a structure. When you’re deliberate, that structure becomes visible and repeatable.
- Instruction: The imperative verb that anchors the work. “Summarize…”, “Draft…”, “Refactor…”, “Compare…”.
- Constraints: Word counts, formats, policies, do/don’t lists, acceptance criteria.
- Context: Background facts, source snippets, domain assumptions, inputs to transform.
- Examples: One or more demonstrations of what “good” looks like.
- Output spec: A crisp description (or schema) of the final form.
A reliable mental model
Write prompts like tickets to a very fast, very literal teammate. Clarity beats cleverness.
Types of Prompts
1) Freeform (Natural Language)
You describe what you want in plain language. Great for quick ideation or low-stakes tasks. Not great when precision matters.
You are a helpful assistant. Draft a 150-word email inviting workshop participants to a 45-minute follow-up Q&A on Friday. Keep the tone upbeat but concise. End with a one-line RSVP instruction.
2) Role + Objective
You assign a role to set expectations and scope. This reduces ambiguity.
You are a senior technical writer.
Task: Convert the following engineering notes into a crisp changelog entry for customers.
Constraints: 120–150 words, non-breaking changes first, action verbs, no internal codenames.
Notes:
- New OAuth scopes: read:invoices, write:invoices
- Deprecated legacy CSV export endpoint
- Minor bug fixes in retry logic
3) Stepwise (Decomposed)
You explicitly break the job into stages. This helps the model maintain structure.
Goal: Produce a brief, plain-English threat model for the feature below.
Steps:
1) List assets and actors.
2) List likely threats (STRIDE-style).
3) Prioritize the top three with mitigations.
Format: Numbered sections, <= 200 words each.
Feature summary:
<your summary here>
4) Few-shot (Show, Don’t Tell)
You demonstrate the desired mapping with examples the model can imitate.
Task: Rewrite in a "TL;DR + bullet points" style.
Example:
Input: "Our deployment cutover will start at 23:00 UTC…"
Output:
TL;DR: Deployment starts at 23:00 UTC; expect 10–15 min downtime.
• Phase 1: DB migration (5–7 min)
• Phase 2: API restart (2–3 min)
• Phase 3: Cache warmup (3–5 min)
Now rewrite:
"Our release window is 21:00–22:00 UTC but we may extend by 10 minutes…"5) Structured Prompts (JSON, YAML, XML)
5) Structured Prompts (JSON, YAML, XML)
When your downstream system expects machine-readable output, structure your prompt and require schema-valid replies. This is essential for automation.
{
  "role": "product copywriter",
  "task": "Write three variant headlines for the product card below.",
  "constraints": {
    "length": "max 60 characters",
    "style": "warm, concrete, benefit-first",
    "banned_words": ["revolutionary", "cutting-edge"]
  },
  "product": {
    "name": "CloudBack",
    "value_prop": "One-click encrypted backups for small teams"
  },
  "output_schema": {
    "type": "object",
    "properties": {
      "headlines": {
        "type": "array",
        "items": { "type": "string", "maxLength": 60 }
      }
    },
    "required": ["headlines"]
  }
}
6) Constrained Output (Schema First)
Ask the model to return only a specific shape. This is a lifesaver for pipelines.
Return ONLY valid JSON matching this schema:
schema:
- top_insight: string (<=140 chars)
- actions: array of 3 concise, imperative strings
- risk_level: one of ["low","medium","high"]
context: <paste your notes here>
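On the consuming side, don’t trust the shape by eye; validate it before it enters your pipeline. A minimal Python sketch using the standard json module plus the third-party jsonschema package (the schema mirrors the prompt above; check_reply is an illustrative helper name):
import json
import jsonschema  # pip install jsonschema

SCHEMA = {
    "type": "object",
    "properties": {
        "top_insight": {"type": "string", "maxLength": 140},
        "actions": {"type": "array", "items": {"type": "string"}, "minItems": 3, "maxItems": 3},
        "risk_level": {"enum": ["low", "medium", "high"]},
    },
    "required": ["top_insight", "actions", "risk_level"],
}

def check_reply(reply_text):
    # Fails fast if the model wrapped the JSON in prose or drifted from the schema.
    data = json.loads(reply_text)
    jsonschema.validate(data, SCHEMA)
    return data
If validation fails, a common pattern is to re-prompt with the validator's error message attached and ask for a corrected reply.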
7) Tool- and API-Oriented Prompts
If your app routes model outputs to tools, make that explicit. You can combine instructions with function schemas so the model fills arguments precisely.
{
  "instruction": "Propose a meeting time next week for 30 minutes.",
  "tool": {
    "name": "create_calendar_event",
    "arguments": {
      "title": "string",
      "duration_minutes": "number",
      "participants": "array<string>",
      "time_window": "object{start_iso,end_iso}"
    }
  },
  "context": {
    "participants": ["marta@acme.com", "lee@acme.com"],
    "time_window": { "start_iso": "2025-11-03T08:00:00Z", "end_iso": "2025-11-07T16:00:00Z" }
  }
}
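When the reply comes back, your application parses the arguments and calls the real function. A minimal Python dispatch sketch, assuming the model returns plain JSON of the form {"tool": "create_calendar_event", "arguments": {...}} (the handler below is a hypothetical stand-in for your calendar integration):
import json

def create_calendar_event(title, duration_minutes, participants, time_window):
    # Hypothetical local handler; swap in your real calendar API call.
    return f"Booked '{title}' ({duration_minutes} min) for {', '.join(participants)}"

TOOLS = {"create_calendar_event": create_calendar_event}

def dispatch(model_reply):
    call = json.loads(model_reply)  # e.g. {"tool": "...", "arguments": {...}}
    handler = TOOLS.get(call["tool"])
    if handler is None:
        raise ValueError(f"Unknown tool: {call['tool']}")
    return handler(**call["arguments"])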
How to Write a Great Prompt
Be explicit about success
Define what “done” looks like. Models are trained to be helpful and verbose; your job is to be specific and bounded.
- Replace “Write about X” with “Produce a 120–150 word overview of X for non-technical managers, ending with a single recommended next step.”
Ground the model in context
LLMs guess the next token from prior tokens. If key facts are missing, they will invent plausible ones. Paste the source context you want it to rely on and tell it to ignore external knowledge if necessary.
Use ONLY the facts in the context below. If a fact is missing, write: "Unknown based on provided context."
Context:
<paste excerpt, table, or bullet notes here>
Control the format early
Specify the output shape before the content. If you need JSON, lead with that. If you need a doc section, declare the headings up front.
Name the audience
“Write for procurement managers who skim.” That one sentence reshapes tone, jargon, and structure.
Use examples strategically
One or two tight examples beat a dozen messy ones. Keep your examples close to the current task to avoid style drift.
Call out anti-goals
If there are things you don’t want (“no metaphors”, “no markdown”), say so plainly.
Iterate with a loop
Great prompts are rarely perfect on the first try. Adopt a micro-loop: draft → critique → refine. You can ask the model to self-critique against your acceptance criteria, then fix its own work.
Critique your previous answer against these acceptance criteria. List mismatches, then revise.
Acceptance criteria:
- <= 200 words
- No acronyms without expansion
- Must end with 3 concrete next steps
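If you orchestrate this loop in code rather than by hand, the pattern is only a few lines. A minimal Python sketch, where call_model is a hypothetical stand-in for whatever client you use to send a prompt and receive text:
def refine(call_model, task_prompt, criteria, rounds=2):
    draft = call_model(task_prompt)
    for _ in range(rounds):
        critique_prompt = (
            "Critique your previous answer against these acceptance criteria. "
            "List mismatches, then output only the revised answer.\n"
            "Acceptance criteria:\n- " + "\n- ".join(criteria) + "\n\n"
            "Previous answer:\n" + draft
        )
        draft = call_model(critique_prompt)
    return draft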
Manage length and memory
Models have token limits; long prompts can push important details out of the context window. Prefer compact, high-signal context. If your prompt grows, move background material to an appendix and tell the model exactly when to pull from it.
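A rough guard in code helps you notice the problem before quality drops. A minimal Python sketch using the common rule of thumb of roughly four characters per token for English prose (an approximation only; use your provider's tokenizer for exact counts, and set the budget to your model's actual window):
def approx_tokens(text):
    # Very rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def fits_budget(*prompt_parts, budget=8000):
    # budget is an assumed context limit; adjust to your model.
    return sum(approx_tokens(part) for part in prompt_parts) <= budget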
Why Good Prompting Works (A Bit Technical)
Large Language Models (LLMs) are probabilistic sequence models trained to predict the next token given previous tokens. Through massive pretraining - and often instruction tuning and preference optimization - they internalize patterns of instructions and responses. Good prompts work because they optimize the conditioning:
- Instruction tuning alignment: Many modern LLMs are fine-tuned on (instruction, response) pairs. Clear imperatives, roles, and constraints mirror that training distribution, raising the likelihood of an aligned answer.
- Attention as routing: Transformers use attention to weigh tokens. Headline constraints (“Return ONLY valid JSON…”) elevate those tokens’ influence during generation, guiding structure.
- Inductive bias via examples: Few-shot demonstrations create local priors. By showing “here’s the mapping,” you reduce ambiguity and stabilize style.
- Error surfaces and search: Iterating with critique-and-revise gives the model fresh conditioning signals (the “errors”), nudging it toward your acceptance region without retraining.
- Schema pressure: Declaring schemas converts the problem from open-ended prose to slot-filling within a narrow manifold, which is inherently easier for a next-token predictor.
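In notation, a decoder-only model samples each token from p(y_t | prompt, y_<t), so the whole reply is drawn from p(y | prompt) = ∏_t p(y_t | prompt, y_<t). Roles, constraints, examples, and schemas all live inside the prompt, which is why they directly reshape which continuations are most likely.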
You don’t need to be a researcher to benefit from these principles. You only need to shape the text window so the model’s most likely continuation is also your desired output.
Common Pitfalls (And How to Avoid Them)
- Vagueness: “Write something about…” → Replace with role, audience, length, and acceptance criteria.
- Context gaps: If you don’t paste facts, expect guesswork. Include source snippets; forbid external knowledge when needed.
- Format drift: If you request JSON at the end, you’ll often get prose first. Lead with the schema and “return ONLY JSON.”
- Over-constraining: Too many rules can conflict. Start minimal, then add rules to correct observed failure modes.
- Evaluation blind spots: Decide ahead of time how you’ll judge outputs (regex checks, schema validation, checklists) and tell the model those rules; a minimal checker sketch follows this list.
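Those checks don’t need to be fancy. A minimal Python sketch of pre-acceptance gates (the specific rules are illustrative, borrowed from examples earlier in this article):
import re

def find_violations(text):
    problems = []
    if len(text.split()) > 200:
        problems.append("over 200 words")
    if re.search(r"\b(revolutionary|cutting-edge)\b", text, re.IGNORECASE):
        problems.append("contains a banned word")
    if len(re.findall(r"^\s*\d+[.)]", text, re.MULTILINE)) < 3:
        problems.append("fewer than 3 numbered next steps")
    return problems  # an empty list means the output passes
Share the same rules with the model in the prompt, so it aims at the target you will actually measure.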
Prompts for Different Jobs
A world-class marketer, developer, or analyst doesn’t ask the same thing the same way. Tailor your prompt to the work:
- Programming: Demand compilable code blocks, ask for tests, set language version, and require comments on tradeoffs.
- Research: Ask for claims with citations, require a table of evidence gaps, and demand a final “confidence & limitations” section.
- Ops: Require checklists, SLAs, runbooks, and escalation paths.
- Sales: State ICP details, objection libraries, and personalization tokens; ask for variants for A/B tests.
Turning Prompts into Practice with PractiqAI
Prompts become skills when you practice them against clear tasks with objective checks. That’s the idea behind PractiqAI: you’re given a task with concrete conditions, you craft the prompt, a judge model verifies whether the output meets the criteria, and you iterate with feedback. Courses are grouped by difficulty and role, and completing them unlocks certificates that reflect real capability growth - because the tasks map to actual work outputs, not trivia.
In a typical flow, you’ll see the input box, send your prompt, watch the streamed response, get judged against success conditions (and subtasks for bonus points like “return only valid code”), and - after a few attempts - peek at an exemplar “perfect prompt” to compare your approach. This loop makes the anatomy of a strong prompt feel natural and measurable.
Reusable Prompt Templates
Use these as starting points and adapt them to your domain.
Task template (general)
Role: <who are you?>
Task: <what to produce?>
Audience: <who will read/use this?>
Constraints: <length, tone, policy, banned words>
Context: <pasted facts, data, excerpts>
Examples: <optional, 1–2 tight demos>
Acceptance criteria:
- <bullet 1>
- <bullet 2>
Output: <format spec: JSON schema | markdown sections | code only>
JSON-first template (automation)
Return ONLY valid JSON matching this schema:
{
  "type": "object",
  "properties": {
    "summary": { "type": "string", "maxLength": 200 },
    "decisions": { "type": "array", "items": { "type": "string" } },
    "next_steps": { "type": "array", "items": { "type": "string" }, "minItems": 3, "maxItems": 3 }
  },
  "required": ["summary","next_steps"]
}
Context (authoritative; do not use external knowledge):
<meeting notes here>
Self-critique + revise loop
Critique your previous answer against these rules:
1) Follows the schema exactly (no extra fields).
2) Uses only facts in Context.
3) Contains <= 200 characters in "summary".
List violations, then output a corrected version that passes all rules.
A Quick Checklist Before You Hit Enter
- Do I define task, audience, constraints, output?
- Did I paste the source context I want used (and forbid outside facts if needed)?
- Is the format specified at the top?
- Are there examples for tricky styles?
- Did I include acceptance criteria that can be checked?
- Have I kept it concise enough to fit the model’s window?
Final Thought
Prompts are not magic spells; they’re good specifications. When you learn to express intent clearly, share just enough context, and pin down the format, language models become dependable co-workers. Practice on real tasks, measure the outcomes, and iterate. That’s how prompting turns from a parlor trick into a career advantage.
Ready to train? Pick a task, write the prompt, and let the feedback loop sharpen your craft.
Paweł Brzuszkiewicz
PractiqAI Team
PractiqAI designs guided drills and feedback loops that make learning with AI feel like muscle memory training. Follow along for product notes and workflow ideas from the team.