Confession ✨
I almost pulled the Procrastination card on this one. 🤏 Turns out, I probably skipped ahead in the Copilot-prompting Olympics, forgetting some folks are still stretching at the starting line. But I also promised one more round, and there’s still way too much I haven’t shared.
So, here’s the plan 🧭: This post is a bit of a sampler platter — perfect for anyone tinkering with GitHub Copilot, whether you’re just getting started or you’re already deep into prompt experimentation. Maybe you’ve been following along and something still feels a little off, or maybe you found this post while hunting for new tricks to unlock a work project (or nudge your personal bucket list a little closer to “done”). No matter how you landed here, there’s something for you.
If you’re not there yet — don’t worry! Just give it a try, see what happens, and check back next week for a completely new (and much less terrifying) approach. I'm always cooking up new ideas or finding inventive ways to break old ones 😇
Quick Refresher — PRIOR Prompting
If you just landed here, start with Part 1 (seriously — it’ll make this way less confusing). Not for the page views, but because everything here builds on earlier concepts. If you’re already up to speed, skim the refresher below and dive right in!
The PRIOR System for Reusable Prompts 🪄
Here’s my not-so-secret sauce for writing prompts that actually work:
- Persona – Give Copilot a role to play (the weirder, the better)
- Requirements – What does “done” look like? Be crystal clear.
- Impediments – Constraints that keep Copilot out of the ditch.
- Output – How and what you want returned. Structure = repeatable.
- References – Real, honest-to-goodness examples, both wins and fails.
🦄 Yes, I made it up. And yes, it works! If you manage to break it, message me for a virtual coffee and a gold star. ☕️🌟
Creativity welcome, shenanigans encouraged!
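Want to see all five letters in one place before diving back in? Here’s a bare-bones sketch of a PRIOR prompt. The grumpy release manager and the release-notes task are just stand-ins, so swap in whatever you’re actually building:

```markdown
Persona: You are a grumpy release manager who has read every changelog ever written.
Requirements: Turn the changes I paste below into release notes, grouped into features, fixes, and chores.
Impediments: Do not invent changes that are not in the input. No marketing fluff.
Output: A markdown bullet list, one line per change, each starting with an emoji.
References: Good: "🐛 Fix timeout on login retry". Bad: "Misc improvements and bug fixes".
```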
Start Fresh, Then Optimize 🕵️‍♂️
If you’ve gone through all the PRIOR steps and Copilot is still giving you answers that feel...off (“I said blue sky — that is a sky, but why is it purple?”), don’t panic! Before you go on a prompt-fixing spree, try these quick checks first:
Truth bomb: 99% of Copilot “failures” are just crummy (or missing) repo instructions. Don’t believe me? Treat it like the “Copilot Easter Egg Hunt”: if you run into a prompt problem you genuinely can’t solve, drop it in the comments. If you actually stump me, you’ll get full bragging rights — and I’ll spotlight your curveball (and, if I crack it, share the fix) in a future post. Nine times out of ten, though? Tweak your instructions and the problem disappears.
Stuck? Try These Quick Fixes 🪤
- Check Your Input: Is your question actually clear? Would your past self, or a random teammate, know exactly what you mean? If not, rewrite your ask with a little more context or supply explicit examples.
- Ask Copilot to Explain: Use Copilot’s “explain this code” or similar command on your prompt, not your code. What does it think you want? If it’s wildly off, break the prompt down into smaller sections and review each one at a time.
- Review Repo Instructions: Open your `.github/copilot-instructions.md` file and just read it out loud. Does it make sense, or does it sound like a jumbled set of copy-paste rules? Most issues start here!
- Start With Defaults: Temporarily comment out all your custom instructions and use a “vanilla” Copilot prompt. Is the result better, worse, or just... different? This is a simple way to isolate whether your changes are helping or hurting. Slowly add them back one at a time, sending a few prompts in between each change (see the sketch after this list).
- Prompt-Within-a-Prompt: Ask Copilot, “What instructions are you following right now?” or “How would you summarize your task?” Sometimes the AI will spill its logic, letting you spot mismatches you never expected.
- Screenshot the Weird: Save any oddball results Copilot gives you. Patterns usually emerge, especially when you look back at several examples.
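Speaking of starting with defaults: here’s roughly what my baseline pass looks like, sketched out with placeholder section names. Strip `.github/copilot-instructions.md` down to almost nothing, confirm the vanilla behavior, then re-enable one section per round of test prompts. Keep in mind the model still sees the raw file, so if the “parked” bits seem to leak through, move them to a scratch file outside `.github/` instead:

```markdown
<!-- .github/copilot-instructions.md : baseline test in progress -->

<!-- Round 1: vanilla. Nothing active below this line. Confirm default behavior first. -->

<!-- Round 2: re-enable ONE section, then send a few prompts before touching the next. -->
## Commit messages
- Use conventional commit prefixes (feat, fix, chore)

<!-- Still parked (placeholder sections standing in for the rest of your file):
## Code style
- Prefer early returns over nested conditionals

## Testing
- Every bug fix gets a regression test
-->
```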
If you take my system, remix it, and invent a prompt that’s even better — share it with the group (me!). What worked for you, and what failed when you least expected it?
Real-world Example 🫥
A few weeks ago, I was building out reusable parts for conventional commit prompts. They’d worked fine across several projects — until I broke them out into separate modules. Suddenly, Copilot switched from helpful answers to just summarizing its entire conversation history, no matter what I asked. 😡🤷‍♀️
So I went back to basics, commenting out nearly everything in the prompt and adding back small sections one at a time. Eventually, I found the culprit: a single instruction forcing Copilot to re-read its whole history, flooding the context window and making it summarize the past instead of the changes. 😵‍💫
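For flavor, here’s the kind of line that causes that sort of meltdown, shown next to a scoped-down alternative. This is illustrative wording, not a paste from my actual file:

```markdown
<!-- The flavor of instruction that quietly floods the context window (illustrative) -->
Before responding, re-read the entire conversation so far and summarize every earlier message so nothing is missed.

<!-- A tighter framing that keeps the focus on the changes -->
Base the commit message only on the staged diff provided in this request. Ignore earlier messages unless I explicitly reference them.
```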
Even after finding the strange trigger, I didn’t really understand why it broke things. Cue ChatGPT’s helpful “explain like I’m five” walkthrough of what was happening behind the scenes (later confirmed with Copilot’s debug logs). Once I removed that one line, everything worked like normal again. 🌈
And that’s the real lesson: Copilot — or any AI assistant — is just another tool in your toolbox. When it starts acting up or giving weird results, don’t panic. Debug it the same way you’d debug any stubborn code or logic bug: one step at a time. 😉
Meta-Optimize Your Prompts 🤖🤝🤖
Still not satisfied? Here’s where you really level up — try this: paste your prompt into ChatGPT and ask it to review and suggest improvements. Even better, try giving ChatGPT the role of Merge Goblin and see what it does with your own instructions.
Seriously, I do this all the time! I pasted the original Merge Goblin example into an untrained GPT chat with something like this:
How would you interpret these instructions if you were Copilot? Spot any places where these instructions could be misunderstood or improved. Output a bulleted list of suggested improvements.
I got suggestions like:
- Clarify input handling (summaries, code, chaos—Merge Goblin will handle it)
- Make emoji and format rules ironclad
- Keep clarifying questions brief and goblin-y
- Ban all meta-phrases and markdown
🥷 Steal what works. Ignore what doesn’t. Bonus points if you out-weird me with your improvements!
Structure with XML-Style (Fake) Tags 🗂️
AI loves order, and structure is magic. 🧚 I started tagging my prompts with XML-style “blocks” that Copilot can easily scan. It helps make sure Copilot doesn’t take something from one section and apply it to something completely unrelated.
Example:
<prompt id="generate-simple-commit-message">
  # Merge Goblin Commit Prompt
  <persona id="merge-goblin">
    <purpose>You are the Merge Goblin 🧌...</purpose>
    <interactions>One-liners, no punctuation, etc.</interactions>
  </persona>
  <requirements>Follow commit message rules...</requirements>
  <impediments>NO GUESSING. NO MARKDOWN.</impediments>
  <outcomes>Output like: 🤖 Refactor login error handling</outcomes>
  <references>
    <example class="valid">🤖 Refactor profile page layout for mobile</example>
  </references>
</prompt>
💡 Pro tip: If GitHub’s preview looks wonky, just add a blank line before and after the tag — problem solved.
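In practice, that just means giving the tags a little breathing room:

```markdown
Some intro text for the section.

<requirements>Follow commit message rules...</requirements>

And the prose picks back up here, with the tag rendering cleanly above.
```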
Make It Modular 🧩
Let’s clear this up: I don’t mean splitting everything into a zillion files for the heck of it. I’m talking about extracting those repeatable bits — the tasty, reusable chunks — just like you would with code (or, you know, leftover pizza ✋🍍🍕).
Yank the core instructions, constraints, or examples into their own little spot in `.github/instructions` with an `.instructions.md` extension. Bam! Now, whenever you want to go from “plain vanilla commits” to “full-on conventional,” just point Copilot to the right file. Less rewriting, less chaos, more time for snacks. 🥤
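For example, the commit-message chunk from earlier might live in its own file. The filename below is just an example; name yours whatever keeps you sane:

```markdown
<!-- .github/instructions/conventional-commits.instructions.md (example filename) -->
<requirements>
  Use conventional commit prefixes: feat, fix, docs, chore.
  Keep the subject line under 50 characters, imperative mood.
</requirements>
<references>
  <example class="valid">feat: add retry logic to the login flow</example>
</references>
```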
But Don’t Stop There — Make It Autonomous! 🦿
While you're at it — why not take it up a notch? You can write a prompt with the same conditional logic that you'd use to write code! Combined with the `applyTo` property (which lets you target Copilot instructions to specific files, languages, or scenarios), you've got a built-in custom response every time!
This works for Copilot Chat in your IDE, GitHub.com, Coding Agent, Copilot Reviews — if you can think it up, then there's a way to tell Copilot exactly what you want, how to get it, and what it should look like once it's done. 🤯
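For instance, a test-focused instructions file can carry its own targeting rule up top. The frontmatter looks roughly like this at the time of writing, but double-check the docs for your particular Copilot surface before copying it verbatim:

```markdown
---
applyTo: "**/*.test.ts"
---
<requirements>
  Every new test gets a descriptive name and covers at least one failure case.
</requirements>
<impediments>
  Do not delete or skip existing tests just to make the suite pass.
</impediments>
```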
Rabbit Holes, Dime Jar, Runway Recap (All in One) 🐇💰🛫
Break it. Smash it. Rebuild it with duct tape. Try stuff so weird you’ll laugh when it works (because it will — sometimes scarily well). If you manage to break the internet? Send me screenshots! Seriously, the more bonkers, the better. That’s how the best discoveries happen. 🗺️
No, I’m not telling you to burn down production. But that new feature? Maybe there’s a Smeagol who’s gonna help you today. And Smeagol is on a quest for “my precious,” which can only be found by collecting smelly code and leaving perfect refactors in its place.
Just because it’s not adding ring emojis all over the user docs 💍💍💍 doesn’t mean that Copilot can’t be Smeagol for the day!
Go down the rabbit holes: 🕳️
Variables, checklists, templates, self-reviews, XML tags, markdown tricks — whatever pops into your head. Most of my best Copilot moments have come from ignoring the “standard” advice and experimenting until something just clicked.
Leave a dime, take a dime: 🪙
I’ve loaded my awesome-github-copilot repo with new, wild, and mostly untested chat modes. Brave enough to try them? Fork, remix, report back, or just outright steal the ideas! Feedback is gold — and if you’ve got a Copilot horror story or a killer hack, post it below and I'll spotlight your moment in a future post!
Runway Recap: 🛬
Structure and curiosity will take you way farther than any so-called “best practice.” Dare to break things. Try, fail, repeat — and let me know what you discover. That next big breakthrough? I’m willing to bet it comes from someone who didn’t follow the rules.
🍀 May whatever you’re building go smoothly, and may your AI always stay delightfully on the rails. If it ever goes rogue... well, at least you’ll have a great story for next time!
🛡️ Final Footer: Fresh, Never Reused
This post was conjured up by me & the robots — Smeagol, Merge Goblin, pizza, and a sprinkle of AI magic.
The content in this post is 100% mine, just sweetened and structurally sound thanks to ChatGPT.