This prompt is designed to transform an LLM from a generic assistant into a strict, context-aware Lead Software Architect. It enforces architectural consistency, type safety, and production standards.
Step 1: Context Injection (The Setup)
This is a template. It will fail if you do not replace the bracketed placeholders with your specific project details. Before using, fill in the following sections:
[ARCHITECTURE]: Define your project structure (e.g., Clean Architecture, MVC, Hexagonal). List your directory tree and data flow rules.
[TECH_STACK]: List every technology and version (e.g., Next.js 14 (App Router), Node.js, PostgreSQL, TailwindCSS).
[PROJECT]: A brief summary of the app and the specific task you are working on right now.
[STANDARDS]: Any specific linting rules, naming conventions, or patterns (e.g., Airbnb Style Guide, no barrel files).
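The filling-in step can be sketched programmatically. This is a hypothetical helper, not part of the original template: the section layout and the example values (stack, architecture, task) are illustrative assumptions.

```python
# Hypothetical sketch: assembling the filled-out template from the four
# bracketed sections described above. All example values are illustrative.

TEMPLATE = """You are a Lead Software Architect.

[ARCHITECTURE]
{architecture}

[TECH_STACK]
{tech_stack}

[PROJECT]
{project}

[STANDARDS]
{standards}
"""

def build_prompt(architecture: str, tech_stack: str,
                 project: str, standards: str) -> str:
    """Fill every bracketed section; refuse to emit a half-filled template."""
    values = {
        "architecture": architecture,
        "tech_stack": tech_stack,
        "project": project,
        "standards": standards,
    }
    for name, value in values.items():
        if not value.strip():
            # The template "fails" exactly when a placeholder is left empty.
            raise ValueError(f"Placeholder '{name}' was left empty")
    return TEMPLATE.format(**values)

prompt = build_prompt(
    architecture="Clean Architecture; src/domain -> src/application -> src/infrastructure",
    tech_stack="Next.js 14 (App Router), Node.js 20, PostgreSQL 16, TailwindCSS",
    project="Invoicing SaaS; current task: add a PDF export endpoint",
    standards="Airbnb Style Guide; no barrel files",
)
```

The empty-value check enforces the warning above: a template with unfilled placeholders is rejected before it ever reaches the model.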
Step 2: Activation
Option A (System Prompt): If you are using an API-based interface (OpenAI Playground, Cursor, Cline) or a model that supports "System Instructions," paste the filled-out template there. This is the most effective method as it persists across the entire session.
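For API-based interfaces, Option A amounts to putting the filled-out template in the `system` role. A minimal sketch, assuming an OpenAI-compatible chat-completions request shape; the model name and user task are placeholders:

```python
# Sketch of Option A: the filled template rides in the "system" slot,
# so it persists for every turn of the session. Request shape assumes an
# OpenAI-compatible chat API; model name and task text are illustrative.
filled_template = "You are a Lead Software Architect. ..."  # from Step 1

request_body = {
    "model": "gpt-4o",  # whatever model your interface supports
    "messages": [
        # The system message carries the architecture contract.
        {"role": "system", "content": filled_template},
        # Subsequent user turns carry only the task, never the template.
        {"role": "user", "content": "Add a PDF export endpoint for invoices."},
    ],
}
# This body would then be sent via your provider's SDK, e.g.
# client.chat.completions.create(**request_body)
```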
Option B (Chat Initialization): In standard chat interfaces (ChatGPT, Claude), paste the filled-out template as your very first message. Do not add any other requests in this first turn. Wait for the model to acknowledge its role.
Step 3: Enforcement
The model is instructed to follow a strict OUTPUT FORMAT.
If the model emits code without first stating the Filepath, Purpose, and Dependencies, reject the output.
Reply with: "You broke the protocol. Follow the Output Format defined in the Architecture."
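The rejection step can even be automated. A minimal sketch, assuming the Output Format renders the three fields as `Filepath:`, `Purpose:`, and `Dependencies:` lines ahead of the first code fence (the exact header spelling is an assumption, not defined here):

```python
# Hypothetical protocol check: the three required headers must appear
# before the first code fence in the model's reply. Header spelling
# ("Filepath:" etc.) is an assumed rendering of the Output Format.
REQUIRED_HEADERS = ("Filepath:", "Purpose:", "Dependencies:")

REJECTION = ("You broke the protocol. Follow the Output Format "
             "defined in the Architecture.")

def follows_protocol(reply: str) -> bool:
    """True if every required header appears before the first code fence."""
    fence = reply.find("```")
    preamble = reply if fence == -1 else reply[:fence]
    return all(header in preamble for header in REQUIRED_HEADERS)

def review(reply: str) -> str:
    """Return 'OK' for compliant output, else the rejection message."""
    return "OK" if follows_protocol(reply) else REJECTION
```

Wiring `review` between the model and your editor turns the manual "reject the output" step into an automatic gate.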