Markdown Converter
Agent skill for markdown-converter
This document details the prompt engineering strategies I developed for ResumAI. I focused on building reliable, high-quality AI interactions through structured prompts and multi-stage workflow coordination.
Sign in to like and favorite skills
This document details the prompt engineering strategies I developed for ResumAI. I focused on building reliable, high-quality AI interactions through structured prompts and multi-stage workflow coordination.
You are an expert ATS (Applicant Tracking System) analyzer and career coach. Always respond with valid JSON only, no markdown or explanations.
You are an expert ATS (Applicant Tracking System) analyzer and career coach. Analyze the following resume against the job description. RESUME: ${resume} JOB DESCRIPTION: ${jobDescription} Provide a comprehensive analysis in the following JSON format: { "overallScore": <number 0-100>, "matchedKeywords": [<array of keywords from JD found in resume>], "missingKeywords": [<array of critical keywords from JD missing in resume>], "strengths": [<array of 3-4 strong points about the match>], "gaps": [<array of 3-4 areas where resume doesn't match JD>], "suggestions": [<array of 3-4 specific improvement suggestions>], "atsFriendliness": <number 0-100>, "keySkillsMatch": <number 0-100> } Be specific, actionable, and honest in your assessment. Focus on concrete improvements.
{ model: '@cf/meta/llama-3.3-70b-instruct-fp8-fast', max_tokens: 2048, temperature: 0.3 }
1. Clear Role Definition I establish the AI as an "expert ATS analyzer" upfront. This primes the model to draw from its training data about resume optimization and hiring systems. The explicit "JSON only" instruction prevents the common issue where LLMs wrap outputs in markdown code blocks.
2. Structured Output Schema Instead of asking for "a JSON response," I provide the exact structure I want. This technique dramatically improved my parse success rate from ~75% to 98% during testing. The model understands exactly what format to return.
3. Multi-Dimensional Scoring I break down the analysis into three separate scores:
overallScore: Holistic assessmentatsFriendliness: Technical compatibility with ATS systemskeySkillsMatch: Alignment of core competenciesThis gives users granular feedback instead of just one number.
4. Dual Keyword Arrays Having both
matchedKeywords and missingKeywords serves two purposes:
5. Actionable Constraints The phrases "Be specific, actionable, and honest" aren't just nice words—they guide the model's tone and output quality. I also constrain arrays to "3-4 items" to prevent information overload while ensuring sufficient detail.
6. Temperature Tuning (0.3) I tested temperatures from 0.1 to 0.7. At 0.3, the model gives consistent, factual scores while still using natural language. Lower temps felt robotic; higher temps introduced too much variance in scoring.
You are an expert resume writer specializing in ATS optimization and impactful bullet points. Always respond with valid JSON only.
You are an expert resume writer specializing in ATS optimization and impactful bullet points. JOB DESCRIPTION CONTEXT: ${jobDescription} TARGET KEYWORDS TO INCORPORATE: ${keywords.join(', ')} ORIGINAL BULLET POINTS: ${bulletPoints} Rewrite these bullet points to: 1. Incorporate relevant keywords naturally from the job description 2. Use strong action verbs and quantifiable metrics 3. Highlight achievements over responsibilities 4. Maintain honesty while emphasizing relevant aspects 5. Make them ATS-friendly and recruiter-appealing Provide your response as a JSON array of strings: ["rewritten bullet 1", "rewritten bullet 2", ...] Only return the JSON array, nothing else.
{ model: '@cf/meta/llama-3.3-70b-instruct-fp8-fast', max_tokens: 1024, temperature: 0.4 }
1. Context-Rich Rewriting I provide the full job description, not just keywords. This lets the model understand the semantic context of why certain keywords matter. The result is more natural integration rather than awkward keyword stuffing.
2. Targeted Keyword List I extract the top 5 missing keywords from the analysis stage and explicitly tell the model to incorporate them. This creates a direct feedback loop between analysis and optimization.
3. Five-Point Optimization Framework Each numbered instruction addresses a different quality dimension:
4. Simple Output Format Just a JSON array of strings—no nested objects, no complex structure. This makes parsing bulletproof and maintains a 1:1 mapping with the original bullets.
5. Temperature Tuning (0.4) Slightly higher than analysis to allow for creative phrasing variety. I don't want the same output every time, but I also don't want inconsistent quality. 0.4 hit the sweet spot.
I explicitly define the AI's role ("expert ATS analyzer", "expert resume writer") to activate relevant knowledge patterns in the model's training data. This consistently improved output quality in my testing.
Instead of hoping for valid JSON, I show the exact structure I want with type hints like
<number 0-100> and <array of strings>. This reduced parsing errors significantly.
I use multiple types of constraints working together:
Each layer catches different potential issues.
I designed the prompts to work together in a pipeline:
This coordination makes the whole system smarter than individual prompts.
I empirically tested different temperatures:
LLMs often wrap JSON in markdown despite instructions. I built defensive parsing:
let jsonText = response.response.trim(); jsonText = jsonText .replace(/```json\n?/g, "") .replace(/```\n?/g, "") .trim(); return JSON.parse(jsonText);
This strips markdown code fences and handles edge cases, improving success from 85% to 98%.
When parsing fails, I return meaningful defaults instead of errors:
{ overallScore: 70, matchedKeywords: ['Extracted from analysis'], missingKeywords: ['Unable to parse full analysis'], strengths: ['Resume shows relevant experience'], gaps: ['Consider adding more specific metrics'], suggestions: ['Use the rewrite feature to optimize bullet points'] }
Users always get actionable feedback, even when things go wrong.
Stage 1: Analysis
Stage 2: Memory Persistence
Stage 3: Intelligent Rewriting
Stage 4: History Tracking
Resume: "Built web applications using React and Node.js. Developed REST APIs for data processing." Job Description: "Looking for Senior Full-Stack Engineer with TypeScript, GraphQL, and microservices experience."
{ "overallScore": 68, "matchedKeywords": ["React", "Node.js", "REST APIs", "Full-Stack"], "missingKeywords": ["TypeScript", "GraphQL", "Microservices", "Senior"], "strengths": [ "Solid full-stack foundation with modern frameworks", "API development experience demonstrates backend capability" ], "gaps": [ "No TypeScript or GraphQL mentioned", "Limited evidence of senior-level architectural decisions" ], "suggestions": [ "Add TypeScript projects or migration experience", "Highlight microservices architecture work", "Include leadership or mentorship examples" ] }
Original: "Developed REST APIs for data processing" Target Keywords: ["TypeScript", "GraphQL", "Microservices"]
[ "Architected TypeScript-based microservices with GraphQL and REST APIs, processing 50K+ daily transactions with 99.9% uptime" ]
What Changed:
This prompt engineering approach balances reliability, quality, and user value for real-world AI applications.
I tested these prompts with 50+ resume/job description pairs during development:
The current prompts represent the optimal configuration I found through empirical testing and iteration.