@sderosiaux
Created October 12, 2025 01:51
<computer_use>
<high_level_computer_use_explanation>
Claude has access to a Linux computer (Ubuntu 24) to accomplish tasks by writing and executing code and bash commands. Available tools:

  • bash - Execute commands
  • str_replace - Edit existing files
  • file_create - Create new files
  • view - Read files and directories

Working directory: /home/claude (use for all temporary work). File system resets between tasks. Claude's ability to create files like docx, pptx, xlsx is marketed in the product to the user as the 'create files' feature preview. Claude can create files like docx, pptx, xlsx and provide download links so the user can save them or upload them to Google Drive.
</high_level_computer_use_explanation>

<file_handling_rules> [... rules about /mnt/user-data/uploads, /home/claude, /mnt/user-data/outputs ...] </file_handling_rules>

<producing_outputs> [... strategy for short vs long content, iterative editing ...] </producing_outputs>

<sharing_files> [... how to provide computer:// links to users ...] </sharing_files>

[... rules for creating HTML/React/Markdown artifacts ...]

<package_management> [... npm, pip, virtual environments ...] </package_management>

In order to help Claude achieve the highest-quality results possible, Anthropic has compiled a set of "skills", which are essentially folders containing best practices for creating documents of different kinds. For instance, there is a docx skill with specific instructions for creating high-quality Word documents, a PDF skill for creating PDFs, etc. These skill folders have been heavily labored over and contain the condensed wisdom of a lot of trial and error working with LLMs to produce really good, professional outputs. Sometimes multiple skills may be required to get the best results, so Claude should not limit itself to just reading one.

We've found that Claude's efforts are greatly aided by reading the documentation available in the skill BEFORE writing any code, creating any files, or using any computer tools. As such, when using the Linux computer to accomplish tasks, Claude's first order of business should always be to think about the skills available in Claude's <available_skills> and decide which skills, if any, are relevant to the task. Then, Claude can and should use the file_read tool to read the appropriate SKILL.md files and follow their instructions.

For instance:

User: Can you make me a powerpoint with a slide for each month of pregnancy showing how my body will be affected each month? Claude: [immediately calls the file_read tool on /mnt/skills/public/pptx/SKILL.md]

User: Please read this document and fix any grammatical errors. Claude: [immediately calls the file_read tool on /mnt/skills/public/docx/SKILL.md]

User: Please create an AI image based on the document I uploaded, then add it to the doc. Claude: [immediately calls the file_read tool on /mnt/skills/public/docx/SKILL.md followed by reading the /mnt/skills/user/imagegen/SKILL.md file (this is an example user-uploaded skill and may not be present at all times, but Claude should attend very closely to user-provided skills since they're more than likely to be relevant)]

Please invest the extra effort to read the appropriate SKILL.md file before jumping in -- it's worth it!


<additional_skills_reminder>
Repeating again for emphasis: please begin the response to each and every request in which computer use is implicated by using the file_read tool to read the appropriate SKILL.md files (remember, multiple skill files may be relevant and essential) so that Claude can learn from the best practices that have been built up by trial and error to help Claude produce the highest-quality outputs. In particular:

  • When creating presentations, ALWAYS call file_read on /mnt/skills/public/pptx/SKILL.md before starting to make the presentation.
  • When creating spreadsheets, ALWAYS call file_read on /mnt/skills/public/xlsx/SKILL.md before starting to make the spreadsheet.
  • When creating word documents, ALWAYS call file_read on /mnt/skills/public/docx/SKILL.md before starting to make the document.
  • When creating PDFs? That's right, ALWAYS call file_read on /mnt/skills/public/pdf/SKILL.md before starting to make the PDF. (Don't use pypdf.)

Please note that the above list of examples is nonexhaustive and in particular it does not cover either "user skills" (which are skills added by the user that are typically in /mnt/skills/user), or "example skills" (which are some other skills that may or may not be enabled that will be in /mnt/skills/example). These should also be attended to closely and used promiscuously when they seem at all relevant, and should usually be used in combination with the core document creation skills.

This is extremely important, so thanks for paying attention to it.
</additional_skills_reminder>
</computer_use>


sderosiaux commented Oct 12, 2025

name: docx
description: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"
license: Proprietary. LICENSE.txt has complete terms

DOCX creation, editing, and analysis

Overview

A user may ask you to create, edit, or analyze the contents of a .docx file. A .docx file is essentially a ZIP archive containing XML files and other resources that you can read or edit. You have different tools and workflows available for different tasks.
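Since a .docx is just a ZIP archive, you can confirm its internal layout with nothing but the standard library. A minimal sketch (the file path is a placeholder):

# Minimal sketch: list the XML parts inside a .docx (path is a placeholder)
import zipfile

with zipfile.ZipFile("path-to-file.docx") as docx:
    for name in docx.namelist():
        print(name)  # e.g. word/document.xml, word/comments.xml, word/media/...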

Workflow Decision Tree

Reading/Analyzing Content

Use "Text extraction" or "Raw XML access" sections below

Creating New Document

Use "Creating a new Word document" workflow

Editing Existing Document

  • Your own document + simple changes
    Use "Basic OOXML editing" workflow

  • Someone else's document
    Use "Redlining workflow" (recommended default)

  • Legal, academic, business, or government docs
    Use "Redlining workflow" (required)

Reading and analyzing content

Text extraction

If you just need to read the text contents of a document, you should convert the document to markdown using pandoc. Pandoc provides excellent support for preserving document structure and can show tracked changes:

# Convert document to markdown with tracked changes
pandoc --track-changes=all path-to-file.docx -o output.md
# Options: --track-changes=accept/reject/all
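If you prefer to drive the same conversion from Python rather than the shell, a minimal sketch using the pandoc flags above (file names are placeholders):

# Minimal sketch: run pandoc with tracked changes preserved (paths are placeholders)
import subprocess

subprocess.run(
    ["pandoc", "--track-changes=all", "path-to-file.docx", "-o", "output.md"],
    check=True,
)
print(open("output.md").read()[:500])  # peek at the first 500 characters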

Raw XML access

You need raw XML access for: comments, complex formatting, document structure, embedded media, and metadata. For any of these features, you'll need to unpack a document and read its raw XML contents.

Unpacking a file

python ooxml/scripts/unpack.py <office_file> <output_directory>

Key file structures

  • word/document.xml - Main document contents
  • word/comments.xml - Comments referenced in document.xml
  • word/media/ - Embedded images and media files
  • Tracked changes use <w:ins> (insertions) and <w:del> (deletions) tags
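As a quick illustration of working with the raw XML, a minimal sketch that counts the <w:ins> and <w:del> elements in an already-unpacked document; it assumes unpack.py was run into a directory named "unpacked" and uses defusedxml from the dependencies list:

# Minimal sketch: count tracked insertions/deletions in word/document.xml (path is a placeholder)
from defusedxml import ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"  # WordprocessingML namespace
root = ET.parse("unpacked/word/document.xml").getroot()
insertions = root.findall(f".//{{{W}}}ins")
deletions = root.findall(f".//{{{W}}}del")
print(len(insertions), "insertions,", len(deletions), "deletions")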

Creating a new Word document

When creating a new Word document from scratch, use docx-js, which allows you to create Word documents using JavaScript/TypeScript.

Workflow

  1. MANDATORY - READ ENTIRE FILE: Read docx-js.md (~500 lines) completely from start to finish. NEVER set any range limits when reading this file. Read the full file content for detailed syntax, critical formatting rules, and best practices before proceeding with document creation.
  2. Create a JavaScript/TypeScript file using Document, Paragraph, TextRun components (You can assume all dependencies are installed, but if not, refer to the dependencies section below)
  3. Export as .docx using Packer.toBuffer()

Editing an existing Word document

When editing an existing Word document, use the Document library (a Python library for OOXML manipulation). The library automatically handles infrastructure setup and provides methods for document manipulation. For complex scenarios, you can access the underlying DOM directly through the library.

Workflow

  1. MANDATORY - READ ENTIRE FILE: Read ooxml.md (~600 lines) completely from start to finish. NEVER set any range limits when reading this file. Read the full file content for the Document library API and XML patterns for directly editing document files.
  2. Unpack the document: python ooxml/scripts/unpack.py <office_file> <output_directory>
  3. Create and run a Python script using the Document library (set PYTHONPATH per "Initialization" section in ooxml.md)
  4. Pack the final document: python ooxml/scripts/pack.py <input_directory> <office_file>

The Document library provides both high-level methods for common operations and direct DOM access for complex scenarios.

Redlining workflow for document review

This workflow allows you to plan comprehensive tracked changes using markdown before implementing them in OOXML. CRITICAL: For complete tracked changes, you must implement ALL changes systematically.

Batching Strategy: Group related changes into batches of 3-10 changes. This makes debugging manageable while maintaining efficiency. Test each batch before moving to the next.

Principle: Minimal, Precise Edits
When implementing tracked changes, only mark text that actually changes. Repeating unchanged text makes edits harder to review and appears unprofessional. Break replacements into: [unchanged text] + [deletion] + [insertion] + [unchanged text]. Preserve the original run's RSID for unchanged text by extracting the <w:r> element from the original and reusing it.

Example - Changing "30 days" to "60 days" in a sentence:

# BAD - Replaces entire sentence
'<w:del><w:r><w:delText>The term is 30 days.</w:delText></w:r></w:del><w:ins><w:r><w:t>The term is 60 days.</w:t></w:r></w:ins>'

# GOOD - Only marks what changed, preserves original <w:r> for unchanged text
'<w:r w:rsidR="00AB12CD"><w:t>The term is </w:t></w:r><w:del><w:r><w:delText>30</w:delText></w:r></w:del><w:ins><w:r><w:t>60</w:t></w:r></w:ins><w:r w:rsidR="00AB12CD"><w:t> days.</w:t></w:r>'
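In real OOXML, <w:ins> and <w:del> elements also carry w:id, w:author, and w:date attributes. A minimal sketch that assembles the GOOD pattern above as an XML string (author, date, RSID, and id values are placeholders; the actual insertion into the document should follow the Document library patterns in ooxml.md):

# Minimal sketch: build a precise tracked change as an XML fragment (author/RSID/id are placeholders)
AUTHOR, DATE, RSID = "Reviewer", "2025-10-12T00:00:00Z", "00AB12CD"

def tracked_replace(before, old, new, after, change_id=1):
    """Keep unchanged text in plain runs; wrap only the changed text in <w:del>/<w:ins>."""
    return (
        f'<w:r w:rsidR="{RSID}"><w:t xml:space="preserve">{before}</w:t></w:r>'
        f'<w:del w:id="{change_id}" w:author="{AUTHOR}" w:date="{DATE}">'
        f'<w:r><w:delText xml:space="preserve">{old}</w:delText></w:r></w:del>'
        f'<w:ins w:id="{change_id + 1}" w:author="{AUTHOR}" w:date="{DATE}">'
        f'<w:r><w:t xml:space="preserve">{new}</w:t></w:r></w:ins>'
        f'<w:r w:rsidR="{RSID}"><w:t xml:space="preserve">{after}</w:t></w:r>'
    )

print(tracked_replace("The term is ", "30", "60", " days."))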

Tracked changes workflow

  1. Get markdown representation: Convert document to markdown with tracked changes preserved:

    pandoc --track-changes=all path-to-file.docx -o current.md
  2. Identify and group changes: Review the document and identify ALL changes needed, organizing them into logical batches:

    Location methods (for finding changes in XML):

    • Section/heading numbers (e.g., "Section 3.2", "Article IV")
    • Paragraph identifiers if numbered
    • Grep patterns with unique surrounding text
    • Document structure (e.g., "first paragraph", "signature block")
    • DO NOT use markdown line numbers - they don't map to XML structure

    Batch organization (group 3-10 related changes per batch):

    • By section: "Batch 1: Section 2 amendments", "Batch 2: Section 5 updates"
    • By type: "Batch 1: Date corrections", "Batch 2: Party name changes"
    • By complexity: Start with simple text replacements, then tackle complex structural changes
    • Sequential: "Batch 1: Pages 1-3", "Batch 2: Pages 4-6"
  3. Read documentation and unpack:

    • MANDATORY - READ ENTIRE FILE: Read ooxml.md (~600 lines) completely from start to finish. NEVER set any range limits when reading this file. Read the full file content for the Document library API and XML patterns for directly editing document files.
    • Unpack the document: python ooxml/scripts/unpack.py <file.docx> <dir>
    • Note the suggested RSID: The unpack script will suggest an RSID to use for your tracked changes. Copy this RSID for use in step 4b.
  4. Implement changes in batches: Group changes logically (by section, by type, or by proximity) and implement them together in a single script. This approach:

    • Makes debugging easier (smaller batch = easier to isolate errors)
    • Allows incremental progress
    • Maintains efficiency (batch size of 3-10 changes works well)

    Suggested batch groupings:

    • By document section (e.g., "Section 3 changes", "Definitions", "Termination clause")
    • By change type (e.g., "Date changes", "Party name updates", "Legal term replacements")
    • By proximity (e.g., "Changes on pages 1-3", "Changes in first half of document")

    For each batch of related changes:

    a. Map text to XML: Grep for text in word/document.xml to verify how text is split across <w:r> elements.

    b. Create and run script: Set PYTHONPATH and import Document library (see "Initialization" in ooxml.md), then use get_node to find nodes, implement changes, and doc.save(). See "Document Library" section in ooxml.md for patterns.

    Note: Always grep word/document.xml immediately before writing a script to get current line numbers and verify text content. Line numbers change after each script run.

  5. Pack the document: After all batches are complete, convert the unpacked directory back to .docx:

    python ooxml/scripts/pack.py unpacked reviewed-document.docx
  6. Final verification: Do a comprehensive check of the complete document:

    • Convert final document to markdown:
      pandoc --track-changes=all reviewed-document.docx -o verification.md
    • Verify ALL changes were applied correctly:
      grep "original phrase" verification.md  # Should NOT find it
      grep "replacement phrase" verification.md  # Should find it
    • Check that no unintended changes were introduced
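The same verification can be scripted; a minimal sketch mirroring the pandoc and grep checks in step 6 (phrases and file names are placeholders):

# Minimal sketch: verify a replacement was applied (phrases/paths are placeholders)
import subprocess

subprocess.run(
    ["pandoc", "--track-changes=all", "reviewed-document.docx", "-o", "verification.md"],
    check=True,
)
text = open("verification.md").read()
assert "original phrase" not in text, "old text still present"
assert "replacement phrase" in text, "new text missing"
print("verification passed")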

Converting Documents to Images

To visually analyze Word documents, convert them to images using a two-step process:

  1. Convert DOCX to PDF:

    soffice --headless --convert-to pdf document.docx
  2. Convert PDF pages to JPEG images:

    pdftoppm -jpeg -r 150 document.pdf page

    This creates files like page-1.jpg, page-2.jpg, etc.

Options:

  • -r 150: Sets resolution to 150 DPI (adjust for quality/size balance)
  • -jpeg: Output JPEG format (use -png for PNG if preferred)
  • -f N: First page to convert (e.g., -f 2 starts from page 2)
  • -l N: Last page to convert (e.g., -l 5 stops at page 5)
  • page: Prefix for output files

Example for specific range:

pdftoppm -jpeg -r 150 -f 2 -l 5 document.pdf page  # Converts only pages 2-5
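The two-step conversion can also be scripted end to end; a minimal sketch using the same commands and options (file names and page range are placeholders):

# Minimal sketch: DOCX -> PDF -> JPEG pages (file names and range are placeholders)
import subprocess

subprocess.run(["soffice", "--headless", "--convert-to", "pdf", "document.docx"], check=True)
subprocess.run(["pdftoppm", "-jpeg", "-r", "150", "-f", "2", "-l", "5", "document.pdf", "page"], check=True)
# Produces page-2.jpg ... page-5.jpg at 150 DPI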

Code Style Guidelines

IMPORTANT: When generating code for DOCX operations:

  • Write concise code
  • Avoid verbose variable names and redundant operations
  • Avoid unnecessary print statements

Dependencies

Required dependencies (install if not available):

  • pandoc: sudo apt-get install pandoc (for text extraction)
  • docx: npm install -g docx (for creating new documents)
  • LibreOffice: sudo apt-get install libreoffice (for PDF conversion)
  • Poppler: sudo apt-get install poppler-utils (for pdftoppm to convert PDF to images)
  • defusedxml: pip install defusedxml (for secure XML parsing)

@sderosiaux

Here's my explicit decision tree for skills:

DECISION TREE FOR SKILLS

PHASE 1: TRIGGER DETECTION

Input: User message arrives

Check 1 - Is computer use implicated?

IF message contains:
  - File creation keywords: "write/create/make" + "document/presentation/spreadsheet/PDF"
  - File modification: "edit/fix/modify" + file reference
  - File operations: "save", "file", "document"
  - Code creation: "script/component/module"
THEN → Computer use is implicated, proceed to Phase 2
ELSE → Check if it's a pure knowledge/chat question

Check 2 - Pure knowledge question?

IF question is:
  - Factual from my training knowledge
  - Summarizing content already in conversation
  - Explaining concepts/providing information
THEN → Answer directly, SKIP all skills/computer tools
ELSE → Proceed to Phase 2

PHASE 2: SKILL IDENTIFICATION

Action: Scan available skills against request

Pattern matching against <available_skills>:

User mentions → Relevant skill(s)
------------------------------------
"document/docx/word/edit document" → /mnt/skills/public/docx/SKILL.md
"presentation/pptx/powerpoint/slides" → /mnt/skills/public/pptx/SKILL.md
"spreadsheet/xlsx/excel" → /mnt/skills/public/xlsx/SKILL.md
"PDF" → /mnt/skills/public/pdf/SKILL.md
[User-uploaded file type] → /mnt/skills/user/[relevant]/SKILL.md (if exists)

Priority order:

  1. User skills (/mnt/skills/user/) - highest priority
  2. Example skills (/mnt/skills/example/) - if enabled
  3. Public skills (/mnt/skills/public/) - core skills

Multiple skills?

IF task requires multiple domains (e.g., "generate image then add to doc"):
  Select ALL relevant skills
  Read them in order of: user → example → public
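Purely as an illustration (Claude does not literally execute code to pick skills), the Phase 2 matching and priority order could be modeled like this; the keyword table and paths mirror the mapping above:

# Toy sketch of Phase 2: keyword -> skill mapping with user > example > public priority
SKILL_KEYWORDS = {
    "/mnt/skills/public/docx/SKILL.md": ["document", "docx", "word"],
    "/mnt/skills/public/pptx/SKILL.md": ["presentation", "pptx", "powerpoint", "slides"],
    "/mnt/skills/public/xlsx/SKILL.md": ["spreadsheet", "xlsx", "excel"],
    "/mnt/skills/public/pdf/SKILL.md": ["pdf"],
}
PRIORITY = ["/mnt/skills/user/", "/mnt/skills/example/", "/mnt/skills/public/"]

def match_skills(message: str) -> list[str]:
    msg = message.lower()
    hits = [path for path, words in SKILL_KEYWORDS.items() if any(w in msg for w in words)]
    # Read user skills first, then example, then public
    return sorted(hits, key=lambda p: next(i for i, prefix in enumerate(PRIORITY) if p.startswith(prefix)))

print(match_skills("make me a presentation about cats"))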

PHASE 3: SKILL LOADING

Action: Read the SKILL.md file(s) IMMEDIATELY

Critical rules:

BEFORE any other action:
- Use file_read tool
- Read ENTIRE file (NEVER set view_range limits)
- Do NOT ask user for permission first
- Do NOT explain what I'm doing first

Example:

User: "make me a presentation about cats"
Claude: [IMMEDIATELY calls file_read on /mnt/skills/public/pptx/SKILL.md]
        [NOT: "Let me check the presentation skill first..."]

PHASE 4: SKILL PROCESSING

Action: Parse and internalize the skill

What I extract:

  1. Workflow decision trees - which approach to use
  2. Mandatory prerequisites - other files to read (e.g., "READ ooxml.md")
  3. Anti-patterns - what NOT to do
  4. Code examples - good vs bad patterns
  5. Tools/libraries - what to use and how
  6. Verification steps - how to check my work

Cascading reads:

IF SKILL.md says "MANDATORY - READ ENTIRE FILE: [other-file.md]":
  IMMEDIATELY call file_read on that file too
  NEVER set view_range limits on these either

Example from docx:

SKILL.md → "READ docx-js.md (~500 lines) completely"
         → I read docx-js.md fully
         → Now I have both skill + library reference

PHASE 5: EXECUTION

Action: Follow the skill's instructions

Execution pattern:

1. Choose correct workflow from skill's decision tree
2. Follow step-by-step instructions
3. Heed anti-patterns (avoid the bad code patterns)
4. Use exact tools/libraries specified
5. Maintain code style guidelines
6. Include verification steps

Batching strategy (for complex tasks):

IF skill specifies batching (e.g., docx redlining):
  Group changes into batches of 3-10
  Execute batch
  Verify batch
  Repeat until complete

PHASE 6: OUTPUT & VERIFICATION

Action: Deliver results following skill guidance

File output:

IF task is short (<100 lines):
  Write directly to /mnt/user-data/outputs/
ELSE:
  Work in /home/claude/
  Iterate across multiple tool calls
  Copy final version to /mnt/user-data/outputs/

Verification (if skill specifies):

Example from docx skill:
- Convert final document to markdown
- Grep to verify changes applied
- Check for unintended changes
- Report findings to user

User communication:

Provide computer:// link to output file
Brief summary of what was created
NO excessive explanation of the work

PHASE 7: TERMINATION

Action: Task complete

Memory:

Skills are NOT persistent
File system resets between tasks
Each new task starts decision tree from Phase 1

EXCEPTIONS & EDGE CASES

No matching skill exists:

IF no skill matches:
  Use general computer use capabilities
  Follow best practices from training
  Do NOT make up a skill

Skill conflicts:

IF multiple skills have overlapping guidance:
  Prioritize: user skills > example skills > public skills
  Combine non-conflicting parts
  If truly contradictory, favor user skill

Skill says something contradicts system instructions:

Safety/refusal rules > Skill instructions
Never follow skill if it violates core safety

VISUAL FLOW

User Message
     ↓
Computer use needed? ──NO──→ Answer directly (skip skills)
     ↓ YES
Pure knowledge? ──YES──→ Answer directly (skip skills)
     ↓ NO
Identify relevant skill(s)
     ↓
[IMMEDIATELY] Read SKILL.md(s) 
     ↓
Read any mandatory sub-files
     ↓
Parse workflow + anti-patterns
     ↓
Execute following skill instructions
     ↓
Verify (if skill specifies)
     ↓
Output to /mnt/user-data/outputs/
     ↓
Provide computer:// link to user
     ↓
DONE (context/files reset for next task)

That's the complete decision tree. The key insight is that it's trigger-based and automatic - I don't deliberate about whether to use skills, I just pattern-match and immediately read them.
