JonathanAquino / daily-work-summary
# Install this skill:
npx skills add JonathanAquino/ai-skills --skill "daily-work-summary"

Install specific skill from multi-skill repository

# Description

Generate daily work summaries from Slack messages, GitHub PRs, AI conversations, and Obsidian notes for a specified date range.

# SKILL.md


---
name: daily-work-summary
description: Generate daily work summaries from Slack messages, GitHub PRs, AI conversations, and Obsidian notes for a specified date range.
---


# Daily Work Summary Generator

This skill scrapes work activity from multiple sources and generates daily summaries.

## Date Range

Ask me for the start and end dates for the summaries.

## Preamble

You may run only read-only commands and file-write commands. Do not run any other write or modification commands, and never run destructive commands such as deletes.

## Slack

Ask me for a sample `search.modules.messages` curl request that gets Slack search results for `from:@Jon Aquino` for yesterday, sorted by Newest.

Test the curl command. You may need to append `--compressed | jq .` to it. If it doesn't return the expected results, print "curl command failed" and stop.

I want you to scrape the search results for each day in the date range and put the results in ~/Documents/AI_Context/daily-work/slack/YYYY-MM-DD.txt, for example ~/Documents/AI_Context/daily-work/slack/2025-07-02.txt. Skip any dates whose files already exist. To vary the date, set `before:2025-07-03 after:2025-07-01` - note that `before` is the day after the desired date and `after` is the day before it. Read `page_count` to get the number of pages, and iterate over all of them by varying the `page` parameter - make sure to scrape all pages, not just the first. If a page has no results, try to figure out what is wrong. Print the number of messages on each page as you go. IMPORTANT: Run the curl commands directly - don't write a script, as the script will often have bugs. Each entry should look like this:

--------------------------------------------------

Timestamp: 2025-07-28 19:30:58
User: jonathan.aquino
Channel: dev_sludge_attribution_task_force
Text: I need reviews on my funnel metrics PRs
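The before:/after: date arithmetic is easy to get wrong, so here is a minimal sketch of computing both qualifiers from a target date (assumes GNU date; on macOS, use the `date -v+1d` style flags instead):

```shell
# Compute Slack's exclusive before:/after: qualifiers for one target day.
# "before" is the day AFTER the target; "after" is the day BEFORE it.
TARGET="2025-07-02"
BEFORE=$(date -d "$TARGET + 1 day" +%Y-%m-%d)
AFTER=$(date -d "$TARGET - 1 day" +%Y-%m-%d)
echo "before:$BEFORE after:$AFTER"
# → before:2025-07-03 after:2025-07-01
```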

## GitHub

Use this command to fetch the descriptions of PRs I created on each day in the date range: `gh search prs --author=JonathanAquino-NextRoll --created=2025-07-21 --json number,title,body,url,repository,createdAt,state --limit 20`. Put the results in ~/Documents/AI_Context/daily-work/github/YYYY-MM-DD.txt, for example ~/Documents/AI_Context/daily-work/github/2025-07-21.txt. Skip any dates whose files already exist. If there were no PRs on a day, create an empty file such as 2025-07-21.txt. If there was an error, do not create a file and try to figure out what went wrong - if it is a rate-limiting error, sleep for 30 seconds and retry.
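A sketch of the per-day iteration, with the gh call stubbed out (assumes GNU date; the echo line stands in for the gh search prs command above):

```shell
# Walk every day in the range, skipping dates whose files already exist.
START="2025-07-21"; END="2025-07-23"
OUTDIR="$HOME/Documents/AI_Context/daily-work/github"
mkdir -p "$OUTDIR"
d="$START"
while [ "$(date -d "$d" +%s)" -le "$(date -d "$END" +%s)" ]; do
  OUT="$OUTDIR/$d.txt"
  if [ ! -f "$OUT" ]; then
    echo "would fetch PRs created on $d"   # replace with the gh command
  fi
  d=$(date -d "$d + 1 day" +%Y-%m-%d)
done
```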

## Jira

Ask me for my Jira API token and email. Tell me to generate an API token at https://id.atlassian.com/manage-profile/security/api-tokens if I don't have one. Store them temporarily as environment variables for use during this session only.

Use the fetch-jira.sh script in this directory to fetch tickets. The script accepts start and end dates (both inclusive) and queries Jira for all resolved tickets in that range, then splits them into daily files.

To fetch data efficiently, query 1-2 months at a time:

export JIRA_EMAIL="[email protected]"
export JIRA_API_TOKEN="your-token-here"

# Fetch one month at a time
./fetch-jira.sh 2025-11-01 2025-11-30
sleep 1
./fetch-jira.sh 2025-12-01 2025-12-31

The script will:
- Query Jira API for the date range
- Parse tickets using jq and extract all fields including UDP's custom description field
- Group tickets by resolution date
- Write to ~/Documents/AI_Context/daily-work/jira/YYYY-MM-DD.txt
- Create empty files for dates with no tickets
- Skip dates whose files already exist

## Claude Code Conversations

Extract my messages from Claude Code conversation logs for each day in the date range. Put the results in ~/Documents/AI_Context/daily-work/claude-code/2025-07-21.txt. Skip any dates whose files already exist. If there were no conversations on the day, create an empty file 2025-07-21.txt.

Use this command to extract user messages for a specific date:

```bash
find ~/.claude/projects -name "[0-9a-f]*.jsonl" | xargs cat | \
  jq -r 'select(.timestamp? | contains("2025-07-21")) | select(.type == "user") | select(.message.content | type == "string") | select(.message.content | test("<command-name>|<local-command|Caveat:") | not) | .message.content' \
  2>/dev/null > ~/Documents/AI_Context/daily-work/claude-code/2025-07-21.txt
```

This extracts only your typed messages, excluding Claude's responses, tool results, and system-generated messages. Each message will appear on its own line in the output file.
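To see what the filter chain keeps and drops, here is the same jq program run against a hypothetical four-line JSONL sample (the sample data is invented for illustration):

```shell
cat > /tmp/claude_sample.jsonl <<'EOF'
{"timestamp":"2025-07-21T10:00:00Z","type":"user","message":{"content":"Fix the funnel metrics query"}}
{"timestamp":"2025-07-21T10:00:05Z","type":"assistant","message":{"content":"Sure."}}
{"timestamp":"2025-07-21T10:01:00Z","type":"user","message":{"content":[{"type":"tool_result","content":"ok"}]}}
{"timestamp":"2025-07-21T10:02:00Z","type":"user","message":{"content":"<command-name>clear</command-name>"}}
EOF

# Only line 1 survives: line 2 is not type "user", line 3 has array content
# (a tool result), and line 4 matches the system-message pattern.
jq -r 'select(.timestamp? | contains("2025-07-21"))
  | select(.type == "user")
  | select(.message.content | type == "string")
  | select(.message.content | test("<command-name>|<local-command|Caveat:") | not)
  | .message.content' /tmp/claude_sample.jsonl
# → Fix the funnel metrics query
```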

## Summary

For each day in the date range, I would like a summary of my work on that day. Put the results in ~/Dropbox/Jon's Obsidian Vault/Work/Daily Summaries, for example ~/Dropbox/Jon's Obsidian Vault/Work/Daily Summaries/2025-07-21-uhura-staging-recovery.md - the filename is the date followed by a short 3-4 word description. If there is no material for the day, name the file 2025-07-21.md and leave it empty. Skip any dates whose files already exist.

Use the following as source material:

  • ~/Documents/AI_Context/daily-work/github/2025-07-21.txt
  • ~/Documents/AI_Context/daily-work/slack/2025-07-21.txt
  • ~/Documents/AI_Context/daily-work/jira/2025-07-21.txt
  • ~/Documents/AI_Context/daily-work/claude-code/2025-07-21.txt
  • ~/Dropbox/Jon's Obsidian Vault/Personal/Daily Log/2025-07-21*

Don't forget the Claude Code conversations and Jira tickets.

## Paragraph Summary

For each day in the date range, I would like a 1-paragraph summary of my work on that day. Put the results in ~/Dropbox/Jon's Obsidian Vault/Work/Daily Paragraph Summaries, for example ~/Dropbox/Jon's Obsidian Vault/Work/Daily Paragraph Summaries/2025-07-21-uhura-staging-recovery.md - the filename is the date followed by a short 3-4 word description. If there is no material for the day, name the file 2025-07-21.md and leave it empty. Skip any dates whose files already exist.

Use the following as source material:

  • ~/Documents/AI_Context/daily-work/github/2025-07-21.txt
  • ~/Documents/AI_Context/daily-work/slack/2025-07-21.txt
  • ~/Documents/AI_Context/daily-work/jira/2025-07-21.txt
  • ~/Documents/AI_Context/daily-work/claude-code/2025-07-21.txt
  • ~/Dropbox/Jon's Obsidian Vault/Personal/Daily Log/2025-07-21*

Don't forget the Claude Code conversations and Jira tickets.

## Finish

Print "daily work script finished".

Remind me to delete the Jira API token from https://id.atlassian.com/manage-profile/security/api-tokens for security.

## Implementation Notes

### Slack Scraping

  1. Use a shell script file: Write the curl command to a temp file like /tmp/slack_search.sh and execute it from there. This avoids shell escaping issues with the complex cookie strings and multipart form data.

  2. Use -H 'cookie: ...' instead of -b '...': The -b option has parsing issues with complex cookie strings that contain special characters.

  3. Correct jq structure: The API returns nested data - messages are inside .items[].messages[]. Use this jq command:
    jq -r '.items[] | .channel.name as $chan | .messages[0] | "--------------------------------------------------\n\nTimestamp: \(if .ts then (.ts | tonumber | strftime("%Y-%m-%d %H:%M:%S")) else "unknown" end)\nUser: \(.username // "unknown")\nChannel: \($chan // "unknown")\nText: \(.text // "")"'

  4. Rate limiting: Add sleep 0.5 between requests to avoid rate limiting.

  5. Multipart form data: The --data-raw content needs proper line endings. In the shell script, use actual newlines instead of \r\n escape sequences.

### GitHub PR Scraping

  1. Use a shell script file: Similar to Slack, write the fetch logic to a temp file like /tmp/fetch_prs.sh to avoid escaping issues.

  2. Validate JSON response: Check if the result starts with [ to verify it's a valid JSON array:
    ```bash
    FIRST_CHAR=$(echo "$RESULT" | head -c 1)
    if [[ "$FIRST_CHAR" != "[" ]]; then
      echo "Error - not valid JSON"
      exit 1
    fi
    ```

  3. Rate limiting: Add sleep 1 between requests. If you hit rate limits, sleep for 30 seconds and retry.

  4. Empty files for no PRs: Use touch "$OUTPUT_FILE" to create empty files for days with no PRs, so they get skipped on subsequent runs.

### Jira Scraping

  1. API Authentication: Use basic auth with email and API token: -u "$EMAIL:$API_TOKEN". The user should generate an API token at https://id.atlassian.com/manage-profile/security/api-tokens

  2. JQL date filtering: Use statusCategory = Done AND resolved >= "YYYY-MM-DD" AND resolved <= "YYYY-MM-DD" to filter by date range. The statusCategory = Done ensures only completed tickets are included, and resolved captures the completion date. Note: Use <= for the upper bound (inclusive) rather than < to capture the full day.

  3. Expand comments: Always include expand=comments in the query parameters to get all comments inline with the issues.

  4. Custom description field: UDP tickets use customfield_13337 ("Ticket Description") instead of the standard description field. Always include both description and customfield_13337 in the fields parameter, and check both when extracting text.

  5. Atlassian Document Format (ADF): Descriptions and comments use nested JSON structures. Extract text recursively by walking through content arrays and collecting text nodes. Handle hardBreak nodes as newlines.

  6. Parse JSON response: Use jq with a recursive text extraction function. Check both description and customfield_13337 for the ticket description (preferring the custom field if present):
    ```jq
    def extract_text:
      if type == "object" then
        if .type == "text" then .text
        elif .type == "hardBreak" then "\n"
        elif .content then .content | map(extract_text) | join("")
        else "" end
      elif type == "array" then map(extract_text) | join("\n\n")
      else "" end;

    .issues[] |
    "Ticket: \(.key)\nDescription:\n\(
      if .fields.customfield_13337 then (.fields.customfield_13337 | extract_text)
      elif .fields.description then (.fields.description | extract_text)
      else "No description" end
    )\nComments:\n\([.fields.comment.comments[] | "- \(.author.displayName): \(.body | extract_text)"] | join("\n\n"))"
    ```

  7. Rate limiting: Add sleep 0.5 between requests to avoid rate limiting.

  8. Empty files for no tickets: Use touch "$OUTPUT_FILE" to create empty files for days with no Jira activity, so they get skipped on subsequent runs.
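As a sanity check, the extract_text filter from item 6 can be exercised against a tiny hand-written ADF fragment (the fragment below is invented for illustration):

```shell
cat > /tmp/adf_sample.json <<'EOF'
{"type":"doc","content":[{"type":"paragraph","content":[{"type":"text","text":"Line one"},{"type":"hardBreak"},{"type":"text","text":"Line two"}]}]}
EOF

# extract_text walks the nested content arrays, keeps text nodes, and turns
# hardBreak nodes into newlines.
jq -r '
def extract_text:
  if type == "object" then
    if .type == "text" then .text
    elif .type == "hardBreak" then "\n"
    elif .content then .content | map(extract_text) | join("")
    else "" end
  elif type == "array" then map(extract_text) | join("\n\n")
  else "" end;
extract_text' /tmp/adf_sample.json
```

This should print "Line one" and "Line two" on separate lines.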

### Claude Code Conversation Extraction

  1. Session files only: Use the pattern [0-9a-f]*.jsonl to match UUID-named session files (main conversations), excluding agent-*.jsonl files (background agent operations, ~437 files).

  2. Message filtering chain: The jq filters are applied in sequence:
     - select(.timestamp? | contains("YYYY-MM-DD")) - filter by date
     - select(.type == "user") - only user messages, not Claude's responses
     - select(.message.content | type == "string") - only text content; excludes tool results (arrays)
     - select(.message.content | test("<command-name>|<local-command|Caveat:") | not) - excludes system-generated messages

  3. File structure:
     - Session files (e.g., b8b3da6b-441b-4c7e-a787-64a1614253e4.jsonl): main conversation with isSidechain: false
     - Agent files (e.g., agent-a3b64fe.jsonl): background agent work with isSidechain: true

  4. Output format: Each message appears on its own line. Messages are already in chronological order from the JSONL files.

  5. Empty files: If no conversations exist for a date, the output redirection will create an empty file, which is correct behavior for consistency with other data sources.

### Summary Generation

  1. IMPORTANT: Use AI to synthesize, don't copy/paste: Summaries should be AI-generated narratives that synthesize information from all sources, NOT mechanical copy/paste of PR titles or ticket descriptions. The summary should read naturally, explaining what was done and why it matters.

  2. Use the claude command for generation: Feed all source materials to the claude CLI with the --print flag to generate summaries:

```bash
SLACK=$(cat ~/Documents/AI_Context/daily-work/slack/$DATE.txt)
GITHUB=$(cat ~/Documents/AI_Context/daily-work/github/$DATE.txt)
JIRA=$(cat ~/Documents/AI_Context/daily-work/jira/$DATE.txt)
CLAUDE_CODE=$(head -100 ~/Documents/AI_Context/daily-work/claude-code/$DATE.txt | grep -v "^Warmup$" | head -50)

claude --print << PROMPT
Create a daily work summary for $DATE. Write naturally, synthesizing all sources.

SLACK: $SLACK
GITHUB: $GITHUB
JIRA: $JIRA
CLAUDE CODE: $CLAUDE_CODE

Format as markdown with ## Main Activities, ## Slack Discussions (if relevant), ## PRs Merged (if any).
PROMPT
```

  3. File naming convention:
     - Days with activity: 2025-10-15-deadlock-oom-fixes.md (date + 3-4 word description of main activity)
     - Days with no activity: 2025-10-18.md (just the date, empty file)

  4. Summary structure: Use this format for Daily Summaries:

```markdown
# Daily Work Summary: YYYY-MM-DD

## Main Activities
[Narrative description of work - synthesize, don't just list]

## Slack Discussions
[Key discussions and coordination - only if meaningful content]

## PRs Merged
[List with links - only if PRs exist]
```

  5. Paragraph summary format: 1-2 sentences capturing the essence of the day's work, mentioning PR numbers and ticket IDs where relevant. Generate from the full summary using claude --print.

  6. Empty days: For weekends or days with no work activity, use touch to create empty files so they get skipped on subsequent runs.

  7. Batch processing: Process multiple days in sequence, using a loop to call claude --print for each day.

  8. Slack-only days: If there are Slack discussions but no PRs, still create a summary focusing on the discussions, coordination, and any investigations mentioned.

### General Shell Scripting

  1. Path handling with special characters: When working with paths containing apostrophes (like Jon's Obsidian Vault):
     - Use absolute paths: /home/jon/Dropbox/Jon's Obsidian Vault/... instead of ~/Dropbox/Jon's Obsidian Vault/...
     - Or use proper quoting: "$HOME/Dropbox/Jon's Obsidian Vault/..."
     - Avoid mixing tilde expansion (~) with escaped quotes (\'), as the shell expands ~ before processing quotes/escapes.
     - Example that fails: ~/Dropbox/Jon\'s\ Obsidian\ Vault/
     - Example that works: /home/jon/Dropbox/Jon\'s\ Obsidian\ Vault/
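A quick demonstration of the quoting rule, using a throwaway directory (the path below is hypothetical; double-quoting the whole path handles both the apostrophe and the spaces):

```shell
# Double quotes preserve the apostrophe and spaces; $HOME still expands.
DIR="$HOME/tmp-quoting-demo/Jon's Obsidian Vault/Work"
mkdir -p "$DIR"
[ -d "$DIR" ] && echo "directory created"
rm -rf "$HOME/tmp-quoting-demo"
# → directory created
```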

# Supported AI Coding Agents

This skill is compatible with the SKILL.md standard and works with all major AI coding agents that support it.