Install this skill from the multi-skill repository:

```bash
npx skills add olaservo/mcp-agent-skills --skill "mcp-client-ts"
```
# Description
Build TypeScript MCP clients with composable code snippets. Includes agentic pattern with LLM integration, server-initiated requests (sampling, elicitation), and dynamic discovery. Use when creating applications that connect to MCP servers.
# SKILL.md
name: mcp-client-ts
description: Build TypeScript MCP clients with composable code snippets. Includes agentic pattern with LLM integration, server-initiated requests (sampling, elicitation), and dynamic discovery. Use when creating applications that connect to MCP servers.
TypeScript MCP Client Builder
Build MCP (Model Context Protocol) clients in TypeScript using code snippets and patterns from the official SDK.
Now includes advanced capability support: sampling handlers, elicitation, roots, dynamic discovery, resource subscriptions, and task support with persistence.
How It Works
- Browse the snippet catalog below or in `snippets/`
- Copy the snippets you need into your project
- Customize the copied code for your use case
- Connect to your MCP server and use tools/resources/prompts
Quick Start Decision Tree
Which Snippet Should I Start With?
```
Building an agentic app where an LLM decides when to use tools?
└─> Use client-with-llm snippet
    Includes Claude integration, tool calling loop, interactive chat

Just need direct MCP server access (no LLM)?
└─> Use client-setup snippet
    Basic connection, manual tool/resource/prompt calls
```
Which Transport Should I Use?
```
Connecting to a local process (CLI tool, desktop app)?
└─> Use StdioClientTransport
    Examples: Claude Desktop, CLI applications, local dev servers

Connecting to a remote HTTP server?
└─> Use StreamableHTTPClientTransport
    Examples: Cloud-hosted MCP servers, web services

Need to support legacy SSE-only servers?
└─> Use StreamableHTTP with SSE fallback pattern
    See transport-http snippet for fallback example
```
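If you only need the shape of the fallback before opening the snippet, here is a minimal sketch mirroring the pattern in the official SDK documentation; the URL and client names are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Try the modern Streamable HTTP transport first; if the server rejects it
// (e.g. with a 4xx on initialization), retry with the legacy SSE transport.
async function connectWithFallback(url: string): Promise<Client> {
  const baseUrl = new URL(url);
  try {
    const client = new Client({ name: "streamable-http-client", version: "1.0.0" });
    await client.connect(new StreamableHTTPClientTransport(baseUrl));
    return client;
  } catch {
    const client = new Client({ name: "sse-client", version: "1.0.0" });
    await client.connect(new SSEClientTransport(baseUrl));
    return client;
  }
}
```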
Phase 1: Research
1.1 Identify Your Server
Before writing code, understand:
- What MCP server are you connecting to?
- How is the server started? (local process vs remote URL)
- Does the server require authentication?
- What tools/resources/prompts does it expose?
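A quick way to answer the last two questions is to connect and print what the server advertises. A minimal sketch, assuming a local server started with `node path/to/server.js`:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect and dump what the server exposes before picking snippets.
const client = new Client({ name: "inspector", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["path/to/server.js"] })
);

const caps = client.getServerCapabilities();
console.log("capabilities:", caps);

if (caps?.tools) {
  console.log("tools:", (await client.listTools()).tools.map((t) => t.name));
}
if (caps?.resources) {
  console.log("resources:", (await client.listResources()).resources.map((r) => r.uri));
}
if (caps?.prompts) {
  console.log("prompts:", (await client.listPrompts()).prompts.map((p) => p.name));
}

await client.close();
```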
1.2 Browse Available Snippets
| Snippet | Description | Best For |
|---|---|---|
| `client-setup` | Basic Client with connection pattern | Direct MCP access without LLM |
| `client-with-llm` | Agentic client with Claude (API/Bedrock/Vertex/Azure) | LLM-powered tool calling apps |
| `transport-stdio` | StdioClientTransport examples | Local/subprocess servers |
| `transport-http` | StreamableHTTPClientTransport examples | Remote HTTP servers |
| `sampling-handler` | Handle sampling requests with tool support (MCP 2025-11-25) | Servers that need Claude completions |
| `elicitation-handler` | Handle user input requests (form/URL) | OAuth, confirmations, data collection |
| `roots-handler` | Expose filesystem directories | IDE integrations, file tools |
| `list-changed` | React to dynamic capability changes | Real-time tool/resource updates |
| `subscriptions` | Subscribe to resource updates | Live data feeds, monitoring |
| `logging` | Receive server log messages with level filtering | Debugging, monitoring |
| `completions` | Argument autocomplete with interactive picker | IDE-like experiences |
| `tasks` | Task support for long-running operations with streaming | Servers with async/long-running tools |
| `persistent-task-store` | File-based task persistence with TTL | Resumable task tracking |
Phase 2: Implement
2.1 Initialize Project
```bash
mkdir my-mcp-client && cd my-mcp-client
npm init -y
npm install @modelcontextprotocol/sdk @anthropic-ai/sdk dotenv
npm install -D typescript @types/node
npx tsc --init
mkdir src
```
Note: The above installs the direct Anthropic API SDK. For other providers, see the provider options in the `client-with-llm` snippet.
Update `tsconfig.json`:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "rootDir": "./src",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "types": ["node"]
  },
  "include": ["src/**/*"]
}
```
Update `package.json`:

```json
{
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```
Create `.env` for API keys (choose based on your provider):

```bash
# Direct API
ANTHROPIC_API_KEY=your-api-key

# AWS Bedrock
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1

# Google Vertex AI
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_REGION=us-central1

# Azure (Foundry)
ANTHROPIC_FOUNDRY_API_KEY=your-api-key
ANTHROPIC_FOUNDRY_RESOURCE=your-resource.azure.anthropic.com
```
Create `.gitignore`:

```
node_modules/
dist/
build/
.env
*.log
```
2.2 Add Snippets
Copy snippets from this skill's `snippets/` directory into your project:

```
snippets/
├── client/
│   ├── index.ts      # Basic client setup
│   └── with-llm.ts   # Agentic client with Claude
└── transports/
    ├── stdio.ts      # Local process transport
    └── http.ts       # Remote HTTP transport
```
Note: The `client-with-llm` snippet requires a Claude provider SDK:

```bash
npm install dotenv

# Choose ONE provider:
npm install @anthropic-ai/sdk          # Direct API
npm install @anthropic-ai/bedrock-sdk  # AWS Bedrock
npm install @anthropic-ai/vertex-sdk   # Google Vertex AI
npm install @anthropic-ai/foundry-sdk  # Azure (Foundry)
```
Copy snippets to your project:
```bash
cp snippets/client/index.ts /path/to/my-mcp-client/src/
cp snippets/transports/stdio.ts /path/to/my-mcp-client/src/
```
2.3 Customize and Connect
Each snippet includes:
- Source URL linking to official documentation
- Working code ready to customize
Modify the copied code:
1. Update client name and version
2. Configure the appropriate transport
3. Add your application logic
4. Handle tool results and resources
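For step 4, a minimal sketch of handling a tool call result and a resource read; the tool name, arguments, and resource URI below are placeholders:

```typescript
const result = await client.callTool({ name: "tool-name", arguments: {} });

// Tool results carry a list of content blocks; text is the most common type.
for (const block of result.content as Array<{ type: string; text?: string }>) {
  if (block.type === "text" && block.text) {
    console.log(block.text);
  } else {
    console.log(`Unhandled content type: ${block.type}`);
  }
}

// Resources are read by URI and return a list of contents (text or binary).
const resource = await client.readResource({ uri: "file:///example.txt" });
for (const item of resource.contents) {
  if ("text" in item) console.log(item.text);
}
```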
Phase 3: Test
3.1 Build
```bash
npm run build
```
3.2 Test Connection
Required: The `client-with-llm` snippet requires `ANTHROPIC_API_KEY` (or equivalent provider credentials) in your `.env` file before testing. Without it, you'll get: `Could not resolve authentication method`.
```bash
# Interactive mode with a local server
node dist/index.js path/to/server.js

# Interactive mode with an npx package
node dist/index.js @modelcontextprotocol/server-everything

# Non-interactive mode (single query)
node dist/index.js @modelcontextprotocol/server-everything "add 5 and 3"
```
3.3 Quality Checklist
- [ ] Client connects successfully to server
- [ ] Tools are listed correctly
- [ ] Tool calls return expected results
- [ ] Resources are readable
- [ ] Error handling is robust
- [ ] Connection cleanup on exit
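For the last item, one way to guarantee cleanup is to close the client on process signals. A sketch, assuming `client` is your connected Client instance:

```typescript
// Close the MCP connection on Ctrl+C or termination, then exit.
const shutdown = async () => {
  try {
    await client.close();
  } finally {
    process.exit(0);
  }
};

process.on("SIGINT", () => void shutdown());
process.on("SIGTERM", () => void shutdown());
```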
Available Snippets Catalog
Client
| Name | Description |
|---|---|
| `client-setup` | Basic MCP Client class with connection, tools, prompts, and resources methods |
| `client-with-llm` | Agentic client with Claude (supports API, Bedrock, Vertex, Azure) |
| `multi-server` | Connect to multiple MCP servers with tool aggregation and automatic routing |
Transports
| Name | Description |
|---|---|
| `transport-stdio` | StdioClientTransport for Node.js, Python, and npx-based servers |
| `transport-http` | StreamableHTTPClientTransport for remote servers with OAuth and fallback patterns |
Handlers (Server-Initiated Requests)
| Name | Description |
|---|---|
| `sampling-handler` | Handle sampling/createMessage requests with tool support (MCP 2025-11-25) |
| `elicitation-handler` | Handle elicitation/create requests for user input (form and URL modes) |
| `roots-handler` | Expose filesystem roots to servers with change notifications |
Features (Dynamic Discovery & Monitoring)
| Name | Description |
|---|---|
| `list-changed` | React to dynamic tool/prompt/resource changes with listChanged handlers |
| `subscriptions` | Subscribe to resource updates with reactive patterns |
| `logging` | Receive server log messages with level filtering |
| `completions` | Argument autocomplete for prompts and resources with interactive picker |
| `annotations` | Filter, sort, and display content with audience, priority, and timestamp annotations |
| `tool-annotations` | Interpret tool behavior hints for UI warnings and confirmations |
Capabilities
| Name | Description |
|---|---|
| `tasks` | Task support for long-running tool operations with progress streaming |
| `tasks-receiver` | Receiver-side task support for handling task-augmented requests (bidirectional tasks) |
Stores
| Name | Description |
|---|---|
| `persistent-task-store` | File-based task store with persistence and TTL cleanup |
Quick Reference
Core Imports
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
```
Basic Client Pattern
```typescript
const client = new Client(
  { name: "my-client", version: "1.0.0" },
  { capabilities: {} }
);

const transport = new StdioClientTransport({
  command: "node",
  args: ["path/to/server.js"],
});

await client.connect(transport);

// Use the client
const tools = await client.listTools();
const result = await client.callTool({ name: "tool-name", arguments: {} });

// Cleanup
await client.close();
```
Multi-Server Pattern
```typescript
import {
  connectToAllServers,
  aggregateTools,
  callTool,
  disconnectAll,
} from './multi-server.js';

// Connect to multiple servers
const clients = await connectToAllServers({
  "time": { url: "http://localhost:3001/mcp" },
  "weather": { url: "http://localhost:3002/mcp" },
});

// Get all tools with qualified names (server__tool)
const tools = await aggregateTools(clients);
// [{ name: "time__get-time", originalName: "get-time", serverName: "time", ... }]

// Call a tool using qualified name
const result = await callTool(clients, "weather__get-forecast", { city: "Seattle" });

// Cleanup
await disconnectAll(clients);
```
Server Instructions
MCP servers can provide instructions that describe how to use their tools effectively. These are returned during initialization and should be included in LLM system prompts:
```typescript
// After connecting, retrieve server instructions
const instructions = client.getInstructions();
if (instructions) {
  console.log("Server instructions:", instructions);
  // Include in your LLM's system prompt for better tool usage
}
```
The client-with-llm snippet automatically includes server instructions in the Claude system prompt. If you want to add a base system prompt along with server instructions:
```typescript
// Set a base system prompt (optional)
mcpClient.setSystemPrompt("You are a helpful assistant with access to MCP tools.");

// Server instructions will be appended automatically
// when processQuery() calls Claude
```
Related Skills
This skill focuses on building MCP clients with full protocol support. See also:
| Skill | Use When |
|---|---|
| mcp-server-ts | Building MCP servers that clients connect to |
| claude-agent-sdk-ts | Building Claude agents (limited MCP: tools + resources only) |
Why use mcp-client-ts instead of Claude Agent SDK?
- Need full MCP protocol (prompts, sampling, roots, logging, subscriptions, tasks)
- Building non-agent applications (IDEs, services, custom integrations)
- Need direct control over MCP client behavior
Reference Files
For deeper guidance, load these reference documents:
- TypeScript Client Patterns - SDK imports, transport options, API methods
MCP Documentation Server
For up-to-date protocol details, use the MCP docs server:
```json
{
  "mcpServers": {
    "mcp-docs": {
      "type": "http",
      "url": "https://modelcontextprotocol.io/mcp"
    }
  }
}
```
Query the docs server for:
- "Example clients" - includes examples of both open source and closed source MCP clients
- Client feature support
- Protocol specification updates
External Resources
- MCP Specification - Official protocol documentation
- TypeScript SDK - Official SDK repository
- Build a Client Tutorial - Step-by-step guide
- Client Examples - Runnable examples
Building More Capable Clients
Most MCP clients only implement basic features. To build a more capable client that stands out, implement these advanced capabilities:
Capability Declaration
Declare capabilities during client initialization to enable server-initiated requests:
```typescript
const client = new Client(
  { name: 'my-client', version: '1.0.0' },
  {
    capabilities: {
      // Allow server to request LLM completions (with tool support - MCP 2025-11-25)
      sampling: { tools: {} },

      // Allow server to request user input
      elicitation: {
        form: {},  // Structured input
        url: {}    // URL redirects (OAuth)
      },

      // Expose filesystem roots
      roots: {
        listChanged: true  // Notify on changes
      },

      // Enable task support for long-running operations
      tasks: {
        list: {},    // Support tasks/list
        cancel: {},  // Support tasks/cancel
        requests: {  // Task-augmented request types you can receive
          sampling: { createMessage: {} },
          elicitation: { create: {} },
        },
      }
    }
  }
);
```
Server-Initiated Requests
Sampling - Servers can request LLM completions:
```typescript
import { CreateMessageRequestSchema } from '@modelcontextprotocol/sdk/types.js';

client.setRequestHandler(CreateMessageRequestSchema, async (request) => {
  // Call Claude API with request.params.messages
  return { role: 'assistant', content: { type: 'text', text: response }, model: 'claude-...' };
});
```
Elicitation - Servers can request user input:
```typescript
import { ElicitRequestSchema } from '@modelcontextprotocol/sdk/types.js';

client.setRequestHandler(ElicitRequestSchema, async (request) => {
  // Prompt user based on request.params.requestedSchema
  return { action: 'accept', content: { confirm: true } };
});
```
Dynamic Discovery
React to server capability changes in real-time:
```typescript
const client = new Client(
  { name: 'my-client', version: '1.0.0' },
  {
    capabilities: {},
    listChanged: {
      tools: {
        autoRefresh: true,
        onChanged: (err, tools) => {
          console.log('Tools updated:', tools?.map(t => t.name));
        }
      }
    }
  }
);
```
Resource Subscriptions
Subscribe to resources for live updates:
```typescript
import { ResourceUpdatedNotificationSchema } from '@modelcontextprotocol/sdk/types.js';

// Check if server supports subscriptions
const caps = client.getServerCapabilities();
if (caps?.resources?.subscribe) {
  await client.subscribeResource({ uri: 'file:///data.json' });

  client.setNotificationHandler(
    ResourceUpdatedNotificationSchema,
    async (notification) => {
      const content = await client.readResource({ uri: notification.params.uri });
      console.log('Resource updated:', content);
    }
  );
}
```
Task Support
Handle long-running tool operations with progress tracking:
```typescript
import {
  serverSupportsTasks,
  callToolWithTaskSupport,
  callToolAuto,
  listTasks,
  getTask,
  cancelTask,
} from './capabilities/tasks.js';

// Check if server supports tasks
if (serverSupportsTasks(client)) {
  // Call a tool with streaming progress updates
  const result = await callToolWithTaskSupport(client, 'long-running-tool', { arg: 'value' }, {
    onTaskCreated: (task) => console.log('Task started:', task.taskId),
    onTaskStatusUpdate: (task) => console.log('Status:', task.status, task.statusMessage),
  });

  // Or auto-detect: uses task mode if tool supports it, regular mode otherwise
  const tools = await client.listTools();
  const tool = tools.tools.find(t => t.name === 'my-tool');
  const autoResult = await callToolAuto(client, tool, { arg: 'value' });

  // Task management
  const tasks = await listTasks(client);
  const task = await getTask(client, 'task-id');
  await cancelTask(client, 'task-id');
}
```
For persistent task storage across restarts:
```typescript
import { createPersistentTaskStore } from './stores/persistent-task-store.js';
import { join } from 'node:path';
import { homedir } from 'node:os';

const taskStore = createPersistentTaskStore({
  dataDir: join(homedir(), '.my-app', 'data'),
  persistenceEnabled: true,
  onLog: (msg) => console.log(msg),
});

const client = new Client(
  { name: 'my-client', version: '1.0.0' },
  { capabilities: { tasks: {} }, taskStore }
);
```
For handling task-augmented requests when the client is the receiver (bidirectional tasks):
```typescript
import {
  createReceiverTaskManager,
  setupReceiverTaskHandlers,
  getReceiverTaskCapabilities,
  emitTaskStatusNotification,
} from './capabilities/tasks-receiver.js';

// Create task manager for receiver-side tasks
const taskManager = createReceiverTaskManager({
  ttl: 300000, // 5 minutes
  onLog: (msg) => console.log(msg),
});

// Set up request handlers for tasks/list, tasks/get, etc.
setupReceiverTaskHandlers(client, taskManager);

// Get capability declaration for client initialization
const taskCapabilities = getReceiverTaskCapabilities({
  sampling: true,     // Support task-augmented sampling requests
  elicitation: true,  // Support task-augmented elicitation requests
});

// In your sampling handler, create receiver tasks:
client.setRequestHandler(CreateMessageRequestSchema, async (request) => {
  // Check if this is a task-augmented request
  if (request.params.task) {
    const record = taskManager.createTask({
      ttl: request.params.task.ttl,
      initialStatus: 'working',
      statusMessage: 'Processing sampling request...',
    });

    // Process asynchronously, then complete:
    // taskManager.completeTask(record.task.taskId, result);
    // await emitTaskStatusNotification(client, record.task);

    return { task: record.task }; // Return CreateTaskResult
  }

  // Normal non-task flow...
});
```
Use Case Examples
Beyond chatbots, capable MCP clients can power:
- IDE Integrations: Expose project roots, handle file operations, provide completions
- Workflow Automation: Orchestrate multi-tool workflows, handle confirmations via elicitation
- Monitoring Dashboards: Subscribe to resources for live data, receive server logs
- AI-Powered CLI Tools: Use sampling for intelligent command suggestions
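For example, a monitoring-style client can receive server logs using the SDK's logging notification schema. A sketch, assuming `client` is a connected Client and the server declares the logging capability:

```typescript
import { LoggingMessageNotificationSchema } from "@modelcontextprotocol/sdk/types.js";

// Print each log message the server sends.
client.setNotificationHandler(LoggingMessageNotificationSchema, (notification) => {
  const { level, logger, data } = notification.params;
  console.log(`[${level}] ${logger ?? "server"}:`, data);
});

// Ask the server to send messages at "info" level and above.
await client.setLoggingLevel("info");
```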
See the MCP Reference Client for a complete implementation.
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.