Created by James Stothard
You will build a local MCP server in TypeScript with a safe read-only resource and well-parameterized tools, then wire it into a host reliably without breaking JSON configuration, file paths, or the stdio transport. You’ll adopt a repeatable test/debug workflow and finish by integrating MCP tool calls into an AI chat loop.
10 modules • Each builds on the previous one
Map the end-to-end MCP system (host, client, server) so you can predict where failures occur and avoid confusing the MCP server with the AI model. Compare stdio vs Streamable HTTP at a decision-making level so you can choose the right transport for local vs remote use.
Learn the minimum terminal skills needed to run, inspect, and troubleshoot an MCP server locally without getting lost in folders, paths, or environment variables. Focus on repeatable habits that prevent configuration mistakes.
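The habits above boil down to a handful of commands you run before touching anything else. A sketch of a pre-flight check (the `build/index.js` path is a hypothetical project layout, not a requirement):

```shell
# Where am I, and which files are actually here?
pwd
ls

# Which Node am I running? MCP SDK tooling generally expects a recent version.
node --version

# Check a single environment variable instead of dumping everything.
echo "$HOME"

# Only then run the server, always from the project root
# (hypothetical path -- adjust to your build output):
# node build/index.js
```

Running these in the same order every time means that when something breaks, you already know your directory, your Node version, and your environment were not the problem.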
Acquire just enough TypeScript and Node mental models to read and modify an MCP server safely, with emphasis on async behavior and library-based schemas. The goal is controlled edits and confident debugging, not becoming a general programmer.
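The two mental models that matter most are sketched below: an async handler you must `await`, and schema-style validation that rejects bad input before your logic runs. The hand-rolled `parseArgs` stands in for a schema library such as zod (which the MCP TypeScript SDK uses); the names here are illustrative only.

```typescript
// A tool handler is an async function: it returns a Promise,
// and the caller must await it to get the actual value.
async function fetchGreeting(name: string): Promise<string> {
  // Simulate an async step (e.g. a file read or network call).
  await new Promise((resolve) => setTimeout(resolve, 10));
  return `Hello, ${name}!`;
}

// Library-based schemas validate input before your handler runs.
// A tiny hand-rolled equivalent of what a schema library does:
function parseArgs(input: unknown): { name: string } {
  if (typeof input !== "object" || input === null) {
    throw new Error("arguments must be an object");
  }
  const name = (input as Record<string, unknown>).name;
  if (typeof name !== "string" || name.length === 0) {
    throw new Error("`name` must be a non-empty string");
  }
  return { name };
}

async function main(): Promise<void> {
  const args = parseArgs({ name: "Ada" }); // throws on bad input
  const greeting = await fetchGreeting(args.name); // await, or you get a Promise
  console.log(greeting);
}
main();
```

Forgetting the `await` (and logging the Promise object instead of the string) is the single most common bug when modifying async server code, which is why this model is worth internalizing first.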
Implement a read-only resource that returns predictable content so you can validate end-to-end connectivity and content rendering. Learn what makes a resource “safe” and easy for an AI host to use reliably.
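In plain TypeScript, a "safe" resource read reduces to a deterministic function that returns the contents shape the MCP spec defines: a list of entries, each with a URI, a MIME type, and text. The `memo://team-directory` URI and the data are hypothetical; in a real server the MCP TypeScript SDK registers the handler for you, but the returned payload looks roughly like this:

```typescript
// Shape of a resource read result as the MCP spec defines it.
interface ResourceContents {
  uri: string;
  mimeType: string;
  text: string;
}

interface ReadResourceResult {
  contents: ResourceContents[];
}

// A "safe" read-only resource: no side effects, no user input in the
// lookup path, and identical content every time for a given URI.
function readTeamDirectory(uri: string): ReadResourceResult {
  const team = [
    { name: "Ada", role: "engineer" },
    { name: "Grace", role: "researcher" },
  ];
  return {
    contents: [
      {
        uri,
        mimeType: "application/json", // tells the host how to render it
        text: JSON.stringify(team),
      },
    ],
  };
}

const result = readTeamDirectory("memo://team-directory");
console.log(result.contents[0].text);
```

Because the output is deterministic, this is also the ideal first thing to test: if the host can read this resource, your transport and wiring work end to end.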
Define tool arguments so the AI can request exactly what it needs without overwhelming outputs, while keeping validation strict and failure modes clear. Emphasize practical parameter design (limits, filters, date ranges) and predictable responses.
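The parameter-design principles above can be sketched as a validator for a hypothetical search tool: strict type checks with specific error messages, a sensible default, an optional date filter, and a hard cap on result count (schema libraries like zod do this declaratively; the logic is the same):

```typescript
interface SearchArgs {
  query: string;
  limit: number;    // capped so outputs stay small
  since?: string;   // optional ISO-8601 date filter
}

const MAX_LIMIT = 25;

// Strict validation with clear failure modes: reject bad input with a
// specific message the AI can act on, never a vague or silent failure.
function parseSearchArgs(raw: Record<string, unknown>): SearchArgs {
  const query = raw.query;
  if (typeof query !== "string" || query.trim() === "") {
    throw new Error("`query` must be a non-empty string");
  }

  const limit = raw.limit ?? 10; // sensible default when omitted
  if (typeof limit !== "number" || !Number.isInteger(limit) || limit < 1) {
    throw new Error("`limit` must be a positive integer");
  }

  let since: string | undefined;
  if (raw.since !== undefined) {
    if (typeof raw.since !== "string" || Number.isNaN(Date.parse(raw.since))) {
      throw new Error("`since` must be an ISO-8601 date string");
    }
    since = raw.since;
  }

  // Clamp rather than error: models routinely over-ask for results.
  return { query: query.trim(), limit: Math.min(limit, MAX_LIMIT), since };
}

console.log(parseSearchArgs({ query: "mcp", limit: 100 }));
```

Note the design choice on `limit`: an out-of-range value is clamped instead of rejected, because an oversized request is a preference, not an error, while a malformed `query` or `since` is genuinely unusable and should fail loudly.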
Configure an MCP host to discover and run your server, focusing on JSON correctness, paths/commands, and secure handling of secrets. Understand the modern Claude Desktop extension path using MCPB bundles and manifest.json, plus how configuration errors prevent tools from appearing.
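For the classic configuration path, the shape of a server entry looks roughly like the sketch below (server name, path, and environment variable are hypothetical; your absolute path will differ):

```json
{
  "mcpServers": {
    "my-notes-server": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-notes/build/index.js"],
      "env": {
        "NOTES_API_KEY": "value-kept-out-of-version-control"
      }
    }
  }
}
```

The errors this module covers tend to live in exactly these few lines: a trailing comma that makes the whole file invalid JSON, a relative path where an absolute one is required, a `command` that isn't on the host's PATH, or a secret pasted into `args` instead of `env`. Any one of these silently prevents the server's tools from appearing.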
Adopt a repeatable debugging workflow using the MCP Inspector plus structured server logging so you can isolate whether failures are in your code, the transport, or the host configuration. Learn how to test resources and tools with controlled inputs before trusting them in chat.
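Two pieces make this workflow repeatable. First, the Inspector lets you exercise tools and resources with controlled inputs before any chat is involved (e.g. `npx @modelcontextprotocol/inspector node build/index.js`, with the path adjusted to your build output). Second, structured logging must go to stderr: over the stdio transport, stdout carries the JSON-RPC stream, and anything you print there corrupts the protocol. A minimal sketch of such a logger (names are illustrative):

```typescript
// Over stdio, stdout belongs to JSON-RPC -- log to stderr only,
// one JSON object per line so logs stay machine-parseable.
function logLine(
  level: "debug" | "info" | "error",
  msg: string,
  data: Record<string, unknown> = {},
): string {
  const entry = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    msg,
    ...data,
  });
  process.stderr.write(entry + "\n"); // stderr, never stdout
  return entry; // returned so tests can inspect what was logged
}

logLine("info", "tool called", { tool: "search_notes", limit: 10 });
```

With this split, a failure isolates quickly: if the Inspector call succeeds but chat fails, the bug is host configuration; if the Inspector call fails and your stderr log shows the handler ran, the bug is in your code; if the log shows nothing, suspect the transport.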
Integrate an external API into your MCP server so the AI can fetch up-to-date information on demand, while handling rate limits, failures, and response shaping. Focus on returning compact, reliable results that are easy for the AI to use in multi-step work.
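The two halves of this module — defensive fetching and response shaping — can be sketched as follows. The article structure and `published_at` field are hypothetical stand-ins for whatever upstream API you integrate; the point is the pattern: time-box the request, turn HTTP failures (including rate limits) into specific errors, and return only the fields the AI needs.

```typescript
// Compact shape handed back to the AI -- not the raw upstream payload.
interface Article {
  title: string;
  url: string;
  publishedAt: string;
}

function shapeArticles(raw: unknown, limit: number): Article[] {
  if (!Array.isArray(raw)) throw new Error("expected an array of articles");
  return raw.slice(0, limit).map((item) => ({
    title: String(item.title ?? "untitled"),
    url: String(item.url ?? ""),
    publishedAt: String(item.published_at ?? "unknown"),
  }));
}

// Time-boxed fetch so one slow upstream call cannot hang the tool.
async function fetchJson(url: string, timeoutMs = 5_000): Promise<unknown> {
  const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  if (res.status === 429) throw new Error("rate limited; retry later");
  if (!res.ok) throw new Error(`upstream error: HTTP ${res.status}`);
  return res.json();
}

// Shaping works on any payload -- no network needed to see it:
console.log(
  shapeArticles(
    [{ title: "MCP ships", url: "https://example.com/a", published_at: "2025-01-01" }],
    5,
  ),
);
```

Shaping matters more than it first appears: a raw API response can be tens of kilobytes of fields the model will never use, and trimming it both cuts token cost and removes ambiguity in later reasoning steps.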
Design workflows where the AI performs multiple tool calls in a correct, verifiable order, including multi-server scenarios. Learn how to structure tool outputs so later steps can consume them deterministically and how to reduce “plan drift.”
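One concrete way to make step outputs deterministic is to return exact identifiers and an explicit continuation signal rather than prose the model must re-interpret. A sketch with a hypothetical ticket-listing tool (names and data invented for illustration):

```typescript
// A step output later steps can consume verbatim: a named step,
// exact IDs for the next tool call, and an unambiguous cursor.
interface StepResult {
  ok: boolean;
  step: string;
  ids: string[];             // exact values for the next call -- no parsing
  nextCursor: string | null; // null means done; the model never guesses
}

function listOpenTickets(page: number): StepResult {
  const all = ["T-101", "T-102", "T-103"]; // stand-in for a real backend
  const pageSize = 2;
  const start = page * pageSize;
  const ids = all.slice(start, start + pageSize);
  return {
    ok: true,
    step: "list_open_tickets",
    ids,
    nextCursor: start + pageSize < all.length ? String(page + 1) : null,
  };
}

console.log(JSON.stringify(listOpenTickets(0)));
```

Compare this with a tool that returns "Found tickets T-101 and T-102, there may be more": the structured form removes exactly the ambiguity that causes plan drift, because each later call copies `ids` and `nextCursor` instead of re-deriving them from prose.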
Connect your MCP server to a simple chat application by combining an LLM API call with an MCP client that can list tools/resources and execute tool calls during a conversation. Focus on the integration boundary: when the chat should invoke MCP, how results are inserted back into the dialogue, and how to keep the experience responsive.
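The integration boundary can be sketched as a single turn of the loop. The `Llm` and `McpClient` interfaces below are hypothetical stand-ins for a real LLM API client and an MCP client session; the shape of the flow — ask the model, execute any requested tool via MCP, feed the result back, ask the model to finish — is the part that carries over:

```typescript
interface ToolCall { name: string; args: Record<string, unknown> }
interface LlmReply { text?: string; toolCall?: ToolCall }

// Hypothetical boundary interfaces: the chat loop owns the conversation,
// the MCP client owns tool execution.
interface Llm {
  complete(messages: string[]): Promise<LlmReply>;
}
interface McpClient {
  callTool(name: string, args: Record<string, unknown>): Promise<string>;
}

// One turn: if the model requests a tool, run it via MCP, append the
// result to the dialogue, then let the model finish its answer.
async function runTurn(
  llm: Llm,
  mcp: McpClient,
  history: string[],
): Promise<string> {
  const reply = await llm.complete(history);
  if (!reply.toolCall) return reply.text ?? "";

  const result = await mcp.callTool(reply.toolCall.name, reply.toolCall.args);
  history.push(`tool:${reply.toolCall.name} -> ${result}`);
  const followUp = await llm.complete(history);
  return followUp.text ?? "";
}
```

In a real application the `McpClient` side would also list available tools at startup and pass their schemas to the LLM call, and `runTurn` would loop while the model keeps requesting tools; keeping that logic behind one small function like this is what keeps the chat responsive and the boundary testable.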
Begin your learning journey
In-video quizzes and scaffolded content to maximize retention.