Santosh - Assignment 15 - A2A

🧭 TL;DR Feedback – Simplifying & Aligning with A2A Client Patterns

Great job — your code works and shows deep understanding of streaming, chunk parsing, and A2A context handling. To simplify and align with the official A2A client patterns (like test_client.py), focus on these refinements:


🔹 1. Separate Concerns

  • Move all A2A-specific logic into a helper module (a2a_adapter.py).
  • Keep your LangGraph node (call_a2a_server) lightweight — just call the adapter and update state.
  • This mirrors the clean layering in the example client.
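
A minimal sketch of that layering, assuming the a2a-sdk Python client used by test_client.py (the module name a2a_adapter.py, the function get_a2a_answer, and the state keys are illustrative):

    # a2a_adapter.py -- all A2A-specific logic lives here
    from uuid import uuid4

    from a2a.client import A2AClient
    from a2a.types import MessageSendParams, SendStreamingMessageRequest

    async def get_a2a_answer(client: A2AClient, question: str) -> str:
        """Stream a question to the A2A server and return the joined text parts."""
        params = MessageSendParams(
            message={
                'role': 'user',
                'parts': [{'kind': 'text', 'text': question}],
                'messageId': uuid4().hex,
            }
        )
        request = SendStreamingMessageRequest(id=str(uuid4()), params=params)
        texts = []
        async for chunk in client.send_message_streaming(request):
            data = chunk.model_dump(mode='json', exclude_none=True)
            for part in (data.get('result') or {}).get('artifact', {}).get('parts', []):
                if 'text' in part:
                    texts.append(part['text'])
        return ''.join(texts)

    # graph.py -- the node only orchestrates: call the adapter, update state
    # (a2a_client is assumed to be constructed once at startup; hypothetical name)
    async def call_a2a_server(state: dict) -> dict:
        answer = await get_a2a_answer(a2a_client, state['question'])
        return {**state, 'answer': answer}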

🔹 2. Use Typed Models Instead of Generic JSON

  • Use the built-in Pydantic models (e.g., chunk.root.result.artifact.parts) instead of generic .get() chains.
  • Avoid converting everything to dicts; rely on model_dump() only for debugging or logging.
  • The SDK already normalizes streaming and non-streaming responses for you.
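
For example, inside the streaming loop (a hedged sketch: in recent a2a-sdk versions both the streamed chunk and each Part are Pydantic RootModels, so the payload hangs off .root; adjust for your SDK version and guard for error responses in real code):

    result = getattr(chunk.root, 'result', None)     # success responses carry .result
    artifact = getattr(result, 'artifact', None)     # only artifact-update events have one
    if artifact is not None:
        for part in artifact.parts:
            text = getattr(part.root, 'text', None)  # TextPart has .text; file/data parts don't
            if text:
                print(text)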

🔹 3. Simplify the Streaming Loop

  • Follow the example’s pattern:

    async for chunk in client.send_message_streaming(request):
        data = chunk.model_dump(mode='json', exclude_none=True)
        result = data.get('result') or {}  # RootModel.model_dump() already unwraps .root, so one lookup is enough
  • Extract text from artifact.parts[].text and stop there; skip multi-path heuristics and “longest-chunk” logic for now (see the continuation sketch below).
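
The continuation, staying in the same model_dump/dict style as the loop above (texts is a plain accumulator list, an illustrative name):

        for part in result.get('artifact', {}).get('parts', []):
            if 'text' in part:
                texts.append(part['text'])  # texts = [] before the loop; ''.join(texts) afterwards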


🔹 4. Modularize Message Building

  • Factor your message creation into a helper:

    def build_message_params(text, task_id=None, context_id=None) -> MessageSendParams:
        ...
  • Keeps your node cleaner and matches the test_client.py structure.
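
Fleshed out, that helper might look like this (a sketch following the payload shape in test_client.py; taskId and contextId are the camelCase field names in the A2A Message schema, so double-check them against your a2a-sdk version):

    from uuid import uuid4

    from a2a.types import MessageSendParams

    def build_message_params(text, task_id=None, context_id=None) -> MessageSendParams:
        """Wrap user text in an A2A message, threading task/context ids when present."""
        message = {
            'role': 'user',
            'parts': [{'kind': 'text', 'text': text}],
            'messageId': uuid4().hex,
        }
        if task_id:
            message['taskId'] = task_id
        if context_id:
            message['contextId'] = context_id
        return MessageSendParams(message=message)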


🔹 5. Drop Unneeded Fallbacks

  • The non-streaming fallback inside call_a2a_server() isn’t needed for your current setup.
  • Keep it in a separate function or remove it unless you’re testing fault tolerance.
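
If you do keep it for fault-tolerance testing, isolate it in its own helper so the happy path stays clean (a sketch using the non-streaming send_message call from the same sample client; the function name is illustrative):

    from uuid import uuid4

    from a2a.client import A2AClient
    from a2a.types import MessageSendParams, SendMessageRequest

    async def get_a2a_answer_nonstreaming(client: A2AClient, params: MessageSendParams) -> dict:
        """Optional fallback: one-shot request/response instead of streaming."""
        request = SendMessageRequest(id=str(uuid4()), params=params)
        response = await client.send_message(request)
        return response.model_dump(mode='json', exclude_none=True)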

✅ In short:

| Area | Current | Align With |
| --- | --- | --- |
| A2A logic | Inlined in graph node | Reusable adapter (get_a2a_answer) |
| Response handling | Manual JSON parsing | Typed SDK models |
| Message payloads | Built inline | Helper function |
| Graph role | Does everything | Just orchestrates nodes |
| Fallbacks | Inline | Optional helper |

Outcome: Your code becomes roughly 40–50% smaller, easier to maintain, and consistent with how the official A2A sample clients structure LangGraph integrations.
