{"id":12823,"date":"2026-02-19T12:49:22","date_gmt":"2026-02-19T12:49:22","guid":{"rendered":"https:\/\/www.castelis.com\/?p=12823"},"modified":"2026-02-23T09:57:02","modified_gmt":"2026-02-23T09:57:02","slug":"field-report-openclaw-multi-ai-agents","status":"publish","type":"post","link":"https:\/\/www.castelis.com\/en\/news\/custom-development\/field-report-openclaw-multi-ai-agents\/","title":{"rendered":"10 AI Agents, 4 Hours, $30: A Field Report on OpenClaw"},"content":{"rendered":"<p data-start=\"126\" data-end=\"265\">How we had a team of 10 autonomous AI agents build an invoicing application \u2014 and what it changes for the future of software development<\/p>\n<h2 data-start=\"267\" data-end=\"289\">The starting point<\/h2>\n<p data-start=\"291\" data-end=\"641\">\u201cVibe coding\u201d, the practice of coding in free-form dialogue with an LLM, has obvious limitations. It works well for a script, an isolated component, or a quick prototype. But as soon as a project moves beyond the POC stage, the lack of structure becomes costly: loss of context, architectural inconsistencies, recurring bugs, and zero traceability.<\/p>\n<p data-start=\"643\" data-end=\"807\">What if, instead of interacting with a single model, we had an entire team of specialized agents collaborate, each with its own role, memory, and responsibilities?<\/p>\n<p data-start=\"809\" data-end=\"1013\">That\u2019s the hypothesis we tested at Castelis by combining two open-source building blocks: <strong data-start=\"899\" data-end=\"911\">OpenClaw<\/strong> for multi-agent orchestration, and prompts from the <strong data-start=\"964\" data-end=\"972\">BMAD<\/strong> methodology to define our agents\u2019 roles.<\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"1020\" data-end=\"1065\">The tools: OpenClaw and BMAD in a nutshell<\/h2>\n<h3 data-start=\"1067\" data-end=\"1088\">What is OpenClaw?<\/h3>\n<p data-start=\"1090\" data-end=\"1287\">OpenClaw is a self-hosted, open-source AI agent runtime. 
Concretely, it\u2019s a persistent Node.js service that runs on your server and exposes your agents via Telegram, WhatsApp, Discord, or web chat.<\/p>\n<p data-start=\"1289\" data-end=\"1478\">Each agent has its own workspace (personality files, memory, tools) and persistent sessions, and can communicate with other agents through an inter-agent messaging mechanism (<code data-start=\"1461\" data-end=\"1476\">sessions_send<\/code>).<\/p>\n<p>&nbsp;<\/p>\n<h3 data-start=\"1485\" data-end=\"1546\">BMAD (Breakthrough Method of Agile AI-Driven Development)<\/h3>\n<p data-start=\"1548\" data-end=\"1713\">BMAD is a framework that defines agent roles modeled after a real agile team: analyst, product manager, architect, UX designer, developer, QA, scrum master, and ops.<\/p>\n<p data-start=\"1715\" data-end=\"1830\">Each role is described in a Markdown file that serves both as a persona definition and as operational instructions.<\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"1837\" data-end=\"1859\">Our hybrid approach<\/h2>\n<p data-start=\"1861\" data-end=\"2012\">We did not use BMAD directly as a framework. Instead, we extracted its agent prompts and injected them into the <code data-start=\"1973\" data-end=\"1982\">SOUL.md<\/code> files of our OpenClaw agents.<\/p>\n<p data-start=\"2014\" data-end=\"2259\">For example, the BMAD Analyst prompt became the foundation of the <code data-start=\"2080\" data-end=\"2089\">SOUL.md<\/code> for our OpenClaw agent Mary (analyst). This allowed us to benefit from BMAD\u2019s methodological rigor while leveraging OpenClaw\u2019s persistence and inter-agent communication.<\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"2266\" data-end=\"2332\">The architecture: 10 agents, one orchestrator, zero vibe coding<\/h2>\n<p data-start=\"2334\" data-end=\"2494\">We deployed 10 agents on a minimal Debian server (2 vCPU, 2 GB RAM) with Nginx as a reverse proxy and Let\u2019s Encrypt for SSL. 
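<\/p>\n<p>For reference, the front end of that setup is a standard Nginx reverse proxy. A minimal sketch (the domain is a placeholder, the gateway port is an assumption, and your certificate paths may differ):<\/p>\n<pre><code># Hypothetical Nginx virtual host for the OpenClaw gateway\nserver {\n    listen 443 ssl;\n    server_name agents.example.com;  # placeholder domain\n\n    # Paths as created by certbot \/ Let's Encrypt\n    ssl_certificate     \/etc\/letsencrypt\/live\/agents.example.com\/fullchain.pem;\n    ssl_certificate_key \/etc\/letsencrypt\/live\/agents.example.com\/privkey.pem;\n\n    location \/ {\n        proxy_pass http:\/\/127.0.0.1:3000;  # assumed gateway port\n        proxy_http_version 1.1;\n        # WebSocket upgrade, needed if the web chat channel is used\n        proxy_set_header Upgrade $http_upgrade;\n        proxy_set_header Connection \"upgrade\";\n        proxy_set_header Host $host;\n    }\n}<\/code><\/pre>\n<p>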
Infrastructure cost: close to zero.<\/p>\n<p data-start=\"2496\" data-end=\"2567\">Each agent has a clear identity, a precise role, and persistent memory:<\/p>\n<ul data-start=\"2569\" data-end=\"3239\">\n<li data-start=\"2569\" data-end=\"2719\">\n<p data-start=\"2571\" data-end=\"2719\"><strong data-start=\"2571\" data-end=\"2584\">The Pilot<\/strong> (orchestrator) assigns tasks, tracks progress, and maintains the global project context. It doesn\u2019t code or design, it coordinates.<\/p>\n<\/li>\n<li data-start=\"2720\" data-end=\"2768\">\n<p data-start=\"2722\" data-end=\"2768\"><strong data-start=\"2722\" data-end=\"2730\">Mary<\/strong> (analyst) produces business briefs.<\/p>\n<\/li>\n<li data-start=\"2769\" data-end=\"2819\">\n<p data-start=\"2771\" data-end=\"2819\"><strong data-start=\"2771\" data-end=\"2779\">John<\/strong> (PM) writes PRDs, epics, and stories.<\/p>\n<\/li>\n<li data-start=\"2820\" data-end=\"2892\">\n<p data-start=\"2822\" data-end=\"2892\"><strong data-start=\"2822\" data-end=\"2833\">Winston<\/strong> (architect) defines the technical architecture and ADRs.<\/p>\n<\/li>\n<li data-start=\"2893\" data-end=\"3001\">\n<p data-start=\"2895\" data-end=\"3001\"><strong data-start=\"2895\" data-end=\"2904\">Sally<\/strong> (UX) and <strong data-start=\"2914\" data-end=\"2922\">Lena<\/strong> (UI) handle user experience specifications and visual identity respectively.<\/p>\n<\/li>\n<li data-start=\"3002\" data-end=\"3053\">\n<p data-start=\"3004\" data-end=\"3053\"><strong data-start=\"3004\" data-end=\"3011\">Bob<\/strong> (scrum master) manages sprint planning.<\/p>\n<\/li>\n<li data-start=\"3054\" data-end=\"3141\">\n<p data-start=\"3056\" data-end=\"3141\"><strong data-start=\"3056\" data-end=\"3066\">Amelia<\/strong> (dev) implements the code and pushes to GitHub via a configured SSH key.<\/p>\n<\/li>\n<li data-start=\"3142\" data-end=\"3181\">\n<p data-start=\"3144\" data-end=\"3181\"><strong data-start=\"3144\" 
data-end=\"3153\">Quinn<\/strong> (QA) tests and validates.<\/p>\n<\/li>\n<li data-start=\"3182\" data-end=\"3239\">\n<p data-start=\"3184\" data-end=\"3239\"><strong data-start=\"3184\" data-end=\"3193\">Oscar<\/strong> (ops) handles deployment and server security.<\/p>\n<\/li>\n<\/ul>\n<p data-start=\"3241\" data-end=\"3357\">The entire system follows the BMAD pipeline across four phases: <strong data-start=\"3305\" data-end=\"3356\">Analysis, Planning, Solutioning, Implementation<\/strong>.<\/p>\n<p data-start=\"3359\" data-end=\"3551\">Each agent receives tasks with a standardized context block (project, slug, path, current phase), and all produced artifacts are stored in a shared file structure through <code data-start=\"3530\" data-end=\"3550\">project-context.md<\/code>.<\/p>\n<p><img data-dominant-color=\"f3f4f1\" data-has-transparency=\"true\" style=\"--dominant-color: #f3f4f1;\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-12851 size-full has-transparency\" src=\"https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535.avif\" alt=\"Openclaw AI agents relationship chart\" width=\"1830\" height=\"884\" srcset=\"https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535.avif 1830w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-300x145.avif 300w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-1024x495.avif 1024w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-768x371.avif 768w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-1536x742.avif 1536w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-220x106.avif 220w, 
https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-230x111.avif 230w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema1-EN-e1771840507535-658x318.avif 658w\" sizes=\"auto, (max-width: 1830px) 100vw, 1830px\" \/><\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"3558\" data-end=\"3572\">What worked<\/h2>\n<h3 data-start=\"3574\" data-end=\"3610\">The hub-and-spoke model holds up<\/h3>\n<p data-start=\"3612\" data-end=\"3789\">The Pilot orchestrates, agents execute, and context flows correctly between agents through shared files. Agents stay within their roles and produce artifacts in the right place.<\/p>\n<h3 data-start=\"3791\" data-end=\"3830\">Lateral communication feels natural<\/h3>\n<p data-start=\"3832\" data-end=\"4016\">Allowing agents to communicate directly (for example, Amelia notifying Quinn that a release is ready, or Quinn reporting a bug directly back to Amelia) reproduces real team dynamics.<\/p>\n<p data-start=\"4018\" data-end=\"4138\">The safeguard \u201cno re-delegation\u201d (an agent cannot delegate to a third party) prevents uncontrolled chains of delegation.<\/p>\n<h3 data-start=\"4140\" data-end=\"4189\">The QA workflow proved its value from day one<\/h3>\n<p data-start=\"4191\" data-end=\"4245\">Simple rule: no deployment without Quinn\u2019s validation.<\/p>\n<p data-start=\"4247\" data-end=\"4396\">As soon as this rule was enforced, production bugs dropped to zero. The QA agent caught issues the dev agent had missed, exactly as in a human team.<\/p>\n<h3 data-start=\"4398\" data-end=\"4438\">Persistent memory changes everything<\/h3>\n<p data-start=\"4440\" data-end=\"4572\">Each agent has its own <code data-start=\"4463\" data-end=\"4474\">MEMORY.md<\/code> and access to shared context files. 
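<\/p>\n<p>On disk, that translates into one workspace per agent plus shared project files. The tree below is illustrative (the exact layout is our own choice; only the file names are the ones discussed in this article):<\/p>\n<pre><code>openclaw\/\n  agents\/\n    amelia\/               # dev agent\n      SOUL.md             # persona (BMAD dev prompt)\n      MEMORY.md           # persistent, agent-private memory\n      TOOLS.md            # tool configuration\n      AGENTS.md           # collaboration rules\n    quinn\/                # QA agent, same structure\n      ...\n  projects\/\n    invoicing-app\/\n      project-context.md  # shared context all agents read<\/code><\/pre>\n<p>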
Between sessions, an agent resumes exactly where it left off.<\/p>\n<p data-start=\"4574\" data-end=\"4710\">This is what differentiates this setup from simple prompt engineering: we move from ephemeral conversations to continuous collaboration.<\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"4717\" data-end=\"4773\">The pitfalls: what the documentation doesn\u2019t tell you<\/h2>\n<h3 data-start=\"4775\" data-end=\"4819\">The 30-second timeout is a silent killer<\/h3>\n<p data-start=\"4821\" data-end=\"4870\">This was the most insidious issue we encountered.<\/p>\n<p data-start=\"4872\" data-end=\"5035\">The default timeout for <code data-start=\"4896\" data-end=\"4911\">sessions_send<\/code> in OpenClaw is 30 seconds. Our agents regularly take longer than that to produce documents (briefs, PRDs, implementations).<\/p>\n<p data-start=\"5037\" data-end=\"5241\">When the timeout expires, the result is never saved in the session history. OpenClaw injects a synthetic error result, and the agent loses all the work it just produced. 
No visible error on the user side.<\/p>\n<p data-start=\"5243\" data-end=\"5359\">The fix: explicitly set <code data-start=\"5267\" data-end=\"5283\">timeoutSeconds<\/code>: at least 180 seconds for standard tasks, and 300 seconds for longer documents.<\/p>\n<p>&nbsp;<\/p>\n<h3 data-start=\"5366\" data-end=\"5409\">Multi-agent documentation is incomplete<\/h3>\n<p data-start=\"5411\" data-end=\"5468\">We had to read OpenClaw\u2019s source code to understand that:<\/p>\n<ul data-start=\"5470\" data-end=\"5770\">\n<li data-start=\"5470\" data-end=\"5544\">\n<p data-start=\"5472\" data-end=\"5544\"><code data-start=\"5472\" data-end=\"5492\">tools.agentToAgent<\/code> must be explicitly enabled (disabled by default),<\/p>\n<\/li>\n<li data-start=\"5545\" data-end=\"5612\">\n<p data-start=\"5547\" data-end=\"5612\">both agents (source and target) must be listed in an allowlist,<\/p>\n<\/li>\n<li data-start=\"5613\" data-end=\"5770\">\n<p data-start=\"5615\" data-end=\"5770\">and the distinction between <code data-start=\"5643\" data-end=\"5658\">sessions_send<\/code> (agent-to-agent messaging with persona and memory) and <code data-start=\"5703\" data-end=\"5719\">sessions_spawn<\/code> (ephemeral sub-agent without context) is critical.<\/p>\n<\/li>\n<\/ul>\n<p data-start=\"5772\" data-end=\"5807\">We use <code data-start=\"5779\" data-end=\"5794\">sessions_send<\/code> exclusively.<\/p>\n<p>&nbsp;<\/p>\n<h3 data-start=\"5814\" data-end=\"5860\">Context window management remains critical<\/h3>\n<p data-start=\"5862\" data-end=\"6024\">The size limit on workspace files injected into the context causes silent truncations. 
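<\/p>\n<p>A cheap safeguard is to measure workspace files before they are injected. The sketch below is illustrative (it is not an OpenClaw API, and the 8,000-character limit is an assumption; check your runtime\u2019s actual value):<\/p>\n<pre><code>\/\/ Flag workspace files whose content would exceed the context-injection\n\/\/ limit and be silently truncated (illustrative sketch, not an OpenClaw API).\nconst CONTEXT_FILE_LIMIT = 8000; \/\/ assumed limit\n\nfunction truncationRisk(\n  files: { [name: string]: string },\n  limit: number = CONTEXT_FILE_LIMIT,\n): string[] {\n  \/\/ Return the names of files whose content exceeds the injection limit.\n  return Object.entries(files)\n    .filter(([, content]) => content.length > limit)\n    .map(([name]) => name);\n}<\/code><\/pre>\n<p>Wiring such a check into the orchestrator\u2019s task loop turns a silent truncation into an explicit warning.<\/p>\n<p>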
The Pilot\u2019s <code data-start=\"5953\" data-end=\"5964\">MEMORY.md<\/code>, at 9,000 characters, exceeds the limit and gets truncated.<\/p>\n<p data-start=\"6026\" data-end=\"6226\">More fundamentally, agents eventually \u201cforget\u201d decisions made in previous sprints, leading to recurring bugs \u2014 typically deployment errors or regressions on previously validated architectural choices.<\/p>\n<p data-start=\"6228\" data-end=\"6326\">This is the single most critical optimization point for maintaining long-term project consistency.<\/p>\n<p><img data-dominant-color=\"f0efe8\" data-has-transparency=\"true\" style=\"--dominant-color: #f0efe8;\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-12853 size-full has-transparency\" src=\"https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN.avif\" alt=\"Strengths and weaknesses of the OpenClaw multi-agent AI system\" width=\"1830\" height=\"1043\" srcset=\"https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN.avif 1830w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-300x171.avif 300w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-1024x584.avif 1024w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-768x438.avif 768w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-1536x875.avif 1536w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-220x125.avif 220w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-230x131.avif 230w, https:\/\/www.castelis.com\/wp-content\/uploads\/2026\/02\/article-openclaw-schema2-EN-658x375.avif 658w\" sizes=\"auto, (max-width: 1830px) 100vw, 1830px\" \/><\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"6333\" data-end=\"6347\">The numbers<\/h2>\n<p data-start=\"6349\" data-end=\"6564\">The 
first project, an invoicing application for small businesses and freelancers (Express.js + PostgreSQL backend, React + Vite frontend), was developed in roughly <strong data-start=\"6515\" data-end=\"6563\">4 hours of LLM usage for a total cost of $30<\/strong>.<\/p>\n<p data-start=\"6566\" data-end=\"6685\">The default model was Claude Haiku 4.5 for cost-efficiency, with occasional calls to Claude Opus 4.5 for complex tasks.<\/p>\n<p data-start=\"6687\" data-end=\"6805\">Nine stories were implemented in the first sprint: customer CRUD, invoice CRUD, PDF generation, and status management.<\/p>\n<p data-start=\"6807\" data-end=\"6912\">Let\u2019s be clear: the result is a functional POC demonstrating feasibility, not a production-ready product.<\/p>\n<p data-start=\"6914\" data-end=\"7019\">However, the cost-to-output ratio is remarkable and opens concrete opportunities for real-world projects.<\/p>\n<h2 data-start=\"7026\" data-end=\"7062\">Our recommendations going forward<\/h2>\n<h3 data-start=\"7064\" data-end=\"7109\">Adapt the number of agents to the context<\/h3>\n<p data-start=\"7111\" data-end=\"7311\">Ten agents are probably too many for most projects. Merging UX and UI or PM and Scrum Master, or combining roles depending on the phase, would reduce coordination overhead without sacrificing quality.<\/p>\n<p data-start=\"7313\" data-end=\"7436\">The ideal setup is not static. 
It should evolve with the project phase (initial development, debugging, feature expansion).<\/p>\n<h3 data-start=\"7443\" data-end=\"7478\">Define phase-specific workflows<\/h3>\n<p data-start=\"7480\" data-end=\"7586\">The linear Analysis \u2192 Planning \u2192 Solutioning \u2192 Implementation pipeline works well for initial development.<\/p>\n<p data-start=\"7588\" data-end=\"7712\">For debugging or feature additions, a lighter workflow with fewer agents and shorter feedback loops would be more efficient.<\/p>\n<h3 data-start=\"7719\" data-end=\"7758\">Treat agent documentation like code<\/h3>\n<p data-start=\"7760\" data-end=\"7834\">Any change to <code data-start=\"7774\" data-end=\"7785\">AGENTS.md<\/code>, <code data-start=\"7787\" data-end=\"7797\">TOOLS.md<\/code>, or <code data-start=\"7802\" data-end=\"7811\">SOUL.md<\/code> alters agent behavior.<\/p>\n<p data-start=\"7836\" data-end=\"7930\">These files should be versioned in Git, reviewed, and deployed with the same rigor as application code.<\/p>\n<h3 data-start=\"7937\" data-end=\"7968\">Invest in memory management<\/h3>\n<p data-start=\"7970\" data-end=\"8095\">Break memory files into smaller chunks, implement periodic summarization mechanisms, and actively monitor context truncation.<\/p>\n<p data-start=\"8097\" data-end=\"8154\">This is the key to moving from POC to real project usage.<\/p>\n<p>&nbsp;<\/p>\n<h2 data-start=\"8161\" data-end=\"8204\">Next steps: observability and scaling up<\/h2>\n<p data-start=\"8206\" data-end=\"8285\">Two areas for improvement clearly emerge on the path from POC to industrial-grade usage.<\/p>\n<h3 data-start=\"8287\" data-end=\"8321\">Add an LLM observability layer<\/h3>\n<p data-start=\"8323\" data-end=\"8504\">Today, our visibility into agent behavior is limited to OpenClaw gateway logs (14 MB per day of activity) and manual inspection of generated artifacts. 
That\u2019s insufficient at scale.<\/p>\n<p data-start=\"8506\" data-end=\"8772\">At Castelis, we already use Langfuse in production for LangChain-based AI projects. It\u2019s an open-source, self-hostable LLM observability platform that traces every LLM call (prompts, responses, token usage, latency), tracks costs in real time, and detects anomalies.<\/p>\n<p data-start=\"8774\" data-end=\"8994\">The natural next step is to extend this instrumentation to our OpenClaw multi-agent setup: trace inter-agent exchanges, measure consumption per agent and per project phase, identify looping agents or quality degradation.<\/p>\n<p data-start=\"8996\" data-end=\"9134\">In a system with 10 autonomous agents, this visibility isn\u2019t optional, it\u2019s a requirement for viability. Without it, you\u2019re flying blind.<\/p>\n<p data-start=\"9136\" data-end=\"9295\">Both communities are actively exploring OpenClaw\/Langfuse integration via OpenTelemetry, and a Langfuse maintainer is ready to release an official integration.<\/p>\n<p data-start=\"9297\" data-end=\"9411\">The convergence between multi-agent orchestration and LLM observability is no longer a matter of <em data-start=\"9394\" data-end=\"9398\">if<\/em>, but <em data-start=\"9404\" data-end=\"9410\">when<\/em>.<\/p>\n<p>&nbsp;<\/p>\n<h3 data-start=\"9418\" data-end=\"9456\">Test more robust models than Haiku<\/h3>\n<p data-start=\"9458\" data-end=\"9636\">Our POC relied primarily on Claude Haiku 4.5, chosen for its cost-performance ratio. 
It\u2019s well-suited for repetitive, well-scoped tasks (CRUD operations, formatting, deployment).<\/p>\n<p data-start=\"9638\" data-end=\"9762\">But for high-complexity tasks (architecture analysis, subtle bug resolution, design decisions), its limits become apparent.<\/p>\n<p data-start=\"9764\" data-end=\"9868\">Recurring deployment errors were partly due to the model\u2019s reasoning limitations on multi-step problems.<\/p>\n<p data-start=\"9870\" data-end=\"10048\">The next step is to test a more granular routing strategy: lightweight models for execution tasks, more powerful models (Claude Sonnet 4.5 or Opus 4.6) for reasoning-heavy tasks.<\/p>\n<p data-start=\"10050\" data-end=\"10143\">The additional API cost would likely be offset by reduced time spent fixing avoidable errors.<\/p>\n<h2 data-start=\"10150\" data-end=\"10209\">What this signals for the software development lifecycle<\/h2>\n<p data-start=\"10211\" data-end=\"10483\">This experiment goes beyond a technical exercise. It illustrates a deeper evolution of the SDLC: moving from AI as a point-assistance tool (copilot, code completion) to AI as a team of autonomous agents executing an end-to-end engineering pipeline under human supervision.<\/p>\n<p data-start=\"10485\" data-end=\"10552\">Multi-agent AI isn\u2019t prompt engineering, it\u2019s systems engineering.<\/p>\n<p data-start=\"10554\" data-end=\"10805\">Configuration, timeouts, memory management, inter-agent coordination: these are infrastructure problems, not prompt problems. 
And that\u2019s precisely what makes the approach viable for organizations that require rigor, traceability, and reproducibility.<\/p>\n<p data-start=\"10807\" data-end=\"10958\">For companies, the question is no longer \u201cCan AI write code?\u201d but \u201cHow do we structure a team of AI agents to produce reliable, maintainable outcomes?\u201d<\/p>\n<p data-start=\"10960\" data-end=\"11053\" data-is-last-node=\"\" data-is-only-node=\"\">At Castelis, we\u2019re continuing to explore that question and the early results are promising.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>How we had a team of 10 autonomous AI agents build an invoicing application \u2014 and what it changes for the future of software development The starting point \u201cVibe coding\u201d, the practice of coding in free-form dialogue with an LLM, has obvious limitations. It works well for a script, an isolated component, or a quick [&hellip;]<\/p>\n","protected":false},"author":26,"featured_media":12810,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[60,73],"tags":[],"class_list":["post-12823","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-custom-development","category-artificial-intelligence"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/posts\/12823","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/comments?post=12823"}],"version-history":[{"count":5,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/posts\/12823\/revisions"}],"predecessor
-version":[{"id":12850,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/posts\/12823\/revisions\/12850"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/media\/12810"}],"wp:attachment":[{"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/media?parent=12823"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/categories?post=12823"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.castelis.com\/en\/wp-json\/wp\/v2\/tags?post=12823"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}