
Building Convergence – A Journey from Network Observability to AI-Driven Automation Part 14: Protocol Participation Deployment — NetClaw BGP Peering with RR1

In Part 13, we deployed the observability stack — OTEL Collector polling SNMP, VictoriaMetrics storing metrics, Loki aggregating syslog, Grafana rendering dashboards. NetClaw can now see the network through telemetry.

But seeing isn’t the same as participating.

What if the AI was in the routing table? What if it could peer with a route reflector and see the entire SP fabric from the inside?

That’s what Protocol MCP does. And it changes the demo from “AI reads show commands” to “AI is a routing peer that can prove the network works.”


The Architecture

NetClaw Host (192.168.220.1)                    RR1 (192.168.220.11)
     │                                               │
     │  GRE tunnel (transport via clab-mgmt L2)      │
     │  outer src: 192.168.220.1                     │
     │  outer dst: 192.168.220.11                    │
     │                                               │
     ├─ gre-rr1: 10.255.255.1/30                    ├─ Tunnel0: 10.255.255.2/30
     │  (host interface, global table)               │  (global routing table)
     │                                               │
     └─ eBGP: 10.255.255.1 ──────────────────────── 10.255.255.2
        AS 65099                                     AS 65000
                                                         │
                                                         │ iBGP reflection
                                                         │
                                                         ├── PE1 (AS 65000)
                                                         ├── PE2 (AS 65000)
                                                         └── PE3 (AS 65000)

The GRE tunnel is the bridge between “management plane” (where NetClaw lives) and “routing plane” (where the SP fabric lives). One tunnel, one BGP session, full fabric visibility.
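The inner /30 is deliberate: it leaves exactly two usable addresses, one per tunnel endpoint. A quick sanity check of the addressing plan with Python's ipaddress module:

```python
import ipaddress

# Inner tunnel subnet from the design above: 10.255.255.0/30
inner = ipaddress.ip_network("10.255.255.0/30")
hosts = list(inner.hosts())  # a /30 yields exactly two usable host addresses

netclaw_end = ipaddress.ip_address("10.255.255.1")  # host side (gre-rr1)
rr1_end = ipaddress.ip_address("10.255.255.2")      # RR1 side (Tunnel0)

# Both endpoints must be the two (and only two) hosts in the subnet
assert hosts == [netclaw_end, rr1_end]
print(f"{netclaw_end} <-> {rr1_end} inside {inner}")
```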


Step 1: Document the Design in Nautobot

Everything starts here. Not on the router. Not on the host. In Nautobot.

Before a single packet crosses the GRE tunnel, the entire design exists in the source of truth: the tunnel interface, the IP addresses, the BGP peering, the autonomous system. Nautobot is the brain. NetClaw is the hands.

You:  I need to set up a GRE tunnel and BGP peering between NetClaw
      and RR1 for Protocol MCP integration. Document the full design
      in Nautobot:

      1. Create autonomous system ASN 65099, description "NetClaw Protocol Agent"
      2. On device RR1, create interface Tunnel0 with IP 10.255.255.2/30,
         description "GRE to NetClaw Protocol MCP"
      3. Create a BGP peer group "NETCLAW-PEERS" on RR1's BGP routing
         instance (AS 65000)
      4. Create a BGP peering: RR1 (10.255.255.2) peering with 10.255.255.1
         (AS 65099), peer group NETCLAW-PEERS

      Context: GRE tunnel bridges from clab-mgmt (192.168.220.0/24) into
      the global routing table. Outer IPs: host 192.168.220.1, RR1
      192.168.220.11. Inner IPs: host 10.255.255.1/30, RR1 10.255.255.2/30.
      BGP runs over the tunnel so RR1 reflects routes to PE1/PE2/PE3.

NetClaw: [calls nautobot_create_autonomous_system(asn=65099, description="NetClaw Protocol Agent")]
         ✅ AS 65099 created

         [calls nautobot_create_interface(device="RR1", name="Tunnel0", type="virtual",
           ip_address="10.255.255.2/30", description="GRE to NetClaw Protocol MCP")]
         ✅ Tunnel0 created on RR1 with IP 10.255.255.2/30

         [calls nautobot_create_bgp_peer_group(name="NETCLAW-PEERS", routing_instance="RR1")]
         ✅ Peer group created on RR1's BGP instance

         [calls nautobot_create_bgp_peering(local_device="RR1", peer_ip="10.255.255.1", peer_as=65099)]
         ✅ Peering documented

         Design is in Nautobot. Ready to push to the device.

Now the source of truth knows: RR1 should have a Tunnel0 interface at 10.255.255.2/30, peering with AS 65099 at 10.255.255.1. If anyone runs reconciliation later, this peering is expected — not flagged as rogue.
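Under the hood, each of those MCP calls is a REST POST against Nautobot. The sketch below shows plausible payloads for the four objects; the endpoint paths and field names are assumptions based on the nautobot-bgp-models plugin, not verified against this lab's Nautobot version:

```python
# Sketch: the REST payloads behind the four Nautobot MCP calls in Step 1.
# Endpoints and field shapes are illustrative assumptions.
def nautobot_payloads() -> dict:
    """Build the four object payloads documented in Step 1, keyed by endpoint."""
    return {
        "/api/plugins/bgp/autonomous-systems/": {
            "asn": 65099,
            "description": "NetClaw Protocol Agent",
        },
        "/api/dcim/interfaces/": {
            "device": {"name": "RR1"},
            "name": "Tunnel0",
            "type": "virtual",
            "description": "GRE to NetClaw Protocol MCP",
        },
        "/api/plugins/bgp/peer-groups/": {
            "name": "NETCLAW-PEERS",
            "routing_instance": {"device": "RR1", "autonomous_system": 65000},
        },
        "/api/plugins/bgp/peerings/": {
            "local_device": "RR1",
            "peer_ip": "10.255.255.1",
            "peer_as": 65099,
        },
    }

payloads = nautobot_payloads()
```

Posting them in that order matters: the AS must exist before the peering that references it.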


Step 2: Push the Design to RR1

The design exists in Nautobot. Now we tell NetClaw to make the device match:

You:  Push the GRE tunnel and BGP config to RR1 that matches what we
      just documented in Nautobot. Use pyATS to apply:

      interface Tunnel0
       ip address 10.255.255.2 255.255.255.252
       tunnel source Ethernet0/0
       tunnel destination 192.168.220.1
      !
      route-map NETCLAW-IN permit 10
       set local-preference 50
      !
      route-map NETCLAW-OUT permit 10
      !
      router bgp 65000
       neighbor 10.255.255.1 remote-as 65099
       neighbor 10.255.255.1 description NetClaw-Protocol-MCP
       neighbor 10.255.255.1 update-source Tunnel0
       address-family ipv4 unicast
        neighbor 10.255.255.1 activate
        neighbor 10.255.255.1 route-map NETCLAW-IN in
        neighbor 10.255.255.1 route-map NETCLAW-OUT out

      After applying, verify with "show ip interface brief | include Tunnel"
      and "show ip bgp summary" on RR1.

NetClaw: [calls pyats_configure(device="RR1", config="...")]
         ✅ Configuration applied to RR1

         [calls pyats_run_command(device="RR1", command="show ip interface brief | include Tunnel")]
         Tunnel0    10.255.255.2    YES manual up    up

         [calls pyats_run_command(device="RR1", command="show ip bgp summary")]
         Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
         10.255.255.1    4 65099       0       0        1    0    0 never    Active

         ✅ Tunnel0 is up/up. BGP neighbor is in Active state (waiting for
         the NetClaw side to come up).

The device now matches the SoT. The BGP session is in Active state because the host-side tunnel doesn’t exist yet — that’s next.
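That Active-state check is easy to script, too. A minimal parser for one neighbor row of "show ip bgp summary" (a hypothetical helper, not part of pyATS; column positions taken from the output above):

```python
def parse_bgp_neighbor(line: str) -> dict:
    """Parse one neighbor row from IOS 'show ip bgp summary' output."""
    fields = line.split()
    # Columns: Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
    return {
        "neighbor": fields[0],
        "remote_as": int(fields[2]),
        "up_down": fields[8],
        # An integer in the last column means Established with that many
        # prefixes received; otherwise it's the FSM state (Idle, Active, ...).
        "state": fields[9],
    }

row = "10.255.255.1    4 65099       0       0        1    0    0 never    Active"
info = parse_bgp_neighbor(row)
print(info)
```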


Step 3: Create the Host-Side GRE Tunnel

You:  Create the GRE tunnel on this Linux host to complete the tunnel
      to RR1:

      sudo ip tunnel add gre-rr1 mode gre remote 192.168.220.11 local 192.168.220.1
      sudo ip addr add 10.255.255.1/30 dev gre-rr1
      sudo ip link set gre-rr1 up

      Then verify with "ping -c 3 10.255.255.2" to confirm end-to-end.

NetClaw: [executes shell commands]
         ✅ GRE tunnel gre-rr1 created
         ✅ IP 10.255.255.1/30 assigned
         ✅ Interface up

         [executes ping -c 3 10.255.255.2]
         PING 10.255.255.2: 3 packets transmitted, 3 received, 0% packet loss
         rtt min/avg/max = 0.8/1.1/1.4 ms

         ✅ Tunnel is up. RR1 is reachable at 10.255.255.2.
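If you'd rather drive this step from Python than raw shell, the same three commands can be assembled as subprocess argument lists (a sketch; actually running them still needs root, e.g. via subprocess.run(["sudo"] + cmd, check=True)):

```python
# Sketch: the three host-side commands as argv lists. Building them as
# lists instead of strings avoids shell-quoting surprises.
def gre_setup_commands(remote: str, local: str, inner_cidr: str,
                       ifname: str = "gre-rr1") -> list:
    return [
        ["ip", "tunnel", "add", ifname, "mode", "gre",
         "remote", remote, "local", local],
        ["ip", "addr", "add", inner_cidr, "dev", ifname],
        ["ip", "link", "set", ifname, "up"],
    ]

cmds = gre_setup_commands("192.168.220.11", "192.168.220.1", "10.255.255.1/30")
for cmd in cmds:
    print(" ".join(cmd))
```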

Step 4: Verify the BGP Session

With both sides of the tunnel up, the BGP session establishes automatically (the Protocol MCP speaker starts alongside the NetClaw gateway and peers with 10.255.255.2):

You:  Check the Protocol MCP BGP session status and show me the RIB.
      Use only the built-in Protocol MCP tools.

NetClaw: [calls bgp_get_peers()]
         ✅ BGP session with 10.255.255.2 (AS 65000) — Established
            Prefixes received: 14
            Uptime: 32 seconds

         [calls bgp_get_rib()]
         14 routes in Loc-RIB:

         | Prefix          | Next-Hop     | AS Path | LOCAL_PREF |
         |-----------------|--------------|---------|------------|
         | 10.0.0.2/32     | 10.255.255.2 | 65000   | 50         |
         | 10.0.0.3/32     | 10.255.255.2 | 65000   | 50         |
         | 10.0.0.4/32     | 10.255.255.2 | 65000   | 50         |
         | ...             | ...          | ...     | ...        |

         We're seeing all SP loopbacks reflected by RR1. The full
         provider routing table is visible via the GRE tunnel.

That’s the entire SP fabric — visible through one GRE tunnel and one eBGP session. Because the peering is in the global table, RR1 reflects everything to PE1/PE2/PE3.
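A small post-processing pass over bgp_get_rib() output makes the "all loopbacks learned via RR1" claim checkable in code. The route dict shape below is an assumption for illustration, not the MCP's documented schema:

```python
# Sketch: verify every Loc-RIB route is a /32 loopback learned from RR1.
rib = [
    {"prefix": "10.0.0.2/32", "next_hop": "10.255.255.2", "as_path": [65000], "local_pref": 50},
    {"prefix": "10.0.0.3/32", "next_hop": "10.255.255.2", "as_path": [65000], "local_pref": 50},
    {"prefix": "10.0.0.4/32", "next_hop": "10.255.255.2", "as_path": [65000], "local_pref": 50},
]

loopbacks = [r["prefix"] for r in rib if r["prefix"].endswith("/32")]
via_rr1 = all(r["next_hop"] == "10.255.255.2" for r in rib)
print(loopbacks, via_rr1)
```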


Step 5: Route Injection and Withdrawal

This is where it gets interesting. NetClaw can now change the network, not just observe it:

You:  Inject 192.168.99.0/24 via BGP to RR1. Then verify PE1 and PE2
      received it.

NetClaw: [calls bgp_inject_route(network="192.168.99.0/24", local_pref=100)]
         ✅ Route injected: 192.168.99.0/24, next-hop self, LOCAL_PREF 100

         [calls pyats_run_command(device="PE1", command="show ip bgp 192.168.99.0/24")]
         PE1 BGP table:
           192.168.99.0/24, next-hop 10.255.255.1 (via RR1), AS path: 65099
           Received from RR1 via iBGP, LOCAL_PREF 50

         [calls pyats_run_command(device="PE2", command="show ip bgp 192.168.99.0/24")]
         PE2 BGP table:
           192.168.99.0/24, next-hop 10.255.255.1 (via RR1), AS path: 65099
           Received from RR1 via iBGP, LOCAL_PREF 50

         ✅ Route propagated to both PEs via RR1 reflection.
         Note: LOCAL_PREF is 50 on the PEs (set by NETCLAW-IN route-map).
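That LOCAL_PREF note is worth internalizing: the attribute NetClaw injects never survives RR1's ingress policy. A toy model of what "route-map NETCLAW-IN permit 10 / set local-preference 50" does to the update (illustrative, not RR1's actual implementation):

```python
def netclaw_in(route: dict) -> dict:
    """Model of RR1's ingress route-map: permit, then set local-preference 50."""
    updated = dict(route)
    updated["local_pref"] = 50  # ingress policy overrides whatever the peer sent
    return updated

# What NetClaw injected vs. what the PEs see after RR1 applies NETCLAW-IN
injected = {"prefix": "192.168.99.0/24", "local_pref": 100, "as_path": [65099]}
reflected = netclaw_in(injected)
print(injected["local_pref"], "->", reflected["local_pref"])
```

Keeping the injected preference low is a deliberate safety rail: a NetClaw route can never outrank a legitimately learned path.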

And withdrawal:

You:  Withdraw 192.168.99.0/24. Verify it's gone from PE1.

NetClaw: [calls bgp_withdraw_route(network="192.168.99.0/24")]
         ✅ Route withdrawn

         [calls pyats_run_command(device="PE1", command="show ip bgp 192.168.99.0/24")]
         PE1: % Network not in table

         ✅ Route successfully withdrawn from the SP fabric.

Inject. Verify propagation. Withdraw. Verify removal. All from natural language. All logged in GAIT.


Why setcap Matters

The Protocol MCP server uses scapy to construct and transmit raw BGP OPEN/UPDATE/KEEPALIVE packets. On Linux, non-root processes can’t open raw sockets. The install script handles this automatically:

sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/python3.12

Two capabilities: cap_net_raw for raw sockets, cap_net_admin for interface binding. Principle of least privilege — no root needed. The install script resolves the venv Python’s real binary path and applies this during setup.
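Resolving the real binary matters because file capabilities live on the target file, not on symlinks, and a venv's bin/python is usually a symlink chain down to the system interpreter. The resolution step is one line of Python:

```python
import os
import sys

# A venv's bin/python typically symlinks to the system interpreter;
# setcap must be applied to the final target of that chain.
real_binary = os.path.realpath(sys.executable)
print(f"sudo setcap cap_net_raw,cap_net_admin+eip {real_binary}")
```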


Rollback

If something goes wrong, tell NetClaw:

You:  Remove the Protocol MCP BGP peering from RR1. Delete the Tunnel0
      interface. Then remove the host-side GRE tunnel.

NetClaw: [calls pyats_configure(device="RR1", config="
           router bgp 65000
            no neighbor 10.255.255.1
           no interface Tunnel0
           no route-map NETCLAW-IN
           no route-map NETCLAW-OUT")]
         ✅ RR1 cleaned up

         [executes: sudo ip tunnel del gre-rr1]
         ✅ Host tunnel removed

The GAIT trail records what was injected, so you always know what to clean up.


The Best Part of Labbing Is the Learning

1. Be explicit with LLMs. When I said “configure BGP peering with RR1,” DeepSeek installed ExaBGP and configured it in the VRF. When I gave it the exact config block to push, it did exactly that. The more specific your prompt, the less room for creative interpretation.

2. It took 4 sessions to get right. I’m not going to pretend this worked on the first try. The first session installed ExaBGP. The second broke the config. The third spiraled through Ansible templates. The fourth worked — because by then the infrastructure was correct and the prompts were explicit. The ratio tells the story: sessions 1-3 were 60-74% shell commands (improvising). Session 4 was 85% MCP tool calls (following the plan). Specificity wins.


All code for this post is in mcp-servers/protocol-mcp/, and the Protocol MCP registration is in config/openclaw.json.

Need a real lab environment?

I run a small KVM-based lab VPS platform designed for Containerlab and EVE-NG workloads — without cloud pricing nonsense.

Visit localedgedatacenter.com →
This post is licensed under CC BY 4.0 by the author.