PEER SIP trunk not working (Log In trunk works fine)


Interesting. I sent them this packet capture, but they seemed oblivious as to whether they supported it or not; I asked what it meant and they didn't seem to have any idea… especially as outgoing calls can be made on their trunk (with audio both ways), yet no SIP packets are seen coming from them to the UCM.

This isn't looking good for the client, as I can get the UCM working on one of my own trunks but not on the Fibernetics one. I haven't ruled out the firewall yet; I have a feeling it is deciding which packets to forward and which to drop, since the same trunk works fine in FreePBX. I can't believe Grandstream wouldn't work with Fibernetics when FreePBX does, but the packet capture shows no packets from Fibernetics to the UCM, while another trunk provider works fine.

I am trying to avoid the "well, it works with mine" situation, as that doesn't go down well with clients and doesn't resolve the problem.

It's getting to the point of testing from outside the LAN to prove what's at fault.


But, if I understand you correctly, the UCM is processing calls. However, see whether a call will last more than 32 seconds. That is a good indicator of whether the NAT is correct.
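For anyone wondering where the 32-second figure comes from: if the caller's ACK never reaches the far end (the classic NAT symptom), the far end keeps retransmitting its 200 OK until the transaction times out at 64 × T1, where T1 defaults to 500 ms per RFC 3261, so the call drops at almost exactly 32 seconds. The arithmetic:

```python
# RFC 3261 default retransmission timer T1 is 500 ms. A server transaction
# that never sees the ACK gives up after 64 * T1 (Timer H), which is why
# NAT-broken calls drop at almost exactly 32 seconds.
T1 = 0.5              # seconds (RFC 3261 default)
TIMER_H = 64 * T1     # seconds until the transaction is abandoned
print(TIMER_H)        # 32.0
```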


If I dial out of the UCM through this trunk provider (PEER), the call connects and I see SIP packets going to the provider. The provider then uses a different (external) IP address to push RTP packets BACK to the UCM, so NAT is working as far as the UCM and its connections (through the firewall) are concerned.

What isn't working is that the provider is not pushing SIP messages back to the external IP address of the UCM. The provider says they are, but since the RTP packets are NOT coming from the SIP trunk's IP address, I have a feeling the Cisco Meraki (for whatever reason) isn't forwarding packets from that specific external IP address through to the internal UCM. My test trunk AND the RTP packets both come from addresses other than the SIP trunk provider's IP, and I bet that's why they work (fairly educated guesswork, and the angle I will be taking this week).

I believe the Cisco Meraki has some configuration somewhere tying that specific outside IP address to the old PBX's inside IP address, sitting outside the rule my client has been looking at.
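One way to test this from outside the LAN without involving either provider is to send a hand-rolled SIP OPTIONS probe from an external host to the UCM's public address and see whether anything comes back. If the UCM answers (typically 200 OK or 405), the firewall is passing SIP from that source address. This is only a sketch; the address in the example is a placeholder, and a real probe should put its own public IP in the Via header:

```python
import socket
import uuid

def build_options(dst_host: str, dst_port: int, src_host: str, src_port: int) -> bytes:
    """Build a minimal SIP OPTIONS request with only the RFC 3261 mandatory headers."""
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]   # RFC 3261 branch magic cookie
    msg = (
        f"OPTIONS sip:{dst_host}:{dst_port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {src_host}:{src_port};branch={branch}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:probe@{src_host}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{dst_host}:{dst_port}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Content-Length: 0\r\n\r\n"
    )
    return msg.encode()

def probe(dst_host: str, dst_port: int = 5060, timeout: float = 3.0):
    """Send the OPTIONS over UDP; return the first response line, or None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("", 0))                           # let the OS pick a source port
    src_port = sock.getsockname()[1]
    try:
        # "probe.invalid" is a placeholder; use the probing host's public IP.
        sock.sendto(build_options(dst_host, dst_port, "probe.invalid", src_port),
                    (dst_host, dst_port))
        data, _ = sock.recvfrom(4096)
        return data.split(b"\r\n", 1)[0].decode(errors="replace")
    except socket.timeout:
        return None
    finally:
        sock.close()

# Example (placeholder address -- replace with the UCM's public IP):
# print(probe("198.51.100.20") or "no reply: SIP blocked or dropped on the path")
```

A timeout here, when the same probe answered from inside the LAN, would point squarely at the firewall rather than the UCM.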


Ask the provider to share a capture from their side.
If it proves they are sending correctly, then you need to check the router.


We have a capture from Cisco and their appliance. It shows, on the WAN side, the ISP sending an OPTIONS and the firewall pushing it to the LAN; it also shows the 2 OPTIONS requests from the UCM being pushed to the WAN, to which the ISP responds with 415 Unsupported Media Type.

I've contacted Grandstream, as the 21 OPTIONS messages from the ISP are not showing up in the packet capture on the UCM, yet other SIP packets from the same ISP are. So either the router is being choosy about which packets it forwards, or the Grandstream is not logging the OPTIONS requests.

I have asked Grandstream what happens to a trunk in the UCM when it sends an OPTIONS and receives a 415 Unsupported Media Type: does it then shut that trunk down? It can still MAKE calls but doesn't receive ANY SIP packets back in.
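Worth noting on the 415: that status usually means the far end rejected the Content-Type or body of the request rather than the trunk itself, so the capture is worth checking for an SDP body or an unexpected Content-Type on the UCM's OPTIONS. A small helper for classifying the captured responses (illustration only, not UCM code):

```python
def parse_status(raw: bytes) -> tuple:
    """Parse the status line of a SIP response into (code, reason phrase)."""
    line = raw.split(b"\r\n", 1)[0].decode(errors="replace")
    version, code, reason = line.split(" ", 2)
    if version != "SIP/2.0":
        raise ValueError(f"not a SIP response: {line!r}")
    return int(code), reason

code, reason = parse_status(b"SIP/2.0 415 Unsupported Media Type\r\nContent-Length: 0\r\n\r\n")
print(code, reason)  # 415 Unsupported Media Type
```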

Cisco says their firewall is working (and I have packets to confirm this), and the ISP says they are OK (and the firewall capture confirms this), so it looks like the UCM is ignoring ONLY the SIP OPTIONS messages from the ISP, plus some of the SIP responses, as if it no longer wants to hear anything on that trunk.

I notice that FreePBX has GUEST CALLS allowed and the Grandstream does not. I wonder if this is a fix (a bad one, but I need to get it working; this is the 3rd failed attempt to migrate).


To resolve the above… it appears the Cisco Meraki MX firewall was faulty. We managed to fix the issue by restarting it AFTER changing the rule; before the reboot, the rule was pushing packets to both the old IP address (which had been removed) and the newly added one. A reboot fixed this.

Cisco did at this point confirm the bug I found in their packet capture from the WAN and LAN sides of the Meraki MX firewall. So, to clear this up (at this point): the firewall was faulty and not pushing packets to the correct internal server. The migration is now planned for Tuesday.

However, I have found another issue with the firewall, to do with backup, that I still need to clear up…


Thanks for the update. Why are 90% of problems router related…


Tried to migrate yesterday, but the Cisco firewall partly failed. I did a cloud GUI reboot and it never came back up, so I had to hard-power it. I forced Cisco to replace it (which took a bit of work, but the client was dead in the water by this time); the replacement is supposed to be on site this morning…