Ashburn (IAD) Voice - Operational
Atlanta (ATL) Voice - Operational
Grand Rapids (GRR) Voice - Operational
Las Vegas (LAS) Voice - Operational
Phoenix (PHX) Voice - Operational
Messaging (SMS/MMS) - Operational
Native Fax - Operational
API - Operational
Call Recording (IAD) - Operational
Call Recording (PHX) - Operational
Call Recording (ATL) - Operational
Device Provisioning (NDP) - Operational
Manager Portal - Operational
Manager Portal Pro - Operational
SNAPmobile - Operational
SNAPmobile Web - Operational
VoIPMonitor (QoS) - Operational
CloudieConnect - Operational
CloudieAI - Operational
Notice history
Jul 2025
- Completed (July 19, 2025 at 4:30 AM): Maintenance has completed successfully.
- In progress (July 19, 2025 at 3:30 AM): Maintenance is now in progress.
- Planned (July 19, 2025 at 3:30 AM):
On Friday, July 18, 2025, at 11:30 PM ET, we’ll be performing routine maintenance on the CloudieConnect system to help ensure continued performance and reliability.
During this time, CloudieConnect users may experience a brief service interruption (up to 5 minutes) during which login access and the ability to make or receive calls may be temporarily unavailable.
All other services will remain fully operational and unaffected.
We appreciate your patience throughout this process. If you need additional support, please contact support@oit.co.
- Resolved
What Occurred: At 2:37 PM ET, all calls on the GRR server began to fail.
What was Affected:
Inbound and Outbound Calls on GRR
When It Began: 7/8/2025 2:37 PM ET
Current Status: This incident is now Resolved.
Next Steps: Major Incident Report will be available within 48 business hours.
- Update
We implemented a fix and are currently monitoring the result.
- Monitoring
What Occurred: At 2:37 PM ET, inbound and outbound calls to GRR began receiving a 503 error, causing them to fail.
What Is Affected: Inbound and outbound calls to the GRR server
When It Began: 2:37 PM ET on July 8, 2025
Current Status: We have restarted the NMS service on the GRR server and are showing calls processing successfully again.
Next Steps:
We will continue to monitor to ensure that calls continue to process successfully. We will also continue investigating the root cause of the failure.
Devices that were registered on GRR may need to be restarted after the NMS service restart took place.
Next Update: July 9, 2025 at 4:00 PM ET
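The 503 responses in this incident are the class of SIP failure that typically justifies rerouting traffic to a backup core. As a purely hypothetical illustration (this is not OIT's actual NOC tooling; the threshold and logic are invented for the example), a monitor might decide when to fail over like this:

```python
# Hypothetical sketch of failover logic for a SIP call monitor.
# 5xx responses (per RFC 3261) indicate server-side failure; a streak of
# them suggests the core (e.g. GRR) is down and traffic should move to
# the backup (e.g. ATL). Threshold and code set are illustrative only.

FAILOVER_CODES = {500, 502, 503, 504}  # server-failure responses worth rerouting on


def should_fail_over(recent_codes, threshold=3):
    """Return True if `threshold` consecutive server errors were seen,
    suggesting traffic should be redirected to the backup server."""
    streak = 0
    for code in recent_codes:
        if code in FAILOVER_CODES:
            streak += 1
            if streak >= threshold:
                return True
        else:
            streak = 0  # a success resets the error streak
    return False


# Calls to GRR returning repeated 503s would trip the check:
print(should_fail_over([200, 503, 503, 503]))  # True: reroute to backup
print(should_fail_over([200, 503, 200, 503]))  # False: keep monitoring
```

A real monitor would also weigh time windows and call volume rather than raw streaks, but the reset-on-success pattern above is the usual starting point.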
Jun 2025
- Resolved
At 2:26 PM ET, we received a notification from our NOC monitoring that the NMS service, which handles registration and call processing on GRR, crashed. Active calls on this server dropped, and devices on GRR failed over to ATL successfully.
What was Affected:
Device Registration on GRR
Active Calls on GRR
When It Began: 06/05/2025 2:26 PM ET
Current Status: This incident is now resolved.
Next Steps:
Major Incident Report is now available
MIR report: https://voipdocs.io/announcements/-2025-06-05-registration-failure-on-grr-
- Update
- Monitoring
At 2:26 PM ET, we received a notification from our NOC monitoring that the NMS service, which handles registration and call processing on GRR, crashed. Active calls on this server dropped, and devices on GRR failed over to ATL successfully.
What Is Affected:
Device Registration on GRR
Active Calls on GRR
When It Began: 2:26 PM ET
Current Status: The NMS service on GRR is back online as of 2:29 PM ET, and registration for all devices returned to GRR by 2:39 PM ET. At this time, all services on GRR are functioning as expected.
Next Steps: We will be monitoring GRR for 24 hours per our major incident policy while we continue to investigate the root cause. If additional outages occur, we will manually redirect devices & calls on GRR to ATL until the problem is resolved.
Next Update: 6/6/25 3:00 PM ET
- Update
MIR now available for Loss of Communication to GRR Server
At 10:30 AM ET, our monitoring system alerted us to the GRR server being unreachable for two minutes, causing device registrations and inbound calls to fail over to other servers. Shortly after, the same datacenter lost communication again.
What Was Affected: GRR Server
When It Began: 6/04/2025 10:30 AM ET
Resolution:
The degraded circuit was immediately removed from the routing profile to prevent further instability
Enhanced monitoring was implemented during the recovery period to ensure sustained stability
The Major Incident Report is now available: https://voipdocs.io/announcements/-2025-06-04-loss-of-communication-to-grr-server
- Resolved
At 10:30 AM ET, our monitoring system alerted us to the GRR server being unreachable for two minutes, causing device registrations and inbound calls to fail over to other servers. Shortly after, the same datacenter lost communication again.
What was Affected: GRR Server
When It Began: 10:30 AM ET
Current Status:
We continue to see stability on GRR's connections after 24 hours of monitoring.
This incident is now considered Resolved.
Next Steps:
Major Incident Report will be available within 48 hours.
- Update
Current Status: At this time, we continue to see stability on GRR's connections. The datacenter will be performing maintenance and repair to the MPLS configuration tonight at 2:00 AM ET, expected to be completed before 8:00 AM ET. No downtime or service interruptions are expected. Our NOC will be monitoring throughout the maintenance window.
Next Steps:
Continue to monitor
Maintenance: Tonight, 2:00 - 8:00 AM ET
Next Update: 1:15 PM ET 6/25/2025
Degraded Services: None
Operational Services: All
- Monitoring
What Is Affected: GRR Server
When It Began: 10:30 AM ET
Current Status: We have determined that the loss of traffic was due to a degraded circuit. The offending circuit was removed from the routing profile. Traffic has remained stable since the change. Moving to the monitoring stage.
Next Steps: Continue to monitor
Next Update: 6/5/2025 1:15 PM ET
Degraded Services: None
Operational Services: All
- Investigating
At 10:30 AM ET, our monitoring system alerted us to the GRR server being unreachable for two minutes, causing device registrations and inbound calls to fail over to other servers. Shortly after, the same datacenter lost communication again.
What Is Affected: GRR Voice
When It Began: 10:30 AM ET
Current Status: We are investigating quickly and will determine if we need to close GRR temporarily.
Next Steps: Updates to Follow
Next Update: 12:15 PM ET
May 2025
- Update
MIR now available for Websocket Connection Failure on GRR and LAS
At 11:06 AM ET, we were alerted by our monitoring tools that WSS connections on GRR, LAS, IAD, and PHX were failing. SNAPmobile Web uses WSS for the SIP connection.
What was Affected: All SNAPmobile Web Connections on GRR, LAS, IAD, & PHX
When It Began: 11:06 AM ET 05/06/2025
Resolution:
Our vendor resolved an SSL issue with the affected FQDNs on 05/31/2025
Major Incident Report is now available:
- Update
At 11:06 AM ET, we were alerted by our monitoring tools that WSS connections on GRR, LAS, IAD, and PHX were failing. SNAPmobile Web uses WSS for the SIP connection.
What Is Affected: All SNAPmobile Web Connections on GRR, LAS, IAD, & PHX
When It Began: 11:06 AM ET
Current Status:
This incident is considered Resolved.
Our vendor resolved an SSL issue with the affected FQDNs
Next Steps:
Major Incident Report will be available within 48 hours.
Next Update: N/A
Offline Services: N/A
Degraded Services: N/A
- Resolved
This incident has been resolved.
- Monitoring
What Is Affected: All SNAPmobile Web Connections on GRR, LAS, IAD, & PHX
When It Began: 11:06 AM ET
Current Status:
After working with vendor support, we have successfully applied a fix to resolve WSS connections failing on LAS, GRR, IAD, & PHX.
Per our internal policy, we will keep all WSS traffic redirected to ATL while we continue to monitor for 24 hours. This will prevent further downtime if the problem reoccurs.
Next Steps:
Users will need to completely close SNAPmobile Web and reopen it to ensure a new connection. If the problem persists, please clear your browser cache and test again. If after clearing your cache, you still cannot connect, please submit a ticket to support@oit.co
We understand that some users may experience poor call quality or dropped calls on ATL. If you experience any of these, please email support@oit.co with call examples or call traces so that we can investigate and potentially move affected clients back to their respective cores in advance.
Next Update: 5/31/25 2:00 PM ET
- Update
What Is Affected: All SNAPmobile Web Connections on GRR, LAS, IAD, & PHX
When It Began: 11:06 AM ET
Current Status:
We have temporarily redirected all WSS traffic to ATL, which is confirmed to be working. In the meantime, our engineers will continue to work on a permanent resolution.
Next Steps:
Users will need to completely close SNAPmobile Web and reopen it to ensure a new connection. If the problem persists, please clear your browser cache and test again. If, after clearing your cache, you still cannot connect, please submit a ticket to support@oit.co
Next Update: 2:06 PM ET
- Identified
At 11:06 AM ET, we were alerted by our monitoring tools that WSS connections on GRR & LAS were failing. SNAPmobile Web uses WSS for the SIP connection.
What Is Affected: All SNAPmobile Web Connections on GRR & LAS
When It Began: 11:06 AM ET
Current Status: We have identified the cause and are awaiting vendor confirmation to implement a fix.
Next Steps: No next steps for partners
Next Update: 12:06 PM ET
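The WSS outage above was ultimately traced to an SSL issue on the core FQDNs. As a hypothetical sketch of the kind of periodic check that can surface certificate problems early (the hostname is a placeholder, not a real OIT endpoint, and this is not OIT's actual monitoring), one could watch the days remaining on each WSS endpoint's certificate:

```python
# Hypothetical certificate-expiry check for a WSS endpoint, using only
# the Python standard library. Hostname below is a placeholder.
import socket
import ssl
import time


def days_until_expiry(not_after, now=None):
    """Convert a certificate's notAfter string (e.g. 'May 31 12:00:00 2026 GMT')
    into days remaining from `now` (defaults to the current time)."""
    expires = ssl.cert_time_to_seconds(not_after)
    current = now if now is not None else time.time()
    return (expires - current) / 86400  # seconds per day


def fetch_not_after(host, port=443):
    """Fetch the peer certificate's notAfter field via a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]


# Example usage (placeholder hostname, requires network access):
# remaining = days_until_expiry(fetch_not_after("core1-grr.example.net"))
# if remaining < 14:
#     alert_noc(f"certificate expires in {remaining:.0f} days")
```

A check like this only catches expiry; other SSL failures (chain, SNI, or cipher misconfiguration) would still need an end-to-end WSS connection probe.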