Ashburn (IAD) Voice - Operational
Atlanta (ATL) Voice - Operational
Phoenix (PHX) Voice - Operational
Messaging (SMS/MMS) - Operational
Native Fax - Operational
API - Operational
Call Recording (IAD) - Operational
Call Recording (PHX) - Operational
Call Recording (ATL) - Operational
Device Provisioning (NDP) - Operational
Manager Portal - Operational
Manager Portal Pro - Operational
SNAPmobile - Operational
SNAPmobile Web - Operational
VoIPMonitor (QoS) - Operational
CloudieConnect - Operational
CloudieAI - Operational
Notice history
Aug 2025
- Resolved
This incident is now considered resolved. We will continue to monitor for the next 24 hours.
We appreciate your patience throughout this process. If you need additional support, please contact support@oit.co.
- Update
What Occurred: We are currently experiencing a service disruption affecting The Campaign Registry (TCR) platform. This impacts the ability to create new 10DLC campaigns, register new brands, or edit existing campaigns.
Who is Affected: All partners and clients who need to create new TCR/10DLC campaigns, register new brands, or modify existing campaign configurations.
When It Began: 9:31 AM ET, 8-21-25
Current Status:
TCR has confirmed a platform-wide issue affecting campaign management functions
OIT is actively working with TCR support to monitor the situation
No updates have been provided by TCR at this time
You can follow along for live updates from TCR at [Campaign Registry Status](https://status.campaignregistry.com/#)
Workarounds: There are no workarounds available for campaign creation or editing at this time. Please plan accordingly for any urgent campaign needs.
Next Steps:
We recommend postponing any planned campaign creation or modifications until service is restored
OIT will continue monitoring the situation and provide updates as information becomes available
Next Update: 3:15 PM ET, 8-21-25
- Identified
What Occurred: We are currently experiencing a service disruption affecting The Campaign Registry (TCR) platform. This impacts the ability to create new 10DLC campaigns, register new brands, or edit existing campaigns.
Who is Affected: All partners and clients who need to create new TCR/10DLC campaigns, register new brands, or modify existing campaign configurations.
When It Began: 9:31 AM ET, 8-21-25
Current Status:
TCR has confirmed a platform-wide issue affecting campaign management functions
OIT is actively working with TCR support to monitor the situation
No estimated time of resolution has been provided at this time
Workarounds: There are no workarounds available for campaign creation or editing at this time. Please plan accordingly for any urgent campaign needs.
Next Steps:
We recommend postponing any planned campaign creation or modifications until service is restored
OIT will continue monitoring the situation and provide updates as information becomes available
Next Update: We will provide an update within 1 hour or when new information becomes available
Offline Services:
TCR Campaign Creation
TCR Brand Registration
TCR Campaign Editing/Modifications
Degraded Services:
None
Operational Services:
SMS/MMS message sending and receiving (all existing campaigns continue to function normally)
All other messaging services
All voice services
All portal services
Important Note: This outage does NOT affect your ability to send or receive messages. All existing registered campaigns continue to operate normally.
We appreciate your patience throughout this process. If you need support, please contact support@oit.co. For real-time status updates and discussion, please join our Discord.
Jul 2025
- Completed - July 19, 2025 at 4:30 AM
Maintenance has completed successfully.
- In progress - July 19, 2025 at 3:30 AM
Maintenance is now in progress.
- Planned - July 19, 2025 at 3:30 AM
On Friday, July 18, 2025, at 11:30 PM ET, we’ll be performing routine maintenance on the CloudieConnect system to help ensure continued performance and reliability.
During this time, CloudieConnect users may experience a brief service interruption (up to 5 minutes) during which login access and the ability to make or receive calls may be temporarily unavailable.
All other services will remain fully operational and unaffected.
We appreciate your patience throughout this process. If you need additional support, please contact support@oit.co.
- Resolved
What Occurred: At 2:37 PM ET, all calls on the GRR server began to fail.
What was Affected:
Inbound and Outbound Calls on GRR
When It Began: 7/8/2025 2:37 PM ET
Current Status: This incident is now Resolved.
Next Steps:
Major Incident Report will be available within 48 business hours.
- Update
We implemented a fix and are currently monitoring the result.
- Monitoring
What Occurred: At 2:37 PM ET, inbound and outbound calls to GRR began receiving a 503 error, causing them to fail.
What Is Affected: Inbound and outbound calls to the GRR server
When It Began: 2:37 PM ET on July 8, 2025
Current Status: We have restarted the NMS service on the GRR server and are showing calls processing successfully again.
Next Steps:
We will continue to monitor to ensure that calls continue to process successfully. We will also continue investigating the root cause of the failure.
Devices that were registered on GRR may need to be restarted after the NMS service restart took place.
Next Update: July 9, 2025 at 4:00 PM ET
Jun 2025
- Resolved
At 2:26 PM ET, we received a notification from our NOC monitoring that the NMS service, which handles registration and call processing on GRR, crashed. Active calls on this server did drop, and devices on GRR failed over to ATL successfully.
What was Affected:
Device Registration on GRR
Active Calls on GRR
When It Began: 06/05/2025 2:26 PM ET
Current Status: This incident is now resolved.
Next Steps:
Major Incident Report is now available
MIR report: https://voipdocs.io/announcements/-2025-06-05-registration-failure-on-grr-
- Update
- Monitoring
At 2:26 PM ET, we received a notification from our NOC monitoring that the NMS service, which handles registration and call processing on GRR, crashed. Active calls on this server did drop, and devices on GRR failed over to ATL successfully.
What Is Affected:
Device Registration on GRR
Active Calls on GRR
When It Began: 2:26 PM ET
Current Status: The NMS service on GRR is back online as of 2:29 PM ET, and registration for all devices returned to GRR by 2:39 PM ET. At this time, all services on GRR are functioning as expected.
Next Steps: We will be monitoring GRR for 24 hours per our major incident policy while we continue to investigate the root cause. If additional outages occur, we will manually redirect devices & calls on GRR to ATL until the problem is resolved.
Next Update: 6/6/25 3:00 PM ET
- Update
MIR now available for Loss of Communication to GRR Server
At 10:30 AM ET, our monitoring system alerted us to the GRR server being unreachable for two minutes, causing device registrations and inbound calls to fail over to other servers. Shortly after, the same datacenter lost communication again.
What Was Affected: GRR Server
When It Began: 6/04/2025 10:30 AM ET
Resolution:
The degraded circuit was immediately removed from the routing profile to prevent further instability
Enhanced monitoring was implemented during the recovery period to ensure sustained stability
The Major Incident Report is now available: https://voipdocs.io/announcements/-2025-06-04-loss-of-communication-to-grr-server
- Resolved
At 10:30 AM ET, our monitoring system alerted us to the GRR server being unreachable for two minutes, causing device registrations and inbound calls to fail over to other servers. Shortly after, the same datacenter lost communication again.
What was Affected: GRR Server
When It Began: 10:30 AM ET
Current Status:
We continue to see stability on GRR's connections after 24 hours of monitoring.
This incident is now considered Resolved.
Next Steps:
Major Incident Report will be available within 48 hours.
- Update
Current Status: At this time, we continue to see stability on GRR's connections. The datacenter will be performing maintenance and repair on the MPLS configuration tonight at 2:00 AM, which is expected to be completed before 8:00 AM. No downtime or service interruptions are expected. Our NOC will continue monitoring throughout this phase.
Next Steps:
Continue to monitor
Maintenance: Tonight, 2:00 - 8:00 AM ET
Next Update: 1:15 PM ET 6/25/2025
Degraded Services: None
Operational Services: All
- Monitoring
What Is Affected: GRR Server
When It Began: 10:30 AM ET
Current Status: We have determined that the loss of traffic was due to a degraded circuit. The offending circuit was removed from the routing profile. Traffic has remained stable since the change. Moving to the monitoring stage.
Next Steps: Continue to monitor
Next Update: 6/5/2025 1:15 PM ET
Degraded Services: None
Operational Services: All
- Investigating
At 10:30 AM ET, our monitoring system alerted us to the GRR server being unreachable for two minutes, causing device registrations and inbound calls to fail over to other servers. Shortly after, the same datacenter lost communication again. We are investigating quickly and will determine whether we need to close GRR temporarily. Updates to follow.
What Is Affected: GRR Voice
When It Began: 10:30 AM ET
Current Status: We are investigating quickly and will determine if we need to close GRR temporarily.
Next Steps: Updates to Follow
Next Update: 12:15 PM ET