Degraded performance from 6:17 PM to 10:06 AM
Resolved
Starting March 30, Instagram media processing requests were failing at high-traffic times. We saw a significant spike in errors between 20:00 and 22:00 UTC (4pm–6pm ET) on March 30, with 347 errors in just three hours. Errors were concentrated at the top of each hour (:00–:05 UTC), when scheduled posts from many accounts hit Instagram simultaneously; 59% of all failures occurred in those first five minutes. The sharpest top-of-hour spike came at 02:00 UTC (10pm ET), accounting for 45% of that hour's errors. Overall, the Instagram failure rate rose from our normal 3-5% baseline to approximately 12% during this period.
What we've done:
We've extended the wait times for Instagram media processing and retries. Instagram Reels and video content require server-side processing before they can be published, and during high-traffic periods, this processing was taking longer than our previous timeout windows allowed. By extending these wait times, posts that previously would have timed out and failed are now completing successfully.
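For context, publishing a Reel or video via the Instagram Graph API is a two-step flow: create a media container, then poll the container's status_code until processing finishes before calling media_publish. The sketch below illustrates that polling pattern with an extended window. The endpoint and status_code field are the public Graph API; the function name, timeout, and interval values are illustrative, not our production code.

```python
import time
import requests

GRAPH = "https://graph.facebook.com/v19.0"

def wait_for_processing(container_id: str, token: str,
                        timeout_s: float = 300.0, poll_s: float = 5.0) -> bool:
    """Poll an Instagram media container until server-side processing ends.

    Reels/video containers report status_code IN_PROGRESS while Instagram
    transcodes; publishing before FINISHED fails. The timeout here is the
    window we extended -- too short, and slow processing looks like an error.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{GRAPH}/{container_id}",
            params={"fields": "status_code", "access_token": token},
            timeout=10,
        )
        status = resp.json().get("status_code")
        if status == "FINISHED":
            return True   # safe to POST /{ig-user-id}/media_publish
        if status == "ERROR":
            return False  # processing itself failed; don't keep waiting
        time.sleep(poll_s)  # still IN_PROGRESS; keep polling
    return False  # timed out; with the old, shorter window this fired too often
```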
What's coming next:
We will be extending these timeouts even further to handle more of these processing delays automatically, without requiring you to retry on your end. Our goal is to absorb Instagram's variable processing times so that high-traffic periods don't result in failed posts for you.
We appreciate your patience and apologize for the disruption. If you're still seeing failures, please let us know with specific RefIDs and we'll investigate immediately.
Monitoring
In the last 24 hours, we’ve seen 846 Instagram posting errors across 340 unique UIDs, alongside more than 6,000 successful posts from 2,962 unique UIDs.
That ~12% failure rate is elevated above the normal ~3-5% baseline, which usually comes from problematic media files, duplicate posts, and similar issues.
The dominant cause is Meta-side transient errors (errCode 2, "An unexpected error has occurred"): 245 events, or 29% of failures. This is a Meta infrastructure issue, not an Ayrshare code bug.
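While these transient errors are on Meta's side, retrying usually succeeds. Below is a minimal, hedged sketch of that pattern, assuming the standard Graph API error payload shape ({"error": {"code": ..., "message": ...}}); the function name and backoff parameters are illustrative.

```python
import random
import time
import requests

TRANSIENT_META_CODES = {2}  # errCode 2: "An unexpected error has occurred"

def post_with_retry(url: str, payload: dict, max_attempts: int = 4) -> dict:
    """Retry Graph API calls that fail with Meta's transient errCode 2.

    Exponential backoff with jitter; any other error is raised immediately
    so real failures (bad media, expired tokens) aren't hidden by retries.
    """
    for attempt in range(1, max_attempts + 1):
        body = requests.post(url, json=payload, timeout=30).json()
        err = body.get("error")
        if err is None:
            return body  # success
        if err.get("code") not in TRANSIENT_META_CODES or attempt == max_attempts:
            raise RuntimeError(f"Meta error {err.get('code')}: {err.get('message')}")
        # Transient Meta-side failure: back off and try again.
        time.sleep(2 ** attempt + random.random())
```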
Errors spiked dramatically on March 30 between 20:00 and 22:00 UTC (347 errors in three hours), subsided overnight, and show a secondary elevation on March 31 starting around 12:00 UTC.
We also saw issues for a smaller number of users with expired OAuth tokens. That issue was identified and announced on March 30 and is fully resolved; zero error 190 "Invalid OAuth access token" events remain. Details are available at https://status.ayrshare.com/
The key issue on our end is that error 138 continues to mask multiple distinct underlying causes, making it difficult for customers to self-diagnose.
We’re working on two fixes (a rough sketch follows the list):
Reworking how we surface Meta error codes so they're easier for you to understand
Improving automatic unlinking of expired OAuth tokens
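As a rough illustration of where both fixes are headed, the sketch below maps raw Meta codes onto distinct, actionable messages and unlinks the account on error 190. Every name, helper, and mapping entry here is hypothetical, not our final behavior.

```python
from dataclasses import dataclass

# Hypothetical mapping for illustration only; the real categories are still
# being designed. Today, several distinct causes all surface as error 138.
META_ERROR_HINTS: dict[int, tuple[bool, str]] = {
    2:   (True,  "Transient Meta infrastructure error. Safe to retry."),
    190: (False, "Expired or invalid OAuth token. Relink the account."),
}

@dataclass
class ClassifiedError:
    code: int
    retryable: bool
    customer_message: str

def unlink_social_account(profile_key: str, platform: str) -> None:
    """Hypothetical stand-in for the real unlink step."""
    print(f"unlinked {platform} for profile {profile_key}")

def classify_meta_error(code: int, message: str, profile_key: str) -> ClassifiedError:
    """Map a raw Meta error onto a distinct, customer-facing category."""
    retryable, hint = META_ERROR_HINTS.get(
        code, (False, f"Meta error {code}: {message}")
    )
    if code == 190:
        # Proactively unlink so the account shows as needing re-auth
        # instead of silently failing future scheduled posts.
        unlink_social_account(profile_key, platform="instagram")
    return ClassifiedError(code=code, retryable=retryable, customer_message=hint)
```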
We’re sorry for the ambiguity this has caused for you and your users. Our team is working on these fixes right now.
Unfortunately, transient issues like this happen with Meta periodically, and Meta rarely updates its status page. We’ll be delivering clearer messaging going forward so you can better help your customers debug.

