Infraly, LLC - Notice history

Oct 2025

Infraly - Global - Network Instability
  • Update

    Additional network capacity has been successfully added without any issues. Our team continues to monitor performance closely to ensure optimal stability and reliability.

    In the coming days, several more ports are scheduled for activation with our Tier 1 providers, further strengthening our network capacity and resilience.

    We want to sincerely thank everyone who has supported us throughout this unprecedented event. Your patience and trust mean a great deal to us, and we’re proud to continue delivering solid, stable experiences for all of our customers.

  • Update

    Our network team is in the process of activating several additional ports with our Tier 1 providers. During this activation period, there may be brief instances of connectivity instability as new paths are brought online and traffic is rebalanced.

    Please know that our team is doing everything possible to ensure a smooth transition while strengthening our overall capacity and redundancy. These upgrades are part of our ongoing efforts to stay ahead of the evolving challenges within today’s complex network landscape.

    We appreciate your patience and understanding as we continue improving reliability and performance across our global network.

  • Monitoring

    We want to provide clarity regarding the network disruptions some customers have experienced.

    At present, the Internet is facing major instability caused by a record-breaking DDoS botnet known as Aisuru. This botnet has launched attacks in the multi-terabit per second (Tbps) range, so large that they not only affect targeted networks but also cause congestion across Internet service providers and upstream carriers worldwide. The impact is being seen far beyond any single provider, with effects rippling across a wide range of services globally.

    To minimize the impact, we have been actively shifting traffic between all available DDoS protection partners at our disposal. Our team is closely monitoring attacks and implementing real-time routing adjustments to reroute traffic away from congested paths, while also coordinating with upstream carriers to ensure the fastest possible recovery when instability occurs.
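
    For illustration only, here is a minimal sketch of the kind of traffic-steering logic described above, assuming a hypothetical controller that re-homes a prefix toward whichever protection partner currently looks least congested. The partner names, the announce/withdraw helpers, and the prefix are placeholders, not our actual tooling or address space:

      # Hypothetical sketch, not Infraly's actual tooling: steer a protected prefix
      # toward whichever DDoS-protection upstream currently looks least congested.

      PREFIX = "198.51.100.0/24"  # documentation prefix used purely as an example

      # Simulated congestion scores (0.0 = clean, 1.0 = saturated); in practice
      # these would come from live monitoring, not a hard-coded table.
      congestion = {"partner-a": 0.85, "partner-b": 0.20, "partner-c": 0.55}

      def announce(prefix, upstream):
          print(f"announce {prefix} via {upstream}")   # placeholder for real automation

      def withdraw(prefix, upstream):
          print(f"withdraw {prefix} from {upstream}")  # placeholder for real automation

      def rebalance(current):
          best = min(congestion, key=congestion.get)
          if best != current:
              announce(PREFIX, best)      # bring the cleaner path up first
              withdraw(PREFIX, current)   # then pull traffic off the congested one
          return best

      print(rebalance("partner-a"))       # -> moves the prefix to partner-b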

    It is important to understand that no provider is fully immune to this botnet due to its unprecedented scale. Industry-wide, providers such as OVH, GSL, Cosmic Guard, Path, NeoProtect, and many others have all faced the same challenges. If attackers target a protection provider’s own network, temporary instability can still occur until routes are rebalanced, and even with multiple providers in place, there is no guarantee that these issues can be fully resolved when attacks of this magnitude occur.

    Online games are particularly sensitive to these conditions because of the way their protocols handle connections and timeouts. Even if routing is corrected quickly, the initial attack may already have caused packet loss, lag spikes, or disconnects before adjustments can take effect.
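
    As a rough illustration of why even a brief burst of packet loss can disconnect players, here is a generic sketch (not any particular game's netcode) of a client that gives up after a few consecutive missed heartbeats, assuming one heartbeat per second:

      # Generic illustration, not any particular game's protocol: a session that
      # tolerates only a few missed heartbeats will disconnect during a short
      # burst of packet loss even if routing recovers moments later.

      MISSED_LIMIT = 5   # consecutive missed heartbeats before the client gives up

      def simulate(loss_burst_seconds):
          missed = 0
          for second in range(30):                 # one heartbeat per second
              lost = second < loss_burst_seconds   # packets dropped during the burst
              missed = missed + 1 if lost else 0
              if missed >= MISSED_LIMIT:
                  return f"disconnected after {second + 1}s"
          return "stayed connected"

      print(simulate(3))   # short blip: survives
      print(simulate(8))   # attack-induced burst: disconnects before routing recovers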

    Please be assured that it is in our best interest to keep your services stable and resilient. Our team is monitoring this situation around the clock and taking every possible step to reduce impact. We will continue adjusting traffic flows and working closely with our DDoS protection partners as the situation evolves.

    This level of attack is unprecedented, and while it is affecting providers worldwide, we remain committed to doing everything possible to shield your services and maintain stability.

Infraly - London - Carrier Maintenance - October 1st, 2025
  • Completed
    October 02, 2025 at 12:00 AM
    Maintenance has completed successfully
  • In progress
    October 01, 2025 at 8:00 PM
    Maintenance is now in progress
  • Planned
    October 01, 2025 at 8:00 PM

    Dear user,

    Please review the following important maintenance announcement, which may impact hosting services:

    Infraly - London - Carrier Maintenance - October 1st, 2025
    Location Affected: London, UK (Path-routed prefixes)
    Start Date: October 1st, 2025 @ 20:00 UTC
    Estimated End Date: October 2nd, 2025 @ 0:00 UTC

    Summary: Path.net, one of our third-party DDoS-protection providers in London, is performing maintenance in its London point of presence (PoP).

    Impact on Customer: During the window, connections may drop and then re-establish. If deemed necessary, we will shift the prefixes to our secondary DDoS protection upstream, Neoprotect. You may see a change in latency, as routes will differ during the maintenance. Our team will monitor throughout the window.

    Updates: We will provide updates via our status page throughout the maintenance. You can subscribe for alerts on our status page (https://status.infraly.co).

    If you have any questions, don't hesitate to contact our client relations team via support ticket. Thank you again for your cooperation and support!

    Sincerely,

    Infraly, LLC DBA Hosturly, Physgun, WISP, & Buildurly
    a: 1636 N Cedar Crest Blvd, #122, Allentown, PA 18104, USA
    e: hello@infraly.co
    p: +1 (833) INF-RALY
    w: infraly.co / hosturly.com / physgun.com / wisp.gg / buildurly.com

Sep 2025

Infraly - London - Route Change - September 19th, 2025
  • Completed
    September 19, 2025 at 6:01 AM
    Maintenance has completed successfully
  • In progress
    September 19, 2025 at 6:00 AM
    Maintenance is now in progress
  • Planned
    September 19, 2025 at 6:00 AM

    Dear user,

    Please review the following important maintenance announcement, which may impact hosting services:

    Infraly - London - Route Change - September 19th, 2025
    Services Affected: 194.69.160.0/24 IPs
    Start Date: September 19th, 2025 @ 6:00 UTC
    Estimated End Date: September 19th, 2025 @ 6:01 UTC

    Summary: Following maintenance by one of our third-party DDoS-protection providers in London, we will shift traffic of our 194.69.160.0/24 prefix back to their network.

    Impact on Customer: During the window, connections may drop and then re-establish as routes reappear in the global routing table under another upstream. Services with filters enabled on the firewall portal will notice greater effects, because connections established before the route change are invalidated and must re-authenticate.
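
    One plausible way to picture that re-authentication effect (a generic sketch, not the actual firewall portal implementation): validated-client state lives on the serving path, so when a prefix moves to a different upstream that state starts empty and previously established clients must re-validate before their traffic passes again. The upstream labels and client addresses below are purely illustrative:

      # Generic illustration only, not the actual firewall portal: per-upstream
      # "validated client" state is lost when a prefix moves to a new upstream,
      # so clients that were already passing traffic must re-authenticate.

      validated = {
          "upstream-old": {"203.0.113.7", "203.0.113.9"},  # clients already allowed
          "upstream-new": set(),                           # fresh path knows nobody
      }

      def allowed(upstream, client):
          return client in validated[upstream]

      # Before the route change: established players pass straight through.
      print(allowed("upstream-old", "203.0.113.7"))   # True

      # After the change the same client hits an empty table and must re-validate.
      print(allowed("upstream-new", "203.0.113.7"))   # False -> brief interruption
      validated["upstream-new"].add("203.0.113.7")    # re-authentication completes
      print(allowed("upstream-new", "203.0.113.7"))   # True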

    Updates: We will provide updates via our status page throughout the maintenance. You can subscribe for alerts on our status page (https://status.infraly.co).

    If you have any questions, don't hesitate to contact our client relations team via support ticket. Thank you again for your cooperation and support!

    Sincerely,

    Infraly, LLC DBA Hosturly, Physgun, WISP, & Buildurly
    a: 1636 N Cedar Crest Blvd, #122, Allentown, PA 18104, USA
    e: hello@infraly.co
    p: +1 (833) INF-RALY
    w: infraly.co / hosturly.com / physgun.com / wisp.gg / buildurly.com

Infraly - London - Carrier Maintenance - September 17th, 2025
  • Completed
    September 18, 2025 at 12:00 AM
    Maintenance has completed successfully
  • In progress
    September 17, 2025 at 8:00 PM
    Maintenance is now in progress
  • Planned
    September 17, 2025 at 8:00 PM

    Dear user,

    Please review the following important maintenance announcement, which may impact hosting services:

    Infraly - London - Carrier Maintenance - September 17th, 2025
    Location Affected: London, UK (Path-routed prefixes)
    Start Date: September 17th, 2025 @ 20:00 UTC
    Estimated End Date: September 18th, 2025 @ 0:00 UTC

    Summary: Path.net, one of our third-party DDoS-protection providers in London, is performing maintenance in its London point of presence (PoP).

    Impact on Customer: During the window, connections may drop and then re-establish as routes fail over to our secondary DDoS protection upstream, Neoprotect. You may see a change in latency, as routes will differ during the maintenance. Our team will monitor throughout the window.

    Updates: We will provide updates via our status page throughout the maintenance. You can subscribe for alerts on our status page (https://status.infraly.co).

    If you have any questions, don't hesitate to contact our client relations team via support ticket. Thank you again for your cooperation and support!

    Sincerely,

    Infraly, LLC DBA Hosturly, Physgun, WISP, & Buildurly
    a: 1636 N Cedar Crest Blvd, #122, Allentown, PA 18104, USA
    e: hello@infraly.co
    p: +1 (833) INF-RALY
    w: infraly.co / hosturly.com / physgun.com / wisp.gg / buildurly.com

WISP - Front-end Degraded
  • Resolved

    At 14:56:09 UTC, a desynchronization event occurred with one node location in our database cluster, triggered by congestion at the upstream provider caused by a large inbound DDoS attack targeting another customer of that provider. This attack was not directed at our platform, but it impacted the cluster's ability to communicate effectively.

    The desynchronization caused a cascading effect, a behavior we've seen previously with our current database platform, where multiple nodes began marking themselves as unhealthy, and some briefly returned to a healthy state before failing again. At 15:17:23 UTC, the Singapore node, the last remaining healthy node, also marked itself as unhealthy while our team was working to bring the other node locations back online. This caused WISP front-end services to become fully inaccessible.

    From 15:17:23 to 15:30:01 UTC, no healthy nodes were available to handle traffic. At 15:30:02 UTC, the Singapore node recovered. Following this, our team scaled up the other locations as the remaining nodes resynchronized, fully restoring services.

    Since taking over WISP, we've gone through three different node configurations to improve reliability and ensure true geographic redundancy. Under the previous ownership, all nodes were hosted with a single provider in one facility, which did not meet our standards for redundancy. This design left the platform vulnerable to geographic outages or unexpected catastrophic events, and it lacked the true geographic load balancing that WISP is known for.

    To fix these issues, we tried the following approaches:
    - Five-node setup (3 Chicago, 2 London): Provided low inter-site latency and some stability, but if one location had problems, the entire cluster was impacted.
    - Three-node setup (Chicago, London, Singapore): Improved geographic diversity but created a new issue where a single node failure could trigger a cascading cluster failure.
    - Current five-node setup (Chicago, London, Singapore, Los Angeles, Germany): Our most stable design yet, offering two-node failure tolerance (see the quorum sketch after this list) and strong geographic diversity. However, limitations with the underlying database system still cause occasional random desynchronization and nodes marking themselves as unhealthy even without actual communication problems.
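
    For reference, the failure-tolerance figures above follow from standard majority-quorum arithmetic; the sketch below is generic math, not a description of our specific database software. It shows why a three-node cluster has almost no headroom (one failure away from losing its majority) while a five-node cluster can lose two nodes and keep serving:

      # Standard majority-quorum arithmetic for a replicated cluster: the cluster
      # stays healthy only while a strict majority of nodes can talk to each other.

      def quorum(nodes):
          return nodes // 2 + 1          # smallest strict majority

      def failures_tolerated(nodes):
          return nodes - quorum(nodes)   # nodes that can drop before quorum is lost

      for n in (3, 5):
          print(f"{n}-node cluster: quorum {quorum(n)}, "
                f"tolerates {failures_tolerated(n)} node failure(s)")
      # 3-node cluster: quorum 2, tolerates 1 node failure(s)
      # 5-node cluster: quorum 3, tolerates 2 node failure(s)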

    Due to these recurring issues, one of our team members has been tasked exclusively with evaluating alternative database backends. For the past several weeks, well before this incident occurred, we have been rigorously testing other solutions to prevent random desync behavior and eliminate the self-failure conditions that can cascade through the system.

    While our current five-node cluster is much more stable than past setups, this incident highlights a known issue with the current database platform under extreme conditions where latency and congestion disrupt syncing between sites. Normally, the cluster self-heals, and we've experienced this behavior before without any service impact, making this event unusual. Since moving to the current setup, these issues have been far less frequent, but they remain a known limitation. We are currently evaluating other database platforms to replace the existing system and prevent these problems entirely.

    Additionally, before this event, a separate issue occurred with the scheduler, where the queue manager became stuck due to failed API requests caused by Cloudflare API problems. This temporarily prevented certain tasks from being processed. The issue has been resolved, and we are implementing additional safeguards to ensure that future scheduler problems do not prevent other tasks from running as expected.
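
    As an illustration of the kind of safeguard being added (a generic sketch, not the actual scheduler code): bounding each task's external API call and parking failed tasks for retry lets the rest of the queue keep draining even when one provider's API is misbehaving. The task shape and the fake API below are invented for the example:

      # Generic sketch of the safeguard described above, not the actual scheduler:
      # each task's external API call is attempted independently, and a failure
      # parks that task for retry instead of blocking the rest of the queue.

      from collections import deque

      def drain(queue, call_api, max_attempts=3):
          retry = deque()
          while queue:
              task = queue.popleft()
              try:
                  call_api(task)                      # e.g. an upstream API request
              except Exception:
                  task["attempts"] = task.get("attempts", 0) + 1
                  if task["attempts"] < max_attempts:
                      retry.append(task)              # park it; keep draining others
          queue.extend(retry)                         # failed tasks retried later

      # Tiny demo with a flaky fake API: task "b" fails but "a" and "c" still run.
      def fake_api(task):
          if task["name"] == "b":
              raise RuntimeError("upstream API error")
          print("processed", task["name"])

      q = deque([{"name": "a"}, {"name": "b"}, {"name": "c"}])
      drain(q, fake_api)
      print("left for retry:", [t["name"] for t in q])   # ['b']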

    We are fully committed to improving WISP's reliability through ongoing testing, architecture improvements, and a planned migration to a more resilient database platform following validation. We sincerely apologize for the disruption and thank you for your trust as we work tirelessly to enhance the stability and resilience of WISP.

  • Identified
    We're aware of issues communicating with the panel. Our team is working on it.

Aug 2025

Physgun - Machine Maintenance - chi-s-game-1
  • Completed
    August 08, 2025 at 9:10 AM

    Firmware has been updated with little to no impact! Maintenance is now complete.

  • In progress
    August 08, 2025 at 9:00 AM
    Maintenance is now in progress
  • Planned
    August 08, 2025 at 9:00 AM

    We will be performing quick firmware updates on chi-s-game-1.physgun.com. This will cause a brief disruption, but players will be able to reconnect almost immediately after the firmware has been updated.
