- Priority - Low
- Affecting Others - NY IP
-
We have identified an issue with a pair of core switches in NY that has caused random connectivity issues for some subscribers.
A problem in a pair of switches, which created intermittent packet loss to groups of addresses, has been resolved by restoring the configuration. We believe some process (likely an automated one) may have pushed an old configuration to these switches. We have taken steps to prevent automated processes from communicating with these switches for now. As a policy, we do not apply changes to the network configuration during market hours. We are investigating how the configuration changes were sent to the devices.
Our network has returned to normal operation. Please contact our helpdesk if you are still experiencing issues.
UPDATE: We have reapplied optimizations that were lost with the configuration. A small amount of packet loss observed after the configuration was reapplied appears to be resolved.
We have scheduled Saturday morning to reproduce the events leading up to the loss of configuration in an effort to identify the specific cause. What we know now is that a machine hosting network management software was started, and shortly afterward we began receiving reports of random connectivity issues. There may be periods of no connectivity on Saturday morning. We will of course work to minimize this as much as possible.
NOTE: This is listed as a low priority issue because the network is operating normally. It was also listed as a low priority during the event because it was affecting small pockets of subscribers.
- Date - 04/09/2019 08:50 - 04/14/2019 07:32
- Last Updated - 04/10/2019 07:58
- Priority - Low
- Affecting Others - All CNS Services
-
We will be conducting network and platform maintenance and upgrades this weekend during our maintenance window, after all markets have closed.
No large service interruption is anticipated but there may be very brief periods of loss of connectivity while routes converge and equipment is updated.
During this work we will:
- Update network & server software
- Deploy new and update existing edge filters
- Create additional IPv6 redundancy
- Test failover redundancy
- Date - 11/30/2018 20:46 - 12/02/2018 09:34
- Last Updated - 11/30/2018 08:39
- Priority - Low
- Affecting Others - All Locations
-
We will be conducting network maintenance and upgrades this weekend during our maintenance window, after all markets have closed.
No large service interruption is anticipated but there may be very brief periods of loss of connectivity while routes converge.
During this work we will:
- Update network software
- Deploy new and update existing edge filters
- Create additional IPv6 redundancy
- Test failover redundancy
- Date - 11/23/2018 17:11 - 11/25/2018 09:05
- Last Updated - 11/23/2018 17:12
- Priority - Low
- Affecting Others - All datacenters
-
We will be conducting network maintenance and upgrades this weekend during our maintenance window, after all markets have closed.
No large service interruption is anticipated but there may be very brief periods of loss of connectivity while routes converge.
During this work we will:
- Update network software
- Deploy new and update existing edge filters
- Create additional IPv6 redundancy
- Test failover redundancy
- Date - 11/16/2018 19:00 - 11/18/2018 14:47
- Last Updated - 11/16/2018 07:14
- Priority - Low
- Affecting Others - All IP Services / All Datacenters
-
* currently running final tests
We will be completing maintenance to our network beginning after all markets have closed on Friday and taking into Sunday morning to complete. This work has been scheduled as follows:
- Further harden the entire CNS network by leveraging new information learned from network observations over the past week
- Increase capacity on crucial network uplinks
- Upgrade router software in a couple of different routers
We do not anticipate any outages, but some very brief interruptions are possible.
- Date - 11/09/2018 19:00 - 11/11/2018 10:44
- Last Updated - 11/11/2018 10:27
- Priority - Low
- Affecting Others - Abusive traffic in NY
-
We have closed a vector used to create a DoS condition and are actively monitoring the network for abusive traffic.
Mitigation efforts appear to be successful.
CNS is offering a reward of US$25,000 for information leading to the arrest and conviction of person(s) responsible for sending abusive traffic to our network. Please contact our helpdesk with any information. You can remain anonymous. +1 (619) 225-7882.
- Date - 10/17/2018 03:11 - 11/05/2018 12:23
- Last Updated - 10/29/2018 08:10
- Priority - Low
- Affecting Others - All CNS IP Infrastructure
-
In response to recent DoS attacks, we will be working while all markets are closed to deploy new filters and automated countermeasures to further harden existing systems. There may be brief interruptions to IP connectivity during this time.
Please contact our helpdesk with any questions.
- Date - 10/26/2018 19:00 - 10/28/2018 13:31
- Last Updated - 10/26/2018 12:01
- Priority - Critical
- Affecting Others - All IP services
-
We will be working to enhance DoS defenses during this maintenance window. There may be brief interruptions to connectivity during this time. We will of course try to minimize them as much as possible.
- Date - 10/19/2018 19:00 - 10/22/2018 11:51
- Last Updated - 10/20/2018 11:11
- Priority - Low
- Affecting Others - IP services out of NY region
-
We will be completing work to dramatically increase our ability to mitigate abusive traffic targeting our NY datacenter. While there may be brief interruptions to IP connectivity, we will endeavor to minimize them as much as possible.
This work will not begin until after all markets have closed.
- Date - 10/12/2018 19:00 - 10/14/2018 12:34
- Last Updated - 10/13/2018 09:52
- Priority - Critical
- Affecting Others - NY IP
-
We are working to mitigate attack traffic targeting our NY datacenter. We anticipate service will be restored in the next 5 minutes.
UPDATE: We have successfully mitigated the abuse and are continuing to monitor.
- Date - 10/10/2018 12:31 - 10/14/2018 12:34
- Last Updated - 10/10/2018 12:33
- Priority - Critical
- Affecting Others - NY IP services
-
We are working to mitigate malicious packets directed at our NY datacenter.
We have mitigated the traffic and are continuing to review and respond.
Possible route hijack. Minimal effect due to large number of peers. We are continuing to review and respond.
UPDATE: This issue appears to be resolved. We will continue to monitor and respond quickly to any abnormalities.
- Date - 10/08/2018 02:45 - 10/08/2018 12:21
- Last Updated - 10/08/2018 12:05
- Priority - Low
- Affecting Others - All NY based services
-
Our NY datacenter will go offline briefly so we can complete network maintenance.
Details: We are experiencing trouble removing a problem router from the rack. In order to minimize any possible damage, we will need to shut down and remove other routers in order to extract the problem router. We anticipate this procedure will take just a few minutes.
- Date - 09/30/2018 11:53 - 09/30/2018 15:22
- Last Updated - 09/30/2018 12:11
- Priority - Low
- Affecting Others - NY Network
-
A router in our NY datacenter has failed and services automatically rolled over to another router. This caused a very brief disruption while some routes reconverged.
We are working to identify the problem with the router to repair it.
UPDATE: Early analysis indicates malicious traffic caused the router to fail because it was unable to keep up with logging it. We have taken steps to mitigate the traffic while we continue to review.
- Date - 09/27/2018 05:57 - 09/30/2018 08:02
- Last Updated - 09/27/2018 06:13
- Priority - Critical
- Affecting Others - Some NY Hosted Brokers
-
Some brokers in NY hosted off our network are down due to an interruption at their hosting provider. This is a hard down. They are unreachable from any network. Your terminals will automatically connect when service is restored.
This page will be updated as we receive new information.
UPDATE: Connectivity has been restored. Please contact your broker if you are still unable to connect.
- Date - 09/14/2018 10:16
- Last Updated - 09/14/2018 11:41
- Priority - Low
- Affecting Others - New York
-
The connection to three key IX's failed, causing a cascading failure and reducing all connectivity in/out of NY to a single transit circuit. All service is now normalized and we will continue to analyze the failure and make appropriate adjustments where necessary.
--
Connectivity has been restored to these IX's. We are cautiously moving some routes back as we work to investigate the cause. At this time all services should be reachable.
--
We have lost connectivity to three IX's. This initially created a connectivity issue for many routes. We have re-routed these affected routes out another transit provider while the issue is being investigated. At this time all services should be reachable.
- Date - 08/01/2018 02:32
- Last Updated - 08/01/2018 11:19
- Priority - Low
- Affecting Others - UK
-
During this past week, we experienced a period of high bandwidth usage and packet loss on IX interfaces in NY and UK and across our NY-UK backbone. We believe this was caused by a routing loop with an unknown peer between Europe and America. The issue went away after many peers on LINX in the UK were dropped.
We have started reconnecting peers one at a time in order to identify the loop. Once identified, we will address the loop either through an internal filter, by working with the peer, or both. (We need to identify it in order to determine how best to address it.)
A brief period of packet loss can be expected as we work to reproduce the issue. We will work to minimize it as much as possible.
----
This problem has been mitigated.
We believe this issue occurred because of a routing loop with a peering network. Most peers on LINX have been cut away until we can identify it after markets close.
---
We are experiencing major packet loss on traffic entering our network at LINX in London. We are in the process of cutting the IX away until the issue is resolved.
- Date - 06/13/2018 09:48 - 06/17/2018 11:50
- Last Updated - 06/16/2018 08:07
- Priority - Low
- Affecting Others - NYIIX Connectivity
-
Due to technical issues being experienced by the New York Internet Exchange (NYIIX) and their intention to interrupt services on the IX over the week to apply a fix, we will be temporarily shutting down all peering sessions on the exchange until after the work is complete. Very little impact is expected because most peers have redundant links through DE-CIX NY or Equinix NY.
It is possible some networks will revert to transit service until after the work is complete and all peers are reconnected.
- Date - 06/17/2018 12:19 - 06/22/2018 09:57
- Last Updated - 06/15/2018 12:22
- Priority - Low
- Affecting Others - UK
-
A power anomaly in the datacenter has damaged PDUs powering key switching equipment. Power has been re-routed and the damaged equipment will be replaced during our maintenance window.
At this time all services are operational. Please contact our helpdesk if you are unable to connect or experience any issues.
This event appears to have damaged a router. It has been removed from service.
Multiple power supplies were destroyed. They have either already been replaced or will be replaced during our maintenance window.
All work to repair or replace damaged equipment is complete.
We believe a power supply blew and sent a surge back into the PDU, damaging connected equipment or causing it to malfunction.
------------
Detail:
We are investigating reports of connectivity issues in UK.
We appear to have lost both power legs to key equipment. We are working with Equinix to investigate.
CONFIRMED blown PDU(s) have taken down key switches. We are working to re-route power as quickly as possible.
Power to key equipment has been restored through temporary circuits while work is completed to replace damaged PDU's.
2:45PM PST At this time all services should be operational. Please contact our helpdesk if you are not online.
Full replacement of power equipment will be conducted during our maintenance window after markets have closed to avoid any possible interruption to services.
Status changed to "Scheduled"
Updates will be provided as available.
- Date - 06/05/2018 10:56 - 06/09/2018 18:19
- Last Updated - 06/09/2018 18:19
- Priority - Critical
- Affecting Others - NYC Network
-
CNS has mitigated abusive traffic entering our network. Please contact CNS Support if you are experiencing a degraded network so we may investigate.
- Date - 06/01/2018 08:25 - 06/03/2018 13:13
- Last Updated - 06/01/2018 08:26
- Priority - Low
- Affecting Others - All Services
-
All services are subject to a brief pause on Saturday so we can complete security audits and platform maintenance.
- Date - 04/21/2018 00:00 - 05/13/2018 11:06
- Last Updated - 04/20/2018 08:52
- Priority - Low
- Affecting Others - All peering inside Equinix NY/NJ
-
Uplinks to peering routers inside Equinix will be out of service on Saturday for migration to new lower-latency equipment. During this period traffic will route over alternate paths.
- Date - 04/28/2018 00:00 - 04/20/2018 08:50
- Last Updated - 04/13/2018 16:49
- Priority - Critical
- Affecting Others - Abnormal traffic
-
After experiencing a period of several hours with no abusive traffic, we are now closing out this incident. The new filters mitigating this traffic will remain in place.
--
We are continuing to mitigate abusive traffic entering our network. This traffic earlier caused some connectivity issues in our NY datacenter. All services are reachable. Please contact our helpdesk by telephone, chat or support ticket if you are experiencing issues.
We will continue to monitor and adjust as required.
- Date - 04/10/2018 01:38
- Last Updated - 04/10/2018 08:11
- Priority - Low
- Affecting Others - NYIIX Packet Loss (IX)
-
We have temporarily shut down all peering sessions on NYIIX due to packet loss on the exchange. We do not expect much impact due to redundancy at DE-CIX in NY.
- Date - 03/26/2018 13:13 - 04/02/2018 09:11
- Last Updated - 03/26/2018 13:15
- Priority - Low
- Affecting Others - Network Peering at NYIIX
-
Our IX port at NYIIX (peering exchange) will be shut down temporarily so that it can be moved to their new platform. There may be a brief interruption to network connectivity during this period. This will begin about 30 minutes after all markets have closed.
- Date - 03/16/2018 14:30 - 04/02/2018 09:11
- Last Updated - 03/16/2018 14:19
- Priority - Low
- Affecting Others - Credit Card Payments
-
We are experiencing problems processing credit card payments. We are working on a resolution and will have it operational again as soon as possible.
We apologize for emails you may have received of a declined charge. We will attempt the charge again after the gateway is back in service.
RESOLVED
- Date - 02/28/2018 12:42 - 02/28/2018 13:17
- Last Updated - 02/28/2018 13:17
- Priority - High
- Affecting Others - All NY Hosted Services
-
We will be moving equipment to a new and larger location within the NY datacenter over the next couple Saturdays. During this time services may be unavailable as equipment is physically moved. We will resume the services as soon as possible, as they come back online in the new location.
No reboots are expected, but services will be unavailable during the actual move of the respective machine(s).
UPDATE: Almost all services in NY are back online. We are working on a couple of remaining items to bring everything up as fast as possible.
All work is complete. During this move, some backend equipment was also upgraded with new bonded 10Gb uplinks. Enjoy!
- Date - 12/09/2017 01:02 - 12/10/2017 12:04
- Last Updated - 12/10/2017 12:04
- Priority - High
- Affecting Others - All NY hosted systems
-
We are completing maintenance on our network in the NY/NJ region. There may be brief periods of no connectivity while this work is completed.
- Date - 12/02/2017 10:46 - 12/02/2017 13:20
- Last Updated - 12/02/2017 10:46
- Priority - High
- Affecting Others - Subscribers using Level 3 transit
-
We are investigating a possible service issue with Level 3 in North America. This issue is affecting subscribers whose Internet access providers use L3 and have no alternative paths.
The issue is creating lag and disconnections between subscribers and their hosted CNS service.
More information will be posted as soon as it is available.
UPDATE: The issue seems to have stabilized. We will continue to monitor and will re-open this alert if the issue returns.
- Date - 11/27/2017 09:59
- Last Updated - 11/27/2017 11:33
- Priority - Low
- Affecting Server - VMM
-
We are working to resolve a power distribution issue affecting a limited number of VMs in NY. The issue is causing CPU modulation, which results in a slow server.
UPDATE: Engineers are setting up temporary power to bring back full performance while damaged equipment is replaced.
INFO: A power issue destroyed a PDU, network switch and multiple PSU's in about 5 servers. The network in this rack switched to a redundant switch and the damaged switch has already been returned to normal service. We have completed running new (temporary) power and replacing PSU's to restore normal quality of service while a proper repair is completed.
Please let CNS Support know if you are experiencing any issues.
As QoS has been restored, priority of this issue has been reduced. We do not anticipate any further impact to services.
UPDATE: Full redundancy restored. All systems normal
- Date - 09/04/2017 10:40
- Last Updated - 09/06/2017 12:45
- Priority - Medium
- Affecting Others - Peering to AMS-IX
-
Peering to AMS-IX has flapped. This will cause a brief connectivity issue for traffic that transits the connection while routes reconverge. We are investigating the cause. We will update this notice with more information as soon as a review is complete.
---
The original notice listed the IX as DE-CIX. It is actually AMS-IX
---
We are observing continued instability in the circuit, and so the port has been administratively shut down until the issue is resolved. Traffic will route around the IX. No major impact is expected.
---
The problem circuit has been repaired and the network has been normalized.
The circuit vendor has responded that they are experiencing a major outage. We will leave the IX offline until they give the all clear.
- Date - 05/15/2017 07:32
- Last Updated - 05/17/2017 07:44
- Priority - Medium
- Affecting Others - Peering links into Amsterdam
-
Service to Amsterdam has been restored. The network has been normalized.
____________
A dedicated circuit connecting our network to an IX in Amsterdam has gone down. Possible fiber cut. We have raised a ticket with the vendor.
This may cause increased latency to parts of Europe and Africa, but it will not create an outage because there are multiple redundant routes to the affected networks.
We will update this notice as soon as service to Amsterdam IX is restored.
- Date - 05/04/2017 08:55
- Last Updated - 05/04/2017 21:35
- Priority - Critical
- Affecting Others - FXCM routing
-
We have received a report of high latency to FXCM. Our investigation has revealed that China Mobile is advertising their prefix, causing this traffic to route through their network.
We are working to filter the advertisement.
UPDATE: We have filtered out the hijacked route. All appears normal.
We will continue to monitor.
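For readers who want to run the same kind of origin check themselves, here is a minimal sketch. It is not CNS tooling: it assumes the third-party dnspython library and Team Cymru's public IP-to-ASN DNS mapping service, and the IP address and expected origin AS below are placeholders rather than FXCM's actual values.
```python
# Illustrative origin-AS check (assumes dnspython and Team Cymru's public
# IP-to-ASN mapping at origin.asn.cymru.com). Replace the placeholders with an
# address inside the affected prefix and the origin AS you expect to see.
import dns.resolver

TARGET_IP = "192.0.2.10"       # placeholder: an address inside the prefix of interest
EXPECTED_ORIGIN_ASN = "64500"  # placeholder: the legitimate origin AS number

reversed_ip = ".".join(reversed(TARGET_IP.split(".")))
answer = dns.resolver.resolve(f"{reversed_ip}.origin.asn.cymru.com", "TXT")

# TXT payload format: "ASN | prefix | country | registry | allocation date"
fields = [f.strip() for f in answer[0].strings[0].decode().split("|")]
origin_asns, prefix = fields[0], fields[1]
print(f"{TARGET_IP} is covered by {prefix}, originated by AS {origin_asns}")
if EXPECTED_ORIGIN_ASN not in origin_asns.split():
    print("WARNING: unexpected origin AS - possible route hijack")
```
If the reported origin AS differs from the one you expect, that is the signature of the kind of hijacked advertisement described in this incident.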
- Date - 01/29/2017 18:53 - 01/29/2017 23:15
- Last Updated - 01/29/2017 19:07
- Priority - Low
- Affecting Others - Subscriber Control Panel
-
Please be advised the CNS Subscriber Control Panel will be unavailable during scheduled maintenance so we can install software upgrades. If you require assistance during this time, please call the helpdesk at any number below:
San Diego, CA: +1 (619) 225-7882
Los Angeles, CA: +1 (213) 769-1787
New York, NY: +1 (646) 930-7435
London, UK: +44 (2035) 191453
- Date - 07/02/2016 00:30 - 07/04/2016 07:43
- Last Updated - 06/30/2016 15:51
- Priority - High
- Affecting Others - NY POP
-
Two circuits in the NY region have been cut somewhere outside our cage. We are working with the datacenter to get them re-run as fast as possible. In the meantime, increased latency may be experienced as we have routed around the trouble.
One of the two has been replaced. We are currently waiting on the second line to be run and will then normalize the network.
Damaged fiber has been replaced. We are now testing before normalizing the network.
All tests have passed with no packet loss. We are proceeding to normalize the network.
Network has been normalized. We are observing to confirm.
Some routes are still reconverging. We are observing.
- Date - 06/13/2016 08:03 - 06/14/2016 07:57
- Last Updated - 06/13/2016 12:05
- Priority - Low
- Affecting Others - Network
-
We are performing maintenance on our network. During this time some routes may be intermittent. We are working to complete the maintenance as fast as possible.
A hardware issue during maintenance created complications. We are finishing it up now. Apologies for the delay.
- Date - 06/12/2016 00:18 - 06/12/2016 18:20
- Last Updated - 06/12/2016 10:37
- Priority - High
- Affecting Others - UK
-
We are investigating reports of packet loss. It appears to be rooted off our network. We have routed some ISP's out and are announcing UK prefixes out of NYC (in addition to UK) while we investigate.
It does not appear to be affecting local UK links.
More to follow as it becomes available.
Connections are stable. We are still investigating the source.
Network has been normalized. Countermeasures have been deployed in response.
We will of course continue to monitor.
- Date - 05/27/2016 07:28 - 05/27/2016 11:20
- Last Updated - 05/29/2016 09:53
- Priority - Low
- Affecting Others - Traffic in/out of Malaysia and Singapore
-
Subscribers connecting from Malaysia may experience higher latency to their hosted service due to a submarine cable fault. If you are experiencing an issue, please contact CNS support. We may be able to route service around the trouble for you. The cable is expected to be repaired by March 31st.
We have sent an urgent peering request to Telekom Malaysia to peer in Europe. This should resolve the issue for all mutual subscribers. We will update after we hear back from them.
Update 3/9: Telekom Malaysia has responded positively to our peering request to peer in the UK. This will help traffic to both our UK and US datacenters. We will update this report again as soon as peering is established.
Update 3/14: Telekom Malaysia has scheduled to turn up peering with CNS in the UK on the night of 3/23-24. Telekom Malaysia already peers with CNS in Los Angeles; however, this is over the faulty cable. Adding a peering point in the UK should substantially improve connectivity to all CNS datacenters.
Update 3/30: We received information from Telekom Malaysia that the peering has been rescheduled for April 4-5.
No further updates have been received from Telekom Malaysia. We are also not receiving any reports of trouble from affected subscribers. As a result we are closing this issue but will re-open if necessary.
--
http://subtelforum.com/articles/update-from-telekom-malaysia-on-restoration-works-to-repair-submarine-cable-fault/
Press Release
Telekom Malaysia Berhad (TM) earlier announced that we have detected a fault on the Asia Submarine Cable Express (ASE) system off Singapore, which affected our Cahaya Malaysia submarine cable linking Malaysia to North Asia and the United States. ASE consists of 6 fiber pairs of which TM owns 2 fiber pairs, named as Cahaya Malaysia.
We would like to reiterate that these cable faults have caused international link outages which affected the browsing experience of not just Internet customers of TM, but also other Internet users in Malaysia and in the region as well. This is as most of the regional traffic would have to pass the affected submarine cable systems.
During this period, Internet users may experience some degree of service degradation such as slow browsing and high latency while accessing contents hosted in the United States (US), North Asia and Europe via the affected cables. However, we wish to note that our IPTV service, HyppTV is not affected by the outages as the service utilises TM’s domestic backhaul network.
Whilst we are working with our consortium members in other countries to restore Cahaya Malaysia, we are also proactively rerouting traffic to alternative routes to minimise impact to our customers. The rerouting options available in South East Asia are also hampered by minor faults and planned maintenance on several other sea submarine cables around the region, namely Asia Pacific Cable Network 2 (APCN 2), South East Asia – Japan Cable (SJC), South East Asia – Middle East – Western Europe 4 (SEA-ME-WE 4). We are actively managing this dynamic situation together with consortium partners and would like to assure customers that we are undertaking all necessary measures to ensure that customers continue to experience uninterrupted service.
We will continue to provide necessary updates on the progress of the restoration works via our official customer support accounts on Twitter @TMConnects as well as on Facebook at Everyone Connects, and will monitor feedback from our customers on the quality of service experienced.
We wish to thank our customers for their understanding and patience for this capacity reduction affecting operators in the whole region. Should you have any questions or require any assistance, you can reach us at @TMConnects or via the Everyone Connects Facebook page. We can also be contacted via email at help@tm.com.my.
- Date - 03/03/2016 07:09
- Last Updated - 05/27/2016 08:44
- Priority - Low
- Affecting Others - Peering in NYC to NYIIX
-
Our peering link to NYIIX is experiencing trouble and currently offline. There should be minimal impact as we have Equinix NYC and DE-CIX NYC peering active in the region. However, some subscribers may experience slightly higher latency until the circuit is repaired. We are waiting on an ETA from the provider.
RESTORED
- Date - 04/18/2016 22:15 - 04/19/2016 09:36
- Last Updated - 04/19/2016 09:36
- Priority - Low
- Affecting Others - UK Network Connectivity
-
During our maintenance window opening Friday, April 8 (after all markets have closed), we will perform work to return an out of service router back to service. During this time some peering links in Europe will reset, causing a brief interruption to connectivity for some end users while BGP routes reconverge. This process takes less than five minutes.
- Date - 04/08/2016 19:00 - 04/09/2016 17:03
- Last Updated - 04/09/2016 07:30
- Priority - Low
- Affecting Others - Many routers - all datacenters
-
During this maintenance window, some services may experience periods of high latency as we work to upgrade routing and switching equipment.
All upgrades are complete except UK. We are working to resolve a maintenance related issue. As a result, LINX and Equinix Connect peering is currently offline for an extended period during this maintenance window.
COMPLETE
- Date - 04/01/2016 19:00 - 04/03/2016 12:00
- Last Updated - 04/03/2016 21:05
- Priority - Medium
- Affecting Others - Any2 peering traffic in/out of Los Angeles
-
We have detected Any2 Los Angeles IX is offline. Any2 IX is the largest Internet traffic exchange on the US west coast. Traffic transiting thru Any2 will be affected until the associated networks heal or Any2 comes back online.
Update 1:57PM: We have seen stability over the past 7 minutes. We are continuing to monitor and will drop the IX if it bounces again until we hear back from the IX NOC.
Update 3:05PM: We have observed stability for the past hour. We will continue to monitor, but for now we are considering the issue closed.
- Date - 03/08/2016 13:43
- Last Updated - 03/08/2016 15:05
- Priority - Low
- Affecting Others - All virtualized services
-
Please be advised many VM's and services hosted on VM's will be hibernated for up to one hour while we complete important updates to our platform. No reboots are anticipated. This is a continuation of last week's maintenance, so if your service was affected last week it is unlikely to be affected this week.
- Date - 02/27/2016 00:00 - 02/29/2016 16:08
- Last Updated - 02/25/2016 06:52
- Priority - Low
- Affecting Others - VM Hosting Platform
-
Many VM's will be paused for up to one hour while we complete maintenance to our hosting platform. No reboot is planned. The VM will resume as soon as maintenance is complete.
- Date - 02/20/2016 00:00 - 02/21/2016 00:00
- Last Updated - 02/21/2016 13:58
- Priority - Low
- Affecting Others - Hosted VM's
-
CNS will be completing maintenance over the weekend of November 13. Many hosted services will be paused briefly so we can update our infrastructure. New provisioning will be held until maintenance is complete.
- Date - 11/13/2015 19:30 - 11/15/2015 07:51
- Last Updated - 11/14/2015 12:15
- Priority - Medium
- Affecting Others - All hosted services
-
We are applying a security patch to all public facing routers. There may be brief periods of no connectivity. This procedure will be completed well before markets open.
- Date - 11/01/2015 08:51 - 11/02/2015 11:15
- Last Updated - 11/01/2015 08:52
- Priority - High
- Affecting Others - Edge reachable by transit in UK
-
An upstream transit provider in the UK is experiencing technical issues. This has caused periods of intermittent connectivity for some traffic entering the CNS network via transit links.
Most all traffic in/out of our UK datacenter travels through peering links and is unaffected.
We have isolated the issue specifically to an upstream transit network and have configured our network to route around them by utilizing our transit networks reachable from NYC and Los Angeles. This will increase latency for traffic that travels out these links while the issue is resolved.
We are working with the transit provider to set up alternate peering. Depending on the issue within their network, this may be accomplished rather quickly. We will continue to optimize the detour until we hear back from the transit provider.
We will update this report after more information is available.
UPDATE 8/21 10:04AM PST: Alternate routes optimized. Transit latency is close to normal. We will continue to monitor. Please contact CNS support if you are experiencing any issues.
UPDATE 8/24 11:33AM PST: We have applied previous re-routing configuration after detecting more transit issues in the UK. We will post further updates as we have more information to report.
UPDATE 8/25 6:38AM PST: This issue is potentially resolved. We will return European routing to normal after we observe one hour of stability on the transit providers network. Until then, we will continue to route around in order to maintain stable connectivity.
UPDATE 8/25 9:56AM PST: We are still seeing dropouts on the European transit links. We have further optimized the detour around them, so latency should be much better - but not great. We will post further updates as we have more information to report.
UPDATE 8/26 2:53PM PST: Upstream transit issues have been declared resolved by the provider for one of two legs. After having observed one hour of no packet loss, we have now brought the transit route back online. Latency for transit routes has returned to normal. We will continue to observe the routes closely and report any issues here.
UPDATE 8/27 9:40PM PST: We will bring up the second leg of the UK transit provider's uplink during our regular maintenance window this weekend.
RESOLVED
- Date - 08/19/2015 17:34
- Last Updated - 08/27/2015 21:42
- Priority - High
- Affecting Others - Connectivity issue to Divisa and Mt. Cook FX
-
Connectivity to Divisa and Mt. Cook FX is currently impossible.
This issue appears to be rooted on a server managed by Divisa and is affecting Divisa and Mt. Cook FX subscribers only.
We are waiting on an update from them.
UPDATE: This issue appears to be resolved. We are waiting on confirmation.
UPDATE: NOT resolved. The issue is off the CNS network, with PrimeXM and/or Divisa. We will post another update as we receive further information from them.
UPDATE: Resolved by PrimeXM
- Date - 07/24/2015 08:57
- Last Updated - 08/08/2015 07:40
- Priority - Critical
- Affecting Others - IP connectivity NYC region
-
We are investigating reports of an IP issue in the NYC region.
The issue seems to be affecting one prefix hosted in our NYC datacenter. It appears to be a routing issue somewhere, likely off our network. We are working to identify it.
CNS techs can change your IP address to a different prefix for you through a support ticket. We are testing a possible work around and will post an update shortly.
The issue has been isolated to a transit route. We have taken action to route around the trouble and are now monitoring.
We have been advised the transit route in question is fully operational again. Our monitoring and analysis confirm this.
RESOLVED
- Date - 07/05/2015 16:03
- Last Updated - 07/05/2015 19:52
- Priority - Critical
- Affecting Others - ALL UK
-
We are investigating several tickets from subscribers in Germany unable to access their UK based service. This appears to be a regional ISP issue. We are working to identify alternate routes. We will post updates here as new information becomes available.
UPDATE: The issue seems to have healed itself. We are working to confirm.
UPDATE: Confirmed healed. We are continuing to monitor (just in case)
- Date - 11/23/2014 14:04 - 11/23/2014 19:37
- Last Updated - 11/23/2014 14:44
- Priority - Low
- Affecting Others - All virtual servers
-
All virtual servers will be paused briefly during this maintenance window so we can complete important maintenance to the hosting platform.
Please - make sure your Windows VM's are up to date.
- Date - 11/14/2014 19:00 - 11/16/2014 10:30
- Last Updated - 11/14/2014 10:10
- Priority - Low
- Affecting Others - All UK hosted services
-
We have been notified by our transit provider in UK that they intend to perform major maintenance on their network starting at 21:30 UTC on Friday, October 24th. This is 30 minutes after all markets have closed.
We will be cutting away the transit provider at 21:15 UTC to prevent their work from impacting the CNS network. Many ISP’s peer with us directly in Europe and for traffic to those ISP’s there will be no impact. However, some traffic will be routed over the CNS backbone and in/out thru our NYC datacenter. This will increase latency temporarily as traffic routes over the Atlantic and around the maintenance.
The maintenance period is scheduled to end at 00:30 UTC. We will restore connectivity to the transit provider after we receive the all-clear from them.
- Date - 10/17/2014 14:15 - 10/24/2014 17:09
- Last Updated - 10/23/2014 12:54
- Priority - High
- Affecting Others - SDCA
-
We are moving our San Diego datacenter to Los Angeles. Services hosted in SDCA will be on/off until Saturday afternoon. No reboots are planned. IP addresses will change.
- Date - 09/26/2014 16:44 - 10/01/2014 16:46
- Last Updated - 09/26/2014 16:46
- Priority - Low
- Affecting Others - IP traffic
-
We are investigating routing issues in the NYC area to Asia and Europe.
ON NET connections are operating normally.
We will route around the trouble as soon as possible.
We have successfully routed around the problem and are continuing to monitor the situation.
The problem has been mitigated; however, we are not going to return routing to normal until after all markets are closed.
- Date - 08/29/2014 00:57 - 08/29/2014 08:00
- Last Updated - 09/08/2014 21:57
- Priority - Medium
- Affecting Others - NYC IP traffic
-
We have been notified of emergency network work upstream, off our network, to be completed immediately in NYC. In order to avoid possible loss of connectivity, we may route traffic around thru Los Angeles or UK. Higher latency is possible until the work is complete.
We will update this page with new information as it becomes available.
UPDATE: Repairs are complete and we have returned routes to normal.
- Date - 08/28/2014 12:05 - 08/28/2014 14:21
- Last Updated - 08/28/2014 14:21
- Priority - Low
- Affecting Others - NYC and UK datacenters
-
Upgrades are scheduled to network equipment in NYC and UK. Some services may be inaccessible for brief periods while equipment is replaced.
- Date - 08/22/2014 19:00 - 08/23/2014 23:00
- Last Updated - 08/24/2014 11:37
- Priority - Critical
- Affecting Others - UK operations
-
We are detecting packet loss from Europe. VM's in our UK datacenter are online and there is partial connectivity. ON NET connections are also up. We are investigating.
Problem has been isolated to an upstream transit provider. We are working to route around them as quickly as possible.
UPDATE: We have routed around it and many routes are up; however, the problem transit provider is still announcing our routes, causing some packet loss. We are working to stop the announcement as quickly as possible.
UPDATE: Many routes have healed. Most affected subscribers should already be back to normal.
We are going to continue routing around until we hear from upstream.
* * *
We believe this issue is resolved. However, we are continuing to route around the problem source as much as possible.
If you have been impacted by this incident please open a support ticket for credit under our 100% uptime guarantee.
* * *
This is a follow-up to the previously issued service advisory regarding severe packet loss in the UK on the night of August 12th.
The cause of the issue has been identified as an upstream transit provider who experienced an issue with a full routing table in multiple Cisco routers. This caused upstream routers to drop packets, resulting in a loss of connectivity for traffic transiting that provider.
We were unable to route around the transit provider completely because they continued to announce our routes while failing to deliver the packets to our gateway. This is in the process of being resolved right now, which will enable us to cut away the transit provider completely if they experience any issues in the future. We are also working with all of our other transit providers to make sure they are not directly announcing our routes.
This issue is not unique to CNS or the upstream provider. Many ISP’s are going to need to quickly update their router software – or even replace routers completely – as global routing tables continue to fill past the limit supported by the hardware at lightning speed. You will likely read more information about this in the news in the coming days. It has already affected other providers all over the net and will continue to do so for the next several weeks. CNS routers are not affected by this limitation.
Here at CNS, we have been working for months to expand our network dramatically. And so we are in a unique position to make sure this does not affect subscribers in the future. A new leased line between our NYC and UK datacenters has finally come online, and another one is going up now between NYC and our newest datacenter in Los Angeles. This will allow us to announce our transit routes out of another CNS datacenter should a transit provider fail like they did on the night of August 12. In fact, traffic between CNS VM's in NYC and UK is already experiencing a 10ms drop in latency due to the new leased line.
We are also converting any ON NET partners using static routes to BGP peering so that their maintenance does not affect CNS traffic. We are continuing to bring online many peering exchanges. Amsterdam and Frankfurt are already online, which is why our UK datacenter remained up for many parts of Europe. LINX will be online in the next couple days. This will further remove us from the problem as it continues to unfold across the Internet.
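To illustrate the mechanism behind this advisory, the sketch below is a simple headroom calculation, not CNS's actual tooling: routers whose forwarding table (TCAM) is sized for a fixed number of IPv4 routes begin dropping or software-switching packets once the global table grows past that limit. The 512,000-route figure matches the default allocation on much of the hardware affected in August 2014; the current table size is a placeholder to be replaced with a live value from your own router or looking glass.
```python
# Minimal sketch of a routing-table headroom check (placeholder numbers, not live data).
TCAM_IPV4_ROUTE_LIMIT = 512_000       # default IPv4 TCAM allocation on affected hardware
current_global_ipv4_routes = 513_500  # placeholder; substitute a live BGP table size

headroom = TCAM_IPV4_ROUTE_LIMIT - current_global_ipv4_routes
if headroom <= 0:
    print(f"Table exceeds the TCAM limit by {-headroom} routes: expect drops or CPU switching")
elif headroom < 10_000:
    print(f"Only {headroom} routes of headroom left: plan a TCAM re-carve or software upgrade")
else:
    print(f"{headroom} routes of headroom remaining")
```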
- Date - 08/12/2014 18:16 - 08/13/2014 00:00
- Last Updated - 08/19/2014 07:31
- Priority - Critical
- Affecting Others - Limited number of Windows VM's in NYC
-
We are working to resolve a hardware failure affecting a small number of Windows VM's in NYC.
More updates will be posted here.
RAID controller failure. A tech is working on the issue right now.
Restored
- Date - 06/12/2014 18:30
- Last Updated - 06/12/2014 19:02
- Priority - Critical
- Affecting Others - IP communications via NTT to UK
-
There is currently a routing issue off our network, with NTT. NTT is a long haul transit provider many ISP's use. The issue is breaking IP communications between the UK and parts of North America for users whose ISP uses NTT for IP transit.
Users on ISPs whose traffic crosses NTT will be unable to connect with our UK datacenter until the route heals.
This is causing our datacenter to show as down for short periods in the service monitor because it's monitored from the USA. Our UK facility is actually online.
We are following this closely and will post updates here.
The route over the Atlantic has healed.
We are getting reports that some subscribers are still unable to reach their services. This issue is due to an IP transit provider your ISP is using. We are working to get an update from NTT.
The affected area seems wider. We are now getting reports from Australia and Asia. We are working to get an update from NTT.
We are seeing some routes heal. Continuing to monitor and will report back.
We have made contact with an upstream ISP and are working to chase down a responsible party for a resolution.
Possibly resolved. We are testing and will follow-up shortly.
We believe the issue has been resolved. We are working to confirm.
It was due to a problem with an upstream ISP affecting IP traffic traversing Telecity Sovereign House. Please open a support ticket if you are experiencing issues.
RESOLVED
- Date - 06/11/2014 15:14
- Last Updated - 06/11/2014 19:55
- Priority - Low
- Affecting Others - Hosted VM
-
Many VM's will be paused briefly so we can install important platform updates.
- Date - 05/16/2014 19:00 - 05/18/2014 04:00
- Last Updated - 06/11/2014 15:17
- Priority - High
- Affecting Others - All services hosted in our UK datacenter
-
We are pleased to announce the date of our UK datacenter move to Equinix LD5 has been set. The move in date is Saturday, March 8th.
On this date, all VM's hosted in our UK datacenter will go into hibernation at approximately 8AM GMT while equipment is moved to the new datacenter. No VM's will be rebooted and IP addresses will remain as is. Please check this page for status updates. We expect the move to take several hours and will be completed by Sunday morning.
We are also pleased to announce that PrimeXM will be ON NET in our UK datacenter at the same time. PrimeXM is already ON NET in our NYC datacenter.
Please do not hesitate to contact CNS Support with any questions or concerns.
UPDATE 10PM GMT: All equipment has been safely relocated. We are now working to re-patch everything back together. This process will require several hours to complete.
UPDATE 4:50AM GMT: We are slowly bringing up hosted services. IP routing is not yet optimized.
UPDATE 6AM GMT: We are continuing to bring VM's out of hibernation. IP routing is not yet optimized.
UPDATE 3:50PM GMT: All VM's are running. If your VM is not operating, please open a support ticket. IP routing is not yet fully optimized.
UPDATE 7:30PM GMT: Our migration into LD5 is complete. Many more brokers are now connected to ON NET in *both* NYC and UK. Please review the latency chart:
http://helpdesk.commercialnetworkservices.net/index.php?/Knowledgebase/Article/View/98/2/latency-to-popular-brokers-and-services-from-our-traders-vps
Please open a support ticket if you are experiencing problems connecting to the VPS or the broker.
- Date - 03/08/2014 08:00 - 03/09/2014 12:30
- Last Updated - 03/09/2014 13:29
- Priority - Critical
- Affecting Server - VMM
-
There was a period of poor connectivity to the UK-based VPS due to a DoS (Denial of Service) attack on the Data Centre network. This has now been resolved and anyone affected should be able to connect normally. Please let us know if you have any trouble connecting to your VPS.
The attack was a data flood of 30+ GB which started at approximately 9am GMT this morning.
We have been in contact with the DC and will continue to monitor the situation.
The UK has been stable for three hours now.
- Date - 03/06/2014 03:20 - 03/06/2014 00:00
- Last Updated - 03/06/2014 05:52
- Priority - Critical
- Affecting Others - Packet loss
-
Our UK datacenter experienced severe packet loss beginning at 1:27PM local time today.
Packet loss was caused by a DDoS attack on upstream providers' circuits.
DoS attack counter measures appear to be successful. As of 3:35PM (London time), we have monitored a 70 minute period of stability.
All possible countermeasures against this specific attack are active. We will continue to monitor closely and respond accordingly.
- Attack measured 30+Gb [likely to be revised up as more measurements are figured in]
- The attack flooded many upstream circuits and caused severe packet loss
- The target of the attack has been isolated
- Normal operation has been reestablished
Update 1/17/2014 8:43AM (GMT-8): The DDoS is ongoing, but being successfully mitigated.
Update 1/18/2014 8:21PM (GMT-8): The DDoS has stopped.
- Date - 01/16/2014 05:32 - 01/18/2014 20:22
- Last Updated - 01/18/2014 20:22
- Priority - Low
- Affecting Others - All hosted virtual servers & desktops
-
Please be advised all VM's will be paused briefly over the maintenance window so we can install important updates into our hosting platform. This maintenance will not result in a reboot of the VM.
- Date - 11/16/2013 00:01 - 11/16/2013 15:32
- Last Updated - 11/16/2013 15:32
- Priority - Critical
- Affecting Others - Connectivity
-
11/4/2013 1:02PM (GMT-8): RESOLVED BY VIRGIN MEDIA
We have received multiple tickets from subscribers of the ISP Virgin Media in the UK who are unable to reach their NYC services. This issue is due to an outage on the Virgin Media side - it has no effect on your CNS services. The only possible option until it is resolved is to try another ISP. Please open a support ticket if you need urgent assistance. CNS Support will be happy to login to your VM and complete any necessary tasks.
This page will be updated as we receive any new information.
UPDATE 11/3/2013 9:27PM (GMT-8): Virgin has acknowledged the issue and has estimated a resolution on 11/5. We recommend all affected subscribers open a support ticket with Virgin as it may speed up a resolution. We are working on setting up a temporary solution for affected CNS subscribers to reach NYC via our UK datacenter.
UPDATE 11/4/2013 8:36AM (GMT-8): We are working to set up a temporary proxy in our UK datacenter which will enable affected subscribers to bounce thru UK and reach NYC. We will follow-up with that information as soon as possible. If you also have service in our UK or SDCA datacenter, you can login to your NYC server via any of those other datacenters (i.e. login to the other server and, from that server, to your NYC server).
UPDATE 11/4/2013 12:08PM (GMT-8): We have setup a temporary proxy server in our UK datacenter. Utilizing the proxy server will bounce ALL your traffic through our UK datacenter (and ride our back-end network to NYC).
Note: ALL traffic from your PC will route through our UK datacenter, even traffic not traveling to our NYC datacenter. This is a temporary solution until Virgin Media restores their connection.
To use the proxy:
Install Proxifier for Windows or Mac to your PC desktop. Proxifier will force your software to use the proxy connection. (A quick way to verify the proxy is reachable is sketched after these steps.)
Get Proxifier here: http://www.proxifier.com/index.htm
Next, create a profile. It is easy to make a new one:
Select file->new profile
Then, “profile->proxy servers”
Click “add”
Address: 84.45.32.32
Port: 1080
Socks 5
Click “OK”
Click “yes” to make it the default proxy server
Click “OK” to accept any prompts and close the window.
Try to RDP to your VM.
NOTE: This proxy server will be shut down when Virgin Media has restored the connection. If the connection stops working, go into your PC’s add/remove programs and uninstall Proxifier.
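As a quick sanity check before pointing Proxifier at the proxy, the sketch below simply opens a TCP connection to the RDP port through the temporary SOCKS5 proxy. It is not part of the official instructions: it assumes the third-party PySocks package, and the NYC host name is a placeholder for your own VM's address.
```python
# Optional reachability check through the temporary SOCKS5 proxy listed above.
# Assumes the third-party PySocks package (pip install PySocks); the target host
# is a placeholder - substitute your own NYC VM's address.
import socks

PROXY_HOST, PROXY_PORT = "84.45.32.32", 1080                # temporary UK proxy from this notice
TARGET_HOST, TARGET_PORT = "your-nyc-vm.example.com", 3389  # placeholder RDP endpoint

s = socks.socksocket()
s.set_proxy(socks.SOCKS5, PROXY_HOST, PROXY_PORT)
s.settimeout(5)
try:
    s.connect((TARGET_HOST, TARGET_PORT))
    print("Reached the RDP port through the SOCKS5 proxy.")
except OSError as exc:
    print(f"Connection through the proxy failed: {exc}")
finally:
    s.close()
```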
11/4/2013 1:02PM (GMT-8): RESOLVED BY VIRGIN MEDIA
- Date - 11/03/2013 15:10 - 11/04/2013 13:06
- Last Updated - 11/04/2013 13:07
- Priority - Low
- Affecting Server - VMM
-
All Windows VM's hosted on our 2012 platform will be paused briefly during the maintenance window of 10/11/2013, so we can install an important update as quickly as possible. We anticipate the pause will last no more than 15 minutes.
- Date - 10/11/2013 19:00 - 10/13/2013 06:00
- Last Updated - 10/13/2013 15:04
- Priority - Low
- Affecting Server - VMM
-
Many VM's may be offline for about 45 minutes on Saturday and into Sunday morning while we work to replace problematic SSD disks.
- Date - 09/07/2013 00:00 - 09/08/2013 11:00
- Last Updated - 09/08/2013 10:57
- Priority - Low
- Affecting Others - Control panel & helpdesk
-
Our control panel and helpdesk will be offline sometime this weekend and lasting for several hours so we can complete a planned upgrade.
- Date - 09/06/2013 00:00 - 09/07/2013 02:00
- Last Updated - 09/08/2013 00:50
- Priority - Critical
- Affecting Server - VMM
-
Our London datacenter is reporting that an issue inside the datacenter has cut our IP connectivity. They are working towards a resolution as fast as possible. More details will be posted as they become available.
Connectivity has been restored. We are waiting on further information.
It was a possible fiber cut inside the datacenter. We are waiting on further information.
- Date - 06/23/2013 22:06 - 06/23/2013 23:20
- Last Updated - 06/23/2013 23:37
- Priority - Medium
- Affecting Others - mail.tradersvps.net
-
A severe flood of incoming mail from a third party source is causing significant delivery delays while we work to clear the problem.
- Date - 03/05/2013 00:04 - 03/05/2013 11:37
- Last Updated - 03/05/2013 11:38
- Priority - Critical
- Affecting Others - All IP services
-
We are currently investigating an intermittent connectivity issue between Australia and North America. Subscribers in Australia may be unable to reach their hosted services in North America.
We believe this is a long-haul issue over the Pacific. Hosted servers are unaffected and running normally.
We are investigating and will post updates here.
...
The connection has healed.
- Date - 10/16/2012 19:10
- Last Updated - 10/16/2012 21:13
- Priority - Low
- Affecting Others - Some UK2 VMs
-
Some UK2 virtual machines are experiencing an interruption in network connectivity.
We have rerouted traffic internally away from the problem segment to restore connectivity. We continue to work on a permanent resolution.
- Date - 08/16/2012 00:00 - 08/16/2012 20:47
- Last Updated - 08/16/2012 18:43
- Priority - Low
- Affecting Others - ALL UK2 operations
-
RESOLVED
This incident was caused by an accidental cut of all link lines during maintenance of a link provider's connectivity into the datacenter.
--
We are investigating a loss of connectivity in our UK2 datacenter.
[UPDATE 4:30PM PACIFIC GMT-8]
Datacenter technicians have identified the source of the issue with a link provider and are working towards a resolution.
[UPDATE 4:52PM PACIFIC GMT-8]
Connectivity has been restored. We are waiting on a report from the datacenter.
[UPDATE 4:57PM PACIFIC GMT-8]
Total down time was 1:05. Each affected subscriber will receive a 15% credit under the terms of our 100% uptime guarantee. Please allow 72 hours for the credit to reflect in your account.
Priority reduced to low as connectivity has been restored. We are waiting for a full report from the datacenter.
- Date - 04/25/2012 15:49 - 04/25/2012 16:52
- Last Updated - 04/26/2012 12:27
- Priority - Medium
- Affecting Others - All UK operations
-
RESOLVED by temporary routing table adjustment until problem ISP can provide a permanent fix.
A European backbone ISP is experiencing issues resulting in increased latency throughout various parts of Europe. The issue is affecting CNS subscribers reaching HotForex from our UK2 datacenter. Due to the trouble, packets are traveling to the USA and back to Webazilla.
We have contacted the problem network and are awaiting a fix.
Further updates will be posted here.
- Date - 04/05/2012 11:33
- Last Updated - 04/05/2012 16:37
- Priority - Critical
- Affecting Others - All NYC services
-
RESOLVED
This issue has been resolved by the responsible European provider.
---
We have received support tickets from subscribers in parts of Europe reporting they are unable to access their VM in our NYC datacenter. We have another ticket by a subscriber in NYC unable to ping a host in France.
This appears to be a regional connectivity issue in Europe, possibly at Level3. This issue has been escalated to Level3 and we are working to restore the route over alternate paths, if possible.
This issue has no impact on hosted applications running in the VM, or the VM's connectivity, unless the remote hosts are in the affected region.
If you are experiencing trouble outside any of the known affected regions listed below, please open a support ticket and let us know. The information will help to resolve the issue more quickly.
Known affected regions:
Italy
France
Greece
Hungary
Cyprus
- Date - 04/04/2012 13:09
- Last Updated - 04/04/2012 21:47
- Priority - Low
- Affecting Others - NYC DC
-
We are investigating tickets regarding a problem making a Zenfire connection. All tickets received so far are from our NYC datacenter.
If you are experiencing such an issue then please open a support ticket. Be sure to let us know which broker you are using.
The problem appears to be isolated to Mirus Futures.
We are investigating
This appears to be a new firewall rule put in place at the broker's network.
As of 11:37AM Pacific time, Mirus has agreed to investigate a possible problem on their side. We are waiting for their follow-up.
12:37PM: Mirus has responded: Ok, we haven’t blocked anything on our end. These logins work fine from our end. It’s definitely something on the end of the NY datacenter. I can’t provide support for the hosting of their virtual machines. If you would like to discuss hosting from our virtual machines I can send you over an e-mail address to contact our virtual machine rep.
--
We are investigating a possible name server issue with the rithmic.com domain, which is a domain the broker platform uses to resolve IP addresses.
Missing nameservers reported by parent FAIL: The following nameservers are listed at your nameservers as nameservers for your domain, but are not listed at the parent nameservers (see RFC2181 5.4.1). You need to make sure that these nameservers are working. If they are not working ok, you may have problems!
ritpz03001.rithmic.com
omnebb00420.rithmic.com
ritmz01002.rithmic.com
ritpz03000.rithmic.com
ritmz01001.rithmic.com
Missing nameservers reported by your nameservers ERROR: One or more of the nameservers listed at the parent servers are not listed as NS records at your nameservers. The problem NS records are:
ritmz01001.01.rithmic.com
ritmz01001-eth2.01.rithmic.com
ritmz01002-eth1.01.rithmic.com
ritmz01002.01.rithmic.com
This is listed as an ERROR because there are some cases where nasty problems can occur (if the TTLs vary from the NS records at the root servers and the NS records point to your own domain, for example).
1:35PM: This problem is also affecting traders unrelated to Mirus. It is a Zenfire issue, rooted at the rithmic.com domain. We are working to make contact with the appropriate person.
2:05PM: We have manually forced a DNS refresh against a good rithmic.com name server and DNS records are now resolving. This domain has a 4 week TTL, and so it may come back if it is not corrected by the rithmic.com admin in the next 4 weeks. We are still working to make contact with the responsible admin. The 'health' of the domain record can be found at this third-party site: http://www.intodns.com/rithmic.com
Priority reduced to low as connectivity has been restored by manual DNS refresh to a good rithmic.com name server.
2:26PM: Zenfire admin has responded and is working to resolve the issue.
2:39PM: [CNS analysis] This issue appears to have occurred because the ritpz03001.rithmic.com name server is down. The domain record also contains errors in the NS records, which are preventing the DNS servers from resolving the domain with other rithmic.com name servers.
2/28/2012 7:41AM: Rithmic.com is disputing that their DNS records contain any errors. So we ran a test against a second third-party DNS testing tool and found even more errors: http://dnscheck.pingdom.com/?domain=rithmic.com We advise all traders using Zenfire connections to use caution until Rithmic.com corrects their DNS issue(s).
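For readers who want to reproduce the kind of check the third-party tools above perform, the sketch below compares the NS records delegated by the parent (.com) servers against the NS records published in the zone itself; a mismatch between the two sets corresponds to the FAIL/ERROR conditions reported above. This is a minimal illustration only, assuming the dnspython library is installed and using a.gtld-servers.net (192.5.6.30) as one representative parent server; it is not part of our tooling.

```python
# Minimal sketch: compare parent-delegated NS records with the zone's own NS records.
# Assumes the dnspython package; the choice of parent server is illustrative.
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

DOMAIN = "rithmic.com"
PARENT_SERVER = "192.5.6.30"  # a.gtld-servers.net, one of the .com gTLD servers

# Ask a parent (.com) server directly; the delegation NS records come back
# in the AUTHORITY section of the referral.
referral = dns.query.udp(dns.message.make_query(DOMAIN, dns.rdatatype.NS),
                         PARENT_SERVER, timeout=5)
parent_ns = {rr.target.to_text().lower()
             for rrset in referral.authority
             if rrset.rdtype == dns.rdatatype.NS
             for rr in rrset}

# Ask the zone itself (via a normal recursive lookup) what NS records it publishes.
zone_ns = {rr.target.to_text().lower()
           for rr in dns.resolver.resolve(DOMAIN, "NS")}

print("Delegated at parent but missing from zone:", sorted(parent_ns - zone_ns))
print("Published in zone but missing at parent: ", sorted(zone_ns - parent_ns))
```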
- Date - 02/27/2012 10:28
- Last Updated - 02/28/2012 07:43
- Priority - Medium
- Affecting Others - UK2 datacenter
-
Malicious traffic directed at our UK2 datacenter caused latency to spike for about 30 minutes while equipment worked to deny the packets. We have successfully nulled the malicious traffic and latency is back to normal levels. We are continuing to monitor. Event severity has been reduced from Critical to Medium as impact to our subscribers has been eliminated.
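For context on what "nulled the malicious traffic" means in practice: a common mitigation is to install a blackhole (null) route for the targeted prefix at the network edge so matching packets are dropped before they reach subscriber equipment. The sketch below is purely illustrative and assumes a Linux-based router with the iproute2 `ip` utility; the prefix is an RFC 5737 documentation range, not the address space that was actually involved.

```python
# Hypothetical sketch of destination-based null routing on a Linux edge router.
# The prefix is a placeholder (RFC 5737 documentation range), not real subscriber space.
import ipaddress
import subprocess

def blackhole(prefix: str) -> None:
    """Install a blackhole route so traffic toward the prefix is silently discarded."""
    net = ipaddress.ip_network(prefix)  # validate before touching the routing table
    subprocess.run(["ip", "route", "add", "blackhole", str(net)], check=True)

def restore(prefix: str) -> None:
    """Remove the blackhole route once the attack has subsided."""
    net = ipaddress.ip_network(prefix)
    subprocess.run(["ip", "route", "del", "blackhole", str(net)], check=True)

if __name__ == "__main__":
    blackhole("203.0.113.0/24")
```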
- Date - 02/24/2012 09:17
- Last Updated - 02/24/2012 10:51
- Priority - Critical
- Affecting Others - All NYC services
-
Some subscribers have reported severe packet loss in the NYC area. We are currently investigating.
This issue was caused by abuse. The traffic has been stopped and we are currently evaluating.
No further packet loss has been detected after blocking the offending traffic. - RESOLVED
- Date - 11/17/2011 18:10
- Last Updated - 11/17/2011 18:43
- Priority - Critical
- Affecting Others - UK VPS
-
A small number of VMs in our UK datacenter are offline. Symptoms point to a virtual switch. We are investigating.
Service has already been restored, but we are continuing to investigate.
- Date - 11/08/2011 00:03
- Last Updated - 11/08/2011 01:45
- Priority - Low
- Affecting Others - NYC Datacenter
-
On Saturday, September 17th, we will be completing a scheduled upgrade of network and power equipment. During this time, connectivity to VMs in our NYC DC may be interrupted briefly.
- Date - 09/17/2011 00:00 - 09/18/2011 00:55
- Last Updated - 09/16/2011 10:48
- Priority - Low
- Affecting Others - SDCA datacenter
-
This is to notify you that AIS will be performing routine maintenance on Saturday, September 24, 2011.
Type of Maintenance: Electrical
Location: 9725 Scranton Rd. San Diego, CA 92121
Purpose: AIS electrical engineers will be completing the installation of fuse status LEDs on the main switchgear and the utility service entrance over-current protection device. The facility will have access to utility service power (and generator power if necessary) throughout the process. Once the installation is complete, the system will be tested by initiating a "simulated loss of utility" transfer from utility power to generator power, and then a return to utility power. Utility power will be available immediately throughout the transfer process in the event of any contingency.
Window Start: 9/24/2011 - 09:00am PDT
Window End: 9/24/2011 - 8:00pm PDT
Service Impact: This maintenance is not expected to be service impacting.
Schedule: The window for maintenance is scheduled to begin at 9:00am on Saturday, September 24, 2011, and end at 8:00pm on Saturday, September 24, 2011. Should additional time be required, notice will be provided and the maintenance window will be expanded.
Testing & Planning: All testing and planning being conducted during this window is part of a pre-defined checklist designed by the AIS electrical engineering team and consistent with IEEE and UL standards.
Regression Planning: The AIS electrical engineering team will be on-site managing this window. As with any mechanical work, while highly unlikely, there is a possibility that something unexpected may occur during the work process. Should any issues arise, all equipment will be placed back into standard operation and the work will be postponed until the issue is resolved.
- Date - 09/24/2011 00:00 - 09/24/2011 20:41
- Last Updated - 09/16/2011 09:46
- Priority - Low
- Affecting Others - SDCA Datacenter
-
NOTE: The maintenance has been postponed until further notice.
On Saturday, June 25th, AIS will be performing generator testing. This test will include a controlled transfer from utility power to generator and return. The critical production environment will continue to remain on UPS power and should not see any impact. We will remain on generator for approximately 30 minutes and then perform a controlled transfer back to utility power.
Type of Maintenance: Electrical
Location: San Diego
Purpose: AIS electrical engineers will be confirming the functionality of the emergency bus system to respond to a loss of utility service.
Window Start: 6/25/2011 - 6:00am PDT
Window End: 6/25/2011 - 9:00am PDT
Service Impact: This maintenance is not expected to be service impacting.
Schedule: The window for maintenance is scheduled to begin at 6:00am on Saturday, June 25, 2011, and end at 9:00am on Saturday, June 25, 2011. Should additional time be required, notice will be provided and the maintenance window will be expanded.
Testing & Planning: All testing and planning being conducted during this window is part of a pre-defined checklist designed by the AIS electrical engineering team and consistent with IEEE and UL standards.
Regression Planning: The AIS electrical engineering team will be on-site managing this window. As with any mechanical work, while highly unlikely, there is a possibility that something unexpected may occur during the work process. Should any issues arise, all equipment will be placed back into standard operation and the work will be postponed until the issue is resolved.
- Date - 06/25/2011 00:00 - 06/25/2011 00:00
- Last Updated - 06/26/2011 00:42
- Priority - Critical
- Affecting Others - All hosted services in San Diego datacenter
-
Our San Diego datacenter has experienced a major power failure. All systems have rebooted. If you are unable to connect to your online service, please open an emergency support ticket. We will continue to post more information as soon as possible.
Official explanation from the facility:
Services Affected: All Electro-Mechanical and most Infrastructure Support systems.
Event Start: 13:47 PDT
Event End: 14:19 PDT
Power Outage Duration: 32 Minutes
End User Impact: All Client IT hardware and all applications lost power / production.
Summary: SDGE experienced a major interruption at the sub-station feed to the SDTC at 9725 Scranton Rd. Indications are that a ground-fault occurred at the sub-station; the definitive cause of the anomaly is unknown and under investigation by the utility service provider (SDGE). The outage affected the entire grid and impacted over 5970 customers including AIS.
As a consequence of the fault, the emergency stand-by engine generator system came up but was prevented from closing to the power distribution bus due to a loss of control voltage logic. The UPS system supporting the logic control relays was lost as a consequence of the fault, and the primary-side control transformer fuses were blown.
The production UPS system remained online for 16 minutes (until 14:04 PDT), supporting the critical load; however, during this period a transfer to generator did not occur due to the issues cited previously. At the end of the 16-minute interval, the UPS batteries fully discharged and the UPS system supporting the production COLO floor client IT hardware and applications tripped offline.
Approximately 15-16 minutes later (14:19 PDT), AIS engineers performing the electrical system troubleshooting identified the cause of the failure of the transfer control circuit and performed a manual closure onto the generator bus. This restored power to the control circuit UPS; the transfer control relays were then energized and power was restored.
Once placed back into automatic mode, the system automatically re-transferred to utility power, as utility service had been restored by this time. The production UPS system returned to power in "bypass" mode; all (4) UPS modules had already tripped offline. Power was now restored to all production IT hardware.
Mechanical systems support was manually brought online approximately 15 minutes later (14:34 PDT) providing environmental conditioning to the production environment.
UPS system trouble-shooting continued; all (4) modules were reset and placed back into production and the UPS system taken from "bypass" to UPS power approximately 19 minutes later (14:52 PDT).
By 15:07 PDT, the data center was stable with environmental conditions and power distribution system operating at nominal.
Root Cause: All issues are currently under investigation; however, the following determinations are pending completed forensic analysis:
1. Loss of control voltage / transfer control logic: fault current at the CT fuses and dedicated UPS.
2. OCPD (breaker) lock-out: loss of the control relay sensing due to blown fuses (above).
3. SDGE ground fault: pending dialogue with SDGE (currently unknown); meeting scheduled 06/24/11.
- Date - 06/22/2011 15:16 - 06/22/2011 00:00
- Last Updated - 06/23/2011 19:15
Server Status
Below is a real-time list of the status of our servers, where you can check whether there are any problems.
Server | HTTP | FTP | POP3 | PHP Info | Server Load | Uptime |
---|---|---|---|---|---|---|
Winthrop | | | | PHP Info | | |