EOD 4th June

In the early hours of this morning one of our new data centres here in Sheffield lost power, so it was an early start for the network crew - and for our Bob too, manfully putting out the Service Status and then heading to the gym, athletic soul that he is. Most affected services were back up and running by 9am, but there were some residual issues. We're now fairly confident that all customer-facing problems have been addressed or are in the process of being fixed.

Thu 4 Jun 2009, 07:46 - Data Centre Outage (56812) - NEW
Thu 4 Jun 2009, 09:11 - Data Centre Outage (56812) - UPDATE
Thu 4 Jun 2009, 11:40 - Data Centre Outage (56812) - UPDATE
Thu 4 Jun 2009, 18:18 - Data Centre Outage (56812) - UPDATE

Keep up to date with our Twitter feed: http://twitter.com/plusnet

Here's Nick from the CSC with a staff's-eye view of the day:

"Howdy, all! All fun and games here today on the CSC floor after a power outage at one of our main data centres caused a lot of aggravation this morning. It's made today extremely busy, and now I have a slight headache. For a short while we couldn't load up any customer details when they called in, which, although inconvenient, did test our detective skills somewhat when it came to diagnosing faults - and given that we couldn't see customers' connection logs for much of the day, it also tested our psychic abilities (I don't have any, I have discovered). Big sighs of relief, then, when the brave network engineers on the front line got it fixed and we could finally get back up to our normal standards. My apologies to any of you out there who were affected by this, or who tried to call in and had to wait in the longer-than-normal call queues today.

Other than that, it's good to be back on tech calls after a month on the faults team - although that was an excellent experience, and we now have three more tech agents on secondment over there. It's all part of the grand master plan to make everyone here an expert, I guess!
It certainly makes my job easier, and helps me attain the holy grail of the first-time fix, since it means less referring of customers to other people. So anyway, what have we been dealing with here then -

  1. Intermittent faults, first touch. I've decided to give customers a little more detail about why we ask them to run through the long list of checks when they ring in with a fault. Not enough people actually complete them all, and they're incredibly important. I've found that a little explanation goes a long way, rather than just asking them to do it without saying why.
  2. No sync faults
  3. Email configuration - The normal problems, really. People not getting the right usernames, etc. Nothing too exciting, but relatively easy to help with so not so bad!
  4. Provisioning updates - A good, healthy number of provisioning updates. It's always good to talk to new customers anyway.
  5. Router configuration

So, nearly the end of a busy day for me then. The headache isn't helped by the building work being done next door - loud drilling isn't my thing, I guess. Anyway, in true Jerry Springer style: look after yourselves, aaaaand each other."

Well, it's good that the drilling noise didn't go on too long.

Carl in Networks is so busy he's just given me this update: Network Transformation are continuing to work on the new data centre in Sheffield and providing support. Network Ops are getting the platform back after the power outage in one of our Sheffield data centres and continuing to deal with questions and problems... and we're all ace. And that, of course, is true.

PJ out.

 
