ASayre
Retired RunUO Developer
Finished with Downtime on 7-31
Hello. On July 27th, one of the hard drives in our RAID 5 array failed. Thanks to the RAID, all of our data is thankfully still in place. Some of you noticed, though, that saves began to take much longer; running with a degraded drive will do that.
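For anyone curious what "degraded" means in practice, here's a minimal sketch of how you could spot it, assuming a Linux software RAID exposed through /proc/mdstat (our actual setup may differ, and a hardware controller wouldn't show up this way):

    # Sketch: detect a degraded Linux software RAID by parsing /proc/mdstat.
    # Assumes md (software) RAID; hardware RAID controllers won't appear here.
    import re

    def degraded_arrays(mdstat_path="/proc/mdstat"):
        """Return the md arrays whose status line shows a missing member, e.g. [UU_]."""
        degraded, current = [], None
        with open(mdstat_path) as f:
            for line in f:
                match = re.match(r"^(md\d+)\s*:", line)
                if match:
                    current = match.group(1)
                # A status like [UUU] means healthy; any "_" marks a failed disk.
                elif current and re.search(r"\[U*_+U*\]", line):
                    degraded.append(current)
        return degraded

    if __name__ == "__main__":
        bad = degraded_arrays()
        print("Degraded arrays:", ", ".join(bad) if bad else "none")

In a degraded RAID 5, anything that would have been read from the missing disk has to be reconstructed from parity across the remaining drives, which is why the saves slowed down.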
The last save before the crash did NOT complete. This means there'll be a small revert when we do get it back up. It was unavoidable.
All of the data is backed up and safe, and it is still fine and present on the Demise server itself. It also exists in two other locations: the web server, which has an even more aggressive safety system, and my home computer.
Rather than risk losing the entire array, though, we brought the server down and are currently working with our host, EV1 Servers, to get the drive replaced and the array rebuilt; after that we should carry on like normal.
In response to this, though, we have upped our backup policy to include an automatic, nightly backup to a different computer. This backup will consist of everything: scripts, saves, data, etc.
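For those interested in the mechanics, here's a minimal sketch of what a nightly job like that can look like. All of the paths and the host name below are hypothetical, for illustration only, and are not our actual layout:

    # Sketch of a nightly full backup: archive everything, then copy it off-box.
    # SOURCES, STAGING, and "backup-box" are hypothetical, not our real setup.
    import datetime
    import subprocess
    import tarfile
    from pathlib import Path

    SOURCES = [Path("/opt/demise/Scripts"),
               Path("/opt/demise/Saves"),
               Path("/opt/demise/Data")]
    STAGING = Path("/var/backups")

    def nightly_backup():
        stamp = datetime.date.today().isoformat()
        archive = STAGING / f"demise-{stamp}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            for src in SOURCES:
                tar.add(src, arcname=src.name)  # scripts, saves, data, etc.
        # Copy the archive to a different machine, so a disk failure on the
        # game server can't take the backups down with it.
        subprocess.run(["scp", str(archive), "backup-box:/backups/"], check=True)

    if __name__ == "__main__":
        nightly_backup()  # run from cron, e.g. "0 3 * * *" for 3AM nightly

The point of pushing the archive to a second machine is exactly what bit us here: a backup that lives on the same array as the live data dies with the array.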
Thanks for your patience,
The Demise Staff
(There is no current ETA; it will take however long the drive replacement and rebuild from the other drives take. Asking for an ETA won't make it go any faster; we're doing everything we can to get it back up ASAP.)
EDIT 11pm PST: The faulty drive has been replaced and is in the process of being rebuilt.
EDIT 1AM: The previous report was inaccurate. The replacement is NOW complete, and the rebuild is underway. The EV1 tech estimated ABOUT 8 hours, and that's a minimum. They said they'll call me (which will wake me from my much-needed sleep) so I can bring the server back up when it's done!
EDIT 11AM: The server is BACK UP. Sorry for the extra 2 hours between the rebuild finishing and me bringing it up; I was asleep!
Anyway, it's all fixed, and we now have a total of FOUR drives in our RAID. Thanks for your continued patience!