Archive for January, 2008

2007 – The Blog in Review

Okay, this is a super easy blog post. These are just 2007 stats from this website.

I made 47 blog entries, which led to 40 comments (okay – a few of the comments were mine). While my posting frequency was down from 2006, traffic nearly doubled! The site averaged about 3,500 page hits per month. Keep in mind that some of that traffic comes from the spiders that crawl the Internet – but still cool.

The top referring site is Google (both search and images). Top client browser is still Microsoft’s IE, but Firefox is a close second. Top five search criteria that brought people to my website in 2007 (with links to the articles where relevant) …

1. yenrab (this blog!)
2. 10.77.77.77
3. press to meco
4. bl460c slow boot
5. clarion san training

So technology and NASA related posts seem to rule the roost when it comes to search engine traffic.


The road to recovery … Phase 2

Thanks to those of you who have expressed your support and concern over the last week – it is greatly appreciated.  A special thanks to Amy, who sent us everything from server parts to boxes of candy (all of which we needed)!  While our email server is not back to “normal” yet, we have made progress.  The first phase was restoring basic functionality (blank mailfiles and working BlackBerrys).  That is mostly done now, and we are into the planning for our phase 2 recovery.

Not all the details have been hashed out yet, but two things are clear.  First, the server-based mailfiles may contain some data that doesn’t exist anywhere else.  Second, the tape backups are going to be our cleanest and most reliable copies.  So we don’t want to lose the server-based mailfiles, but we need a place to restore the tape copies … all 1.8 TB of them!

So this weekend I began moving the server-based mailfiles to another server.  It may be a Gigabit Ethernet network, but SATA drive arrays are not that fast (we are moving from the production FC drives to archive SATA drives).  It took a little over 24 hours to move the first TB.  That process will continue through the night and should be done sometime tomorrow morning.
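For the curious, a little back-of-envelope math shows why the disks (not the network) are the bottleneck here. The figures below are the ones from this post; the helper function is purely illustrative:

```python
# Rough throughput math for the mailfile move described above.
# Numbers from the post: ~1 TB moved in a little over 24 hours.

def transfer_hours(size_gb: float, rate_mb_s: float) -> float:
    """Hours needed to move size_gb at a sustained rate of rate_mb_s (MB/s)."""
    return (size_gb * 1024) / rate_mb_s / 3600

# Gigabit Ethernet tops out around 110-120 MB/s in practice,
# so the wire alone could move 1 TB in a few hours:
wire_limit = transfer_hours(1024, 115)

# But 1 TB in ~24 hours works out to only about 12 MB/s sustained,
# which is the archive SATA array doing the limiting:
observed_rate = (1024 * 1024) / (24 * 3600)  # MB/s

print(f"network-limited time: {wire_limit:.1f} h")
print(f"observed rate:        {observed_rate:.1f} MB/s")
```

In other words, the network could have finished the copy in an afternoon; the destination spindles stretched it to a full day.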

At that point we will have high speed storage with enough capacity to handle the tape restore.  We don’t really need the FC disk for the restore process, but it will be critical when we get to the heavy lifting of merging mailbox contents (old and new must somehow meet again).  So that is about it for now.  Sorry I can’t promise to write daily, but I will try to keep this up-to-date.
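That merge step is the tricky part. Conceptually it is just a keyed union of two copies of each mailbox – a minimal sketch, assuming every message can be keyed by a unique ID (in Domino terms that would be the document UNID; the function and variable names here are hypothetical):

```python
# Hypothetical sketch of merging two copies of a mailbox by message ID.
# new_msgs: mail from the server-based mailfiles (written since the crash)
# old_msgs: mail restored from the tape backups (the cleaner copies)

def merge_mailboxes(old_msgs: dict, new_msgs: dict) -> dict:
    """Union of both copies, keyed by message ID; on conflicting IDs,
    prefer the restored (tape) copy since it is the more reliable one."""
    merged = dict(new_msgs)   # start from the post-crash mail
    merged.update(old_msgs)   # tape copies win on duplicate IDs
    return merged

old = {"id1": "hello", "id2": "report"}
new = {"id2": "report (damaged)", "id3": "new mail"}
print(merge_mailboxes(old, new))
# three messages total; id2 comes from the tape restore
```

The real job is messier, of course – folder placement and unread marks have to survive the merge too – but the keyed-union idea is the core of it.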


Part 2 of “A Day in the Life”

It’s late (or early depending on how you look at it). I’m exhausted, and I’m still at the office. I didn’t get much quality sleep last night, and this day started at 5:30 AM. It is now almost 12:30 AM … so a little over 19 hours so far.

Short server update. Maintenance ran for 10+ hours. Mailboxes were working, but mail is still missing from various views. The server crashed again only an hour or two into the business day. Down the tubes goes all that database maintenance from last night. I’m here to tell you – IBM is clueless when it comes to supporting Lotus Domino.

A new plan in three parts:

1. Build a new server from equipment on hand
2. Create 450+ empty mailboxes (one for each person)
3. Reconfigure each client to point to the new server

The goal was to provide a stable server so we could continue to communicate with our customers while we sort out the whole mess. Our IT department did all of the above TODAY. We even had a power supply for the drive array sent via same-day courier from another office almost 200 miles away. It was some really great teamwork under intense pressure. The fun continues. Now we just have to figure out how to put Humpty Dumpty back together again.


A Day in the Life

Ever have one of those days where nothing turned out the way you expected when you woke up in the morning?  Probably – most of us have.  Today was my turn.  I woke up to an email (which was followed by a phone call) saying that we had a mail server down at work.  And not just any mail server – it was the one that all our big wigs use.  Now I’m not involved with servers, storage, or backups on a day-to-day basis, but this was an “all hands” call kind of problem.

What started as a reboot led to mail databases that required consistency checks.  Just about when we had those cleared up, we realized no mail was routing (in, out, or intra-server).  We couldn’t solve the mail routing issue without bouncing the server again.  So now mail was flowing again.  But it seems that a large number of users’ mailboxes are damaged (inboxes and folders missing, or missing content).  Throughout the afternoon the problem got worse as the email server software restarted itself (multiple times).

So right now the server is down for “offline” maintenance.  That should take 5 or 6 hours to run.  We will be backing off a software patch from the previous weekend (to rule out any possible connection that update might have to the events of today).  After the maintenance and patch back-out are done, we will restore (from tape) about 30 mailboxes.  Then we have to compare inbox and folder contents from the repaired mailboxes to the restored copies.  Sounds like fun, right?
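The comparison itself is simple set arithmetic, even if doing it 30 times over isn’t. A sketch of the idea, assuming each message can be keyed by an ID (the names here are hypothetical, not any Domino API):

```python
# Hypothetical sketch of the mailbox comparison: find what the crash ate.
# Each mailbox copy is modeled as a set of message IDs.

def missing_from_repaired(repaired: set, restored: set) -> set:
    """Message IDs present in the tape-restored copy but absent from the
    repaired (post-crash) mailbox - i.e. candidates for lost mail."""
    return restored - repaired

repaired = {"m1", "m3"}
restored = {"m1", "m2", "m3"}
print(sorted(missing_from_repaired(repaired, restored)))  # ['m2']
```

Anything the repaired copy has that the restore lacks is just mail that arrived after the backup, so the interesting direction is the one shown.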

So I’m going to take a nap.  I expect to begin working around midnight or 1 AM.  Sometimes this is how it goes in the IT business.  Hope your day was less stressful than mine.  🙂


Creative Commons License