Wednesday, September 24, 2014

How to restore a file with StorageCraft ShadowProtect

I've installed ShadowProtect on most of my clients' servers - it's a great product, and if you're not using it for backups then seriously consider it. One of our sub-contractors emailed me with some issues restoring files, so I thought I'd add my reply to him here as a quick cheat sheet:

  • Log onto the server you need to restore the file from
  • Open up the share where your backups are going
  • Browse through the list of files and look for a -cd.spi, -cw.spi or -cm.spi file around the correct date
    • -cd.spi is a consolidated daily
    • -cw.spi is a consolidated weekly
    • -cm.spi is a consolidated monthly
    • for a full listing see here: http://bit.ly/1mNLBps
  • once you've found the correct file, right click on it and choose ShadowProtect Mount
  • pick the defaults, except when it comes to the right date - the consolidated files have a list of possible days / weeks that you can choose - find the right one and click on it, then go Next
  • mount the file as read-only unless you need read-write access
  • the computer will mount the drive as a new drive letter
  • browse through that drive until you find the file you want to restore, then copy and paste it to the right location (see the robocopy tip after this list) and that's it
  • unmount the backup image and all finished! 
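One extra tip: if the file's NTFS permissions matter, plain copy and paste won't bring them all across. Here's a hedged one-liner, assuming the backup mounted as X: and the file lives under X:\Users\jsmith\Documents (both made-up paths for illustration), run from an elevated prompt:

robocopy "X:\Users\jsmith\Documents" "D:\Users\jsmith\Documents" report.docx /COPYALL

/COPYALL carries the data plus attributes, timestamps and security info, which saves re-permissioning the file afterwards.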

It's important to note this is only one of two ways of doing it. You can also use the wizard that is part of ShadowProtect, and it's even easier. At the time I was in a hurry and had to find multiple awful files, so I used this method - plus I find it's the method I use for Granular Recovery for Exchange restores.

Friday, August 8, 2014

Restoring Windows SharePoint Services 3.0 - from disaster to victory beer!

Recently during a server upgrade I applied SP3 to Windows SharePoint Services 3.0. This particular server had seen no love in a long, long time and it needed an absolute slew of updates. Naturally, SharePoint broke and the site loved by my client was unavailable, as were many other services.

The errors in the Event Log were varied and painful, with lots of vague references to the apocalypse and the like. Naturally the logs get incredibly dense, and I had another issue to contend with along the way - disk corruption. NTFS was reporting corruption, and it had taken out a chunk of the SharePoint wizard's configuration. That obviously had to be fixed first and was very worrisome - especially given I was working on a RAID 1 mirror.

Normally, because the database needs an upgrade when you apply SP3, if it doesn't start straight up you can run the SharePoint Products and Technologies Configuration Wizard to repair it. Failing that, you can disconnect from the farm, fix the database issues, then re-run the wizard and connect back to the farm. With the disk issues, and with the systems admin's failure to fully apply all the updates, none of this was working - in fact the Wizard was failing spectacularly.

Here's where things got to, from my notes:
  • Ran the config wizard and told it to disconnect from the server farm per MS documentation
  • re-ran config wizard - it is now reporting that IIS is not working properly.
  • have looked into this - suggestion is that a compatibility mode setting has not been applied. Unable to apply this in Windows Server 2003.
  • have run a repair on WSS 3.0 - this requires a reboot
  • many, many ASP.NET 2.blah errors. All non-descriptive and very dense to understand without being a .NET programmer.
So we were right up that fabled creek without a paddle. I finished patching the system, which sorted out the issues with ASP.NET. I still had no connectivity to SharePoint, so I ran through some more updates and managed to partially get the SharePoint Admin site up. I was still getting all sorts of errors and came across a post that suggested I change the ASP.NET version of the Admin site to 2.0.whatever. You can get to this via the IIS management tool: right click on the website, go to the ASP.NET tab and edit the configuration, altering it to the version you want. I did this and it made no difference, but after restarting IIS the admin site came up. Awesome sauce. There were also a few permission changes I needed to make - the Network Service account had somehow lost access to the content database.

I had a backup of all the WSS databases, and the databases themselves were actually still running on the server. What I didn't realise, and what I hope you, gentle reader, can take from this, is that the restore was far easier than I thought. I removed SharePoint from IIS and created a new web application. I also created a new site collection and a new database. From here I went to Content Databases and added in the old content database, but I still couldn't get the right site to come up. In fact, the old content DB and the new one conflicted and I had no access to anything. What I should have done was this (all through the WSS Central Administration site):

  • create a web application
  • in Content Databases add the old content database - you may have to use the stsadm command to do it, which is:
    • stsadm -o addcontentdb -url http://server -databasename WSS_Content (WSS_Content is the default name)
  • Check under Site Collection List - you should see your old site there
  • restart IIS and check the site (a command-line sketch of the whole sequence is below).
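For reference, here's what that sequence looks like from a command prompt on the WSS server - a sketch only, with http://server and WSS_Content standing in for your real URL and database name. stsadm lives in the 12 hive BIN directory (%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN) if it's not on your path:

stsadm -o addcontentdb -url http://server -databasename WSS_Content
stsadm -o enumsites -url http://server
iisreset

enumsites should list the old site collection(s) held in WSS_Content - if it comes back empty, the wrong database got attached.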
Where I had a lot of pain was that I didn't realise the old site was held within the WSS_Content database and I didn't need to add a new site or create a new site collection. How remarkably painful is all I can say. I hope in future that it'll be a bit easier during upgrades.

Tuesday, July 15, 2014

Upgrading DragonFlyBSD

I always forget how to do this, so I'm documenting it here. The DragonFlyBSD website is quite good and this all comes from www.dragonflybsd.org/docs/newhandbook/Upgrading

Firstly, make sure the Makefile is present:

# cd /usr
# make src-create

and wait while it does its thing.

Then we need to get the new source to build it:

# cd /usr/src
# git checkout DragonFly_RELEASE_3_8 (which is the current one)

To find out what the current one is:

# cd /usr/src
# git pull
# git branch -r

Then the build and upgrade process:

# cd /usr/src
# make buildworld
# make buildkernel
# make installkernel
# make installworld
# make upgrade
# reboot

And it should all be done.
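If you want to confirm the upgrade took, check the kernel version once the box is back up:

# uname -a

It should report the release you just built.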

Monday, June 9, 2014

Adventures with Crashplan for backups

Recently, through the excellent SAGE-AU (www.sage-au.org.au), I read about CrashPlan. It's produced by Code42 and found here: http://www.code42.com/crashplan/ - there were lots of positive comments about it. I've since deployed it in two separate locations - in the office and at home. I'm using the free implementation at the moment, which allows you a backup each day to a variety of places. They include a 30-day trial of their cloud backup solution, which is quite cheap for a home implementation - $165/year for 2-10 computers. Check out the full pricing, but see what you can do with the free version first.

At the office we have a straight Microsoft Windows environment - Windows 7, Server 2008 R2 and a wee Windows 8 here and there. I've set up a CrashPlan account using a single email address and installed it on almost all our machines. I have a Windows Server 2012 machine running in our virtual environment and I'm using it as the base for all the backups to go to. I added a 2TB virtual disk to it, configured CrashPlan and started pointing machines back to it. It's working brilliantly! As they say though, backups are optional - restores are mandatory. Since implementation I've had to run three separate restores, covering everything from weird application files to basic Word documents, and it's run flawlessly!

At home I've been messing with it too. I've installed it on my Linux Mint desktop, which runs all the time and has an NFS share back to my FreeNAS. I've set up CrashPlan to use that location for backups, and I have the wife's Windows 7 laptop, my MacBook Air and my Windows 8 PC all backing up to that location now. Totally cool! CrashPlan has installed and worked on all the machines without any issues, complications or anything. It's excellent!
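If you're curious how the NFS piece hangs together, here's a minimal sketch on the Mint box - the FreeNAS hostname (freenas) and export path (/mnt/tank/backups) are made-up examples, so substitute your own:

# apt-get install nfs-common
# mkdir -p /mnt/freenas-backups
# mount -t nfs freenas:/mnt/tank/backups /mnt/freenas-backups

Then point CrashPlan's backup destination at /mnt/freenas-backups. An equivalent line in /etc/fstab makes the mount survive reboots.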

Emails are sent from CrashPlan to notify you if machines are backing up properly or haven't backed up for a given amount of time, which is very handy. Our offsite techs are frequently away for days, and as soon as they get back their laptops automatically start backing up. It's the easiest implementation I've found so far.

Check it out at http://www.code42.com/crashplan/ - it's awesome!

Sunday, May 25, 2014

Securely wiping a hard disk in Linux

We're getting ready for some changes at home, and I thought I'd go through the old hard disk drives I have laying around. Once I'd managed to get them all together there were a staggering 25 to be wiped :(

Usually I use the excellent Darik's Boot and Nuke (DBAN), which is awesome and very simple to use. In this instance, however, I'm also doing a fairly large data sort and archive, and I need a functional machine to browse the disks prior to their destruction and reissue. Given my well-known love for Linux Mint, I executed an extensive (20 second) search of Google and came up with the following interesting information:

ATA and SATA drives, SSDs included, now have an internal way of securely wiping themselves! Work from a command prompt (elevate it to root for ease of use) and make a note of your disk drives first - if you wipe your system disk or data disk then it's game over! Maybe use a LiveCD?

Go and check out https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase

The quick version is:

# hdparm -I /dev/sdx (where sdx is your disk)

Check that "not frozen" appears in the output. If that's OK, proceed.

Set a password on the disk (otherwise the secure wipe won't work):

# hdparm --user-master u --security-set-pass ryv1 /dev/sdx (where ryv1 is the password; the u means you're setting the user password rather than the master)

Check it worked:

# hdparm -I /dev/sdx
Security:
       Master password revision code = 65534
               supported
               enabled
       not     locked
       not     frozen
       not     expired: security count
               supported: enhanced erase
       Security level high
       440min for SECURITY ERASE UNIT. 440min for ENHANCED SECURITY ERASE UNIT.


Note the 440min figure is for a 2TB Western Digital Green drive - 440 minutes is over 7 hours!

Now it's time to unleash the full power of this fully operational command!

# time hdparm --user-master u --security-erase ryv1 /dev/sdg
/dev/sdg:
 Issuing SECURITY_ERASE command, password="ryv1", user=user

It's worth noting that when I ran the command above on my Linux box, I stupidly pressed CTRL-C to copy the text - which is also the shortcut for cancelling a running program. NOTHING HAPPENED - the erase just kept going! It's a runaway freight train, so be *very* careful to select the right disk or it could be a sad day for you.

The good thing about this command, though, is that the load on your computer is negligible - the disk itself is doing all the work. I can see its I/O is through the roof, but otherwise normal system actions are not compromised.
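To tie the whole procedure together, here's a rough script of the sequence - a sketch, not a turnkey tool. The device and password are hard-coded examples, and there's deliberately no safety net beyond the frozen check, so edit with care:

#!/bin/sh
# Sketch: ATA secure-erase a single disk. EDIT THESE FIRST!
DISK=/dev/sdx     # the disk to wipe - triple-check this
PASS=ryv1         # throwaway password; cleared when the erase finishes

# Bail out if the drive is frozen (a suspend/resume cycle often unfreezes it)
hdparm -I "$DISK" | grep -q "not.*frozen" || { echo "$DISK is frozen or unsupported"; exit 1; }

# Set the user password, then issue the erase (timed out of curiosity)
hdparm --user-master u --security-set-pass "$PASS" "$DISK"
time hdparm --user-master u --security-erase "$PASS" "$DISK"

# On success the Security section should show "not enabled" again
hdparm -I "$DISK"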

The upshot of all of this is as follows: although it's a cool way to do it, I'm going to simply pull the data I need off all these disks, then hook them up to another machine with multiple SATA ports and DBAN the lot - much faster in the long run!

Saturday, May 24, 2014

Effects of travel on IT or What the hell do I take when I go overseas?

Recently I was on a trip to Jakarta - for pipe band, of all things - but while there I still needed to keep up with my normal information load. My gear loadout for work, or for holidays in Australia, typically consists of two mobile phones (one work / one private), a Google Nexus 7 (WiFi) and my 11" MacBook Air or 15" MacBook Pro. Taking all of this junk to Indonesia was unfeasible, although altogether the weight was under 3 kg. I knew I would have my normal number of emails, and would still want to check my Feedly and Facebook, take photos, etc. Keeping everything charged and good to go is a usual challenge, and I imagined it would be worse in Jakarta.

Heading over, I took my HTC One X and Nexus, and that was it. It was a gamble because I didn't want to unplug too much, but I still needed access to a wide variety of data. I wondered what other travellers took, and it seemed this was fairly typical - tablet plus mobile phone. Very few people seemed to have brought a laptop of any type. I generally find that typing on a tablet, even one with a Bluetooth keyboard, is difficult to do over a long period of time, especially with any degree of accuracy, so I thought this was pretty interesting. It was also interesting given the storage limitations of tablets and phones, and the number of photos and videos everyone was taking - more than one person remarked to me that they had filled their storage and needed to delete some stuff.

Neither of the devices I took have upgradeable storage, so I had to manage it fairly carefully and took less shots than I might normally have.

Something I found to be very nice was lots and lots of free WiFi everywhere. Hotels, airports, cafes, coffee shops - all had free internet, and it was beautiful. As a country lad, where we're lucky to get 3G coverage - let alone 4G - it was very exciting. It was nice to see such strong cell coverage everywhere too; I noticed mobile towers dotted across the landscape. It was even better for me with the photo backups to Dropbox my HTC performs whenever it's on a WiFi connection. This is a cool feature, and HTC give you a space upgrade to your Dropbox when you connect. Very nice indeed.

On reflection, I should have taken my MacBook Air at least. There were a number of times I needed to SSH to a server to make changes, and using the tablet/phone was awful - slow and cumbersome. I also wanted to write up a travel journal, but I found that typing on the tablet/phone interrupted my flow - I tend to write, refine and spellcheck as I type, so fighting with the tiny keyboard and hunting for keys was very hard to get around. Constantly refining my expression was very hard. I asked around, and the chaps I travelled with had no difficulty - they rarely sent big messages, and those who did were adept at using tablets to do so. It should be noted they have much smaller hands than I do! USB power adaptors were very useful, although the power in Indonesia can be a bit sketchy at times.

Good luck if you're travelling and be safe.

Wednesday, March 12, 2014

Amazon EC2 experiences

Recently I was reading about Arscoins and the use they made of the free Amazon EC2 micro instances. Intrigued, I decided to take a look.

Amazon has a free tier of services: minimal instances with enough hours to run all month. I chose an Ubuntu Linux instance, and after running through a simple sign-up I had an instance ready to go. Using key pairs I could SSH to it (the only way to go), and I set firewall rules so that only a couple of static addresses could get to it. Amazing! It was all up and going in about 15 minutes. Only a barebones server of course, but enough for testing and the obligatory oooh from my co-workers.
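As a rough illustration of the firewall side, here's how locking SSH down to a single static address might look with the AWS CLI - the group name and IP address below are made-up examples, and the same rules can just as easily be set in the web console:

$ aws ec2 create-security-group --group-name my-test-sg --description "SSH from the office only"
$ aws ec2 authorize-security-group-ingress --group-name my-test-sg --protocol tcp --port 22 --cidr 203.0.113.10/32

Anything not explicitly allowed is dropped, so with just that one rule the instance is unreachable from the rest of the internet except on port 22 from that address.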

The instance is free for 12 months, and I've set alarms so that I'll be notified of any billing if usage is exceeded. They also offer Windows servers and a variety of other operating systems. For the minimal amount of time involved it was a great experience. I strongly recommend treating the instance like a real server and keeping it updated and secured.