Tuesday 6 November 2012

Adventures in backups and restores - StorageCraft ShadowProtect

After taking over a new client recently I have been working with StorageCraft's ShadowProtect backup solution for servers: http://www.storagecraft.com.au/  Initially I was just checking that the damn thing was backing stuff up somewhere and trying to work through a multitude of other issues. Now that these problems have settled down I've had more leisure to examine this product. 

The server solution has a wide range of backup capabilities - full backups, incremental backups and continuous incremental backups. The incrementals can be performed as often as every 15 minutes and are keyed back to an initial full, sector-by-sector backup of the server. This means we can mount a backup image as the next free drive on the server, simply copy across the lost files and then dismount the backup. It's elegant, simple and works well. There is a very nice extension to this as well - Granular Recovery for Exchange (GRE). When you need to recover an email, a folder or an entire mailbox, ShadowProtect GRE makes it very easy: mount the backup image of the drive holding the Exchange database, tell GRE where it is and where the existing Exchange database is, and then copy across the items you need to recover. It's awesome because of its ease of use, speed and search functions, and it also works as a means of migrating to a newer version of Exchange. The few times I've used it, it absolutely blew me away with how much it simplifies data recovery. 

ShadowProtect uses the ShadowControl ImageManager software to help manage all the images and snapshots. It's another neat bit of kit and its functionality is excellent.

The most impressive part of this backup solution is its ability to virtual boot a server from an image. You can boot it as an Oracle VirtualBox virtual machine or as a Hyper-V VM, so a failed physical server can be up and running in minutes as a virtual server. It will keep running backups just as the physical server did, and once the hardware is repaired, the up-to-date virtual server can be restored back to the physical hardware and full service resumed.

As with any software, though, it needs to be carefully set up and configured, with the usual ongoing maintenance applied. The install I'm working on currently is a bit of a mess and there are a lot of images that need to be sorted out. As a backup solution it also ties in with the virtual environment, although that is something I'm still exploring. Check out StorageCraft's ShadowProtect - it's well worth a look.

Sunday 14 October 2012

Adventures with the HTC One X

Just a week and a half after espousing the wonders of my One X, the damn thing had to be replaced. Yep - the autofocus on the camera wasn't working. If I stopped and restarted it several times it might work, but eventually it failed completely. After searching the net for "HTC One X autofocus not working" and several other permutations of this, I found it's a hardware fault. The local Optus dealer, to whom I'm deeply indebted, swapped it without issue.

The new phone works brilliantly and I'm happy to say that I'm taking plenty of snaps with it.

I have also purchased a wallet-type cover for it - I think it will protect the screen better than some of the other types of protectors out there. I do of course have a screen protector on it, but that won't necessarily stop keys or change in my pocket from damaging it. On to the next adventure with it!

Wednesday 19 September 2012

HTC One X Review

So I have taken possession of my new phone, bidding a fond adieu to my beloved Nokia e72. Originally I was considering the One S rather than the One X - a cost thing more than anything else. At any rate, I chose the One X on a new plan only marginally more expensive than my previous one, with slightly more included calls, after some excellent selling by an uber-efficient sales lass.

Rather than re-type all the specs you can find them here: http://www.htc.com/au/smartphones/htc-one-x/#specs

Key things to note are the quad-core 1.5GHz processor, 1GB of RAM and 32GB of storage. The device itself is quite large - much bigger than the iPhone I use for work; it makes the iPhone seem quite small, and the e72 even smaller by comparison. The speed and the gorgeous screen make up for that slight detraction though. 4.7" Super LCD2 - oh so very bright, clear and lovely to look at. Reviewing photos taken by the 8-megapixel camera is a joy, and taking photos with it has been excellent, apart from a small issue where the camera won't focus. After I leave the camera app and come back in a couple of times it seems to come good, but this can be annoying when I'm trying to get a quick action shot.

The phone is light and thin; it easily sits in the top pocket of a business shirt or, in my case, the side pocket of my cargo pants. Sound is clear and vibrant, and music sounds great from the little speaker at the back. For the first time I've actually got a top-of-the-line phone and now I see what the hype is about. Although the Samsung Galaxy S3 is a very similar beast in size etc., I find the HTC Sense interface comfortable and easy to use - I've used it before on my HTC Aria.

The bad aspects of this phone are as follows:

  • the camera focus thing - perhaps an update will fix it or it's something I'm not doing correctly, but it irritates the hell out of me
  • the size of this phone renders all my accessories obsolete - which, while it's good to buy new stuff, means that my old faithful phone case is no longer usable.
  • Facebook doesn't ding on notifications
  • Occasionally I miss the dings of emails etc
  • I can't automatically tell it not to receive emails/Facebook updates and the like during a given period. The one thing the Nokia e72 has all over this phone, for me, is that it only checked emails between given time periods. Neither the iPhone 4S I have nor the One X does this - always-on email means you either have to manually turn auto-sync off (which is what I do now) OR put up with email dings throughout the damn night. NO. I like to sleep peacefully, but I also rely on my phone as my alarm clock.
All in all, I'm very pleased with this new device, and if you want a great Android phone I recommend you consider it very seriously!

Friday 14 September 2012

Exchange 2007 Send As from a different domain

Imagine this scenario, it may be one you've come across:

  • the organisation you're working for / consulting to has a single Exchange Server (be it standalone or part of SBS)
  • You have it receiving mail for multiple domains, e.g. example1.com and example2.com
  • Users would like to send from name@example1.com and from name@example2.com
Exchange does not support this without either adding an additional mailbox for example2.com to each user's Exchange account or implementing some expensive third-party software. 

There is an easier way to do it, and it has two separate parts: creating a relay for example2.com via the Exchange server, and setting up a dummy POP3/SMTP account in Outlook so you can send as the second domain using the "From" drop-down in Outlook's new email window.

Part 1 - Setting up an additional SMTP Relay to avoid the dreaded 550 5.7.1 Unable to Relay

The Exchange server won't necessarily allow mail from a different domain to be relayed through it to the outside world. In Exchange 2007 you don't add an extra SMTP relay as such - you add a New Receive Connector (because the server is receiving the mail in order to then send it on). 

Firstly, add an IP address to your network adaptor - don't try to re-use the existing IP address, as this will overcomplicate things. Simply add an extra address - increment your existing address by one, or find a free one. This will be the outgoing SMTP server address we set up later in Outlook, so note it down.

Open the Exchange Management Console and go to Server Configuration. Hit Hub Transport and choose "New Receive Connector"

Name it, and choose Custom as the intended use for the Receive Connector. Hit Next and on the Local Network Settings page, click the Add button and type in your new IP Address. Leave the port at 25 - most mail programs don't like this to be messed with.

Remove the "All Available" Local Network address and hit Next again.

The next window should be the Remote Network Settings window - use this to control which addresses can relay through the server. Ensure you put in a range that is meaningful and allows for some security. If you put in too large a range, or 0.0.0.0 to 255.255.255.255, you have created an open relay, and spammers love those - probably not the best plan. Pick your DHCP range or something similar to lock it down.

Choose Next and on the Summary screen click New to create the connector. OK so now we need to alter the permissions.

On the properties page of the new Connector (right click and choose Properties), choose the Permissions Groups tab and select the checkbox next to "Exchange Servers" and hit Apply.

Go to the Authentication tab and select the checkbox next to "Externally Secured (for example, with IPsec)", and hit Apply and OK.

Now we can relay through our server.

Part 2: Configure a Dummy Outlook Account to get access to the extra "From" option in Outlook

Open Outlook on your desktop and go to Options, then Accounts and create a new POP account.

Put in the user's name and their email address, and then for the POP3 server address put in a dummy address - pop.local, for example. Put the IP address you configured above in as the SMTP server and click Finish. The Test button won't work - the POP account will fail every time. Because we have only a single mailbox with multiple addresses assigned to it in Exchange, we don't have to worry about where emails sent to example2.com land - the Exchange server will automatically deliver them to the correct mailbox.

Open Tools again, then Options, go to the Send/Receive section and disable "Receive email items" for our new dummy account. Restart Outlook. 

Now when you open an email to send to someone, you'll see the "From" button beside the sender's address at the top and you can select your example2.com account.

I hope you find this useful - I've cobbled it together from two separate issues that ended up being interrelated.

Samsung Galaxy Tablet 2 Review

Recently, through sheer blind luck, I was able to get hold of one of these excellent little devices. A friend sent it down (he prefers his iPad) so I was able to play with it a bit before handing it over to the wife for her amusement. The Galaxy Tablet is a 7" tablet, much like the Google Nexus 7 I recently reviewed and use almost continually. At first blush, here are the differences I noted:

  • the user interface is different (naturally). I feel that the Galaxy's is more polished and looks crisper, but it is a lot busier. The QWERTY keyboard, for example, has the numbers above it and the keys are smaller on screen than on the Nexus.
  • SD Card slot - upgradeable storage is a nice thing indeed.
Weight, size and battery life appear comparable. My wife loves it. She wasn't convinced initially about a tablet and didn't think she could find a use for it. As a non-technical person it seemed like just another gadget to her. Now though she uses it for Pinterest, Facebook, eBay, email and general web searching.

The Galaxy Tablet is a fine piece of hardware, but I have to criticise the interface. It is not as user friendly as the Nexus, and this was actually a factor that led me to purchase a HTC One X over the Samsung Galaxy SIII recently. Navigating through some of the menus has been a pain, and initially finding the Create Email option in the Yahoo Mail app was not straightforward at all. That being said, now that I've played with it more it's quite good.

I recommend the Galaxy Tablet to anyone who needs 3G or LTE connectivity, as the Nexus doesn't appear to offer this. WiFi connectivity is very good on both devices and the Galaxy has a bright, easy-to-read screen. Occasionally the light sensor gets a bit annoying as it dims the screen unnecessarily, or makes it too dark, but otherwise it's very good.

Saturday 18 August 2012

Google Nexus 7 Review

For my recent birthday I purchased one of these excellent devices. I had been eyeing one off for several weeks - in lieu of an Apple iPad or similar device. I picked it up from JB HiFi here in town and paid less than what I'd pay online. The basic specs of the Nexus 7 are:

Specifications

SCREEN
  • 7” 1280x800 HD display (216 ppi)
  • Back-lit IPS display
  • Scratch-resistant Corning glass
  • 1.2MP front-facing camera
WEIGHT
  • 340 grams
MEMORY
  • 8 or 16 GB internal storage
  • 1 GB RAM
BATTERY
  • 4325 mAh
  • 9 hours of HD video playback
  • 10 hours of web browsing
  • 10 hours of e-reading
  • 300 hours of standby
CPU
  • Quad-core Tegra 3 processor
SIZE
  • 198.5 x 120 x 10.45mm
WIRELESS
  • WiFi 802.11 b/g/n
  • Bluetooth
USB
  • Micro USB
OS
  • Android 4.1 (Jelly Bean)
FEATURES
  • Microphone
  • NFC (Android Beam)
  • Accelerometer
  • GPS
  • Magnetometer
  • Gyroscope

General Information

These specs come from the Google Website here: http://www.google.com/nexus/#/7/specs

I chose this tablet because of its 7" size - I find the larger iPads too big to hold and type on with my thumbs. It's not because I've got small hands (on the contrary, they're quite large); I just find a smaller device more comfortable than a big iPad or other large tablet. 

I also chose this device because of an ongoing love affair with Android - a love affair that has felt a bit one-sided at times. My first experience of it was with the Motorola Backflip and it was dreadful. A HTC Aria followed, and while it was better, the speed was a real issue. I was hoping very much to avoid that again with the Nexus, and thankfully it was all I hoped it would be. Let's start at the beginning:

My initial impression of the device was of a pleasant weight, solid build and a lovely bright screen. The onscreen keyboard was responsive and the buttons were large enough that I was able to type quite quickly on it straight away. The pretty graphics and screen bling were very nice, smoothly drawn and with great colour depth. 

I managed to get hold of a case, a rubbery plastic thing which I've since replaced with a nice faux leather one. The screen is robust and has thus far survived being dropped and drooled on by my 15-month-old baby, as well as the rigours of being carried around in my hip pocket while I'm working. 

The screen's brightness and readability have meant a serious introduction to eBooks for me. It also displays movies in HD and came with Transformers: Dark of the Moon, which I delightedly watched - the $25 Google Play voucher was very welcome too. I promptly bought a few eBooks and apps with it - the billing was very easy and it all worked well.

Now that I've had the device for a week I have it syncing 4 separate Google Apps accounts - email/calendar/contacts/Google Drive, plus Facebook and Twitter and the battery lasts for 2 days. I use it more or less continuously during working hours - reading and responding to mail, and using the Cards features for calendar and weather information. It has certainly accelerated my ability to respond quickly and stay on top of what is happening in a busy work environment. The WiFi connectivity has been solid, and I've been tethering it with my iPhone for out of office work - this has worked brilliantly thus far. 

Conclusion

For my money, the Google Nexus 7 is an excellent device and has already saved me a significant amount of time and effort in keeping on top of the unending information flow that's part of my job. It's an extremely useful tool, and a great little toy too - definitely worth the money!

Friday 27 July 2012

Further adventures with the HP N40L and Dragonfly BSD

Since my last post, I've tried a bunch of different things with this box. These include:

  • Xenserver with the following virtual machines:
    • Ubuntu Server
    • Windows Home Server 2011
    • FreeNAS
    • DragonflyBSD
  • Ubuntu Server
  • Linux Mint 13
Getting the 2TB mirror working caused most of the problems. FreeNAS simply crashed and used 100% of the CPU as soon as I tried to copy a file. DragonFly wouldn't install. Ubuntu took 4 days to build a software RAID, and then when we had a power failure the RAID failed to rebuild. There was a lot of frustration and perhaps a cranky swear word or two.

Eventually I decided to follow this route:
  • Installed Linux Mint 13 on the box, set up a software RAID mirror for the large disks (sketched below), and left the other disks simply as an install disk and an archive disk. 
  • moved all the data from my Netgear Stora to the new server
  • I've deployed it to replace my server and my media PC, allowing me to ditch two PCs, one UPS and the Stora.
Yes, I'm going a bit greener :-) Thus far it's going quite well. I've also managed to build up a DragonFly BSD backup server.
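
For the record, the software RAID step on Mint boils down to a few mdadm commands. This is only a rough sketch - the device names below are assumptions, so check lsblk first:

# Assuming the two 2TB disks appear as /dev/sdb and /dev/sdc - check with lsblk first
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Watch the initial sync (this is the multi-day part on big, slow disks)
cat /proc/mdstat
# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/data
sudo mount /dev/md0 /srv/data
# Record the array so it assembles automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u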

I took my old server and put in the disks from my Stora and from an external USB drive that had a failed power converter. With 3 x 2TB disks and a 1TB disk in the system I wanted a way to use all that space effectively. It's not for live data, just for periodic backups. The HAMMER file system is awesome for this because there is no fsck on boot - it's live and happening straight away. DragonFly is also very lightweight and runs well on the system I've got. The HAMMER filesystem has a lot going for it: I was able to combine the disks into a single 5.5TB filesystem, and it does the normal snapshots built into HAMMER. The setup was painless and the speed at which it created the filesystem was excellent. I had one small issue with my USB DVD drive during the installation which caused the whole thing to fall in a heap, but once I switched to a USB stick the installation went quickly and smoothly.
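
The multi-volume setup itself is only a couple of commands. A minimal sketch, assuming the disks show up as da0 through da3 (the real device names will differ - check dmesg) and that each has a HAMMER partition from disklabel64:

# create one HAMMER filesystem spanning all the volumes
newfs_hammer -L BACKUP /dev/da0s1d /dev/da1s1d /dev/da2s1d /dev/da3s1d
mkdir -p /backup
# multi-volume HAMMER filesystems are mounted with the volumes colon-separated
mount_hammer /dev/da0s1d:/dev/da1s1d:/dev/da2s1d:/dev/da3s1d /backup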

Sunday 15 July 2012

Adventures with XenServer on the N40L Microserver

Since my last post (in amongst family and work and stuff) I've been playing with Xenserver and the N40L Microserver. Here's how it's gone down:

The initial install went quite well, except that I had configured two mirrored RAID arrays - one with 2 x 1TB disks, the other with 2 x 2TB disks - and XenServer saw neither of them. So I removed both arrays, set the SATA controller back to AHCI and went from there.

After a quick re-install, I had an initial local storage of 1TB. Following this I added three more local storage devices and re-thought how I was going to do the installation of my VMs. Following the exceptional directions on creating an Ubuntu 12.04 LTS server on XenServer from http://www.invalidlogic.com/2012/05/01/deploying-ubuntu-12-04-on-xenserver-made-easy I created a 12.04 Ubuntu virtual server - all went very well. In accordance with my plan, I also created a Windows Home Server 2011 VM using the Windows 2008 R2 template.

The WHS 2011 server was painfully slow to install, set up and run. I had assigned it 2 CPUs and 4GB of RAM, and it still took at least three hours longer than the Ubuntu server to install and get going. I must note that the template from invalidlogic.com was superb and perhaps that has spoilt me a bit :-)

I also set up a FreeNAS VM - installed to one of the 1TB drives with 2 x 2TB virtual disks presented to it. The FreeNAS server installed quite quickly and everything looked good. I set up NFS and CIFS shares and thought it was all going well. Unfortunately, as soon as I started to copy data across to the FreeNAS, its CPU usage hit 100% (as did the host's). This was clearly a failure and wasn't going to work.

Undeterred, I thought about DragonFly BSD and the excellent HAMMER file system - let's give that a crack, I thought. The initial boot of the install disk failed... So FreeNAS and DragonFly aren't going to play the game - new plan! More about this tomorrow :-)

Adventures with my new HP N40L Microserver

I took delivery today of my brand new HP N40L Microserver. I plan to use it to replace my existing whitebox server, Netgear Stora and add to my network at home. The idea is to install Citrix XenServer on this little box, then virtual guests running Ubuntu 10.04LTS, Windows Server (of some variety - 2008R2 or maybe Windows Home Server) and if required FreeNAS or another *BSD product (for fun).

The default N40L comes with 2GB of RAM, a 250GB HDD and a 1.5GHz processor. I've upgraded the RAM to 8GB and I'll put in a couple of 2TB HDDs and probably two 1TB HDDs as well. The idea is then to set up two RAID arrays - one for the install of operating systems and associated applications, one as a storage pool for data (hopefully 2TB will be enough initially). I'll look into using 4 x 2TB disks in RAID 10 and see what happens. Here we go!

Saturday 30 June 2012

Munin for Monitoring

Lately I've been very busy with work, but I find that I still need to keep an eye on the Linux servers I have kicking around at home and at the office. I use Nagios for basic monitoring (ping, SMTP, HTTP) but wanted something a bit more fine-grained on the actual hosts themselves. I stumbled on Munin while doing some Google searching.

Munin's homepage is here: http://munin-monitoring.org and I'm running it on a variety of servers at the moment (mostly Ubuntu and Debian). It supports quite a few distributions and operating systems, so check it out if you're interested in seeing what's happening on your servers.

I like the many graphs it produces to show what's happening, and I also fell into the trap of trying to over-configure things. There are only a couple of basic things to change to suit yourself; restart apache2 and munin-node and it's away. It can also monitor multiple machines - on each extra host install just munin-node (rather than munin and munin-node) and configure it as recommended. I thought I'd be tricky with it, things didn't go as planned, and it appeared that it wasn't working. I was over-thinking it though - it did work, very well actually, once I found the spot it was creating the files in :-)
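
For anyone wanting to try it, the basic Debian/Ubuntu setup is only a few steps. A rough sketch - the hostname and IP addresses below are just placeholders:

# On the machine that will collect the data and draw the graphs:
sudo apt-get install munin munin-node
# On each additional server you want graphed, the node alone is enough:
sudo apt-get install munin-node
# Tell the master about the extra node - add a block like this to /etc/munin/munin.conf:
#   [fileserver.example.lan]
#       address 192.168.1.20
#       use_node_name yes
# On the node, allow the master to poll it in /etc/munin/munin-node.conf:
#   allow ^192\.168\.1\.10$
sudo service munin-node restart
sudo service apache2 restart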

A handy app and a nice way to keep on top of your servers!

Thursday 15 March 2012

Further experiences with Virtualization - VMware ESXi and Oracle Virtualization

So, continuing in the vein of trying out the virtualisation options in the marketplace, I looked into the big gun - VMware - to see how that would go on my cheap network. Alas, my attempts were doomed to fail. The initial installation of ESXi failed - it was unable to detect the hard disk drives in my HP virtualisation hosts. It was a bit of a WTF moment - this is a fairly easily available mainboard, with no special interfaces or anything like that, and the disks were fairly standard SATA disks - both Seagate drives, 160GB, so not small and not unusually large. At any rate, after trying different disks and going to the console prompt to look there, I gave up. dmesg didn't detect the /dev/sd* disks at all, and the more I investigated the more I found that there were some hardware limitations. Given I have limited time, I abandoned my VMware attempts and instead looked into Oracle's virtualisation offerings.

The first thing that struck me was the sheer size of the downloads for the components of Oracle VM - it's a complex system, needing not only your head units (my HP desktops in this case) but also another PC as the manager, with an install of Oracle Linux required, plus the 2GB+ VM Manager, and then I needed two NFS shares for more data. Compared with XenServer, the complexity surprised me. I could at least install the VM Manager as a virtual machine on another host (I decided to use my notebook running VMware Player - which I love, by the way). I had downloaded the netboot option for Oracle Linux, trying to minimise the impact on my corporate network. None of this went smoothly, of course. VMware Player ran fine, but the Oracle Linux install needed a URL to install from - and this proved surprisingly hard to find. I spent a bit of time searching for it, but the Oracle install information I found was for Oracle Linux Release 5, not 6, and very little else was available - perhaps my google-fu was letting me down... At any rate, again, time was against me and I had to let it go for the time being.

The thing that has struck me throughout this exercise is just how easy it was to get XenServer going - Citrix are really on to something. Additionally, for supported guest platforms under XenServer it is silly easy to transfer guest machines between hosts - right click, move to server, and off it goes. The downtime is extremely small. If you pay for XenServer you can enable High Availability so guests are restarted when a host fails, and you can really start to get some high uptimes. If you know a box is about to fail, it's trivial to migrate the guests around - fantastic!

So we'll keep playing with XenServer for the time being and with a bit of luck I can re-visit VMware down the track.

Sunday 11 March 2012

Experiences with Virtualisation - XenServer

I have been experimenting with different virtualisation technologies lately. At work I already run a Microsoft Hyper-V server with two Ubuntu servers running on it: one is an OTRS (ticketing system) server and the other an FTP/DNS server. This host is a 1RU rackmount box with no hardware virtualisation support. Oh well... it still runs reasonably well; however, I've found that under high disk/network load the virtual machines grind to an absolute halt and I have to reset them. So I've begun expanding my horizons.

Recently I searched for and found some small form factor desktops with support for virtualisation - namely the HP Compaq dc5750 Small Form Factor, with an AMD Athlon 64 X2 Dual Core 3800+ processor. I bought two (for the princely sum of $9 each plus postage, so $130 (!) delivered) - they came with 1GB of RAM and an 80GB HDD. I've upgraded the RAM in both to 4GB and I've got more on order. I also turfed the disk drives - one had a dodgy sector and the other was just plain dodgy. I slammed a couple of 200GB disks in and away we went.

XenServer is produced by Citrix, a company well known for remote access solutions and now more so for its virtual server solutions. I've used VMware on the desktop before and still use VMware Player for various things, but I had not looked into XenServer. I started with the Live CD and was reasonably encouraged. The information coming out of it looked pretty good, so I thought I'd install the XenServer operating system on some machines and see how it went.

The install on my dc5750s went very smoothly - all hardware detected and accounted for. The dc5750 supports AMD virtualisation so it went very nicely and XenServer ran very happily. On the first server - xenserver1 (very imaginative naming) - I neglected to set the time server or enable NTP, and this did come back to bite me later on. After the initial set up, I installed XenCenter on my Windows 7 notebook. It's a slick interface, and once I put in the IP of xenserver1 it detected it without issue. My notebook and both servers are on a gigabit network so it all runs pretty fast. I started the install on the second dc5750 (xenserver2 - more imagination there) while I added a pool in XenCenter and put xenserver1 into it as the master.

After xenserver2 was installed I added it to the pool and noticed that the tool wasn't reporting the RAM usage on the second server. I had fully updated both servers and XenCenter. Kind of strange - then I got messages about the clocks not being synchronised. I went back and reset the NTP servers on both machines; it turned out that xenserver1 was an hour ahead, and once that was fixed both servers reported CPU/RAM/network and disk usage quite happily. So now to the installation of virtual machines - but where to put the virtual disks? Aha! I added a storage pool via NFS on our FreeNAS server, and although this in itself caused some issues until I sorted the NFS share out, eventually it was all good.

XenServer has templates that are used to create the virtual machines. There is, naturally, a blank template for unsupported operating systems (like *BSD?!). I started with an Ubuntu Server 11.10 install - the template suggested RAM usage, disk size etc. and I created the virtual machine very quickly. I had previously added a file storage pool for ISO images, so I pointed the server template at the appropriate ISO image and declined to pin the VM to a particular XenServer host, opting to let it choose one with the available resources. It chose xenserver2 and the installation began. I undocked the console so I could watch it and returned to XenCenter to watch the load and usage on the servers. I also started a Windows Server 2008 R2 installation from its template for the hell of it (I love Microsoft TechNet Direct). Again, I allowed the template to set the configuration for the server and again allowed XenCenter to pick the host with the available resources - it chose xenserver1 and the installation began.

Both installs ran through their usual routines until the Ubuntu server reached the disk partitioning stage and stalled. The Windows 2008 R2 install ran perfectly. It detected all the hardware properly and I installed the Xen tools on it without issue - the reporting detail in XenCenter improved markedly after that, with individual CPU, RAM, network and disk usage. The install was actually pretty quick across the network (I was surprised, to say the least). After I restarted the Ubuntu install it ran again and finally completed. While this was happening I was updating the Windows 2008 R2 server, and I began an install of FreeBSD in a new VM under the default template. It installed perfectly and once again detected the hardware properly (picking up the network card as a Realtek device), leaving me with a fully functional FreeBSD 9.0 server. Eventually the Ubuntu server finished installing too and it was working properly.

My initial impressions were good. The software was clear to understand, the virtual machines easy to manipulate and work with and the support for the hardware in the virtual machines was all good. Over the next few weeks I'll continue testing them and record my impressions here. Then, I'll take the disks out, install VMware's offering and test them too.

Microsoft's Hyper-V server is not really the system I wish to run - while it's great for Microsoft products, they aren't the only operating systems we run (for a variety of reasons). I like to be able to deploy the best suited OS to the requirement and I hate being locked in to anything - I really prefer to be flexible. I'll also cover some of the licensing costs as we go along - how much and how it's all costed out. Stay tuned!


Friday 10 February 2012

A simple script to use either robocopy or xcopy to backup files

Under various circumstances, I've found it useful to cobble together a script to do a sync backup across the network from one Windows server to another. Usually this is for files only and is either a mirror or a daily full backup of data. Obviously there are some great backup tools available that make something like this largely unnecessary; however, this is quick, simple and gives you an email output of what has happened. The first example below uses robocopy (Robust File Copy), which is a very nice bit of kit indeed. It's a bit more capable than xcopy and handles large numbers of files better. Don't get me wrong, I love xcopy, but it has its limitations. I use rsync a lot on Linux servers and robocopy gives me many similar options for how I want to handle files.

The destination directory could be anything - another folder on the same PC, a removable disk, a mapped share or even a straight UNC path e.g. \\server\share - flexibility is the key for this script, once the basic variables are right and you've decided to use robocopy or xcopy then off you go.

So to the script:

Open notepad and put this info in - note where things are comments and what variables you'll have to change:

Script start:
echo on
REM Set up some time variables
for /f "Tokens=1-4 Delims=/ " %%i in ('date /t') do set dt=%%l-%%k-%%j
for /f "Tokens=1" %%i in ('time /t') do set tm=-%%i
set tm=%tm::=-%
set dtt=%dt%%tm%
REM set up variables for log files, source and destination - change this variable
set log="C:\Users\owner\Documents\Scripts\Logs\%dt%.log"
REM local stuff to be backed up - change this variable
set src="c:\documents"
REM remote location to put backups - change this variable
set dest="I:\backups\server"
REM now for the actual work - change switches as required - explanation of switches is below.
robocopy %src% %dest% /E /Z /MIR /R:1 /LOG:%log%
REM I'd like to know how it went (this file can be big if there are a lot of files copied)
echo Backup Logs attached | blat - -subject "Sync Log Report for %dt%" -to "me@mydomain.com" -attach %log% -f user@domain.com

Use blat to send the email - grab it from www.blat.net (great program!). It sends an email with a subject line that will look like this:
Sync Log Report for 2012-02-10
and an attachment of your log file. You can add different things to this - for example I'll often use a [servername] tag after the date.

The robocopy switches used are:

  • /E = copy sub-directories, including empty ones
  • /Z = copy files in restartable mode (in case the network drops out or something similar)
  • /MIR = MIRror a directory tree (which is /E plus /PURGE)
  • /R:1 = number of retries on failed copies. It's best to set this - by default it's 1 million (!)
I run this from the Windows Scheduler and have a mirrored copy of data files each night. It's quite a useful little tool. If you'd like to use xcopy instead there are a few things to consider:
  • the src and dest variables need to have a trailing backslash and a wildcard
    • set src="c:\documents\*"
    • set dest="i:\backups\server\*"
  • and the command to insert would be:
    • xcopy %src% %dest% /C /D /E /H /Y > %log%
    • where the switches are:
      • /C = continue copying even if there are errors
      • /D = copies files whose source is newer
      • /E = copies directories and sub-directories (even if empty)
      • /H = copies hidden and system files
      • /Y = suppresses prompting to overwrite files
    • the > redirects xcopy's output to the %log% variable we configured earlier in the script, and then blat will email the resulting file out.
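
And since I mentioned rsync earlier: on a Linux box the same nightly mirror-plus-email job looks something like the sketch below. The paths, host and addresses are placeholders only.

#!/bin/bash
# Mirror a directory to a remote server and email the log - a rough rsync
# equivalent of the robocopy /MIR job above. Adjust paths and addresses to suit.
mkdir -p /var/log/backup
LOG=/var/log/backup/$(date +%F).log
rsync -av --delete /srv/documents/ backupuser@backupserver:/backups/server/ > "$LOG" 2>&1
mail -s "Sync Log Report for $(date +%F)" me@mydomain.com < "$LOG"
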
If you find this useful in any way, please let me know in the comments.

Sunday 15 January 2012

The Fundamentals of building Client Networks

Recently I've been thinking a lot about the best way to help my clients understand and engage with their IT networks and systems. I have also been thinking a lot about how best to manage and look after these systems for my clients in a sustainable way. To do this I've been looking at the fundamental building blocks of my client base and considering the commonalities. The reason for understanding these commonalities is to put in place simple guidelines for developing and maintaining a network. Each network will of course have certain unique circumstances, but if the fundamental infrastructure is well understood, these unique aspects of each network will be easier to manage.

So thinking of all of these things, I've looked at the commonalities in my clients and found they can be grouped into several broad categories:


  • sites with a single server, single location and a small (less than 30) number of users. They may have some mobility but generally only a small requirement.
  • sites with multiple servers but only a single location and between 25 and 50 users. Again, some mobility but not a lot - potentially they'll want more.
  • sites with multiple servers, multiple locations and 25 plus users. Requirements for mobility including file access and remote VPN access.
Although these categories are quite broad, they cover 90% of the small to medium business clients I tend to deal with. These clients are all important to me, and given I have a finite amount of time to work with them, it's critical that the fundamentals and underlying structure of the networks don't need to be re-discovered every time I'm on site. How then to ensure efficient support of clients?

Firstly, by grouping sites into the broad categories I mentioned earlier, I have a quick, high-level understanding of each site. By building each network following standard procedures there is plenty of efficiency to be gained, and it's also a lot easier to explain to a client what is on their network.

Secondly, good documentation is key. It's not just writing stuff down, but having it available to review when you are onsite - and this means there are several things that have to be in place:
  • data must be available remotely, via some sort of mobile access
  • data must be secure
  • data must be organised and detailed
Having the data secure is incredibly important - if it's available using some type of web interface, it has to be SSL secured and the passwords have to be strong. Although this seems obvious, it doesn't seem to be well executed. Having data organised and detailed is the key to keeping client networks well looked after.

Thirdly, using the same basic ideas to build each network type means that if the key support staff member is not available, other support staff can easily work out what is where and how it's set up. These basic ideas also make it far more efficient to produce quotes and proposals, and, I've found, new ideas can be more easily worked into proposals and integrated into networks.

Lately I have been speaking with businesses that aren't currently clients of mine, and I've found some over-complicated and under-documented networks. By applying some of the basic principles I've touched on in this post I'm able to start getting these networks back under control. I've found that the easiest way to do this is the following:
  • determine what the client needs
  • determine what the client wants
  • determine what the client already has
  • determine what the client actually can have
  • document it all and discuss at length and in as non-technical a manner as possible
These are the fundamentals of building client networks and they are also the fundamentals of recovering a client network from a state of disrepair. The major difference is that the former causes a lot less pain than the latter.

Questions?

AB out.

Saturday 14 January 2012

Understanding a network

Recently I've been spending time with several prospective clients and I've found a few quite horrible things. The common, awful thread is a complete lack of disclosure by the incumbent IT support consultants. In one instance, the client isn't even allowed administrator access to their own systems! They can't add or remove users, or perform any basic administrative functions. They are being kept in the dark and spoonfed bullshit by the IT guys. So when they get a hugely expensive proposal to upgrade their systems, they fall for it the first time, maybe even the second, and finally they call someone else in to look at it.

What I've found is awful - barely ethical behaviour by the IT consultants, systems with non-genuine software and lies to the client. Networks that are probably capable of so much more are being poorly managed, even by basic standards. For example, several of them have multiple sites with poor data delivery - but rather than looking at the bandwidth as an issue, the IT guy is telling them the servers are under-performing, even though an analysis of the systems shows plenty of overhead available in disk, CPU and memory capacity. The bandwidth is the problem, but again, rather than working on that and fixing some poorly configured routers, there are inaccurate reports of server issues - for example, "the server is running out of RAM, that's why it goes slow..." - yet checking the RAM shows that there is plenty free and the system isn't swapping at all.

I just find this completely unethical. Why not consider some different options if things aren't working properly? It's been my experience that a client is willing to accept that new ideas come up which give different options for making an office more productive. It's also been my experience that a client won't look to replace an IT consultant unless they are very unhappy and willing to risk potential damage to their systems for the opportunity to get a more reliable setup they can trust - and at the end of the day that's what this is all about: trust.

Without trust, the relationship is over. It's very obvious, but people get lazy, and without checking to make sure they are looking after their clients, sloppy behaviour becomes prevalent. Then it's time for someone else to take over, with the client paying a great deal both in time and in the pain of changeover, plus the loss of valuable site knowledge.


Sunday 8 January 2012

Useful script for unrarring files in multiple directories

A friend of mine recently asked me to help with a problem he had. When he downloaded files from the internet, no doubt legitimate ones, many of them contained nested directories with an RAR file and its associated parts in them. Some of these downloads look like this (for example):

  • Main Folder
    • Sub-Folder 1
    • Sub-Folder 2
    • Sub-Folder n etc

It is really tedious to go through each sub-folder and unrar each archive, so I wrote a simple one-liner for him to run straight from the Linux/*BSD command line:


angus@server: ~# directory=/path/to/directory ; for dir in "$directory"/* ; do cd "$dir" || continue ; unrar e *.rar ; cp *.avi /path/to/end/directory ; done

It seems to work relatively well. An expansion of this as a bash script:

#!/bin/bash
# Script to extract RAR files downloaded in torrents - usually TV series type torrents
# This is the directory your torrents are downloaded to (use an absolute path)
echo "Please input torrent directory: "
read -r input_torrent
echo "$input_torrent"
# This is the directory you want the extracted files to be copied to
echo "Please input directory for extraction: "
read -r output_dir
echo "$output_dir"
# enable for loops over items with spaces in their names
IFS=$'\n'
for dir in $(find "$input_torrent" -type d)
do
        cd "$dir" || continue
        # ls # uncomment this line and comment the two lines below for testing
        unrar e *.part001.rar # or this can be unrar e *.rar
        cp *.avi "$output_dir"
done

Notes about this script:
  • unrar e *.part001.rar
    • I've found that this may need to be altered depending on my friend's torrents. The directory may have the files named in a pattern like the one above (file.partXXX.rar), OR, also commonly found, file.XXX parts with a file.rar that is the key file of the archive
  • The input_torrent and output_dir variables need to be written without backslashes i.e.
    • /path/to/files with a space in the name
    • NOT /path/to/files\ with\ a\ space\ in\ the\ name as you would usually expect in a *nix environment
      • This is because I'm learning bash scripting and making things all neat and tidy is more than I'm capable of doing :-)
  • It's set up to copy the extracted avi file elsewhere
The bit of the script between the "do" and the "done" can be modified to do different things which might be handy for you down the track.

Modify as you require and drop a comment if you have anything to add to the script!

AB out.

Saturday 7 January 2012

rtorrent - the friendly torrent application

I use rtorrent for my legitimate torrent requirements. I find it extremely useful and here is why:

  • I run it on a linux server I have under a screen session so it's always available
  • it's set to have an upload and a download limit for torrents
  • stops after I've uploaded double what I've downloaded
  • reliable
  • easy to drive
Of course, getting it to this point wasn't totally straightforward. I had to set up the .rtorrent.rc file in my home directory to get all this stuff to work properly. It isn't using 100% of the capabilities of rtorrent, merely the ones I find most useful. For example, I don't have it set to check for new torrents in a particular directory - I add them manually for an additional measure of control, and so torrents I've finished seeding aren't accidentally added back in. It does send me an email when a download is finished, retains info about where each torrent is up to, and stops if disk space becomes low (which it occasionally does).

Here is my .rtorrent.rc - the lines starting with # are comments:
#=================================================================
# This is an example resource file for rTorrent. Copy to
# ~/.rtorrent.rc and enable/modify the options as needed. Remember to
# uncomment the options you wish to enable.
# Maximum and minimum number of peers to connect to per torrent.
#min_peers = 40
#max_peers = 100
# Same as above but for seeding completed torrents (-1 = same as downloading)
#min_peers_seed = 10
#max_peers_seed = 50
# Maximum number of simultanious uploads per torrent.
#max_uploads = 15
# Global upload and download rate in KiB. "0" for unlimited.
download_rate = 200
upload_rate = 5
# Default directory to save the downloaded torrents.
directory = /home/angus/torrents
# Default session directory. Make sure you don't run multiple instance
# of rtorrent using the same session directory. Perhaps using a
# relative path?
session = ~/torrents/.session
# Watch a directory for new torrents, and stop those that have been
# deleted.
#schedule = watch_directory,15,15,load_start=/home/angus/torrent/.torrent
#schedule = untied_directory,5,5,stop_untied=
# Close torrents when diskspace is low.
schedule = low_diskspace,5,60,close_low_diskspace=100M
# Stop torrents when reaching upload ratio in percent,
# when also reaching total upload in bytes, or when
# reaching final upload ratio in percent.
# Enable the default ratio group.
ratio.enable=
# Change the limits, the defaults should be sufficient.
ratio.min.set=150
ratio.max.set=200
ratio.upload.set=20M
# Changing the command triggered when the ratio is reached.
system.method.set = group.seeding.ratio.command, d.close=, d.erase=
# The ip address reported to the tracker.
ip = xxx.xxx.xxx.xxx
#ip = rakshasa.no
# The ip address the listening socket and outgoing connections is
# bound to.
#bind = 127.0.0.1
#bind = rakshasa.no
# Port range to use for listening.
port_range = 6900-6999
# Start opening ports at a random position within the port range.
#port_random = no
# Check hash for finished torrents. Might be usefull until the bug is
# fixed that causes lack of diskspace not to be properly reported.
#check_hash = no
# Set whetever the client should try to connect to UDP trackers.
#use_udp_trackers = yes
# Alternative calls to bind and ip that should handle dynamic ip's.
#schedule = ip_tick,0,1800,ip=rakshasa
#schedule = bind_tick,0,1800,bind=rakshasa
# Encryption options, set to none (default) or any combination of the following:
# allow_incoming, try_outgoing, require, require_RC4, enable_retry, prefer_plaintext
#
# The example value allows incoming encrypted connections, starts unencrypted
# outgoing connections but retries with encryption if they fail, preferring
# plaintext to RC4 encryption after the encrypted handshake
#
# encryption = allow_incoming,enable_retry,prefer_plaintext
# Enable peer exchange (for torrents not marked private)
#
# peer_exchange = yes
#
# Do not modify the following parameters unless you know what you're doing.
#
# Hash read-ahead controls how many MB to request the kernel to read
# ahead. If the value is too low the disk may not be fully utilized,
# while if too high the kernel might not be able to keep the read
# pages in memory thus end up trashing.
#hash_read_ahead = 10
# Interval between attempts to check the hash, in milliseconds.
#hash_interval = 100
# Number of attempts to check the hash while using the mincore status,
# before forcing. Overworked systems might need lower values to get a
# decent hash checking rate.
#hash_max_tries = 10
# First and only argument to rtorrent_mail.sh is completed file's name (d.get_name)
system.method.set_key = event.download.finished,notify_me,"execute=~/scripts/rtorrent_mail.sh,$d.get_name="
#===================================================================
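
The rtorrent_mail.sh script referenced in that last setting is nothing fancy. A minimal sketch, assuming a working local mail command and with the address as a placeholder:

#!/bin/sh
# Called by rtorrent when a download finishes; $1 is the completed torrent's name.
echo "rtorrent has finished downloading: $1" | mail -s "Torrent complete: $1" me@mydomain.com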

I hope this is useful for you.

Friday 6 January 2012

Restoring OTRS on an Ubuntu Server

Some time ago I relocated our OTRS server from a failing physical server to a virtual machine under Microsoft Hyper-V. The change to a virtual machine ran smoothly and I used the details in a previous post to set it up, but after a month I noticed some strange errors creeping into the installation - the nightly log emails had inconsistencies in them. Fortunately I was able to run a full backup of the OTRS installation using the built-in backup tool, and very shortly thereafter the server fell in a heap. Rebooting it caused a complete failure of the virtual disk. Now, how the hell something like that happens is beyond me. It was like the virtual disk dropped a head or something... Ridiculous, I know, but the fsck I ran basically told me the disk had failed, and corruption crept into everything on it. Realising that I was fighting a losing battle, I decided to create a new virtual machine and transfer the data back across.

The recovery procedure, described here: http://doc.otrs.org/3.0/en/html/restore.html doesn't really cover everything that needs to happen. Here is a short breakdown of the notes that I made while I was running the recovery process:


  • make sure you set the MySQL (or whatever database you use) password to be the same.
  • in fact, make sure you match up all the passwords where possible.
  • Install OTRS first using the Ubuntu install method - which is well described here: http://wiki.otrs.org/index.php?title=Installation_on_Ubuntu_Lucid_Lynx_(10.4) 
    • make sure you run all the right commands, including the cron ones (which I initially forgot - oops!)
  • Run the restore as per the link above (a sketch of the commands is below) and then restart apache and cron.
  • Test your installation and see how it goes.
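
For reference, the restore run itself ended up looking something like this - the backup path is just an example, and the OTRS home is assumed to be /opt/otrs:

# stop the services that touch OTRS while restoring
sudo /etc/init.d/cron stop
sudo /etc/init.d/apache2 stop
# point restore.pl at the directory created by backup.pl
sudo /opt/otrs/scripts/restore.pl -b /home/user/backup/2012-01-06_12-00 -d /opt/otrs
# bring everything back up
sudo /etc/init.d/apache2 start
sudo /etc/init.d/cron start
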
Since this error, I've written a very simple script that runs the backup and scp's it across to another server I have. This in turn is backed up to my FreeNAS box, hopefully protecting my useful data. Here is the script:

---------------------------------------------------------------------------------------------------
#!/bin/bash
NOW=$(date +"%Y-%m-%d_%H-%M")
/opt/otrs/scripts/backup.pl -d /home/user/backup
scp -r /home/user/backup/$NOW user@server:/home/user/backup/OTRS/
---------------------------------------------------------------------------------------------------

The $NOW variable is configured to match the name of the directory that the OTRS backup.pl script creates, and then I simply scp that directory across to my server. It's organised by date and works pretty nicely. rsync might be a nicer way to do it, but this virtual machine only provides OTRS and nothing else, so I'll keep it simple. 
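
I run the script nightly from cron. The entry looks something like this - the time and the script's name/path are just examples, use whatever suits you:

# run the OTRS backup-and-copy script at 1:30 every morning
30 1 * * * /home/user/scripts/otrs_backup.sh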

If you can use any of this then please do.

AB out.

Thursday 5 January 2012

Service Delivery in bandwidth poor locations

Being in the country presents some interesting challenges, and one I come up against frequently at the moment is, as the title suggests, getting needed services into various remote sites. ADSL is quite widespread, and where it's not available various wireless services (NextG and the like) can cover basic connectivity. But when you're attempting to link sites via a VPN, 512/512Kbps is really not enough for modern applications, particularly if you're pushing internet as well as mail and remote desktop connections over that one link. Even an ADSL2+ link with speeds of up to 24Mbps down and 1Mbps up is not really adequate for the task at hand.

So how to get around this? I'm thinking along the lines of a division of services: decentralising where possible and using cloud technologies to take the burden off the VPN links - that is, push email out to the cloud, along with whatever other services can live on the internet, thereby reducing the outgoing bandwidth requirements at the central site. Hopefully this will free up more bandwidth for RDP and the like. Additional ADSL services can further reduce the burden if I push HTTP traffic out that way (using a Netgear dual-WAN router or the like).

I have recently had to put this theory into practice and it seems to be working out, but it's not entirely solving the problems. Perhaps a packet-caching device at either end, such as the ones produced by Riverbed, might be the answer. It's a difficult question and it gets worse when people want to put voice over the link as well - although you can use clever calling plans to get cheap inter-office calls more easily than implementing a whole other VPN simply for voice. And at the end of the day, let's not forget that ADSL is provided on a "best effort" basis and no provider in the country guarantees bandwidth availability.

Tricky tricky tricky....

Wednesday 4 January 2012

Skyrim issues

I really like playing the Elder Scrolls games - I've played and completed Morrowind and Oblivion, and now I'm working through Skyrim. The issue I've got is frequent freezes. I play it on the PlayStation 3 for a very specific reason - I don't have to worry about compatible hardware or any of that jazz, I just want to play the damned game. So when a game configured for very specific hardware crashes like this, it's extremely irritating. I've got both the PS3 and the game patched to the latest updates, so that's all current and I'm not missing any potential fixes.

Generally I find the gameplay very good, and I enjoy the skill system and the levelling. I try to avoid using online walkthroughs or FAQs - that's cheating! This means I occasionally screw things up and have to go back to a recent save (of which I have a lot because of the aforementioned crashes), which costs me time. In the 45 minutes I've played today it has crashed twice. I turn off the PS3, turn it back on, go through the disk recovery and then I can eventually get the game started again.

I hope they can get things sorted with it. It will be a much better game once its stability issues are improved.

AB out.

Tuesday 3 January 2012

Migrating to Blogger

Previously I had been using Google Sites to host www.ryv.id.au. Sites is great, don't get me wrong; however, the main purpose of my webpage is to host this blog and I don't think Sites does that well. For example, it doesn't list the entries in date order, but rather in alphabetical order down the left-hand side. While this is OK for a webpage, it makes a blog-oriented site difficult to navigate. My other webpage - www.zenpiper.com - has a similar issue, only I also have other content on there that is not so easily migrated to Blogger.

It's horses for courses naturally. I've used Blogger previously and been reasonably happy with it. I'll stick with it for now and review what's happening with Google Sites as I go. Naturally, as a Google Reseller, I'm trying to keep up with it to the best of my ability to offer it to my valued clients.

AB out.

Adventures with OpenBSD - OpenBSD 5.0 on Sun Blade 1500

The scenario:

Installation of OpenBSD 5.0 on a Sun Blade 1500. I've replaced the default XVR-600 (a piece of proprietary junk video card) with a Sun PGX-64 PCI graphics card that uses the mach64 chipset for rendering. Instantly I had a much nicer console and a far more workable X configuration. The only trick was getting the bloody thing to use 1280x1024 at 24-bit colour on my 19" Dell monitor. Here are the notes from the exercise:

Default installation
man afterboot

Dell E198FP Sync rates:
  • 30 kHz to 81 kHz (automatic)
  • 56 Hz to 76 Hz
Make sure to copy the above into the /etc/X11/xorg.conf file and also add:
Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
        DefaultDepth    24
                SubSection "Display"
                Viewport   0 0
                Depth     24
                Modes   "1280x1024"
        EndSubSection
EndSection
- to force it to use 1280x1024
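
The sync rates above belong in the Monitor section of the same file; mine ended up looking roughly like this (the identifier just has to match what the Screen section references):

Section "Monitor"
        Identifier  "Monitor0"
        HorizSync   30.0 - 81.0
        VertRefresh 56.0 - 76.0
EndSection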

Add to .profile: PKG_PATH=http://mirror.aarnet.edu.au/pub/OpenBSD/5.0/packages/`machine -a`/

Installing Fluxbox (to play with more than anything):

pkg_add -i -vv fluxbox feh

Make sure to add exec /usr/local/bin/startfluxbox to .xinitrc by doing:

$ echo "exec /usr/local/bin/startfluxbox" > .xinitrc

Also do this to .xsession so startx grabs it straight away:

$ echo "exec /usr/local/bin/startfluxbox" > .xsession

pkg_add -i -vv midori -> lightweight browser, and tends to install a billion dependencies (mostly media playing type stuff which isn't bad)
pkg_add -i -vv firefox36
pkg_add -i -vv mousepad (lightweight text editor)
pkg_add -i -vv filezilla (FTP and stuff)
pkg_add -i -vv goffice (some kind of office thing - need to examine it more closely)
pkg_add -i -vv ristretto (basic image editing and viewing)
pkg_add -i -vv epdfview (PDF viewing)
pkg_add -i -vv conky (for checking out the system loads)
pkg_add -i -vv eterm (my favourite terminal program)

Note: fluxbox menus need a lot of work - I've deleted/commented out a *lot* of stuff to clean this all up.

pkg_add -u (check for any updates or errata)

Also look at this : http://www.gabsoftware.com/tips/tutorial-install-gnome-desktop-and-gnome-display-manager-on-openbsd-4-8/ for using Gnome and GDM

Further adventures with OpenBSD - XFCE vs Gnome

So, continuing the great adventure - recently whenever I've used Gnome there is a string of "Starting file access" or similar messages that appear in multiple tabs down the bottom. This continues endlessly and the load on my Blade 1500 gets up to about 5, which is unacceptable. So I hit the net and looked into using something different. I found a great blog post (which I neglected to bookmark or make any other notes about) that explained a bit about how to do it. Basically I did this:

# pkg_add -i -vv pkg_mgr

which is an easy way to search for and install large numbers of packages; then go to X11 and pick all the XFCE packages. How easy is that? Download and install and off you go. The load on my machine is:

angus@blade:~$ w
11:43AM  up 13 days, 21:04, 3 users, load averages: 0.71, 0.63, 0.59

792MB of RAM is in use (of 2048MB), and that's with Firefox running while I write this entry. 

Overall I find XFCE to be more responsive than Gnome - which is hardly surprising - and for the basic features I require it looks quite nice and drives quite well. 

I do tend to find that the machine struggles when I'm looking at various webpages - it doesn't handle processor-intensive work all that well, and after all, why should it? This computer is old and only has a 1GHz processor, so it will be slow. As a basic server-type machine - running with the encrypted file systems and the like, with SSH access in - it's working quite well.

Configuring an Ubuntu server under Microsoft Hyper-V

It's fairly straightforward to make this happen. Do a basic config of the system and then:

$ sudo vi /etc/initramfs-tools/modules
    and add the lines below:
hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Save the file, then: 

$ sudo update-initramfs -u

$ sudo reboot

$ sudo ifconfig -a

$ sudo vi /etc/network/interfaces
    Add the lines below for DHCP:
auto eth0
iface eth0 inet dhcp

    Add the lines below for a static IP:
auto eth0
iface eth0 inet static
address 10.0.0.100 [IP address]
netmask 255.255.255.0 [Subnet]
gateway 10.0.0.1 [Default Gateway]

Now restart networking service & reboot:

$ sudo /etc/init.d/networking restart
$ sudo reboot

And you will be good to go!
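
To confirm the Hyper-V drivers actually loaded after the reboot, a quick sanity check is:

# the hv_* modules should all be listed if the initramfs picked them up
$ lsmod | grep hv_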

Further adventures with OpenBSD - Encrypting Files systems

So I decided to create an encrypted container on my workstation to use as storage for work-related files (which typically have passwords and the like in them). After some trial and error I found the way to do it. Blog entries and the like that reference this material mention using the svnd0 vnode device for the encryption, but that doesn't work here. I'm not sure if this is an OpenBSD 5.0 peculiarity or something to do with my SPARC install, but I eventually sorted it out.

Note: do all commands as the root user - it's a lot easier.

I created the sparse file to be encrypted:
    # dd if=/dev/zero of=/location/of/secret/file/.cryptfile bs=1024 count=1024000

Note that it's 1GB in size and has a preceding "." so it's at least a little bit hidden from a casual ls.

I have to mount .cryptfile somewhere so I created a folder for that too:

    # mkdir /media/crypt (or wherever you'd like to put it)

I have to check what vnodes are available:

    # vnconfig -l
vnd0: not in use
vnd1: not in use
vnd2: not in use
vnd3: not in use

I can choose any of these to associate with my virtual encrypted device. I will use vnd0. Using vnconfig again:

    # sudo vnconfig -ck -v vnd0 .cryptfile
Encryption key: (use something good)
vnd0: 1048576000 bytes on .cryptfile

OK so now we need to create a file system on our device (which is only a single partition) so we need to newfs the "c" slice as this is the whole disk:

    #  sudo newfs /dev/vnd0c
/dev/rvnd0c: 1000.0MB in 2048000 sectors of 512 bytes
5 cylinder groups of 202.47MB, 12958 blocks, 25984 inodes each
super-block backups (for fsck -b #) at:
 32, 414688, 829344, 1244000, 1658656,

So now to mount our encrypted filesystem to store our secret files!

    # mount /dev/vnd0c /media/crypt

Probably a good idea to make it usable for me:

    # chown -R angus:wheel /media/crypt

And we're off and racing:

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/wd0a     1005M   42.2M    913M     4%    /
/dev/wd0k     42.8G    1.0G   39.7G     2%    /home
/dev/wd0d      3.9G    224K    3.7G     0%    /tmp
/dev/wd0f      2.0G    450M    1.4G    24%    /usr
/dev/wd0g     1005M    135M    820M    14%    /usr/X11R6
/dev/wd0h      8.6G    1.9G    6.3G    23%    /usr/local
/dev/wd0j      2.0G    2.0K    1.9G     0%    /usr/obj
/dev/wd0i      2.0G    2.0K    1.9G     0%    /usr/src
/dev/wd0e      7.9G   42.7M    7.4G     1%    /var
/dev/vnd0c     984M    2.0K    935M     0%    /media/crypt
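
When I'm finished with the files, tearing it down is just the reverse - unmount the filesystem and then detach the vnode so the key is no longer held:

# unmount, then detach the vnode device
# umount /media/crypt
# vnconfig -u vnd0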

I'll be re-creating this whole thing again soon so watch out for any updates or errata.

Check out: 
http://www.backwatcher.org/writing/howtos/obsd-encrypted-filesystem.html for some handy mounting/unmounting scripts.

Playing with Proxmox

 Up until recently I've used Hyper-V for most of my virtualisation needs. Hyper-V is a fully integrated Type 1 hypervisor and comes with...