
Posts

Showing posts from January, 2012

The Fundamentals of building Client Networks

Recently I've been thinking a lot about the best way to help my clients understand and engage with their IT networks and systems, and about how best to manage and look after these systems for them in a sustainable way. To do this I've been looking at the fundamental building blocks of my client base and considering the commonalities. Understanding these commonalities lets me put in place simple guidelines for developing and maintaining a network. Each network will of course have its own unique circumstances, but if the fundamental infrastructure is well understood, those unique aspects will be easier to manage. So, thinking of all of these things, I've looked at the commonalities in my clients and found they can be grouped into several broad categories: sites with a single server, a single location and a small (fewer than 30) number of users. They may have some mobility but generally only a sma…

Understanding a network

Recently I've been spending time with several prospective clients and I've found a few quite horrible things. The common, awful things stem from a complete lack of disclosure by the incumbent IT support consultants. In one instance, one of the clients isn't even allowed administrator access to their own systems! They can't add or remove users, or perform any basic administrative functions. They are being kept in the dark and spoonfed bullshit by the IT guys. So when they get a hugely expensive proposal to upgrade their systems, they fall for it the first, maybe even the second time, until finally they call someone else in to look at it. What I've found is awful - barely ethical behaviour by the IT consultants, systems with non-genuine software, and lies to the client. Networks that are probably capable of so much more are being poorly managed - even by basic standards. For example, several of them have multiple sites with poor data delivery - but rather than look at…

Useful script to unrar files in multiple directories

A friend of mine recently asked me to help with a problem he had. When he downloaded files from the internet, no doubt legitimate, many of them contained nested directories with a rar file and its associated components in them. Some of these downloads look like this (for example):

Main Folder
    Sub-Folder 1
    Sub-Folder 2
    Sub-Folder n
    etc

It's really tedious to go through each sub-folder and unrar each archive, so I wrote a simple script for him to run straight from the linux/*BSD command line:

angus@server:~# directory=/path/to/directory ; cd "$directory" ; for dir in * ; do cd "$dir" ; unrar e *.rar ; cp *.avi /path/to/end/directory ; cd .. ; done

It seems to work relatively well. An expansion of this as a bash script:

#!/bin/bash
# Script to extract RAR files downloaded in torrents - usually TV series type torrents
# This is the directory your torrents are downloaded to
echo "Please input torrent directory: "
read -r input_torrent…
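The excerpt cuts off there, so here's a rough sketch of how the rest of such a script might go - the out_dir variable and its path are placeholders of mine, not part of the original:

    #!/bin/bash
    # Sketch: extract every *.rar one level below a torrent directory
    # and collect the resulting .avi files in one place.

    echo "Please input torrent directory: "
    read -r input_torrent

    out_dir="$HOME/videos"    # placeholder destination
    mkdir -p "$out_dir"

    for dir in "$input_torrent"/*/ ; do
        [ -d "$dir" ] || continue
        (
            cd "$dir" || exit 1
            for rar in *.rar ; do
                # e = extract here, -o- = don't overwrite existing files
                [ -e "$rar" ] && unrar e -o- "$rar"
            done
            # copy the extracted videos out to the collection directory
            cp *.avi "$out_dir" 2>/dev/null
        )
    done

The subshell around the cd means each iteration starts back at the same working directory, which avoids the cd .. bookkeeping in the one-liner.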

rtorrent - the friendly torrent application

I use rtorrent for my legitimate torrent requirements. I find it extremely useful and here is why:

- I run it on a linux server I have, under a screen session, so it's always available
- it's set to have an upload and a download limit for torrents
- it stops after I've uploaded double what I've downloaded
- reliable
- easy to drive

Of course, getting it to this point wasn't totally straightforward. I had to set up my .rtorrent.rc file in my home directory to get all this stuff to work properly. It isn't using 100% of the capabilities of rtorrent, merely the ones I find most useful. For example, I don't have it set to check for new torrents in a particular directory - I add them manually for an additional measure of control, and so torrents I'm finished seeding aren't accidentally added back in. It does send me an email when a download is finished, retains info about where each torrent is up to, and stops if diskspace becomes low (which it occasionally does)…
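A minimal sketch of that kind of .rtorrent.rc, assuming the pre-0.9 configuration syntax that was current at the time - the rates, paths and mail script below are placeholders rather than my actual values:

    # ~/.rtorrent.rc
    # upload and download limits, in KiB/s
    upload_rate = 50
    download_rate = 500

    # remember where each torrent is up to between restarts
    session = /home/angus/.rtorrent-session

    # stop a torrent once I've uploaded double what I've downloaded (200%)
    schedule = ratio,60,60,stop_on_ratio=200

    # stop downloads if free disk space drops below 200M
    schedule = low_diskspace,5,60,close_low_diskspace=200M

    # run a (placeholder) script to email me when a download finishes
    system.method.set_key = event.download.finished,notify,"execute=/home/angus/bin/mail_done.sh,$d.get_name="

Directive names shifted around in later rtorrent releases, so check the man page for your version before copying any of this.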

Restoring OTRS on an Ubuntu Server

Some time ago I relocated our OTRS server from a failing server to a virtual machine under Microsoft Hyper-V. While the change to a virtual machine ran smoothly and I used the details in a previous post to set it up, after a month I noticed some strange errors creeping into the installation - the nightly log emails had inconsistencies in them. Fortunately I was able to run a full backup of the OTRS installation using the built-in backup tool, and very shortly thereafter the server fell in a heap. Rebooting it caused a complete failure of the virtual disk. Now, how the hell something like that happens is beyond me. It was like the virtual disk dropped a head or something... Ridiculous, I know, but the fsck I ran basically told me the disk had failed and corruption had crept into everything on the disk. Realising that I was fighting a bad fight, I decided to create a new virtual machine and transfer the data back across. The recovery procedure, described here: http://doc.otrs.org/3.0/en/…
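For reference, the built-in tool is OTRS's backup.pl script, with restore.pl as its counterpart; a sketch of the two invocations, assuming a standard /opt/otrs install and with the backup path as a placeholder:

    # on the failing machine: dump the database, config and article data
    perl /opt/otrs/scripts/backup.pl -d /backup/otrs/

    # on the fresh VM, after installing the same OTRS version:
    perl /opt/otrs/scripts/restore.pl -b /backup/otrs/<timestamp>/ -d /opt/otrs/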

Service Delivery in bandwidth poor locations

Being in the country presents some interesting challenges, and one I come up against frequently at the moment is, as the title suggests, getting needed services into various remote sites. ADSL is quite widespread, and where it isn't available, various wireless services (NextG and the like) can cover the basic connectivity. But when you're attempting to link sites via a VPN, 512/512kbps is really not enough for modern applications, particularly if you're pushing internet as well as mail and remote desktop connections over that particular link. Even an ADSL2+ link with speeds of up to 24Mbps down and 1Mbps up is not really adequate for the task at hand. So how to get around this? I'm thinking along the lines of a division of services: decentralising where possible and using cloud technologies to take the burden off the VPN links - that is, pushing email out to the cloud, along with whatever other services are available out on the internet, thereby reducing the outgoing band…

Skyrim issues

I really like playing the Elder Scrolls games - I've played and completed Morrowind and Oblivion, and now I'm working through Skyrim. The issue I've got is frequent freezes. I play it on the PlayStation 3 for a very specific reason - I don't have to worry about compatible hardware or any of that jazz, I just want to play the damned game. So when I find that a game configured for very specific hardware crashes like this, it's extremely irritating. I've got both the PS3 and the game patched to the latest updates, so that's all current and I'm not missing any potential fixes. Generally I find the gameplay very good, and I enjoy the skill system and the levelling. I try to avoid using online walkthroughs or FAQs - that's cheating! This means I occasionally screw things up and go back to a recent save (of which I have a lot, because of the aforementioned crashes), and it costs me in time. In the 45 minutes I've played today it has crashed twice…

Migrating to Blogger

Previously I had been using Google Sites to host www.ryv.id.au. Sites is great, don't get me wrong, however the main purpose of my webpage is to host this blog and I don't think Sites does it well. For example, it doesn't list the entries in date order, but rather in alphabetical order on the left hand side. While this is OK for a webpage, it makes a blog-oriented site difficult to navigate. My other webpage - www.zenpiper.com - has a similar issue, only I also have other content on there that isn't so easily migrated to Blogger. It's horses for courses, naturally. I've used Blogger previously and been reasonably happy with it. I'll stick with it for now and review what's happening with Google Sites as I go. Naturally, as a Google Reseller, I'm trying to keep up with it to the best of my ability to offer it to my valued clients. AB out.

Adventures with OpenBSD - OpenBSD 5.0 on Sun Blade 1500

The scenario: installation of OpenBSD 5.0 on a Sun Blade 1500. I've replaced the default XVR-600 piece of proprietary junk video card with a Sun PGX-64 PCI video graphics card that uses the mach64 chipset for rendering things. Instantly I had a much nicer console and a far more workable X configuration. The only trick was getting the bloody thing to use 1280x1024 at 24-bit depth on my 19" Dell monitor. Here are the notes from the exercise:

Default installation; man afterboot

Dell E198FP sync rates: 30 kHz to 81 kHz horizontal (automatic), 56 Hz to 76 Hz vertical

Make sure to copy the above into the /etc/X11/xorg.conf file and also add:

Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
        DefaultDepth    24
        SubSection "Display"
                Viewport   0 0
                Depth     24
                Modes   "1280x1024"
        EndSubSection
EndSection
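Those sync rates belong in the matching Monitor section of xorg.conf; a sketch of what that looks like, assuming the Monitor0 identifier referenced by the Screen section above:

    Section "Monitor"
            Identifier  "Monitor0"
            HorizSync   30.0 - 81.0
            VertRefresh 56.0 - 76.0
    EndSection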

Further adventures with OpenBSD - XFCE vs Gnome

So, continuing the great adventure - recently, whenever I've used Gnome, there is a string of "Starting file access" or something similar that appears in multiple tabs down the bottom. This continues endlessly and the load on my Blade 1500 gets up to about 5, which is unacceptable. So I hit the net and looked into using something different. I found a great blog (which I neglected to bookmark or make any other notes about) that explained a bit about how to do it. Basically I did this:

# pkg_add -i -vv pkg_mgr

which is an easy way to do searches and install large numbers of packages - then go to X11 and pick all the XFCE packages. How easy is that? Download and install and off you go. The load on my machine is:

angus@blade:~$ w
11:43AM  up 13 days, 21:04, 3 users, load averages: 0.71, 0.63, 0.59

with 792MB of RAM in use (of 2048MB), and this is with Firefox running while I write this entry. Overall I find XFCE to be more responsive than Gnome - which is hardly surprising and…
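One step the excerpt doesn't show: X still has to be told to start XFCE. A one-line ~/.xinitrc does it, assuming the packages put the startxfce4 launcher in /usr/local/bin as OpenBSD packages normally do:

    # ~/.xinitrc - start XFCE on startx
    exec /usr/local/bin/startxfce4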

Configuring an Ubuntu server under Microsoft Hyper-V

It's fairly straightforward to make this happen. Do a basic config of the system, then add the Hyper-V drivers to the initramfs:

$ sudo vi /etc/initramfs-tools/modules

and add the below lines:

hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Save the file, then:

$ sudo update-initramfs -u
$ sudo reboot
$ sudo ifconfig -a
$ sudo vi /etc/network/interfaces

Add the below lines for dhcp:

auto eth0
iface eth0 inet dhcp

Or add the below lines for a static IP:

auto eth0
iface eth0 inet static
address 10.0.0.100       # IP address
netmask 255.255.255.0    # Subnet
gateway 10.0.0.1         # Default Gateway

Now restart networking and reboot:

$ sudo /etc/init.d/networking restart
$ sudo reboot

And you will be good to go!
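To confirm the Hyper-V modules actually loaded, a quick check after either reboot (the pattern just matches the module names added above):

    $ lsmod | grep hv

hv_vmbus, hv_storvsc, hv_blkvsc and hv_netvsc should all show up in the output.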

Further adventures with OpenBSD - Encrypting File systems

So I decided to create an encrypted folder on my workstation to use as a storage device for work-related files (which typically have passwords etc located in them). After some trial and error I found the way to do it. Blog entries and the like that reference this material mention using the svnd0 vnode device for the encryption, but it doesn't work. I'm not sure if this is an OpenBSD 5 peculiarity or something to do with my Sparc install, but I eventually sorted it out. Note: do all commands as the root user - it's a lot easier. I created the sparse file to be encrypted:

    # dd if=/dev/zero of=/location/of/secret/file/.cryptfile bs=1024 count=1024000

Note that it's 1GB in size and has a preceding "." so it's at least a little bit hidden from a casual ls. I have to mount .cryptfile somewhere, so I created a folder for that too:

    # mkdir /media/crypt (or wherever you'd like to put it)

I have to check what vnodes are available:

    # vnconfig -…
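The excerpt ends at the vnconfig step; in full, the sequence continues roughly like this - a sketch assuming vnconfig's -c/-k flags and the plain vnd0 device (since svnd0 wouldn't play on this install); check vnconfig(8) on your release, as the encrypting vnode support changed over time:

    # list the vnode devices and find a free one
    vnconfig -l

    # associate the file with vnd0, prompting for an encryption key (-k)
    vnconfig -ck vnd0 /location/of/secret/file/.cryptfile

    # build a filesystem on it and mount it
    newfs /dev/rvnd0c
    mount /dev/vnd0c /media/crypt

    # and when finished, tear it down again
    umount /media/crypt
    vnconfig -u vnd0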

FreeNAS Upgrade from i386 to x64

To get reporting working properly, do the following. SSH to the box (or use the console):

[root@freenas] ~# service collectd stop
Stopping collectd.
Waiting for PIDS: 4002.
[root@freenas] ~# find /data -name "*.rrd" -exec rm -rf {} \;
[root@freenas] ~# find /var/db/collectd -name "*.rrd" -exec rm -rf {} \;
[root@freenas] ~# service collectd start
Starting collectd.

... and reporting will be fixed. The FreeNAS version is FreeNAS-8.0.2-RELEASE-amd64 (8288).

*BSD vs Linux for Home Server

I have a few simple needs for my home server - it needs to be stable, functional on older hardware (P4 2GHz with 1 or 2 GB of RAM) and run a few simple applications:

- rtorrent (for... ahem... legitimate torrent requirements)
- irssi - the bestest IRC client (and the one I've spent ages getting a nice config file for)
- screen (for teh awesomeness!)
- SSH - for remote work, and for sshfs so I can rsync and backup data remotely
- a bit of storage space - 100GB is nice
- nagios - monitoring work sites as required
- DHCP
- DNS

Currently I'm running Ubuntu 10.04.3 LTS on a P4 3GHz USDT HP that has a noisy fan in it, and I'm going to migrate back to the Dell P4 2GHz box that I was running before. It has a slower processor, but it's quiet and reliable. It's also more power efficient than the current one. I've been considering getting my hands on an Atom powered box or the like with very low power requirements for home. After all, this server really doesn't have to do a lot or…