Thursday, May 11, 2017

A mystery problem, somehow fixed

TL;DR: XFS filesystem corrupted, ran xfs_repair on unmounted system via a live USB to fix it.

A couple of weeks back one of the drives in my RAID 5 array started going bad, with SMART errors appearing. It so happened to be the drive that I had bought to replace another drive that went bad (with the same error) back at the start of 2017. Anyway, the warranty replacement drive went in, I re-added it to the array, and all seemed fine.

Then one Friday night, I went to watch a movie that had been recorded on MythTV previously. Switched on the TV, changed to MythTV, and all of the recorded programs were showing as not found. Uh-oh. Rebooted the frontend; the issue was still there. Time to check the backend. Logged in to it, ran dmesg, and up came a heap of XFS errors. That's not good. Better restart it and see if that fixes things. Nope, now it won't boot. It gets to the recovery console and I can't do much. Checking the SMART status of the drives, they seem fine, but the RAID array is not shown. It also seems to be missing a drive, but not the one I replaced. I was scratching my head, wondering what was wrong, with a niggling thought that I'd have to do a fresh re-install because Mythbuntu had hosed itself. I'd been thinking about a rebuild for a while - maybe setting it up in a virtual machine, changing over to ZFS - but I wanted to do that in my own time, rather than having my hand forced. I went to bed to think about it.

The next day, I thought I'd try booting off a live USB of Mythbuntu. Fortunately, that worked, and I could see the array. It was missing one of the drives - so I added it back in. Many hours of rebuilding later, it was back up. While it was rebuilding, I looked into repairing XFS filesystems.

Once the array was up, I unmounted the volume that contained the recordings, and ran xfs_repair /dev/raid1/tv (the commands are below). It found quite a few problems and rectified them. With fingers crossed, I tried rebooting back into the original system...success! To my relief and amazement, the desktop appeared. Upon checking the list of recordings, quite a few were still missing, mostly from the last few days. I hadn't checked up on it for a little while, so whatever the issue was, it had been screwing up the recordings. Deleting them from within MythTV, to clean up the database, fixed that. A number of older recordings were also showing as missing - probably just a byproduct of what was happening. No big deal, they are just TV shows after all.
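For reference, the repair boiled down to a couple of commands - assuming the recordings volume is mounted at /var/lib/mythtv, as in the older posts below (the exact mount point under the live USB may differ):

sudo umount /var/lib/mythtv
sudo xfs_repair /dev/raid1/tv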

The system has been running OK for the last few days, so hopefully the issue is resolved. I'm still at a loss as to what actually went wrong though. Might have to dig through some log files.

Thursday, June 18, 2015

Converting RAID1 array to RAID5 using the mdadm grow command

I have finally decided to upgrade the storage in the home theatre PC by adding a third 3TB hard drive. The storage was previously set up as RAID 1, using software mdadm across the two 3TB disks. By adding a third drive and changing to RAID 5, the storage would increase from 3TB (about 2.7TiB) to 6TB (about 5.4TiB).

There are a few ways to do this:
1. Copy the data to a spare drive, delete the RAID 1 array, create the RAID 5 array with the three disks, copy the data back to it.

2. Back up the data to a spare drive, remove one disk from the RAID 1 array, use that disk and the new disk to make a 2 disk RAID 5 array, copy data over, remove the RAID 1 array and add that disk to the RAID 5 array so it is 3 disks.

3. Back up the data to a spare drive, use the mdadm --grow command to change the RAID level from RAID 1 to RAID 5, then add the third drive to the RAID 5 array and let it rebuild.

Initially I was going to try option 2, as described here. But after noticing one of the comments (by the wonderfully named Svenne Krap) explaining that you can just change the level of the array using the mdadm grow command, I thought it warranted further investigation. I couldn't find many other mentions of it elsewhere on the internet, so I thought I'd document what I'd done so it might help someone else.

So to try it out, I set up a virtual machine on another PC, and created three separate drives. I used mdadm to create a RAID 1 array with two of them, and then converted that array to RAID 5. It worked! I then added the third drive, and after a little while it rebuilt into a full three disk array.
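Roughly, the sequence in the VM looked like this - a sketch rather than the exact commands, with /dev/vdb, /dev/vdc and /dev/vdd standing in for the three virtual drives:

# create a two-disk RAID 1 array
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
# convert it to RAID 5 (still only two disks)
sudo mdadm --grow /dev/md0 --level=5
# add the third drive and reshape to a full three-disk RAID 5
sudo mdadm /dev/md0 --add /dev/vdd
sudo mdadm --grow /dev/md0 --raid-devices=3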

So the time came to do it for real. I first wanted to back up the array - and found an old 2TB drive lying around. Fortunately, the array was only just over two thirds full, so it all managed to squeeze on to the 2TB drive after I deleted a few old TV shows. That's still nearly 600 TV shows and movies left - I have no idea when I'll get around to watching them all, but it's nice to have.

So the data was safe. I could do this switchover with a little less stress.

One thing I noticed with the options was chunk size. Different sizes give different performance. Since this box would be primarily a media box, reading and writing large video files, I went for a larger chunk size of 512KB. When creating the array in the virtual machine initially, it ended up with only 4KB chunks, as the size of the array was not a multiple of 512KB. So first up, I resized the RAID 1 array so that it was. Basically, I divided the size reported by the cat /proc/mdstat command by 512. That gave a number ending in .625, meaning the array didn't hold a whole number of 512KB chunks. Multiplying that answer (minus the .625) by 512 gave a size that would work.
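Working backwards from the --size value used below, the size reported in /proc/mdstat was presumably 2930266432 (in KB blocks), so the sums went something like:

2930266432 / 512 = 5723176.625
5723176 x 512 = 2930266112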

So I resized the array with the following command:

sudo mdadm -G /dev/md0 --size=2930266112

I then let mdadm know about the new drive, by adding it. The new drive is /dev/sde:

sudo mdadm /dev/md0 --add /dev/sde

Now there was a third drive added as a spare to the array, but not actually in use by it yet.

Next, the big one, change the array from RAID 1 to RAID 5 (still only 2 drives):

sudo mdadm /dev/md0 --grow --level=5 --chunk=512

The array is now a RAID 5 array, with two disks. The chunk size of 512KB was also set with that command. Time to let it use the third drive to create the full, three disk RAID 5 array:

sudo mdadm --grow /dev/md0 --raid-devices 3

This kicks off the rebuilding process, which takes many hours. You can track the progress with the cat /proc/mdstat command. It ran pretty slowly on my system, at around 30,000KB/sec, until it got to the empty part of the array, at which point the speed nearly tripled.
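If you don't want to keep re-running that command, something like the following (just a convenience, not part of the process) refreshes it every minute:

watch -n 60 cat /proc/mdstat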

Later that day, the process was finished. Running sudo mdadm --detail /dev/md0 gave the new array size of about 5.4TiB - something I was a little concerned about while the rebuild was underway, because it was only showing the old size of less than 3TB. I thought I might have to resize the array afterwards, but it all came out good.

Resizing LVM

Because I was running LVM on top of the RAID array, the size of the LVM volume was still unchanged - I had to increase that to let it use the extra space that was now available. Going by the size reported by the mdadm --detail command run previously, there was approximately 5588GB in the array. LVM works with three 'layers': physical volumes (basically the hard drive or RAID array), volume groups (made up of one or more physical volumes), and logical volumes (carved out of a volume group). See here for more information. The first step is to resize the physical volume to match the size of the array:

sudo pvresize --setphysicalvolumesize 5588G /dev/md0

Next, I could extend the logical volumes to use up more of that space. I have two main logical volumes in this volume group: one called 'tv' that holds TV shows, movies, and music from MythTV, and another called 'sysbackup' that holds backups of data. I wanted to enlarge both of them - 'tv', and also 'sysbackup', because I wanted to use the latter with CrashPlan as a backup destination for some other PCs in the house.

I wanted to increase the 'sysbackup' volume to 1.2TB, so I used the following command:

sudo lvextend -L1.2T /dev/raid1/sysbackup

A couple of things to note here: the '-L1.2T' instructs it to make the total size of the volume 1.2 terabytes. Also, the name of the volume group is still /dev/raid1 - I haven't changed the name of that, even though it is now RAID 5. It can be done with the vgrename command (a rough example is below), but it would also mean changing the device paths in the /etc/fstab file.
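For reference, the rename itself would be roughly this (not something I actually ran), followed by updating any /dev/raid1/... paths in /etc/fstab to the new name:

sudo vgrename raid1 raid5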

Next was the resizing of the 'tv' volume. I wanted to increase it by 1.6TB, so I used the following command:

sudo lvextend -L+1.6T /dev/raid1/tv

Notice the '-L+1.6T' - the plus sign tells it to expand the volume by the specified amount, rather than to that total size. Unfortunately, it reported that there wasn't 1.6TB free, and listed the number of extents that were still available. So I tried a different approach, specifying the size in extents rather than TB:

sudo lvextend -l+418800 /dev/raid1/tv

The lower-case 'l' instructs it to use extents rather than GB, TB or whatever. This worked, and I now had two newly-resized logical volumes.
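As a rough sanity check, assuming the default 4MiB extent size (which is what vgdisplay reports in the older posts below): 418800 extents x 4MiB per extent = 1675200MiB, which is just under 1.6TiB.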

Resizing the file systems

But there was one final step - the logical volumes were now bigger, but the file systems did not know about it yet. Fortunately it isn't too difficult.

The sysbackup volume was formatted with the ext4 file system. To resize that, it first had to be unmounted:

sudo umount /mnt/sysbackup

Then a check of the file system was done:

sudo e2fsck -f /dev/raid1/sysbackup

Then finally, the file system was resized to fill the available space:

sudo resize2fs /dev/raid1/sysbackup

The file system can then be re-mounted. Since it is specified in the /etc/fstab file, running

sudo mount -a

did the trick. Next was the 'tv' volume. This volume was formatted with the XFS file system, and it can be resized while still mounted. The command for that was:

sudo xfs_growfs /dev/raid1/tv

Conclusion

The advantage of using the mdadm --grow command is that it is still the same array according to mdadm, just a different level. Running

sudo mdadm --detail --scan

shows that the array still has the same identifiers as when it was a RAID 1 array, so no further work is required. I ran

sudo update-initramfs -u

just in case, so it would pick up any changes in the array (and not give it a name like /dev/md127 after a reboot). I restarted the PC, and everything came back up, with the roomy new array showing up as it should. Done!

Monday, March 16, 2015

HTPC update - separating backend from frontend

After using the Atom-based HTPC for a while, I noticed it start to struggle in high-load situations - like playing back HD TV while recording multiple streams, commercial flagging, trying picture-in-picture, or running disk checks. I decided to upgrade the box to something a bit more powerful, so there's a bit more headroom.


So a while back I picked up an ASRock H77-M motherboard, with a cheap Pentium G2020 CPU. 4GB of RAM was sitting around idle after a previous desktop PC upgrade, so in that went. Two 3TB drives were installed, still in a RAID-1 configuration. The OS was installed on an SSD that was left over. Graphics were handled by an Nvidia GT630 card, made by ASUS. I chose that one because of the passive cooler, and the fact that it is quite power efficient for what it does. A TP-Link wireless card was put in so it could see the network.


For the operating system, I went with Mythbuntu 12.04 again. Installation was pretty straightforward. No real problems; TV listing data was handled by the excellent Shepherd - it just does its own thing, causing very little trouble.


Playback worked beautifully on this machine - it handled everything I could throw at it. However, my curiosity was getting the better of me, so I dug out the old Atom box and reinstalled Mythbuntu, this time running just as a frontend. That made quite a bit of difference - without the overhead of doing all the database tasks, flagging commercials and the like, it ran beautifully.


Since I had recently had some network cabling installed in the house, I tried running the new box as a backend only, up in the study, with just the Atom frontend in the lounge. Without any worries about wifi dropping out, the fast, stable wired network connection supplied data from the backend nicely.


It ran like that for a while, but then I started thinking that there were now three points of failure instead of one. Where previously there was just the Pentium backend/frontend to troubleshoot, there were now also the frontend box and the router that could cause problems. Eventually I put the frontend box back away.


I’m not sure what made me do it, but soon after getting a new TV (LCD! HDMI inputs! No more fooling around with xorg.conf and custom modelines and VGA to Component transcoders!) I wanted to clear out a bit of clutter from the lounge room. Moving the big box meant I could get rid of a whole heap of things from the lounge - the UPS, the transcoder (not needed any more), the aerial connection with splitter and the external TV tuners. All of the bulky component cables are no longer needed, either, replaced by a single HDMI cable.


So up to the study the box went, along with all the tuners and UPS, but they have been tucked away behind the desk, out of sight. The lounge room now just has a power cable to the frontend box, a network cable, an optical cable to the receiver, and the HDMI cable to the TV.


For now, I’m just rebooting the router every few weeks, or whenever there is some sort of issue like PCs not being able to access the internet. A lot of data is now going across it with TV viewing.


The graphics card could be removed from the backend box, as it is now running headless - I only connect to the machine via ssh. That has saved a bit of power. Another change made to the backend box has been to add a second SSD, to run the OS off a RAID-1 array. This way, a drive failure will not stop the machine; it will run in a degraded state until a new drive can be installed.


I did the same with the frontend box - a couple of old laptop drives were pressed into service for that. Both are pretty elderly, but since they are in RAID-1, unless they both die at the same time, it should be safe. And really, it’s only a frontend, it wouldn’t be too much of a hassle to rebuild it from scratch if needed. I’m considering replacing those drives with a couple of USB sticks, to further reduce power consumption.


For the backend box, disk usage is sitting at around 65%. I’ve been copying some of my DVD library to the server, just to avoid dealing with physical discs. I don’t actually have a DVD drive in the frontend, anyway. If space gets tight, I might need to expand storage. I’m toying with getting a third 3TB drive, and converting the system to either RAID-5, or going a bit radical and putting ZFS on Linux back on it. My only concern with that is that it can have issues with fragmentation, and my PC doesn’t use ECC RAM. A lot of memory is also used for caching purposes with ZFS.

One thing I am wondering about with my current XFS file system is fragmentation. I currently have a cron script to defragment it nearly every morning (a rough sketch of that sort of job is below). I'm not sure how that would go with RAID-5, and how it would deal with fragmentation. Might just have to try it and see, when I get around to it. Might just go with ext4 for the file system. Outright performance isn't so much of an issue for this machine - video recording and playback aren't pushing it too much.
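I haven't included the actual script here, but a minimal version of that sort of job - assuming the recordings live at /var/lib/mythtv and using the xfs_fsr reorganiser from xfsprogs - could just be a cron entry along these lines:

# example /etc/cron.d/xfs-defrag entry (the name and schedule are just placeholders)
# run the XFS file system reorganiser for up to an hour, starting at 5:30 each morning
30 5 * * * root /usr/sbin/xfs_fsr -t 3600 /var/lib/mythtv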

Saturday, October 26, 2013

Laptop keyboard stopped working after upgrade to Lubuntu 13.10...fixed

I have a fairly old LG laptop (circa 2007) that I have been running Lubuntu on. It is a lighter-weight version of Ubuntu that uses the LXDE desktop rather than GNOME. Performance is reasonable with it. Anyway, since this is sort of the guinea pig of all my systems, I thought I'd upgrade it to the latest version of Lubuntu (my other systems are still on the 12.04 LTS releases).

The upgrade went pretty smoothly, it has to be said. It was only when I got to the login screen and tried to type my password that things went wrong - I couldn't type anything. None of the keys worked, not even Control or F1, so I couldn't switch to a text console. The touchpad and left/right click buttons were still working, and the keyboard still worked in the Grub boot menu.

A bit of head scratching, and plugging in a separate USB keyboard, allowed me to type again. But that isn't quite an optimal solution. A bit of Googling led me to the following bug report from someone else with an LG laptop. Going by the eighth comment in that, it turns out a couple of options have to be added to the kernel boot parameters, by editing the /etc/default/grub file. In my case, the file had the following line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

I added two options to the end, so it became:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i8042.nopnp=1 i8042.dumbkbd=1"

Saved the file, ran sudo update-grub to apply the changes, and rebooted. I am now typing this blog post on the working laptop keyboard!

Monday, November 26, 2012

Adding a larger drive to a RAID array - ZFS

If you have waded through my previous post about adding a larger drive to an lvm volume running on an mdadm RAID array, you'll have seen it's a bit of a fiddly, complex process. I have been fiddling around a bit with ZFS on Linux on my old desktop machine (Core 2 Quad Q9300, 8GB RAM). I had a couple of old 1TB drives, and added the 2TB drive that I removed from the home theatre PC, mentioned in the previous post. Actually, I had put the 3TB drive in this old box initially, intending it to replace the HTPC. But it's a bit power hungry, using around 90W at idle, and I was having some niggling playback issues with it. But that's another story. So the 3TB WD Red drive got replaced with the old Samsung 2TB.

With the three drives, I tried creating a ZFS raidz array, basically equivalent to RAID 5. It runs pretty well. Swapping the drives over (from 3TB to 2TB) gave me an opportunity to do the ZFS version of a drive swap.
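For reference, creating a three-drive raidz pool looks roughly like the following - the drive names here are just placeholders, and 'tank' is whatever you want to call the pool:

sudo zpool create tank raidz /dev/disk/by-id/drive1 /dev/disk/by-id/drive2 /dev/disk/by-id/drive3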

It was a bit of an anticlimax, really. There was just one command; the syntax from the man page is below:

zpool replace [-f] pool device [new_device]

In my case it was something like

sudo zpool replace tank /dev/disk/by-id/scsi-SATA...(the drive removed) /dev/disk/by-id/scsi-SATA....(the 2TB drive)

That started off the resilvering process (resynchronising the data across the drives). If I were then to replace each of the other drives, one at a time, with the above process, the pool would grow to match the new drive sizes (assuming the pool's autoexpand property is turned on).

The beauty of ZFS is it simplifies things so much. It is the RAID management and file system all in one.

I would have liked to run it in the HTPC, but ZFS is quite resource-hungry, in terms of processing power and memory. Since my HTPC just has a little Atom 330 chip and is maxed out with 4GB of memory (of which only 3GB is visible), I would be staying with mdadm and lvm. There is virtually no noticeable performance penalty with it. Oh well, going through all those steps is not exactly a frequent task.

Adding a larger drive to a software RAID array - mdadm and lvm

The MythTV box I have in the lounge previously had two storage hard drives, in a RAID 1 configuration to prevent data loss in case of a drive failure. The drives were a 3TB Hitachi and a 2TB Samsung. I figured the Samsung drive was getting on a bit now, and it was time a new drive was installed. Might as well make it a 3TB model as well, to take advantage of all the space available on the other one that was sitting unused.

A 3TB Western Digital Red drive was picked up. I chose this as it is designed for use in a NAS environment: always on. It also has low power consumption and a good warranty. I considered a Seagate Barracuda 3TB - they were cheap, performance would be better than the Red, but they are only designed for desktop use, around 8 hours a day powered on. Warranty was pretty short as well.

Removing and replacing the old drive

The drives were configured in a software RAID 1 array, using mdadm, with lvm on top of that. This makes the array portable, and not dependent on a particular hardware controller.

The commands here were adapted from the excellent instructions found here at howtoforge.com.

Fortunately I had enough space on another PC that I was able to back up the contents of the array before starting any of this.

To remove the old drive, which on this machine was /dev/sdc, the following command was issued to mark the drive as failed, in the array /dev/md0:

sudo mdadm --manage /dev/md0 --fail /dev/sdc

Next step is to remove the drive from the array:

sudo mdadm --manage /dev/md0 --remove /dev/sdc

Then, the system could be shut down and the drive removed and replaced with the new one. After powering the system back up, the following command adds the new drive to the array:

sudo mdadm --manage /dev/md0 --add /dev/sdc

The array will then start synchronising the data, copying it to the new drive, which could take a few hours. Note that no partitioning was done on the disk, as I am just using the whole drive in the array.

While the sync is in progress, you can check how it is progressing via:

cat /proc/mdstat

It will show a percentage of completion as well as an estimated time remaining. Once it is done, the array is ready for use! I left the array like this for a day or so, just to make sure everything was working alright.

Expanding the array to fill the space available - the mdadm part

Once the synchronisation had completed, the size of the array was still only 2TB, since that is as large as a RAID 1 array can be when it consists of a 3TB and a 2TB drive. We need to tell mdadm to expand the array to fill the available space. More information on this can be found here.

This is where things got complicated for me. It is to do with the superblock format version used in the array. More detail can be found at this page of the Linux RAID wiki.

To sum up, the array I had was created with the version 0.90 superblock. The version was found by entering

sudo mdadm --detail /dev/md0

The problem, potentially, was that if I grew the array to larger than 2TB it might not work. To quote the wiki link above:

The version-0.90 superblock limits the number of component devices within an array to 28, and limits each component device to a maximum size of 2TB on kernel version [earlier than] 3.1 and 4TB on kernel version 3.1 [or later].

Now, Mythbuntu 12.04 runs the 3.2 kernel, so according to that it should be OK supporting up to 4TB. But I wasn't 100% sure on that, and couldn't find any references elsewhere about it. I decided the safest way to go about this was to convert the array to a later version of the superblock, that didn't have that size limitation. Besides, it would save time in the future in case of trying to repeat this with a drive larger than 4TB.

Following the suggestion of the wiki, I decided to update to a version 1.0 superblock, as it stores the superblock information in the same place (at the end of the device) as version 0.90 does.

Note: if you are trying this yourself, and the array is already version 1.0 or later, then the command to grow it is simply the one below (you may not want to do it with a 0.90 superblock and a size larger than 2TB):

mdadm --grow /dev/md0 --size=max 

Since I was going to change the superblock version, it involved stopping the array and recreating it with the later version.

Once again, to check the details of the array at the moment:

sudo mdadm --detail /dev/md0

Now, since the array is in use by MythTV, I thought it safest to stop the program:

sudo service mythtv-backend stop

Also, I unmounted where the array was mounted:

sudo umount /var/lib/mythtv

Since the data is in an LVM volume on top of the array, I deactivated that as well (the volume is named raid1 in this instance):

sudo lvchange -a n raid1

The array is now ready to be stopped:

sudo mdadm --stop /dev/md0

Now it can be re-created, specifying the metadata (superblock) version, the RAID level, and the number and names of the drives used:

sudo mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

The array will now start resynchronising. This took a number of hours for me, as there were around 770GB of recordings there. The RAID wiki link included --assume-clean in the above command, which would have skipped the resync. I elected to leave it out, for safety's sake.


Progress can be monitored with:

cat /proc/mdstat


The lvm volume can be restarted:

sudo lvchange -a y raid1

and the unmounted volumes can be re-mounted:

sudo mount -a

Check if they are all there with the

mount

command. The mythtv service can also be restarted:

sudo service mythtv-backend start

When the array is recreated, the UUID value of the array will be different. You can get the new value with:

sudo mdadm --detail /dev/md0

Edit the /etc/mdadm/mdadm.conf file, and change the UUID value in it to the new value. This will enable the array to be found on next boot.
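One way to do that - just one approach, you can equally paste the line in with a text editor - is to append the freshly scanned ARRAY line and then delete the old one:

sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'

Then open /etc/mdadm/mdadm.conf in an editor and remove the old ARRAY line, leaving just the new one.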

Another thing to do before rebooting is to run

sudo update-initramfs -u

I didn't do this at first, and after rebooting, the array showed up named /dev/md127 rather than /dev/md0. Running the above command and rebooting again fixed it for me.

Expanding the array to fill the space available - the lvm part

Quite a long-winded process, isn't it? Using the lvm command to show the lvm physical volumes:

sudo pvdisplay

showed the array was still 1.82TiB (2TB). It needed to be extended. The following command will fill the volume to the available space:

sudo pvresize -v /dev/md0

To check the results, again run:

sudo pvdisplay

Now, running:

sudo vgdisplay

gave the following results for me:


--- Volume group ---
  VG Name               raid1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.73 TiB
  PE Size               4.00 MiB
  Total PE              715397
  Alloc PE / Size       466125 / 1.78 TiB
  Free  PE / Size       249272 / 973.72 GiB
  VG UUID               gvfheX-ifvl-yW9h-v4L2-eyzs-95fe-sng2oN

Running:

sudo lvdisplay

gives the following result:

--- Logical volume ---
  LV Name                /dev/raid1/tv
  VG Name                raid1
  LV UUID                Dokbch-ZJkg-QmRW-d9vR-wfM8-BFxb-3Z0krs
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.70 TiB
  Current LE             445645
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

I have a couple of smaller logical volumes also in this volume group, that I have not shown. That's why there's a bit of a difference between the Alloc PE value in the volume group, and the Current LE value in the logical volume. As you can see from the Free PE / Size and Current LE lines above, the volume group raid1 has 249272 physical extents (PE) free, and the logical volume /dev/raid1/tv is currently 445645 extents in size. To use all the space, I made the new size 249272+445645, which is 694917.

The command to resize a logical volume is lvresize. Logical.

sudo lvresize -l 694917 /dev/raid1/tv

If you want to avoid all the maths, an alternative command is

sudo lvresize -l +100%FREE /dev/raid1/tv

That command just tells lvm to use 100% of the free space. I didn't try it myself (I only found it after running the previous command).

Now, after that has been run, to check the results, enter:

sudo lvdisplay

The results:

--- Logical volume ---
  LV Name                /dev/raid1/tv
  VG Name                raid1
  LV UUID                Dokbch-ZJkg-QmRW-d9vR-wfM8-BFxb-3Z0krs
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                2.65 TiB
  Current LE             694917
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

and

sudo vgdisplay

gives:

 --- Volume group ---
  VG Name               raid1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.73 TiB
  PE Size               4.00 MiB
  Total PE              715397
  Alloc PE / Size       715397 / 2.73 TiB
  Free  PE / Size       0 / 0   
  VG UUID               gvfheX-ifvl-yW9h-v4L2-eyzs-95fe-sng2oN

No free space shown; the lvm volume group is using the whole mdadm array, which in turn is using the whole of the two disks.

The final step for me was to grow the file system that is on the logical volume. I had formatted it with XFS, as it is good with large video files. XFS allows growing a file system while it is still mounted, so the command used was:

sudo xfs_growfs -d /var/lib/mythtv

Finally, it is complete!

Tuesday, August 21, 2012

Upgrading MythTV to Mythbuntu 12.04 64-bit

As described in the previous post, the time had come to upgrade my MythTV installation. It had been installed using Mythbuntu 10.10, 32-bit. I wanted to move to the most recent version, currently Mythbuntu 12.04. There were two options available to me: upgrading the current installation, or doing a fresh install of the latest version and copying the previous MythTV database over to it. The first option was not too appealing. Doing a distribution upgrade is a fingers-crossed, hope-this-works process a good part of the time - although I have had pretty good fortune with it on my desktop PC. If I were to go this route, I couldn't just upgrade from 10.10 -> 12.04. It would have to be 10.10 -> 11.04 -> 11.10 -> 12.04. Three complete system upgrades in succession, which would likely leave a lot of cruft behind in the installation. Lots of points of failure. The final nail in the coffin was that I wanted to move to a 64-bit installation, to take advantage of the full 4GB of memory that was installed in the system.

So a clean install it was. To prepare, I copied a number of setup files to a safe place that wouldn't be overwritten during the install. Files like:


  • mdadm.conf, from /etc/mdadm, which had the RAID configuration information. 
  • xorg.conf, from /etc/X11, which had the custom modeline used to output to my (now ancient) CRT TV. 
  • ssmtp.conf, from /etc/ssmtp, for email notification in the case of errors.
  • apcupsd.conf, from /etc/apcupsd, the setup of the APC UPS monitoring software.
  • hardware.conf, from /etc/lirc, used by LIRC to set up the Media Centre remote control.
  • the MythTV database password, from /etc/mythtv/config.xml


Not all the files would end up being needed, but I felt safer having them handy, just in case.

The installation process started from a USB stick loaded with the Mythbuntu 12.04 installer, since I don't have an optical drive in the box - no room, with three drives already there. I also hooked up a monitor instead of the TV, because the TV needs extra setup to output over a component connection.

Since there was some left over free space on the SSD, I installed to that, leaving the 10.10 install intact for now. It went remarkably smoothly. One thing I noted was that at the beginning, there is an option to update the installer. I clicked that, and the system seemed to do nothing for quite a while. Opening up a terminal window and running "top" showed that it was in fact doing something, so I waited until it finished. Once done, the installation process could begin. It would have been nice for it to have some sort of indication that it was doing something, though.

Partway through the process, it prompted for what sort of remote control is being used. I selected "Windows Media Center remote", and the thing was set up automatically. Brilliant! Didn't need that LIRC setup file listed above.

To get the system outputting to my TV properly, I just copied the old xorg.conf file to the /etc/X11 directory (I renamed the existing file, in case I needed it again) and rebooted with a sense of hope and anxiety. A minute or so later, the screen came to life! It worked!

The big one, the one I was dreading, was getting the RAID array to be seen by the new install. I still consider myself quite the mdadm novice, so I was pleasantly surprised to find that the array was picked up quite easily - after installing mdadm and lvm2. This command:

sudo mdadm --assemble --scan

was all that was needed for it to scan for any drives, re-assemble the array, and write a new mdadm.conf file. Easy!

Next was seeing if LVM could find the volumes created. Running

sudo lvm pvscan
sudo lvm vgscan
sudo lvm lvscan

detected all the physical volumes, volume groups, and logical volumes, respectively. Mounting the logical volume that held the recordings was successful - I could see them all. Adding the device entry to /etc/fstab seemed to work, eventually. On the first reboot it came up with an error message, mentioning there were serious problems with the mount. Not sure what it was, but a reboot afterwards seemed to make it happy. I had installed xfsprogs, to manage and defragment XFS filesystems; maybe that helped.
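For what it's worth, the fstab entry for the recordings volume would be something along these lines (not the exact line from my file):

/dev/raid1/tv   /var/lib/mythtv   xfs   defaults   0   0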

Going through some of the MythTV screens brought up some error messages that it couldn't write to the drive - it turned out to be a permissions issue for the files and directories on the array. Entering

sudo chown -R mythtv:mythtv /var/lib/mythtv

changed the ownership back to the mythtv user, and it seemed happy again.

Bringing the previous database over was a painless process as well. I followed the instructions here and here and the new database came over without drama.
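The linked guides cover the details, but the general shape of it - assuming the standard mythconverg database name and a MySQL root login, and with mythtv-backend stopped on both installs - is a dump from the old system and a restore into the new one:

mysqldump -u root -p mythconverg > mythconverg.sql
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS mythconverg;"
mysql -u root -p mythconverg < mythconverg.sql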

So far it all looks good. I may have a tweak of the VDPAU settings to see if playback can be improved any - it looks fine, but the settings have just been carried over from the previous 0.23 version. Things may have changed in the current 0.25 release.

Now that the system is updated to 12.04, which is a Long Term Support (LTS) release of Ubuntu, I won't be caught out with an obsolete version after 18 months like I was with 10.10.

As a side note, one of the main reasons for moving to 64-bit was that the Zotac IONITX-A-U motherboard, with an Atom 330, was only showing 3GB of RAM installed, even though I had two 2GB sticks in there. I thought this was a 32-bit software issue. After installing the 64-bit OS, the free memory remained the same. Even in the BIOS, it only reported 3072MB. Strange. A bit of searching around revealed this thread on Zotac's support forums. As it turns out, if 4GB is installed, 512MB is reserved for system use, and then there is the amount used by the onboard Nvidia ION graphics - which I had set to the maximum allowed 512MB. So only 3GB is visible. It's OK, it still runs fine, but it would have been nice to be able to use a little extra.