OpenSUSE and Partitioning

On and off for a while now, I've been using OpenSUSE.  Initially I started using it on my laptop, starting with 11.2 x86_64.  But, as always, I keep switching distros from time to time for various reasons; the most recent switch was from 64-bit to 32-bit because of problems with Flash and some specialist tools I need for my job, which for the time being just won't work on a 64-bit system.
I'm currently using my home desktop system as a test server, so it has gone from distribution to distribution over the past months, depending on whatever the requirement was.  Right now I needed a Xen server for testing, which doesn't leave me with much choice by default, because a lot of distros are going for KVM instead of Xen.  I just don't like KVM because its performance isn't so brilliant right now, so I prefer Xen.  Just my personal preference: Xen happens to perform so much better for me, particularly when running multiple servers and distributing resources amongst them.
Yesterday, I was trying to get OpenSUSE 11.2 x86_64 installed (I know 11.3 is out, but I didn't want to wait for the download, and needed to test urgently).  Problem was, every time the installer booted, it kept wanting to configure a dmraid, and I didn't want this.  When partitioning manually and deleting it, the installer would complain and crash out later, and I could never do what I wanted to do with it.  Now, I had installed Citrix XenServer prior to this, and wondered whether maybe it had left something behind on the disks which the installer was picking up.  I tried deleting the metadata, and it just wouldn't do it.  Incidentally, my disk controller is an Adaptec Serial ATA II RAID 1430SA.  I didn't think I had it configured as a RAID as such, but as I found out later, I had.
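For anyone hitting the same thing, this is roughly what I was attempting.  A sketch only: the device name is an example, and the dd fallback is destructive, so double-check before running anything like it.

```shell
# List any fakeraid metadata dmraid can see on the attached disks
dmraid -r

# Ask dmraid to erase the metadata it found (prompts per RAID set)
dmraid -r -E

# Fallback: fakeraid metadata usually lives in the last sectors of each
# member disk, so zeroing the tail clears it too.
# WARNING: destructive - /dev/sda here is an example device name.
dd if=/dev/zero of=/dev/sda bs=512 count=1024 \
   seek=$(( $(blockdev --getsz /dev/sda) - 1024 ))
```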
I had configured each disk as a simple RAID.  The installer was picking this up from the controller, and that was causing the dmraid problem.  Obviously, this card is software RAID and not full hardware RAID (fakeraid, if you like).  What I did next was delete the whole configuration, and then thought, I'll configure a RAID 0 across all four 250GB disks.  In the installer I wanted to use this to create the partitions, but it was so damn slow.  I couldn't figure out why; RAID 0 should be fast.  Perhaps something wrong with my controller.  I then figured, well, let's just use it how I wanted, so I completely deleted all the RAID config, but this time didn't create a simple RAID on each of the disks like before.  That meant the controller then saw them as JBOD (Just a Bunch Of Disks).  Previously when I had tried this (I think it was with CentOS), the installer wouldn't see the disks at all, which was why I had used the other method before.  Once I had it as JBOD, the OpenSUSE installer worked as it should, and didn't try to do any dmraid config, which I never wanted in the first place.
Now, I've got it configured with a couple of partitions on /dev/sda for swap and /boot, and the rest of the partitions are added to an LVM of 930GB.  I know, if a disk fails I lose the lot, but as I'm only using this for testing and nothing serious, it doesn't really matter to me.  I could always configure RAID 1+0, but then I lose 50% of the disk space.  If I were doing something serious with this system, that's the method I would choose.  But I think from now on, I'll just use this controller in JBOD mode instead of configuring the array, for two reasons.  First, disk access seemed slow, and second, I hate the automatic detection wanting me to create a dmraid when I don't want one.
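The layout I ended up with looks roughly like this (a sketch with example partition names; I'm assuming swap and /boot take the first two partitions on /dev/sda, with the leftover space on all four disks handed to LVM):

```shell
# Physical volumes on the leftover partitions (names are examples)
pvcreate /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# One volume group spanning all four disks - about 930GB usable here
vgcreate vg0 /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Carve out a root logical volume, leaving free space for test VMs
lvcreate -n root -L 40G vg0
mkfs.ext4 /dev/vg0/root
```

The nice part of doing it this way is that space for Xen guest volumes can be allocated later with further lvcreate calls, without deciding the whole layout up front.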

For a while I was mad at OpenSUSE, thinking it was the problem, but as it turned out, it was my disk configuration :-)