RAID with GPT disk on Proxmox

Proxmox 3.2 RAID-1 Conversion – GPT Mode – With Bonus Features!

The latest Proxmox installs with a GPT disk label, and we don't get much say in the matter.  BUT, in case you have a die-hard need to convert your Proxmox server to RAID-1, this is a cleaned-up set of instructions.  YMMV: make sure you back up your data, check your particular partition layout, and READ CAREFULLY.  Also note that gdisk and sgdisk will be your friends here; fdisk and sfdisk will not.

Basis: http://boffblog.wordpress.com/2013/08/22/how-to-install-proxmox-ve-3-0-on-software-raid/
and http://burning-midnight.blogspot.com.au/2014/05/proxmox-3x-raid-1-transfiguration.html
Notes:
It is necessary to either have a valid subscription or to manually configure apt to use another set of repositories; otherwise, mdadm will not be installable.  Details at:
https://pve.proxmox.com/wiki/Package_repositories
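
A minimal sketch of the no-subscription route, assuming Proxmox VE 3.x on Debian Wheezy (check the wiki page above for the repository line that matches your version):

# disable the enterprise repository (it requires a subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# add the no-subscription repository and install mdadm
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" >> /etc/apt/sources.list
apt-get update
apt-get install mdadm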

Instructions:

  • Duplicate partition info onto new drive:
    • sgdisk -b sda-part.txt /dev/sda
    • sgdisk -l sda-part.txt /dev/sdb
  • Configure partition types as RAID members:
    • sgdisk -t 2:fd00 -t 3:fd00 /dev/sdb
    • NOTE:  partition 1 must be type EF02 to support GRUB.  DO NOT TOUCH THIS PARTITION!
  • Create RAIDs:
    • boot
      • mdadm --create /dev/md0 -l 1 -n 2 missing /dev/sdb2
    • root
      • mdadm --create /dev/md1 -l 1 -n 2 missing /dev/sdb3
  • Copy /boot
    • mkfs.ext3 /dev/md0
    • mount /dev/md0 /mnt
    • rsync -avsHAXS /boot/ /mnt/
    • ## Update fstab with new boot mount: /dev/md0 (don’t use UUID)
    • reboot
  • Configure mdadm.conf
    • mdadm -D --brief /dev/md0 >> /etc/mdadm/mdadm.conf
    • mdadm -D --brief /dev/md1 >> /etc/mdadm/mdadm.conf
  • Update GRUB & initramfs
    • echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
    • echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
    • echo raid1 >> /etc/modules
    • echo raid1 >> /etc/initramfs-tools/modules
    • grub-install /dev/sda
    • grub-install /dev/sdb
    • update-grub
    • update-initramfs -u
  • Convert BOOT partition to RAID-1
    • sgdisk -t 2:fd00 /dev/sda
    • mdadm --add /dev/md0 /dev/sda2
  • Migrate all logical extents to /dev/md1
    • pvcreate /dev/md1
    • vgextend pve /dev/md1
    • pvmove /dev/sda3 /dev/md1
    • …. wait a long time ….
    • vgreduce pve /dev/sda3
    • pvremove /dev/sda3
  • Convert sda3 to RAID
    • sgdisk -t 3:fd00 /dev/sda
    • mdadm --add /dev/md1 /dev/sda3
    • …. wait a long time ….
  • Reboot for final test – everything should work!
  • Configure monitoring, device scans, email notifications, etc (a sketch follows this list)
  • Configure SMART monitoring
  • SUCCESS!!
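
For those last two items, a minimal sketch assuming Debian's stock mdadm and smartmontools packages, with alerts mailed to root (adjust addresses, schedules and device names to your setup):

# /etc/mdadm/mdadm.conf: tell the mdadm monitor daemon where to send alerts
MAILADDR root

# install SMART monitoring
apt-get install smartmontools

# /etc/smartd.conf: watch all disks, run a short self-test daily at 02:00, mail root on trouble
DEVICESCAN -a -m root -s (S/../.././02)

# on Debian you may also need start_smartd=yes in /etc/default/smartmontools
service mdadm restart
service smartmontools restart

# quick sanity checks
cat /proc/mdstat
mdadm --detail /dev/md1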

Add a relay or smart host to Proxmox VE

If you have set up a Proxmox server and are still waiting for your admin emails, you may need to add a relayhost to your Proxmox host.

To do this, do the following:

Log in as root

nano /etc/postfix/main.cf

then add relayhost = mail.bigpond.com (an example; use your provider's mail host)

Ctrl+X to save and exit

then postfix stop

postfix start

and you should be up and running.
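
Pulled together it is a one-line change plus a restart. A minimal sketch follows; the relay hostname is only an example, so substitute your provider's SMTP server:

# /etc/postfix/main.cf: the host below is only an example
relayhost = mail.bigpond.com

# apply the change
postfix stop
postfix start

# watch the mail log to confirm mail goes out via the relay
tail -f /var/log/mail.log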


Network Channel Bonding or teaming

I must say I had some issues trying to set up network interface card (NIC) bonding recently. For those of you who are unfamiliar with the term, it is the process of using more than one network adapter together as one to achieve different outcomes. One example is failover: when one network adapter fails, the other continues as if nothing has happened (depending on how it is set up you may experience lower bandwidth or throughput, or you may not notice anything at all). Bonds can also be set up to increase bandwidth. Your typical gigabit network speed varies with the hardware used but is usually far less than rated; bonding can increase this, in the best case roughly doubling your original speed with two links. I consistently get more than 120 MB/s transfer speed across the network. This helps when you have multiple users on the network, even at home; every little bit helps, and it is a great time saver for backups (much faster than USB 2.0 backup drives). To cut a long story short, my preferred distributions are Red Hat based, such as CentOS, Scientific Linux, Oracle Linux, etc. On occasion I also use FreeNAS for storage as opposed to server use. I scoured the net and came across quite a few instructions for setting up NIC bonding. It is all done by manually editing configuration files, as follows:

This should be a typical setup for two NICs. In my experience, for this to work you must remove NetworkManager if it is installed; a simple yum remove NetworkManager (case sensitive) will do, and once everything is set up it should work. Contrary to some things I have read, it is possible to set this up using DHCP, just not as easily. NetworkManager may only be a graphical tool; if it is not there, don't lose any sleep over it.

/etc/modprobe.d/modprobe.conf

alias bond0 bonding
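
If you want to confirm the bonding driver is available before going any further (purely optional), you can load and list it by hand:

# load the bonding module and check it registered
modprobe bonding
lsmod | grep bonding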

Next we edit the following files in /etc/sysconfig/network-scripts:

ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.10.0
NETMASK=255.255.255.0
IPADDR=192.168.10.100
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

(A description of the bonding modes is at the end of this post. miimon sets how often, in milliseconds, the link is checked to be active; it is required, particularly where link failover is used.)

ifcfg-eth0
#eth0
DEVICE=eth0
MASTER=bond0
SLAVE=yes
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

ifcfg-eth1
#eth1
DEVICE=eth1
MASTER=bond0
SLAVE=yes
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

On completion of this you should type the following in your terminal:

service network restart

And you should have a fully functioning bond.
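
A quick way to confirm the bond came up as expected (assuming the bond0 name used above) is to check the kernel's bonding status file and the interface itself:

# shows the bonding mode, the link status of each slave and the currently active slave
cat /proc/net/bonding/bond0

# confirm bond0 carries the address from ifcfg-bond0
ip addr show bond0
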
Please post if you find any errors or suggestions for improvement.

Replicated from Red Hat; for more info see: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/s2-modules-bonding.html
mode=<value>
…where <value> is one of:
  • balance-rr or 0 — Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.
  • active-backup or 1 — Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
  • balance-xor or 2 — Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request’s MAC address with the MAC address for one of the slave NICs. Once this link is established, transmissions are sent out sequentially beginning with the first available interface.
  • broadcast or 3 — Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.
  • 802.3ad or 4 — Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant.
  • balance-tlb or 5 — Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.
  • balance-alb or 6 — Sets an Active Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPV4 traffic. Receive load balancing is achieved through ARP negotiation.
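
For example, to switch the bond above from active-backup to 802.3ad link aggregation, only the BONDING_OPTS line in ifcfg-bond0 changes (this assumes a switch configured for LACP; lacp_rate=1 just requests fast LACPDU exchange and is optional):

BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"

Restart the network service again afterwards for the new mode to take effect.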