
How to setup software raid for a simple file server on ubuntu


45 Replies

Can't you just use the manual wizard option when choosing disks in the Ubuntu setup?

momurda: I set up those partitions using the manual wizard during the installation.

There is a software RAID option in the manual disk setup wizard, and RAID 10 is one of the options. It then allows you to choose partitions on each disk device to add to the md device.

If you have those 3 partitions (/boot, swap, /) on each disk, make md0 a RAID 10 with mount point /boot as ext4, md1 a RAID 10 used as swap, and md2 a RAID 10 with mount point / as ext4.
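
For reference, roughly the same layout can be built by hand with mdadm after partitioning. This is only a sketch, since the thread is about the installer, and it assumes four disks (sda to sdd) each carrying the same three partitions:

    # assumed layout: partition 1 = /boot, 2 = swap, 3 = / on each of four disks
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
    mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]3
    mkfs.ext4 /dev/md0 && mkswap /dev/md1 && mkfs.ext4 /dev/md2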

I have the SWAP and / configured in their own RAIDs. Anytime I’ve tried to RAID the boot partition, the GRUB install fails; I thought I read somewhere that GRUB didn’t like being installed on a RAID 10.

Just for the sake of trying, I redid my partitions the way you suggested, and GRUB won't install.

Which ISO/installer are you using?

The standard download at ubuntu.com will offer you ubuntu-18.04.4-live-server-amd64.iso

This includes a new-ish installer called Subiquity, and I've had a lot of issues getting it to play nice when it comes to more advanced disk setups using LVM, ZFS and whatnot. It's possible you're having issues getting these software RAID levels to work, too.

In general, unless you are doing a straightforward, simple installation, I find it best to use the alternate installer instead. Download it from http://cdimage.ubuntu.com/releases/18.04/release/ ; the ISO will be called ubuntu-18.04.3-server-amd64.iso.

This is the previous installer before Subiquity entered the scene. Its advanced partitioning and disk setup is, well, more advanced.

Having said all of this, I am not 100% sure if RAID10 works with Grub2 so this alternate installer may still not do the job for you.

If Grub2 will work with RAID10 and the wizard/installers do not, then you may just need to perform a full manual installation. It involves using the desktop ISO which includes a Live option. Once in the Live system, drop to shell, partition, mount and install manually.

Use this guide as a starting point. It is ZFS based, but you can adjust for your needs. It gives you at least a basic idea of what you are looking at having to do. https://github.com/openzfs/zfs/wiki/Ubuntu-18.04-Root-on-ZFS

I am curious to find out if the alternate installer will do the trick to avoid this complicated way of doing it. If you do want to give it a try, I’d be happy to try it along with you for the academic value. I use a modified version of the methods in the guide above to get my Linux servers running with ZFS on /boot and root. These methods do assume EFI, however. That may become a sticking point but I’m sure it can be worked around.

Part of the benefit of the alternate installer is that it allows us to install Ubuntu in ways that aren’t normally available with the live installer. One of the additional features is to be able to install Ubuntu on software RAID, which is what we will do in this section. Here, the steps that we’ll go through actually continue on from step 14 in the previous section. Follow these steps to set up an installation with software RAID, specifically with RAID1 between two disks in this case.

  1. In the previous section, we chose Guided – use entire disk at the screen shown in the following screenshot. To set up RAID, we’ll select Manual on this screen instead:
  2. Next, we’ll select the first disk and press Enter:
  3. Next, you’ll be asked whether you will want to create a new partition table on this disk, which will wipe all data on it. Choose Yes and press Enter:
  4. On the next screen, select the second disk (the one we haven’t initialized yet) and press Enter:
  5. You’ll again be asked whether you wish to create a new partition table. Select Yes and press Enter:
  6. On the next screen, we’ll choose the Configure software RAID option:
  7. Before we can continue, the installer must finalize the changes we’ve made so far (which at this point has only been to initialize the disks). Select Yes and press Enter:
  8. Next, choose Create MD device and press Enter:
  9. Next, we’ll select the type of RAID we wish to work with. Select RAID1 and press Enter to continue:
  10. Next, choose the number of disks we will add to RAID1. RAID1 requires exactly two disks. Enter 2 here and press Enter:
  11. If we have additional drives beyond two, we can add one or more hot spares here. These will be used in the event of a failure with one of the RAID disks. If you have an extra disk, feel free to use it and enter the appropriate number here. Either way, select Continue and press Enter:
  12. Next, we will choose which disks to include with our RAID1 configuration. We should have exactly two listed. Use the arrow keys to move between them, and press Space to select a disk. We need to select both disks, which means we will be marking both with an asterisk (*). Once you’ve selected both disks, select Continue and press Enter:
  13. Next, we’ll finalize our selections so far. Select Yes and press Enter:
  14. Select Finish to continue on:
  15. Now, we have successfully set up RAID. However, that alone isn’t enough; we need to also format it and give it a mount point. Use the arrow keys to select the RAID device we just set up and press Enter:
  16. Using the following screenshot as a guide, make sure you set Use as to EXT4 journaling file system (you can choose another filesystem type if you wish to experiment). In addition, set the Mount point to / and then select Done setting up the partition:
  17. To finalize our selection, choose Finish partitioning and write changes to disk:
  18. Next, we’ll see one last confirmation before our new partition will be created. Select Yes and press Enter:

That pretty much covers the process of setting up RAID. From here, you can return to the previous section and continue on with the installation process with step 16. Once you finish the entire installation process, it will be useful to know how to check the health of the RAID. Once you log in to your new installation, execute the following command:
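
That command is the standard md status check (the output discussed below comes from it):

    cat /proc/mdstat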

In the output, you can see that we're using RAID1 (look for the section that reads active raid1). We can also see whether each disk in the RAID array is online by looking for the appropriate number of Us on this line: 20953088 blocks super 1.2 [2/2] [UU]

If the output included U_ or _U instead of UU , we would have cause for alarm, because that would mean one of the disks encountered an issue or is offline. Basically, each U represents a disk, and each underscore represents where a disk should be. If one is missing and changed to an underscore, we have a problem with our RAID array.

Originally asked on Server Fault here, but I was told to move it here.

I have an old PC which I want to use as a file server in my office. I connected two 500GB HDDs and one 300GB HDD to it. Now I want to install an operating system on it and configure RAID so I have fault tolerance.

One thing to note is that the PC is pretty old – it only has 2GB of DDR2 RAM (not sure about the CPU model).

I thought of installing CentOS 7 because of the low system requirements, and using RAID-5. Are those good choices for my setup? And how can I configure RAID-5 on the server?

Edit: Let’s assume I get a 3rd 500GB HDD (OS goes on the 300GB drive). How can I set up RAID-5?

1 Answer

Based on your question, I am assuming you do not have a lot of IT experience. Forgive me if I am wrong. I will answer this as simply as I can.

Without going into complicated technical details as to why, you cannot do RAID-5 with those disks, as you need three 500 GB drives for RAID-5. With the two 500 GB disks, you can do RAID-1, which is mirroring. You need to make a choice: invest a small amount of money in a third 500 GB disk, or lose half your space to mirroring. Google RAID levels to learn more. Personally, I would install the OS on the 300 GB drive and invest in another drive for RAID-5, as you will keep redundancy and lose less space to the redundancy.
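
If you do pick up a third 500 GB disk, the RAID-5 itself is only a few commands with mdadm. A rough sketch, assuming the three data disks appear as /dev/sdb, /dev/sdc and /dev/sdd and the OS lives on the 300 GB drive:

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]
    mkfs.ext4 /dev/md0
    mkdir /srv/data && mount /dev/md0 /srv/data      # the mount point is just an example
    mdadm --detail --scan >> /etc/mdadm.conf         # so the array is assembled at boot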

Since this machine is "old" and you appear to lack the knowledge on how to set up RAID, I would stay away from a server OS like CentOS. Instead, I would go with a more dedicated OS like OpenMediaVault. OMV is a Linux-based OS, but it is designed to be an easy-to-use dedicated file server. It has much of the Linux distro "bloat" removed, so it is lightweight and requires few resources. I have used OMV and can say it is excellent, stable, and easy to use. OMV is extremely friendly to new users, with a simple web interface. In fact, once you get OMV installed, it is controlled entirely through the web interface. OMV will walk you through the setup, as well as creating whatever RAID you decide on.

Good day to all. I encountered a problem while installing Ubuntu on a SuperServer 1028R-WTRT. In principle I have already found two working solutions, but I want to ask the community what the correct and optimal approach is. The platform is a SuperServer 1028R-WTRT with an X10DRW-iT motherboard. The task is to configure a RAID1 (mirror), either using the built-in C610/X99 series chipset sSATA controller in RAID mode, or as software RAID inside Ubuntu (mdadm). The problem is as follows: if the disk is auto-partitioned, then in both cases we get either a black screen with a cursor after installing, or we drop into grub / grub rescue, depending on the installation option. Something similar was already described here – How to run Ubuntu Server Supermicro Intel Raid 10? .

Through various experiments, I found two working solutions:

1. Create a RAID1 array from the controller's devices.

2. Boot in UEFI mode. When you install Ubuntu, the RAID is then seen as a separate device.

3. Partition the device manually, creating one ext4 partition with mount point /, i.e. without ESP and swap partitions.

After that, everything booted successfully. If you let the installer partition the device properly, i.e. with an ESP, ext4 (/) and swap, then after reboot you get a blinking cursor, as described here. The idea of not placing swap on the RAID was picked up from there. The presented variant works, but I want to hear the community's opinion: is such a configuration without swap viable for a production server? Or is it worth adding swap later as a swap file in the already running system? The server has 96 GB of RAM.
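
If you later decide you do want swap on the running system, a swap file is a few commands; the size and path here are just examples:

    fallocate -l 8G /swapfile     # on some filesystems use dd if=/dev/zero of=/swapfile instead
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap sw 0 0' >> /etc/fstab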

With software RAID in Ubuntu I had to dig deeper. I tried different boot options in UEFI and Legacy BIOS. Basically the problem boils down either to the inability to install GRUB during the installation phase, or again to the inability to boot from the RAID:


By hook or by crook, the following layout was found to work with software RAID (tested on UEFI):


That is, first partition both disks as usual, creating an ESP, ext4 (/) and swap. Then, after a few reboots, change the partition type of the ext4 (/) partition to linux-raid on both drives. Next, create a software RAID from the two linux-raid partitions, and that md device becomes the ext4 (/) partition. After that GRUB installs normally and the system boots normally from one disk (to boot from the second, you obviously have to copy the ESP to the other disk as well). This option works, but again, it doesn't seem particularly correct.
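
From a shell, that workaround looks roughly like this; it assumes the root partitions are partition 2 on each disk (adjust numbers to your layout):

    sgdisk -t 2:fd00 /dev/sda     # change the type of the old ext4 (/) partition to Linux RAID
    sgdisk -t 2:fd00 /dev/sdb
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0            # this md device then becomes the ext4 (/) partition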

Hence the question: maybe someone operates a similar platform. Which of the two solutions described above would you use, #1 or #2? Maybe you have a working configuration that would be more "correct" and more fault-tolerant. I will appreciate any tips and suggestions.


We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them.

I’ve got a box set up with Ubuntu Server and 4 x 500 GB drives. They’re currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this:

  1. 500 GB is not really big enough (some projects are larger)
  2. It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers. ("the project is on server2 in share4" etc.)

So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I’ve done linux software RAID5 and had a bad experience with it, but would try it again. LVM looks ok but it seems like no one uses it. ZFS seems interesting but it is relatively “new”.

What is the most efficient and least risky way to combine the HDDs that is convenient for my users?

Edit: The Goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective. (i.e. they see one “folder” per server) Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together.

My prior experience with RAID5 was also on an Ubuntu Server box and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system.

I haven’t used RAID10 but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We’ve got a lot of 500 GB drives and 1.5 TB is pretty small. (Still an option for at least one server, however)

I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (while storing most files on a single drive only), we could even live with that.

But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?) so that increases the distance between where I am and where I want to be. I used a version of solaris at uni and wouldn’t mind it terribly, though.

On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.

I plan on setting up a simple file server for my home network, and I chose to use Ubuntu Server because of past experience that I've had with Ubuntu, plus all of the great help that I've received from so many other users in these forums.

My main purpose is to have one place to store all of my music, software, videos, and other misc. data files. I will mainly be accessing these files from my Windows computers, and will most likely be using PuTTY to work with the Linux system through remote access, so I plan on installing the Samba file server and the OpenSSH server.

My server computer has:
– (1) 10 GB HD for the OS, and
– (2) 750 GB HDs connected to a RAID controller that I'm wanting to set up in a RAID1 array for storage.

My question is: when I'm partitioning my HDs during the Linux installation using the "Guided – use entire disk" option, I view the layout of the partitioned drives and it shows all three drives listed. Should it not just show two drives (the 10 GB HD and the 750 GB RAID array)? If it is supposed to show all three, how do I set it up so that the two drives in the RAID array will mirror each other?

Any help would be appreciated.

There are many ways to go, but if you are thinking of using the BIOS RAID on your motherboard, that's another matter. Here are some links that will get you going:

NOTE: You must use Ubuntu Server 18.04 LTS, not Ubuntu 18.04-live-server. The live server ISO does not provide all of the utilities for installing RAID and LVM. (http://cdimage.ubuntu.com/releases/18.04/release/)

These steps also work for Ubuntu Server 16.04 LTS.

These steps describe how to set up a software RAID 5 at installation time using the Ubuntu Server ncurses installer. The drive sizes here reflect my testing efforts on a VM, but I have implemented this on hardware where the "data" partition is 45T in size. I have tested the ability to remove a drive from the array and reboot the system, which survived the reboot. However, the UEFI boot menu was modified by the system; I'm not sure why. I need to read more on the various things taking place in this configuration, but it's working and appears to be stable thus far. I'm sure a few "improvements" could be made. All in time.

** Being a newb with WP, I may have screwed something up trying to format the page. Most of these notes should be used as reference, not as exact instructions anyway. **
Step 1
In Virtualbox:

Step 2
Create the physical disk partitions

At “Partition disks” screen:

At this point, we will configure the first disk (sda):

The first partition will be the GPT boot partition:

The second partition will be the “/” partition:

The third partition will be the “/data” partition:

The swap partition:

Now repeat all of these setup items again for each remaining disk:
sdb, sdc, sdd, sde

Step 3
Configure the software RAID 5

**Important** When creating the RAID devices, DO NOT create a RAID device for the GPT boot partitions!

I am setting raid up for the following:
“/”, “/data”, and “swap” – so 3 independent raid arrays.

Now repeat the steps again for the remaining raid arrays (i.e. “/data”, and “swap”)

Step 4
Enable the RAID 5 arrays

Do these steps for the remaining two raid devices (making sure you choose the right mount points/file systems).

Step 5
Continue with the OS install.

Step 6
Let's look at what we have so far, making sure the arrays are healthy and done initializing.

Step 7
What partition did we boot from?

Match the UUID from fstab to a UUID in the blkid output. Then match the PARTUUID to the output from efibootmgr -v.
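
For example, the three lookups described above are simply:

    grep -v '^#' /etc/fstab    # note the UUID= entries
    blkid                      # map each UUID and PARTUUID to a device
    efibootmgr -v              # compare against the PARTUUIDs in the boot entries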

Step 8
The boot information is currently only on one disk (see Step 7). We need to copy this information to all disks so that we can survive a single drive failure. The following commands can destroy everything done thus far, so make sure you get it right. Snapshotting, if running a VM, is a good idea.

Step 9
Now we add all of the boot partitions to the efibootmgr.
This command will show you what you currently have in the setup.

We want to set up entries like "Boot0007* ubuntu".
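
To list what is there and then add an entry per additional ESP, something along these lines works (the disk, partition number and labels are placeholders; the loader path matches a stock Ubuntu EFI install):

    efibootmgr -v
    efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu (sdb)" -l '\EFI\ubuntu\shimx64.efi'
    efibootmgr -c -d /dev/sdc -p 1 -L "ubuntu (sdc)" -l '\EFI\ubuntu\shimx64.efi'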

Step 10
If you want to test the boot menu setup, do the following for each drive:

After reboot, run efibootmgr to verify (it booted so you must be ok).

The installimage script provided by Hetzner is an easy and fast method of installing various Linux distributions.

You can run installimage directly from the Rescue System on your server. Its menu interface makes it easy to select the Linux distribution you want. You have full control over how to partition your drive(s), and you can use a simple editor to define how you want to use software RAID and LVM.

To use installimage, you first need to activate the Rescue System and then boot into it.

Use the password displayed on Robot to log into the Rescue System as “root”. Then type installimage to start the installimage script:

In the following menu, you should see:


After choosing an image, you will receive a note that the editor will be started, and this will open the configuration file.

We offer a number of standard images that you can use. These are typically the latest version of the particular distribution.

Advanced users can also install older versions of these distributions by going to the old_images folder. Important note: We don’t offer any support for these older images.

In addition, advanced users can also create their own OS images and install them. Please check the guide on how to install your own OS images for information on how this is possible and for a list of the requirements.

If installimage finds an /autosetup file in the Rescue System, it will automatically use this as the configuration file. Unless there are errors in the file, you will not see a menu or editor.

You can adjust the following variables to customize the installation.

The drives that are present in the server are identified in the first row with the variable DRIVE. Above each line, you can see the type of drive.

Here, you can select which drives you want the OS to be installed on. The drives will be completely wiped, and all data currently on them will be lost.

If you want to leave a drive in its current state and not make any changes to it, you can leave it out (remove it) by placing a # before it. Important note: Doing this means that you need to properly adjust the number after the next DRIVE variable.

If the server has multiple drives, you can use the variables SWRAID and SWRAIDLEVEL to create different software RAID levels. The software RAID level is always applied to all drives (that is, the drives marked with DRIVE, as discussed above). If you don’t want software RAID on a particular drive, you’ll need to remove it accordingly.

The script can create software RAID with levels 0, 1, 5, 6 or 10.

The bootloader Grub is pre-configured. (In the past we also offered Lilo). Depending on the operating system, GRUB2 or GRUB1 (legacy Grub) is installed.

The variable HOSTNAME sets the corresponding host name in the system.

Partitions / file systems

The installimage also supports adjustments to the partitioning scheme (including the use of LVM). You can find the designated syntax in the examples in the editor.

Operating system image

This is the full path to the operating system image; you only need to specify it if you are installing a custom image.
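
For orientation, a config for two drives in software RAID 1 looks roughly like the following; the drive names, sizes, hostname and image path are placeholders, so always start from the examples shown in the editor:

    DRIVE1 /dev/sda
    DRIVE2 /dev/sdb
    SWRAID 1
    SWRAIDLEVEL 1
    BOOTLOADER grub
    HOSTNAME fileserver
    PART /boot ext3 512M
    PART swap  swap 4G
    PART /     ext4 all
    IMAGE /root/images/Ubuntu-1804-bionic-64-minimal.tar.gz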

After leaving the editor with F10 (save and quit), the syntax of the config file is checked. Should it contain errors, you will be returned to the editor.

How to setup software raid for a simple file server on ubuntu

If you see this output after 1-5 minutes (depending on the image and partitioning you’re using), the system is ready and bootable.

The root password is set to the current password of the Rescue System.

After a reboot in the Rescue System, the newly installed system is booted and you can log in with the previous Rescue System password.

When installing Debian or Ubuntu using the installimage script, the times for the cronjob in /etc/cron.d/mdadm are set randomly.

Frequently Asked Questions

Why can’t I create partitions larger than 2 TiB?

You can create partitions larger than 2TiB only with a GUID partition table (GPT). Thus, you can only install operating systems which include GRUB2; it supports booting from GPT drives.

The installation script shows one or more errors. What should I do?

Re-run the installation. If you get the same error again, please send the complete screen output and the contents of the file /root/debug.txt to [email protected]

Do I have to put “all” at the end of the partition table or can I put this line further at the top?

The size all in the config file means use the rest of the available space on the drive. Since partitions are created one after another, the partition table will end after using all because there will be no space available afterwards. Of course, it is also possible not to use “all” at all.

Pressing F10 does not work. Instead, 21 (or something similar) is displayed.

Press ‘Escape’ and then 0 . In most cases, this has the same effect as F10.

Who is the author of the script? Can I use it freely?

The scripts were written by developers of Hetzner Online GmbH, who maintain and extend them. The scripts are written in bash and are available in the rescue system. You can modify and use them freely. Hetzner Online GmbH assumes no liability for any damage caused by changing the scripts and excludes any support for guides that include changes to the script.

What is the MySQL Root password when LAMP has been installed?


Jul 25, 2019 · 7 min read

Installing a mirrored Ubuntu 18.04 Linux on an AC922 is dead simple. Here is how.

The POWER processor is a different architecture than x86, namely ppc64el. This is why it rocks. Therefore, you need to get an Ubuntu version compiled for this architecture, just like you would if you were installing on a Raspberry Pi, which runs the ARM architecture arm64. Note that the parallel between IBM Power and ARM ends here!

Use your favorite search engine with keywords “ ubuntu 18.04.2 ppc64el” and you will get the proper link : PowerPC64 Little-Endian server install image. Download it and burn it to USB key. Running my desktop under linux, I use gnome-disks for that purpose.

Network install is of course supported, but setting up an HTTP server for that purpose is not in the scope of this article.

Now you need to get access to the server to plug the USB key into the front or rear USB port, behind the front bezel clipped to the facade.

While you are there, and if you enjoy the regulated temperature of the server room during hot summer days, you can plug a VGA display and USB keyboard on the rear ports of the server.

If your badge doesn't open the door of the datacenter, use the IPMI protocol to get a Serial Over LAN connection to the server's virtual terminal:
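
With ipmitool that is, for example (the BMC address and credentials are placeholders):

    ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol activate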

If the server is not powered up, you will see nothing. Power it up! Either press the top-right blinking button on the front of the chassis, or again use IPMI:
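
Again with ipmitool and the same placeholder credentials:

    ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> chassis power on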

Petitboot is the bootloader that comes with the OpenBMC firmware. It boots a small Linux environment with an ASCII interface that helps you manage the server before you boot your own Linux OS. By scanning the devices, petitboot will discover the bootable USB key:

Move the * to Install Ubuntu Server and hit Enter; the installation menu starts. Set your language, locale, keyboard and IP address as usual for Ubuntu.

Disks must be manually partitioned to create the layout we need. Choose Manual in the main menu:


If your disks are not clean from a previous partitioning, it is time to remove any partitions. The installer is not so good at that task, especially if you inherit disks with LVM. The easiest way I found to do it is to start an installation on the entire disk (Guided — use entire disk) and interrupt it after it has completed the disk partitioning stage. This process removes the LVM. Then, after rebooting, you can delete the software RAID md device with the Ubuntu installer, and then delete the existing partitions. Anyhow… let's say your disks are clean:


Historically, Power systems running Linux need a small 8 MB partition with a specific format. It is called the PReP boot partition, and it hosts the stage1 binary of the Linux boot process. This page will give you more details on it. The good news is that petitboot fulfills that role, so the PReP boot partition is not needed on OpenPOWER / OpenBMC based servers. On servers not using petitboot (PowerVM-based servers), you may still encounter this partition.

I'll make it simple: this layout is sufficient for my purpose, which is to survive a disk crash. I'll create a software RAID1 with LVM on top of it, holding /boot and / logical volumes. No swap space, because if I run out of memory with the 512 GB of RAM I have on a non-virtualized server, the cause will be such that a swap file won't help.
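
Done from a shell instead of through the installer, that layout would look roughly like this, assuming the two disks are /dev/sda and /dev/sdb with one RAID partition each:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 1G -n boot vg0 && mkfs.ext4 /dev/vg0/boot
    lvcreate -l 100%FREE -n root vg0 && mkfs.ext4 /dev/vg0/root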

This chapter describes RAID features, with special focus on the use of software RAID for storage redundancy.

3.1 About Software RAID

The Redundant Array of Independent Disks (RAID) feature provides the capability to spread data across multiple drives to increase capacity, implement data redundancy, and increase performance. RAID is implemented either in hardware through intelligent disk storage that exports the RAID volumes as LUNs, or in software by the operating system. The Oracle Linux kernel uses the multidisk (MD) driver to support software RAID to create virtual devices from two or more physical storage devices. MD enables you to organize disk drives into RAID devices and implement different RAID levels.

The following software RAID levels are commonly implemented with Oracle Linux:

Linear RAID (spanning)

Combines drives as a larger virtual drive. This level provides neither data redundancy nor a performance benefit. Resilience decreases because the failure of a single drive renders the array unusable.

RAID-0 (striping)

Increases performance but does not provide data redundancy. Data is broken down into units (stripes) and written to all the drives in the array. Resilience decreases because the failure of a single drive renders the array unusable.

RAID-5 (striping with distributed parity)

Increases read performance by using striping and provides data redundancy. The parity is distributed across all the drives in an array, but it does not take up as much space as a complete mirror. Write performance is reduced to some extent as a consequence of the need to calculate parity information and to write the information in addition to the data. If one disk in the array fails, the parity information is used to reconstruct data to satisfy I/O requests. In this mode, read performance and resilience are degraded until you replace the failed drive and repopulate the new drive with data and parity information. RAID-5 is intermediate in expense between RAID-0 and RAID-1.

RAID-6 (Striping with double distributed parity)

A more resilient variant of RAID-5 that can recover from the loss of two drives in an array. RAID-6 is used when data redundancy and resilience are important, but performance is not. RAID-6 is intermediate in expense between RAID-5 and RAID-1.

RAID-1 (mirroring)

Provides data redundancy and resilience by writing identical data to each drive in the array. If one drive fails, a mirror can satisfy I/O requests. Mirroring is an expensive solution because the same information is written to all of the disks in the array.

RAID 0+1 (mirroring of striped disks)

Combines RAID-0 and RAID-1 by mirroring a striped array to provide both increased performance and data redundancy. Failure of a single disk causes one of the mirrors to be unusable until you replace the disk and repopulate it with data. Resilience is degraded while only a single mirror remains available. RAID 0+1 is usually as expensive as or slightly more expensive than RAID-1.

RAID 1+0 (striping of mirrored disks or RAID-10)

Combines RAID-0 and RAID-1 by striping a mirrored array to provide both increased performance and data redundancy. Failure of a single disk causes part of one mirror to be unusable until you replace the disk and repopulate it with data. Resilience is degraded while only a single mirror retains a complete copy of the data. RAID 1+0 is usually as expensive as or slightly more expensive than RAID-1.

3.2 Creating Software RAID Devices

Run the mdadm command to create the MD RAID device, specifying the following:

md_device: the name of the RAID device, for example, /dev/md0.

--level: the level number of the RAID to create, for example, 5 for a RAID-5 configuration.

--raid-devices: the number of devices to become part of the RAID configuration.

devices: the devices to be configured as RAID, for example, /dev/sd[bcd] for 3 devices in the RAID configuration.

The devices you list must total the number you specified for --raid-devices.

This example creates a RAID-5 device /dev/md1 from /dev/sdb, /dev/sdc, and /dev/sdd:
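
In command form, that example is roughly:

    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[bcd]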

The next example creates a RAID-5 device /dev/md1 out of 4 devices, with one device configured as a spare for expansion, reconfiguration, or replacement of failed drives:
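
A hedged equivalent, with the fourth device acting as the spare:

    mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[bcde]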

(Optional) Add the RAID configuration to /etc/mdadm.conf :

Based on the configuration file, mdadm assembles the arrays at boot time.

For example, the following entries define the devices and arrays that correspond to /dev/md0 and /dev/md1 :
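
One common way to generate those entries, together with a rough idea of what they look like (the UUIDs below are placeholders):

    mdadm --examine --scan >> /etc/mdadm.conf
    # appended lines look similar to:
    # ARRAY /dev/md0 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
    # ARRAY /dev/md1 metadata=1.2 UUID=eeeeeeee:ffffffff:11111111:22222222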

For more examples, see the sample configuration file /usr/share/doc/mdadm-3.2.1/mdadm.conf-example .

An MD RAID device is used in the same way as any physical storage device. For example, the RAID device can be configured as an LVM physical volume, a file system, a swap partition, an Automatic Storage Management (ASM) disk, or a raw device.

To check the status of the MD RAID devices, view /proc/mdstat :

To display a summary or detailed information about MD RAID devices, use the –query or –detail option, respectively, with mdadm .

For more information, see the md(4) , mdadm(8) , and mdadm.conf(5) manual pages.

Here is an example of migrating a running Ubuntu system to a software RAID1.
In the process, you will need to perform two reboots.

The first step is to switch to the root user if not yet:

Let’s see a list of disks and partitions:

Suppose that the system uses one disk, for example /dev/sda, and has one main partition, /dev/sda1.
For the test, I installed a clean Ubuntu Server 18.04; the disk was partitioned by default, and swap was a file on the same partition.

To create the RAID, we connect another disk of the same size; it will be called /dev/sdb.

Install mdadm and necessary utilities (they are usually installed by default):

In order to make sure that all necessary modules and components are installed, execute the following command:

If the necessary modules are not loaded, then load them:

Let's partition the new disk /dev/sdb in the same way as /dev/sda:
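
One way to copy the partition table across, for an MBR-partitioned disk:

    sfdisk -d /dev/sda | sfdisk /dev/sdb
    # for GPT disks, sgdisk -R /dev/sdb /dev/sda followed by sgdisk -G /dev/sdb does the same job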

In the next step, change the partition type of the new hard disk /dev/sdb to "Linux raid autodetect" (since there is only partition 1, fdisk will not ask for the partition number after "t"):

Make sure that the partition type /dev/sdb is Linux raid autodetect:

Create the array md0 using the keyword missing in place of the first member:
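
Roughly, with the first slot left as missing so the currently running disk can be added later:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1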

If something does not work, then you can remove the raid and try again:

Let’s specify the file system of the array:

Let's make a backup copy of the mdadm configuration file and add information about the new array:
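
For example:

    cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf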

Mount /dev/md0 into the system:

In my case it was displayed at the bottom of the list:

In the /etc/fstab file, comment out the lines referring to /dev/sda and add an entry for the array:

Let's check the /etc/mtab file to see whether there is a record for the RAID:

Let’s look at the exact names of the files /vmlinuz, /initrd.img:

Create a new GRUB2 boot menu file and open it in the editor:

Add the contents (instead of /vmlinuz and /initrd.img, we’ll specify the correct names if they are different):

Open the file /etc/default/grub in the text editor:

Uncomment a couple of lines:

Update the loader:

Install the bootloader on both disks:
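
Assuming BIOS/MBR boot, that is simply:

    grub-install /dev/sda
    grub-install /dev/sdb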

Copy all the data to the previously mounted md0 array:
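
One careful way to do the copy, assuming the array is mounted at /mnt/md0:

    rsync -aAXHv --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*","/media/*"} / /mnt/md0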

Restart the system:

When the system starts, the entry from /etc/grub.d/09_raid1_test will be the first item in the boot menu; if there are problems booting, you can choose to boot from /dev/sda.

Make sure that the system is started with /dev/md0:

Again, switch to the root user if not under it:

Change the partition type of the old hard disk:

Add the old disk to the array:
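
Assuming the old root partition is /dev/sda1:

    mdadm --add /dev/md0 /dev/sda1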

Wait until the synchronization is complete and make sure that the RAID is in order (UU):

Update the array information in the mdadm configuration file:

Remove our temporary GRUB menu, it’s no longer necessary:

Update and install GRUB again:

Restart the system to make sure it runs successfully:

At this, the migration of the running Ubuntu system to the software RAID1 is complete.
If one of the disks, /dev/sda or /dev/sdb stops working, the system will run and boot.
For stability, you can add more disks of the same size to the array.

Summary: Our charter is to deliver solutions that simplify IT by providing database solutions, custom development, dynamic datacenters, and flexible computing.


Symptoms

Applies to:
Operating System(s) – Oracle Linux 6.x, RHEL 6.x

Server Platform(s) – PowerEdge R720, R820

Author: Naveen Iyengar

Problem: How to configure software RAID on Dell Express Flash PCIe SSDs

Solution:
1. Identify the Express Flash block devices – Dell’s Micron Express Flash drives show up as the following block devices in EL6.x OS

Major minor #blocks name
251 256 341873784 rssda
251 512 341873784 rssdb

2. Create a Partition – Use the fdisk linux utility as follows to create an ‘fd’ type partition on the Flash drives

$> fdisk -u /dev/rssda

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p

Partition number (1-4): 1
First sector (56-683747567, default 56): 128
Last sector, +sectors or +size…, default 683747567:

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): wq
The partition table has been altered!
$>

3. Repeat step 2 for all the other PCIe SSD block devices to be included in the software RAID

4. Create software RAID – Use the Linux utility tool called mdadm as follows to create the software RAID array on the Express Flash drives. The following example creates a RAID1 using two Flash drives, /dev/rssda1 and /dev/rssdb1.

$> mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/rssd[ab]1

5. Viewing the details of the array: View the status of the multi disk array md0.

$> mdadm --detail /dev/md0

6. Make the array persistent across reboots: To add md0 to the configuration file so that it is recognized next time on boot, do the following:

$> mdadm -Es | grep md0
Check if the above command displays the details of the md0 array created. If not, try
$> mdadm -Es | grep “md/0”

Depending on which of the above two command works, run the appropriate command below to add info to the mdadm.conf file
$> mdadm -Es | grep md0 >> /etc/mdadm.conf
Or
$> mdadm -Es | grep “md/0” >> /etc/mdadm.conf

7. Check for resync complete: Run the --detail option to make sure that the two SSDs in the array are not in the Resync Status, or wait until it finishes resyncing before you run a test against them.

$> mdadm --detail /dev/md0

A. Deleting the array:

  • To halt the array:
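    The usual commands for that, using the device names from above:

        umount /dev/md0                                   # if it is mounted
        mdadm --stop /dev/md0
        mdadm --zero-superblock /dev/rssda1 /dev/rssdb1   # wipe RAID metadata so the members can be reused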
  • Jun 16, 2017 · #1 · dgingeri

    I have a server at work that is acting up. The raid controller, an LSI 9261-8i, is frequently dropping all the logical drives offline. (Actually, this has been happening ever since it was put in place, and we'd just reboot the thing, but only recently my boss wants to resolve it.) Looking up info, this is a frequent problem with the LSI driver, and some raid controller and motherboard combinations cause this issue, and Avago/Broadcom isn't going to work on a driver-level fix for this since it is an older controller. So, my boss wants to change out the raid controller for a different one. We're buying a Broadcom 9361-8i controller to replace it, along with BBU and cables.

    The problem is that the new controller uses different drivers, and I have no idea how to change out the boot storage drivers in Linux. My boss knows, but is unwilling to tell me, saying I need to look it up to learn it. (He does this a LOT.) I can’t find a thing on this through multiple Google searches.

    How would I go about doing this? Does anyone here know?

  • Jun 16, 2017 · #2 · BulletDust (Supreme [H]ardness)

    You should probably move this post to the Linux subforum.

    It’s one thing to learn and another to teach, but forcing someone to learn where precious data is involved is bound to end in tears.

    I hate pseudo hardware raid cards under Linux for this reason, preferring to use mdadm in the case of Ubuntu (Debian) derivatives for the simple fact that software raid is so much simpler where hardware failure is a problem. Due to this fact it’s difficult for me to offer advice, although sometimes you can get lucky and install the hardware (software) raid card, install the drivers, connect the drives in the same order and everything just works – Although there is no way I would offer this as advice where precious data is concerned, your boss is an idiot and should not be putting you in this position.

  • Jun 25, 2017 · #3 · [H]ard|Gawd

    Consideration #1: immediately back up.
    Consideration #2: very rarely do hardware RAID (especially pseudo-RAID) controllers use compatible disk layouts.

    I know server admins that buy a couple of RAID cards, even from mainstream hardware controllers, to mitigate their awful compatibility.
    Unless you need the throughput, software RAID is more than viable and is compatible across setups.

  • Jun 28, 2017 · #4 · dgingeri

    Consideration #1 – this is storage for large projects for our post production group. It was specifically put up with the condition that it would not be backed up. (Not my choice.) It is actually rarely used, and has frequently been found to be locked up after months of idle time.
    Consideration #2 – I have worked with these controllers very frequently, and I know for certain that it is easy to migrate groups from one to another. Disk sets can move from the 9261 to the 9361 without issue, and backwards is only a problem if they used RAID 6 on the newer controller. This works great under Windows, even when the boot drive is on the RAID controller. I just install the newer drivers before the change out, and as soon as it boots up, it re-recognizes the devices, reboots, and all is at 100%. The Dell versions of each work that way well, too. It is easy to import a disk set from a Dell H730 on a Broadcom 9361 or a LSI 9261, as long as they’re SATA drives. SAS makes things a bit more complicated, but not much. Of course, using 12Gb SAS drives on a Dell H710 or LSI 9261 is not possible, but that’s the only complication.

    Oh, and it wasn't my choice how this machine was built, as it was built before I ever started with this company. So, software RAID wasn't a choice for me. Also, with the performance needed for this, editing large video files over a Linux SMB share, software RAID would not work. It just doesn't perform well enough. I know this from direct experience.

  • Jun 28, 2017 · #5 · BulletDust (Supreme [H]ardness)

    That doesn’t stand to reason.

    If you have to install drivers to use the raid card, you are using a variant of software raid. With the level of CPU performance that’s been around for the last ten years or so I’ve rarely seen a situation where software raid via mdadm didn’t perform adequately. If the cards (replacement vs original) are compatible just bung the card in, connect the drives, install the drivers and you’re good to go.

    Your boss is still a flog using ridiculous teaching methods.

  • Jun 29, 2017 · #6 · Frobozz ([H]ard|Gawd)

    Consideration #1 needs to be revisited if possible. It doesn’t need to be in a super disaster recovery capacity. However, if it’s conceivable to push a copy of the contents (at least work data) over to an external drive or system while you learn whatever (or decide to yolo it), it’ll take a lot of the pressure off.

    Do you have any similar hardware to practice this scenario or will no one really care if you mess something up? I haven’t played with RAID cards all that much, but don’t they have their own schemes for writing metadata across the drives? (making portability between different controller types practically impossible) That would make the drivers at the linux level (if boot volume is on the array) not really the first problem to overcome.

    If not, at the OS level, I guess your reading adventure will be about the contents of /boot (grub, initrd/vmlinuz) and reading the documentation for setting up the new controller in Linux. I’m just pulling an idea from the air with no experience, but if it’s not baked into the default kernel, then you’ll have to install some sort of package and then regenerate initrd/initramfs so that their supplied driver will be available as the system starts to boot.

    Personally, I think your blocker will be before that at the RAID hardware & metadata level though. Unless the boss shares the secret handshake, the shortest path will likely be backing up data and starting with a new install. You could also pitch for bumping to a newer version (16.04?) for added value. Just my $0.02

    Edit: betcha another $0.02 that the guys in the SSD & Data Storage forum would have more first hand experience.


    In this article I will share the steps to configure software RAID 1 (mirroring) with and without a spare disk. I will explain this in more detail in the upcoming chapters. I have written another article comparing the various RAID types, with figures and the pros and cons of individual RAID types, so that you can make an informed decision before choosing a RAID type for your system.


    What is RAID 1?

    RAID-1 is usually referred to as “mirroring.” Each child object in a RAID-1 region contains an identical copy of the data in the region. A write to a RAID-1 region results in that data being written simultaneously to all child objects. A read from a RAID-1 region can result in reading the data from any one of the child objects. Child objects of a RAID-1 region do not have to be the same size, but the size of the region will be equal to the size of the smallest child object.


    Create Software RAID 1 without Spare Disk

    The simplest RAID-1 configuration must contain at least two member disks. In this example, /dev/sdb1 and /dev/sdc1 are member disks of the RAID-1 at /dev/md0 :
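
    In mdadm terms that example is roughly:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1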

    There are certain steps below which you must follow before creating software RAID 1 on your Linux node. Since I have already performed those steps in my older article, I will share the hyperlinks here.

    Important Rules of Partitioning

    Partitioning with fdisk

    Now we have our partitions available with us which we can validate using lsblk

    Configure software raid 1

    Now that we have all the partitions, we will create a software RAID 1 array on those partitions.

    Verify the changes

    Our software RAID 1 array has now been created successfully. Verify the changes using the command below:

    Now /proc/mdstat reports information about the array and also includes information about the resynchronization process. Resynchronization takes place whenever a new array that supports data redundancy is initialized for the first time. The resynchronization process ensures that all disks in a mirror contain exactly the same data.

    The resynchronization is about 40 percent done and should be completed in some time based on your software raid 1 array size.

    Create file-system

    Now that our software RAID 1 array is ready, we will create a filesystem on top of it so it can be used for storing data. For the sake of this article I will create an ext4 filesystem, but you can create any other filesystem on your software RAID 1 as per your requirement.

    To check the details of the software RAID 1 array, you can use the command below:

    Create mount point

    Next we need a mount point to access the software raid 1 array file-system.

    Now we have our mount point and have mounted our software RAID 1 array on it. Let us check the details of our software RAID 1 array.

    So now this software raid 1 array can be used to store your data. But currently since we have temporarily mounted this filesystem, it will not be available after reboot.

    To make the changes persist across reboots, add the content below to your /etc/fstab:
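
    An example of such an entry, assuming the array is mounted at /raid1 (use your own mount point, or better, the filesystem UUID reported by blkid):

        /dev/md0    /raid1    ext4    defaults    0 0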

    Next save your file and reboot your node.
    Once the node is UP make sure your software raid 1 array is mounted on your mount point i.e.

    Configure Software RAID 1 with Spare Disk

    When a disk does fail, it's useful to be able to automatically promote another disk into the array to replace the failed disk; hence it is good to add a spare disk while configuring a software RAID 1.

    The spare disk parameter is combined with the device parameter to define disks that will be inserted into the array when a member disk fails. In this article I have added a new virtual disk to demonstrate the creation of a software RAID 1 array with a spare disk.

    If you are using mdadm , the -x flag defines the number of spare disks. Member disks are parsed from left to right on the command line. Thus, the first two disks listed in this example ( /dev/sdb1 and /dev/sdc1 ) become the active RAID members, and the last disk ( /dev/sdd1 ) becomes the spare disk.
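
    In command form that is roughly:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 -x 1 /dev/sdb1 /dev/sdc1 /dev/sdd1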

    If a disk in this array failed, the kernel would remove the failed drive (either /dev/sdb1 or /dev/sdc1 ) from /dev/md0 , insert /dev/sdd1 into the array and start reconstruction. In this case /dev/sdb1 has failed (forcefully), as indicated by (F) in the following listing.

    The md driver has automatically inserted spare disk /dev/sdd1 and begun recovery.

    To check the detail of the software raid 1 array /dev/md0

    Lastly, I hope the steps from this article to configure a software RAID 1 array with and without a spare disk on Linux were helpful. Let me know your suggestions and feedback using the comment section.


    2 thoughts on “Step-by-Step Tutorial: Configure Software RAID 1 in Linux”

    Hi.
    Thanks for this tutorial.
    I'm hosted with OVH on a Proxmox template with 2 x 480 GB SSDs in RAID1 (md1 and md5).
    This server has 2 x 2 TB HDDs (sdc and sdd) that I've put in RAID1 (md6). I can mount it and access the drive.
    My server can reboot fine, but as soon as I add the mount in fstab, it no longer boots. I don't have a KVM, so I have to reformat everything and reinstall my Proxmox.
    What can I do to get it working across a reboot?

    Hello, what are you using to mount the additional partition in /etc/fstab? Can you please share that line?
    You need not re-format; in such cases you should get a prompt for maintenance mode where you should be able to modify the fstab, or you can log in to emergency mode using a Live DVD. But let's try to troubleshoot this problem first.
    Also, is this /dev/md6 accessible during boot?


    EinsGehtNoch (New Member) · Jan 5, 2017 · #1

    I have assembled a home-server, mainly to replace my deprecated 2-bay NAS, give me a real web-server (e.g. NextCloud) and a place to store a remote desktop. Further applications will follow as needed. I like to keep my functions strongly separated, since I tend to mess around sometimes. This messing around should not affect my file-server and NextCloud, since other people need them too.

    I have installed Proxmox without bigger problems, and, for testing, Ubuntu Server and Ubuntu Desktop (both as VMs) within Proxmox. No big problems here either.

    I am now struggling with the basic set-up for the main functions: the file-server and the data store for NextCloud. I have been reading for the last few weeks, but cannot figure out the right way.

    What does make more sense to you?

    1. Let Proxmox handle the file-server storage system (ZFS mirror / Software RAID 1) and somehow let the file server VM (Ubuntu Server) access it.
    2. Pass the hard-disks directly to the file-server, and create a RAID there.

The first option seems more straightforward to me. The host (Proxmox) could also claim unused system resources as needed. But what would be the best way to access the ZFS storage from within the VM?

    ==========================================
    Hardware (for now):
    Supermicro X11SSM-F
    G4400
    16 GB ECC RAM
    32 GB SSD (System HDD: Proxmox, VMs, ISOs, . )
    2×3 TB HDD from old NAS for data storage
    (2x 4 TB HDD for various use and for replacing one of the older 3 TB HDDs if necessary)

There are several guides on the internet for creating software RAIDs on Ubuntu. We’ve found most of them to be either not very comprehensive or difficult to understand and follow. This is why we’ve made this tutorial as easy to use as we could: pictures at every step and detailed instructions. In fact, it may be a little too comprehensive, but that’s OK; at least you’ll be confident you created the RAID correctly. If you have any questions or run into a problem, feel free to leave a comment below and we’ll try to help.

Linux software RAID works differently than typical hardware RAID: it is partition based instead of disk based. This means that you must create matching partitions on all disks before creating the RAID, whereas hardware RAID has you add whole disks to the RAID and then create the partitions on top of it.

    This tutorial was created while installing Ubuntu 12.04 64 bit Server Edition. It’s intended to be the first in a series of Linux software RAID tutorials. Future tutorials will cover topics such as how to recover from a failed disk.

This server has two 16GB disks installed. We will be creating 2 partitions: a 2GB swap partition and a 14GB root partition. After we are done, the server will stay in operation if one of the two disks fails. Most of the pictures in this tutorial are self-explanatory; the option you need to choose will be highlighted. We will provide comments on the picture if there are any special considerations.

    To begin, run the Ubuntu installer. When you get the ‘Partition disks’ menu, choose ‘Manual’:

In this case, the disks are new and there are no partition tables on them. Select each disk to create a partition table:

    Select the free space on the first disk to create partitions on it:

    The first partition will be 2GB at the beginning of the disk (this will be used for swap space):

    You can leave partition settings the default. After the RAID is created, these partitions will be overwritten, so there is no need to configure them here:

    Select the remaining free space on the first disk to create the 2nd partition. In this case, we will be using the remaining free space for this partition:

    Again, do not worry about configuring the partition here. Leave it at the defaults:

    After creating the 2 partitions on the first disk, repeat the process and create identical partitions on the second disk.

    You should now see identical partition sizes on both disks. Choose ‘Configure software RAID’ to begin creating the software RAID:

Again, the Linux software RAID is partition based, so we will need to create 2 RAIDs, one for each of our two sets of partitions. Choose ‘Create MD device’ to begin creating the first:

    This step can be confusing for some people. Our first RAID will consist of 2 partitions (the 2GB partitions on each of the disks), so choose 2 active devices:

    We aren’t using any spare devices in this example:

    Only select the 2GB partitions. There should be one on each disk:

    You’ll be taken back to the RAID configuration menu. Choose ‘Create MD device’ to begin creating the 2nd RAID:

    Choose both of the 14GB partitions (again, there should be one on each disk):

    Choose ‘Finish’ to complete the RAID configuration.

Now we partition the 2 RAIDs. You’ll see ‘RAID1 device #0’ and ‘RAID1 device #1’. These are the only two we need to partition.

    To configure the swap RAID partition, select the 2GB RAID device listed under ‘RAID1 device #0’:

    For ‘Use as’, select ‘swap area’ and then choose ‘Done setting up the partition’:

    You will be taken back to the partitioning menu. Select the 2nd RAID device (in this case, it’s the 14GB one) from the menu. You can configure the RAID device with whatever file system you need, but we are going with the default, Ext4. For the ‘Mount point’, make it the root by selecting “/”. Now choose ‘Done setting up the partition’:

    Your RAID devices should be partitioned similar to what is listed below. Choose ‘Finish partitioning and write changes to disk’:

    Typically, the reason why RAID is implemented is so the operating system will continue to operate in the event of a single disk failure. Choose ‘Yes’ here so you will not see any interruptions when booting with a failed disk:

Almost done! The operating system will continue to install on the RAID you set up:

    After the operating system installs, you will be prompted to install GRUB. Choose YES to install it to the Master Boot Record:

As you can see, installing GRUB to the Master Boot Record will install it to both hard disks (/dev/sda & /dev/sdb).

That’s it! After the install is complete, you should be able to boot into the OS. If you lose a hard disk, the OS will continue to run without interruption.

    Here are some links that you may find useful if you have questions about this process (or leave a comment below and we can try to help):

    While the focus of this guide is hardware, it’s worth first briefly discussing home file server operating system options.

    Windows Home Server 2011

Microsoft launched its latest version of WHS earlier this year. It can regularly be found for $50 or less when it’s on sale. Of all the file server operating systems available, WHS2011 is the easiest to both set up and administer for users familiar with the Windows series of desktop operating systems and less familiar with Unix or Linux. If you’ve installed and configured Windows XP, Vista, or 7, you can install and configure WHS2011 with minimal (or even no) extra research. The downside to this ease of use for the home file server novice is, of course, cost: WHS2011 is not free.


    FreeBSD and FreeNAS

    FreeBSD is, of course, free. Because it is a Unix operating system, it requires time and effort to learn how to use. While its installation uses an old text-based system and its interface is command line-based, you can administer it from a Windows PC using a terminal like PuTTY. I generally do not recommend FreeBSD to users unfamiliar with Unix. However, if you are intrigued by the world of Unix and are interested in making your first foray into a non-Windows OS, setting up a file server is a relatively easy learning experience compared to other Unix projects.

FreeNAS is based on FreeBSD but is built specifically to run as a file server. It features an intuitive, easy-to-use web interface as well as a command line interface. Both FreeBSD and FreeNAS support ZFS, a file system like NTFS and FAT32. ZFS offers many benefits over NTFS, such as functionally (for the home user) limitless file and partition size caps, auto-repair, and RAID-Z. Though it is aimed more at enterprise and commercial users than consumers, Matt wrote an article last year that has lots of useful information about ZFS.

    Ubuntu and Samba

    Ubuntu is arguably the easiest Linux distribution for Windows users to learn how to use. Unsurprisingly, then, it has the largest install base of any Linux distro at over 12 million. While there is an Ubuntu Server Edition, one of the easiest ways to turn Ubuntu into a home file server is to install and use Samba. (Samba can be used on not only Ubuntu, but also FreeBSD.) Samba is especially useful if you’ll have mixed clients (i.e. Windows, OS X, and Unix/Linux) using your home file server. Though FreeNAS certainly works with Windows clients, Samba sets the standard for seamless integration with Windows and interoperability is one of its foci.

    Succinctly, WHS2011 is very easy to use, but costs money. Installing Ubuntu and Samba is not particularly difficult, and even if you’ve never used any type of Linux before, you can likely have a Samba home file server up and running in a morning or afternoon. FreeNAS is arguably a bit more challenging than Ubuntu with Samba but still within a few hours’ grasp of the beginner. FreeBSD is potentially far more capable than WHS, Ubuntu/Samba, and FreeNAS, but many of its features are mostly irrelevant to a home file server and its learning curve is fairly steep. When properly configured, all of the above solutions are sufficiently secure for a typical home user. Most importantly, all of these options just plain work for a home file server. An extensive comparison of each OS’s pros and cons in the context of a home file server is beyond the scope of this article, but now that we’ve covered a few OS options worth your consideration, let’s get to the hardware!

RAID 0 uses striping to increase read/write speeds, as the data can be read and written on separate disks at the same time. This RAID level is what you want to use if you need to increase the speed of disk access. You will need to create RAID-aware partitions on your drives before you can create the RAID, and you will need to install mdadm on Ubuntu.

    These commands must be done as root or you must add the sudo command in front of each command.
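If mdadm is not installed yet, it can be added from the standard repositories first, for example:

$ sudo apt-get install mdadm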

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb5 /dev/sdb6

--create
    This will create a RAID array. The device that you will use for the first RAID array is /dev/md0.

--level=0
    The level option determines what RAID level you will use for the RAID.

--raid-devices=2 /dev/sdb5 /dev/sdb6
Note: for illustration or practice, this example uses two partitions on the same drive. This is NOT what you want in production; the partitions must be on separate drives. However, it will give you a practice scenario. You must specify the number of devices in the RAID array and list the devices that you have partitioned with fdisk. The example shows two RAID partitions.
    mdadm: array /dev/md0 started.

Check the progress of the RAID by reading /proc/mdstat:

    md0 : active raid0 sdb6[1] sdb5[0]

    995712 blocks 64k chunks
unused devices: <none>

You can also verify that the RAID is being built by checking /var/log/messages:

# tail /var/log/messages

    May 19 09:08:51 ub1 kernel: [ 4548.276806] raid0: looking at sdb5

    May 19 09:08:51 ub1 kernel: [ 4548.276809] raid0: comparing sdb5(497856) with sdb6(497856)

    May 19 09:08:51 ub1 kernel: [ 4548.276813] raid0: EQUAL

    May 19 09:08:51 ub1 kernel: [ 4548.276815] raid0: FINAL 1 zones

    May 19 09:08:51 ub1 kernel: [ 4548.276822] raid0: done.

    May 19 09:08:51 ub1 kernel: [ 4548.276826] raid0 : md_size is 995712 blocks.

    May 19 09:08:51 ub1 kernel: [ 4548.276829] raid0 : conf->hash_spacing is 995712 blocks.

    May 19 09:08:51 ub1 kernel: [ 4548.276831] raid0 : nb_zone is 1.

    May 19 09:08:51 ub1 kernel: [ 4548.276834] raid0 : Allocating 4 bytes for hash.

Create the ext3 File System
You have to place a file system on your RAID device. In this example, the journaling file system ext3 is placed on the device.

    # mke2fs -j /dev/md0

    mke2fs 1.40.8 (13-Mar-2008)

    Block size=4096 (log=2)

    Fragment size=4096 (log=2)

    62464 inodes, 248928 blocks

    12446 blocks (5.00%) reserved for the super user

    First data block=0

    Maximum filesystem blocks=255852544

    32768 blocks per group, 32768 fragments per group

    7808 inodes per group

    Superblock backups stored on blocks:

    32768, 98304, 163840, 229376

    Writing inode tables: done

    Creating journal (4096 blocks): done

    Writing superblocks and filesystem accounting information: done

    This filesystem will be automatically checked every 39 mounts or

    180 days, whichever comes first. Use tune2fs -c or -i to override.

    Create a Place to Mount the RAID on the File System

In order to use the RAID array, you will need to mount it on the file system. For testing purposes, you can create a mount point and try it out. To make the mount permanent, you will need to edit /etc/fstab.
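For example, a minimal mount point matching the /raid path used below:

# mkdir /raid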

    Mount the RAID Array

    # mount /dev/md0 /raid

    You should be able to create files on the new partition. If this works then you may edit the /etc/fstab and add a line that looks like this:

/dev/md0 /raid ext3 defaults 0 2

    Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.

    Hope you find this article helpful.

Comments


The simplest way to install RAID 0 on Ubuntu 20.04 is:
1) Install Ubuntu on a small partition first, as usual.
2) Go to Activities -> Disks and use the Disks tool to create a RAID partition on disk 1, then delete all partitions on disk 2; the Disks tool will automatically assign disk 2 as a RAID member.
3) Reinstall Ubuntu, selecting the RAID partition on disk 1.

You will find the installation speed about 4x faster.
This has to be the easiest way ever!

I am writing this down for myself so I don’t forget how I did it. It is about 95% working. I answer comments when I see them.

Thursday, August 15, 2019

Ubuntu 16.04: removing RAID 1

The linked articles refer to installations on GPT and EFI.

Installation on MBR is done in the usual way with the Ubuntu Server installer, as described in
“Ubuntu 16.04: configuring software RAID, replacing a failed hard disk”.

    $ sudo apt install gdisk
    $ sudo gdisk -l /dev/sda
    $ sudo gdisk -l /dev/sdb

1. Dismantling existing RAID devices (for example, before reinstalling or reusing the disks); the screenshots correspond to Ubuntu 16.04.

View from the running system:

    $ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT


$ sudo mdadm --detail /dev/md0


Boot from a live CD:

    $ sudo add-apt-repository universe
    $ sudo apt update
    $ sudo apt install ssh mc mdadm

    ==========================================
This step can be skipped.
For convenience, you can connect remotely:
(user: ubuntu)
Set a password:
    $ passwd
    $ ip a
    . 192.168.1.239 .

I recently discovered TurnKey and its plethora of app options; it is truly exciting. I have built quite a few file servers, so my first project was to try the TurnKey File Server. The install went extremely smoothly using the ISO on a bare-metal server. The server has 3 drives installed, so I installed the OS to a single drive with the intention of creating a RAID 1 with the other two drives.

Here is where I am stumped. I have tried looking through the docs for how to set up the RAID, but cannot find clear info on how this is done from Webmin. Having built a number of file servers using OpenMediaVault, where the RAID setup is very straightforward and streamlined, I am having difficulty figuring out how to build the RAID using TurnKey.

    I found this link for setting up RAID with Webmin.

The tutorial is clear, but I am unable to move past step 1, as I see no way to change a disk partition type to “Linux RAID” without going to the command line or booting from a USB drive and changing it with GParted.

So, is there any tutorial for TurnKey that explains, step by step, how to set up a RAID array from Webmin?

    Thanks in advance. Ken

    One step forward, one step back.

OK, I just found out how to change the partition type to Linux RAID from Webmin, so I was able to change it on drives 2 and 3. However, I noticed that when clicking on Linux RAID in the menu, I get the following error:

    The kernel RAID status file /proc/mdstat does not exist on your system. Your kernel probably does not support RAID.

Is there a reason the RAID option does not exist on the system?

    One more step

A little more research and I discovered that the mdadm module apparently is not installed in the TurnKey File Server. So I logged in via SSH and installed mdadm on the system. After a reboot, the Linux RAID options finally showed up.

However, even after the mdadm install, when I tried creating a RAID 1 array the system never moves on to the next phase of selecting the partitions for the array (step 3 in the tutorial).

So I am still stuck.

    Hi Ken

    Sorry to hear of your troubles setting up RAID.

    Unfortunately, I don’t think I’m going to be much help to you.

    I’ve never set up software RAID (actually I’ve never set up any sort of RAID). It’s always been something that I’ve intended to play with, but never had the need (I’ve always just used LVM and relied on backups).

    Also, I have a personal preference for the commandline, so don’t have a ton of experience with Webmin. I use Proxmox as a Hypervisor and run everything as VMs or containers, so have very little experience with Linux on bare metal too.

    Having said all that, it should certainly be possible. Under the hood, TurnKey is Debian. v14.x is based on Debian Jessie and our upcoming (and well overdue) v15.0 release will be based on Debian Stretch. We build the Webmin packages ourselves, but other than our TKLBAM module, all the code comes from Webmin themselves.

    I see that you’ve installed mdadm. AFAIK that should be the only dependency/requirement.

Out of interest though, your request for assistance has actually helped me discover a bug in our upcoming v15.0! So thanks for that! (For whatever that’s worth to you.)

    I tried following the tutorial you linked to myself (using a v14.2 VM I had handy). I think I may have missed something, but it won’t work for me either. However, my experience sounds a little different to yours. I got as far as the “RAID device options” screen but I’m only seeing one “Partitions in RAID” (and the same one showing under “Spare partitions”). If I try to continue, then I get an error “Failed to create RAID : At least 2 partitions must be selected for mirroring”. So that’s a bit of a fail.

I really need to get back to v15.0 development so we can push it out the door, so I can’t really afford to spend any more time on this right now, sorry. The only other thing that I could suggest is trying from the commandline. One of the (many) reasons why I prefer the commandline is that unless you watch a video, I find that it’s quite easy to misinterpret things, miss vital steps, or click the wrong thing when using a GUI. I find using commandline tutorials is generally more reliable, as IMO it’s harder to misinterpret things and often error messages will give clear signs of anything that isn’t working as it should (and often point to why, or at least give an error message to google). It’s also much easier to find out exactly what the commands are actually doing (e.g. via the man pages, or even google). FWIW I did a quick google for you and found a tutorial that might be worth a try. It’s for a previous version of Debian (Squeeze was the one before Jessie) but I would expect it to be similar, if not the same (as hinted above, for anything that doesn’t work, try googling the explicit error messages).

    If you’d rather persist with Webmin, then perhaps consider posting on the Webmin support forums. They will probably be able to help you out much more than me. Please note that the Webmin theme we provide by default is different to what Webmin use by default (although we’ll be switching to the default theme for v15.0) and is an older version, but otherwise, everything should be as expected. As I noted above, the basis of TurnKey v14.x is Debian Jessie.

    Sorry, I couldn’t just guide you to the exact steps required, but hopefully I’ve been of some assistance. Good luck with it and if you manage to work it out, please post back as I’m sure other users will find it useful.
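For anyone who does end up trying the command line instead, here is a rough sketch of the equivalent steps. The device names /dev/sdb and /dev/sdc are placeholders for the two data disks (check yours with lsblk), and it assumes each disk already carries a single partition spanning it:

# parted /dev/sdb set 1 raid on
# parted /dev/sdc set 1 raid on
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mkfs.ext4 /dev/md0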

Linux software RAID (often called mdraid or MD/RAID) makes it possible to use RAID without a hardware RAID controller. For this purpose, the storage media used (hard disks, SSDs and so forth) are simply connected to the computer as individual drives, for example via the SATA ports directly on the motherboard.

    In contrast with software RAID, hardware RAID controllers generally have a built-in cache (often 512 MB or 1 GB), which can be protected by a BBU or ZMCP. With both hardware and software RAID arrays, it would be a good idea to deactivate write caches for hard disks, in order to avoid data loss during power failures. SSDs with integrated condensers, which write the contents of the cache to the FLASH PROM during power failures, are the exception to this (such as the Intel 320 Series SSDs).

    Contents

    • 1 Functional Approach
    • 2 RAID Superblock
      • 2.1 Superblock Metadata Version 0.90
      • 2.2 Superblock Metadata Version 1.*
    • 3 Creating a RAID Array
      • 3.1 Preparing Partitions
    • 4 Creating a RAID 1
      • 4.1 Testing the Alignment
      • 4.2 Adjusting the Sync Rate
    • 5 Deleting a RAID Array
    • 6 Roadmap
    • 7 References
    • 8 Additional Information

    Functional Approach


    A Linux software RAID array will support the following RAID levels: [1]

    • RAID 0
    • RAID 1
    • RAID 4
    • RAID 5
    • RAID 6 [2]
    • RAID 10

    RAID Superblock

Linux software RAID stores all of the necessary information about a RAID array in a superblock. This information is located at different positions depending on the metadata version.

    Superblock Metadata Version 0.90

The version 0.90 superblock is 4,096 bytes long and located in a 64 KiB-aligned block at the end of the device. Depending on the device size, the superblock starts at the earliest 128 KiB before the end of the device and at the latest 64 KiB before the end of the device. To calculate the address of the superblock, the device size must be rounded down to the nearest multiple of 64 KiB and then 64 KiB deducted from the result. [3]
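As an illustration of that calculation (a sketch only; /dev/sdb1 is a placeholder for an array member):

# SIZE=$(blockdev --getsize64 /dev/sdb1)
# OFFSET=$(( SIZE / 65536 * 65536 - 65536 ))
# echo $OFFSET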

    Version 0.90 Metadata Limitations:

    • 28 devices maximum in one array
    • each device may be a maximum of 2 TiB in size
    • No support for bad-block-management [4]

    Superblock Metadata Version 1.*

    The position of the superblock depends on the version of the metadata: [5]

    • Version 1.0: The superblock is located at the end of the device.
    • Version 1.1: The superblock is located at the beginning of the device.
    • Version 1.2: The superblock is 4 KiB after the beginning of the device.
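Which metadata version (and, for 1.1/1.2, which data offset) a member device carries can be read back with mdadm, for example (/dev/sdb1 is a placeholder for an array member):

# mdadm --examine /dev/sdb1 | grep -i -E 'version|offset'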

    Creating a RAID Array

    The following example will show the creation of a RAID 1 array. A Fedora 15 live system will be used in the example.

    Preparing Partitions

    The software RAID array will span across /dev/sda1 and /dev/sdb1. These partitions will have the Linux raid autodetect type (fd):
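To double-check the partition type on both disks, something like the following works:

# fdisk -l /dev/sda /dev/sdb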

    Creating a RAID 1

    The progress of the initialization process can be requested through the proc file system or mdadm:
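A sketch of the creation step and of those progress queries, using the /dev/sda1 and /dev/sdb1 partitions prepared above:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# cat /proc/mdstat
# mdadm --detail /dev/md0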

    Testing the Alignment

The version 1.2 metadata will be used in the example. The metadata is thus close to the beginning of the device, with the actual data after it, but aligned to a 1 MiB boundary (Data offset: 2048 sectors, where a sector is 512 bytes):

    Depending on the version of mdadm the size of the data offset varies:

• Note: mdadm’s current development version allows specifying the size of the data offset manually (for --create and --grow, not for --add): Add --data-offset flag for Create and Grow
• since mdadm-3.2.5: 128 MiB Data Offset (262144 sectors), if possible: super1: fix choice of data_offset. (14.05.2012): While it is nice to set a high data_offset to leave plenty of head room it is much more important to leave enough space to allow of the data of the array. So after we check that sb->size is still available, only reduce the ‘reserved’, don’t increase it. This fixes a bug where --adding a spare fails because it does not have enough space in it.
    • since mdadm-3.2.4: 128 MiB Data Offset (262144 sectors) super1: leave more space in front of data by default. (04.04.2012): The kernel is growing the ability to avoid the need for a backup file during reshape by being able to change the data offset. For this to be useful we need plenty of free space before the data so the data offset can be reduced. So for v1.1 and v1.2 metadata make the default data_offset much larger. Aim for 128Meg, but keep a power of 2 and don’t use more than 0.1% of each device. Don’t change v1.0 as that is used when the data_offset is required to be zero.
    • since mdadm-3.1.2: 1 MiB Data Offset (2048 sectors) super1: encourage data alignment on 1Meg boundary (03.03.2010): For 1.1 and 1.2 metadata where data_offset is not zero, it is important to align the data_offset to underlying block size. We don’t currently have access to the particular device in avail_size so just try to force to a 1Meg boundary. Also default 1.x metadata to 1.2 as documented. (see also Re: Mixing mdadm versions)

    Adjusting the Sync Rate

    A RAID volume can be used immediately after creation, even during synchronization. However, this reduces the rate of synchronization.

In this example, where a RAID 1 array spans two SSDs directly (without partitions, on /dev/sda and /dev/sdb), synchronization starts at roughly 200 MB/s and drops to 2.5 MB/s as soon as data has been written to the RAID 1 array’s file system:

    The synchronization can be accelerated by manually increasing the sync rate: [6]
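The relevant tunables are the md sync speed limits (values are in KiB/s; the numbers here are only an example):

# sysctl -w dev.raid.speed_limit_min=100000
# sysctl -w dev.raid.speed_limit_max=400000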

    Deleting a RAID Array

    If a RAID volume is no longer required, it can be deactivated using the following command:
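With mdadm this is the --stop operation, e.g.:

# mdadm --stop /dev/md0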

    The superblock for the individual devices (in this case, /dev/sda1 and /dev/sdb1 from the example above) will be deleted by the following commands. By doing this, you can re-use these partitions for new RAID arrays.
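With mdadm this is done per member device, e.g.:

# mdadm --zero-superblock /dev/sda1
# mdadm --zero-superblock /dev/sdb1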

    Roadmap

    Neil Brown published a roadmap for MD/RAID for 2011 on his blog:

Support for the ATA TRIM feature for SSDs (discard support in Linux software RAID) is periodically discussed. However, this feature was still at the end of the list of future features as of the end of June 2011: