
How to network boot (pxe) the ubuntu livecd

With Ubuntu’s latest release out the door, we thought we’d celebrate by showing you how to make it centrally available on your network by using network boot (PXE).

Overview

We already showed you how to set up a PXE server in the “What Is Network Booting (PXE) and How Can You Use It?” guide. In this guide, we will show you how to add the Ubuntu LiveCD to the boot options.

If you are not already using Ubuntu as your number one “go to” tool for troubleshooting, diagnostics and rescue procedures… it will probably replace all of the tools you are currently using. Also, once the machine has booted into the Ubuntu live session, it is possible to perform the OS setup like you normally would. The immediate upshot of using Ubuntu over the network is that, if you’re already using the CD version, you will never again find yourself looking for the CDs you forgot in the CD drives.

Prerequisites

  • It is assumed that you have already setup the FOG server as explained in our “What Is Network Booting (PXE) and How Can You Use It?” guide.
  • All the prerequisites for the FOG setup guide apply here as well.
  • This procedure has been used to make Ubuntu 9.10 (Karmic Koala) up to and including 11.04 (Natty Narwhal) network bootable. It may work for other Ubuntu-like distributions (such as Linux Mint) but hasn’t been tested.
  • You will see me use VIM as the editor program simply because I’m used to it… you may use any other editor that you’d like.

How does it work?
In general the Ubuntu LiveCD boot process that we all know is like so:

  • You put a CD into the CD-ROM drive; the BIOS knows how to use the CD-ROM just enough to load the boot program from it (isolinux).
  • Isolinux is responsible for the menu options. Once you select a boot entry like “Start or install Ubuntu”, it calls the kernel + initrd (initial RAM disk) files, copies them into memory and passes parameters to them.
  • The kernel + initrd, now in RAM and in control, start the boot process, using the parameters that were passed to them to determine things like: should the splash screen be shown? Should the output be verbose?
  • When the initrd scripts have finished loading drivers and device information, they look for the Ubuntu LiveCD files to continue the boot process. The normal behavior is to look in the local physical CD-ROM drive.

For network boot:

  • Instead of using local media such as a CD, the client is booted using its network card (PXE) and is supplied with PXElinux over TFTP.
  • Just like Isolinux, PXElinux is responsible for the menu options. Once you select a boot entry, it calls the Ubuntu kernel + initrd files, copies them into memory and passes parameters to them.
  • The kernel + initrd, now in RAM and in control, start the boot process, with our additional instruction that they should not look for the boot files in the client’s local physical CD-ROM drive, but rather in an NFS share on our FOG server.

This is possible because the Ubuntu creators have enabled networking by integrating network card drivers and protocols into the kernel + initrd files. For that, we can only say thank you to the Ubuntu team.

Make the Ubuntu files available on the server

The first step is to make the Ubuntu files available on the server. You may opt to simply copy them from the CD drive, or extract them from the ISO, and that will work just fine. With that said, we will make the ISO auto-mounted. While not a must, doing this will enable you to use our “How to Upgrade your Ubuntu ISO Without Re-downloading” guide, to upgrade the Ubuntu version of your network boot without going through all the procedures from scratch or alternatively, replace a single file to update the entire entry.

With the above said, this author likes keeping a couple of past versions around until the new one has been proven absolutely stable and issue-free. That is why we will make a sub-directory and mount point according to version, but know that you could bypass that to have your single point of update.

  1. Copy the ISO into the “/tftpboot/howtogeek/linux” directory
  2. Create the mount point:

sudo mkdir -p /tftpboot/howtogeek/linux/ubuntu/

  3. Append the following line to the “/etc/fstab” file so that the ISO is loop-mounted automatically at boot:

/tftpboot/howtogeek/linux/ubuntu-11.04-desktop-amd64.iso /tftpboot/howtogeek/linux/ubuntu/11.04 udf,iso9660 user,loop 0 0

Note: Despite representation, this is one unbroken line.
Test that the mount point works by issuing:

ls -lash /tftpboot/howtogeek/linux/ubuntu/11.04/
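Note that entries added to “/etc/fstab” are not mounted retroactively, so if the listing above comes back empty, mount everything first. A quick sketch (the path matches the example above):

```
sudo mount -a
ls -lash /tftpboot/howtogeek/linux/ubuntu/11.04/casper/
```

If the mount worked, you should see the casper directory contents, including the vmlinuz and initrd.lz files used later in this guide.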

Create an NFS share

While the boot procedure starts by using PXE, the actual heavy lifting is done by the NFS share on the server. As we are basing this guide on our FOG server, the NFS components and some configurations have already been done for us by the FOG team, and all we have to do is add to them our Ubuntu share.

    Edit the “exports” file to add the new share:

sudo vim /etc/exports

Once the share is in place, restart the NFS service so the change takes effect:

sudo /etc/init.d/nfs-kernel-server restart
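The export line itself did not survive the page transfer; a minimal “/etc/exports” entry matching the paths in this guide would look something like this (the option set is an assumption, modeled on typical read-only LiveCD exports):

```
/tftpboot/howtogeek/linux/ubuntu/11.04 *(ro,sync,no_root_squash,no_subtree_check)
```

The read-only (ro) flag is enough here, since the live session never writes back to the share.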

PXE menu setup

Edit the “Linux stuff” menu:

sudo vim /tftpboot/howtogeek/menus/linux.cfg

Append to it the following:

LABEL Ubuntu Livecd 11.04
MENU DEFAULT
KERNEL howtogeek/linux/ubuntu/11.04/casper/vmlinuz
APPEND root=/dev/nfs boot=casper netboot=nfs nfsroot= :/tftpboot/howtogeek/linux/ubuntu/11.04 initrd=howtogeek/linux/ubuntu/11.04/casper/initrd.lz quiet splash --

The above may look messy at first glance, but all you have to do is replace the missing value after “nfsroot=” (the blank before “:/tftpboot”) with the IP of your NFS/PXE server.
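For illustration only, with a hypothetical server IP of 192.168.1.100 the completed entry would read:

```
LABEL Ubuntu Livecd 11.04
MENU DEFAULT
KERNEL howtogeek/linux/ubuntu/11.04/casper/vmlinuz
APPEND root=/dev/nfs boot=casper netboot=nfs nfsroot=192.168.1.100:/tftpboot/howtogeek/linux/ubuntu/11.04 initrd=howtogeek/linux/ubuntu/11.04/casper/initrd.lz quiet splash --
```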

For a clearer geek understanding, the text above will:

  • Create a new PXE entry in the “Linux” sub-menu called “Ubuntu Livecd 11.04”.
  • Because of the “MENU DEFAULT” parameter, this entry will be automatically selected when entering the “Linux” sub-menu.
  • Point the client to take the kernel + initrd files using TFTP from the relative path in the “/tftpboot” directory of “howtogeek/linux/ubuntu…”
  • Point the initrd scripts to mount the “root” filesystem from the NFS share on the absolute path of “ :/tftpboot/howtogeek…”

Note: I have tried (and failed) to use a DNS name instead of an IP address for the server value; I’m guessing that at that stage of the boot process there simply isn’t DNS support yet… success stories are welcome.

Possible procedures

You should now be able to boot a client into Ubuntu from PXE (usually invoked by pressing F12 at startup).

At this stage we suggest you take the time to review some of the things you can do with this outstanding tool:

One last thing: if you create your Ubuntu ISO using this online builder, you will be able to slipstream all of the articles above into your PXE-bootable Ubuntu.

LiveCDNetboot

Introduction

It is possible to boot an Ubuntu live CD from the network, although the process is quite experimental. There are two parts to this: booting vmlinuz+initrd.gz from a PXE boot server, and accessing the root filesystem (containing filesystem.squashfs) over NFS. These are independent, and you can do either one without the other. e.g. for testing you could boot vmlinuz and initrd.gz from a disk or USB drive with GRUB or syslinux, as long as you pass the right kernel command line. NFS root is the tricky part, since it depends on scripts in the initramfs’s /init to do the right thing; current status:

* Gutsy and earlier: working. Previous versions of this page only documented this case, so if it’s wrong now, try looking back at previous versions.

* Karmic : working (And, if you are installing Karmic you can use the super-simple process described in the “Automation script” section below.)

However, this only works when eth0 is your boot interface, unless you modify the initramfs image. This bug dates back to at least Gutsy, and affects Hardy. Probably Intrepid, but untested.

Preparation

You will need a PXE boot server, which means DHCP and TFTP servers. dnsmasq makes a good boot server, as it has a DNS cache, DHCP server, and TFTP server all in one, but most examples you’ll see are for ISC DHCPD config files, not dnsmasq. Otherwise, go with dhcp3-server and tftpd-hpa ( atftpd works too). Search the web for more info if you haven’t done this before.

Put /casper/vmlinuz and /casper/initrd.gz (from the live CD) where your TFTP server can serve them up. (The former is just the same as the stock Ubuntu kernel; the latter is a special initial RAM disk that contains casper, Ubuntu’s set of live CD boot scripts.)

Mount the live CD and copy its contents to a path exported by your NFS server (you can also loop-mount it directly on such a path, although you will have to arrange to mount it on boot, and NFS export the loopback mount):

Ensure that this path is listed in /etc/exports, if necessary. You will need to use the no_root_squash option. Run /etc/init.d/nfs-kernel-server reload to export it.

Arrange for vmlinuz to be booted (by pxelinux.0) with the livecd initramfs (initrd.gz), and with the following kernel arguments:
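The argument line itself was lost in the page transfer; based on the casper options used elsewhere on this page, it would be along these lines:

```
root=/dev/nfs boot=casper netboot=nfs nfsroot=server:/path ip=dhcp rw quiet splash --
```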

where server and /path are set appropriately for your NFS server.

All var=value arguments set environment vars for /init, but /init parses /proc/cmdline only for lower-case options. Sometimes it does more than just setting the uppercase equivalent. quiet and splash are optional. text would stop GDM from running. (This isn’t specific to netbooting, though)

(NetworkManager bug workaround): append break=init, and when the initramfs drops you to a shell, touch /cow/etc/init.d/NetworkManager to replace N-M’s init script with an empty file under the union FS of the main system. (or do anything else to prevent N-M from even temporarily taking down the interface needed to access filesystem.squashfs.)

Example

My syslinux livecd label uses the following details:

Server: vansen or 192.168.1.100

Path: /data/images/diskless/ubuntu-7.10-desktop. This is where I put the livecd files.
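The label block itself was lost in the page transfer; reconstructed from the details above (the kernel/initrd paths relative to the tftproot are assumptions), it would look roughly like:

```
LABEL live
  KERNEL ubuntu-7.10-desktop/casper/vmlinuz
  APPEND root=/dev/nfs boot=casper netboot=nfs nfsroot=192.168.1.100:/data/images/diskless/ubuntu-7.10-desktop initrd=ubuntu-7.10-desktop/casper/initrd.gz quiet splash --
```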

/etc/exports:
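The exports entry was also lost; given the path above and the no_root_squash requirement from the Preparation section, it would be something like:

```
/data/images/diskless/ubuntu-7.10-desktop 192.168.1.0/24(ro,no_root_squash,async,no_subtree_check)
```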

Notes

    You can save space (only about 9MB, but space all the same) by using the livecd copies of the kernel (vmlinuz) and the initramfs image (initrd.gz) instead of copying them someplace else on your tftp server. This is shown in the example above.

You can export a loopback iso9660 mount, if you want to keep the iso image without wasting disk space. It is a separate mount, so it does need to be specifically exported (/etc/exports, or run e.g. sudo exportfs -v 192.168.0.0/16:/srv/tftpboot/intrepid-alpha5-i386/mnt). If your tftp server doesn’t chroot, and/or if you mounted the CD under your tftproot, you may be able to just symlink to vmlinuz and initrd.gz, or put the path to them (e.g. mnt/casper/vmlinuz) into the pxelinux config file.

If you run into problems, you can delete the quiet and splash options to see kernel boot messages on screen instead of the splash screen. Removing only the splash option will give you the less verbose messages without the splash screen, which is cool too.

If your image fails to boot and drops you to a busybox initramfs command prompt, the file casper.log may shed some light on why it died. You will likely be here because the livecd couldn’t find a root filesystem to mount (filesystem.squashfs). Try cat casper.log at the prompt.

* If your image hangs during the boot sequence just after the message about “squashfs: version 3.3”, after a minute try the key combination Alt+Enter. It will then continue booting.

Automation script

This script completely automates the aforementioned method, with a few modifications (you don’t need to download it from this link, the ‘wget’ described below does this all automatically). Steps to use it:

Boot a PC with the desktop (ed)ubuntu CD (either i386 or amd64). Karmic or Lucid are required because this script uses dnsmasq >= 2.49 instead of dhcp3-server. Also, an external DHCP server (e.g. a router) on the local network is not only allowed, but required, because dnsmasq is used in proxyDHCP mode so that it doesn’t interfere with the existing network setup.

  • From the live session, run this command:
    • Internet connectivity is needed because the script temporarily installs nfs-kernel-server.
    • That’s all, now you should be able to netboot your clients. Press Ctrl+C to stop the script when you’re done.

    It’s also possible to use that script in a normal session, but novice users are advised to use it only in live sessions.

    Discussion

    Basically, a DHCP server includes a “filename”, like pxelinux.0, in its DHCP reply, and the PXE client loads it from the TFTP server (also pointed to by the DHCP reply). TFTP paths are usually relative to a directory the tftp server chroots to, or at least acts like an http server with a DocumentRoot. e.g. /srv/tftpboot is a good choice. (the pxe client just uses paths like /pxelinux.0 and /pxelinux.cfg/default)

  • If the question “Please provide a name for this Disc, such as ‘Debian 2.1r1 Disk 1’:” appears during boot, it’s because the hidden folder named “.disk” is missing. (Chances are your “cp -a” command used a “*”, which didn’t match that dot-directory.) Copy this folder over from the source to keep the boot sequence going without the need for user interaction. This question is only visible if the splash screen is disabled; otherwise the boot will just appear to hang.
  • FIXME: put the PXE howto somewhere else. Every page about something to do with PXE should _not_ become its own howto on setting up a PXE server. It would be good to link to a good page about it, though.
  • LiveCDNetboot (last edited by alkisg, 2011-06-27 17:11:50)

    Technology in Developing Regions


    Ubuntu Live CD/Network Boot

    April 29, 2009

    Live CDs are great. In particular, they’re a great way to try out software, knowing that the chances of damaging the host system are minimal and you can throw away the entire system if you want to.

    Sometimes you want to use a live CD environment without a CD. CDs are slow, get lost and scratched, and require a CD drive. If you’re going to use live environments a lot, you’d probably prefer to boot them over the network from a machine with a hard disk and a cache.

    Luckily, Ubuntu’s live CD includes all the necessary support to do this easily, if you know how to use it. Unfortunately, it’s not really documented as far as I can tell. Please correct me if I’m wrong about this.

    I managed to make the live CD boot over the network on a PXE client using the following steps.

    • set your DHCP server up to hand off to a TFTP server. For example, add the following lines to your subnet definition in /etc/dhcp3/dhcpd.conf:
    • get a copy of pxelinux.0 from the pxelinux package and put it in the tftproot of your TFTP server.
    • copy the casper directory off the CD and put it into your tftproot as well.
    • get an NFS server on your network to loopback-mount the Desktop ISO (e.g. ubuntu-8.04.2-desktop-i386.iso) and export the mount directory through NFS. Let’s say your NFS server is 1.2.3.4 and the ISO is mounted at /var/nfs/ubuntu/live . Edit /etc/exports on the server and export the mount directory to the world by adding the following line:
    • put the following section into your tftproot/pxelinux.cfg/default file:
    • test that the PXE client boots into the live CD environment
    • if it doesn’t, remove the “quiet splash” from the end of the “append” line and boot it again, to see where it gets stuck.
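The configuration snippets referenced in the steps above were lost in the page transfer; reconstructed under the example’s stated assumptions (NFS/TFTP server at 1.2.3.4, ISO mounted at /var/nfs/ubuntu/live), they would look roughly like this:

```
# /etc/dhcp3/dhcpd.conf -- inside the subnet definition (step 1)
filename "pxelinux.0";
next-server 1.2.3.4;    # assumes TFTP runs on the same host as NFS

# /etc/exports -- export the loop-mounted ISO (step 4)
/var/nfs/ubuntu/live *(ro,no_root_squash,async)

# tftproot/pxelinux.cfg/default -- boot section (step 5)
LABEL live
  KERNEL casper/vmlinuz
  APPEND root=/dev/nfs boot=casper netboot=nfs nfsroot=1.2.3.4:/var/nfs/ubuntu/live initrd=casper/initrd.gz quiet splash --
```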

    I hope this helps someone, and that NFS-booting a live environment will be properly documented (better than this!) one day.


    How to do a fully automated Ubuntu 20.04 Server install using PXE and the live server image?

    Reason

    With the 20.04 release, it seems clear Ubuntu is further pushing the live server installer (subiquity) option. The debian-installer (d-i) image has been renamed legacy. So has the netboot installer I typically prefer. The 20.04 release also introduces a new automated installation option for the live server installer.


    These are steps to do a fully automated Ubuntu 20.04 Server install using PXE with the live server image. I found the process to be lightly documented and filled with issues. In these steps I am installing 20.04 on a UEFI based server.

    There are many variations to these steps possible. They can be customized and tailored to suit one’s needs. The goal is to provide one example of how to accomplish this and to help other users overcome the issues encountered.

    links about the installer

    • https://wiki.ubuntu.com/FocalFossa/ReleaseNotes#Installer
    • https://ubuntu.com/server/docs/install/autoinstall
    • https://discourse.ubuntu.com/t/server-installer-plans-for-20-04-lts/13631
    • https://discourse.ubuntu.com/t/netbooting-the-live-server-installer/14510

    config references

    • https://ubuntu.com/server/docs/install/autoinstall-reference
    • https://curtin.readthedocs.io/en/latest/topics/config.html

    source code

    • https://github.com/CanonicalLtd/subiquity
    • https://github.com/canonical/curtin

    Build a tftp server

    All the following steps are run as root. These were tested on an Ubuntu 18.04 server.

    Install the tftp server and a web server

    Configure apache to serve files from the tftp directory
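The commands for these two steps were lost in transfer; a sketch, assuming tftpd-hpa as the TFTP server and /var/lib/tftpboot as its root (both assumptions, since the answer only names “a tftp server” and apache):

```
apt-get update
apt-get install -y tftpd-hpa apache2

# Let apache serve the tftp directory over HTTP as well
cat > /etc/apache2/conf-available/tftp.conf <<'EOF'
Alias /tftp /var/lib/tftpboot
<Directory /var/lib/tftpboot>
    Options +Indexes +FollowSymLinks
    Require all granted
</Directory>
EOF
a2enconf tftp
systemctl restart apache2
```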

    Download the live server iso

    Extract the kernel and initramfs from the live server iso
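Again a sketch; the ISO filename and mirror path are assumptions:

```
# Download the live server iso
wget http://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso \
     -O /var/lib/tftpboot/ubuntu-20.04-live-server-amd64.iso

# Extract the kernel and initramfs from it
mount -o loop,ro /var/lib/tftpboot/ubuntu-20.04-live-server-amd64.iso /mnt
cp /mnt/casper/vmlinuz /mnt/casper/initrd /var/lib/tftpboot/
umount /mnt
```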

    Download the grub image to load via PXE

    Configure grub. This configuration will provide a fully automated boot option as well as a manual boot option
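A sketch of such a grub.cfg; SERVER_IP is a placeholder for the TFTP/web server’s address, and the url=/ds= parameters follow the live server netboot scheme:

```
# /var/lib/tftpboot/grub/grub.cfg
menuentry "Install Ubuntu 20.04 Server (automated)" {
    linux /vmlinuz ip=dhcp url=http://SERVER_IP/tftp/ubuntu-20.04-live-server-amd64.iso autoinstall ds=nocloud-net\;s=http://SERVER_IP/tftp/ ---
    initrd /initrd
}
menuentry "Install Ubuntu 20.04 Server (manual)" {
    linux /vmlinuz ip=dhcp url=http://SERVER_IP/tftp/ubuntu-20.04-live-server-amd64.iso ---
    initrd /initrd
}
```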

    Configure cloud-init with the autoinstall configuration. I first ran the install manually to get the generated /var/log/installer/autoinstall-user-data file to use as the basis. I then made modifications based on my needs and errors encountered.
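A minimal shape for the nocloud seed (the values are placeholders; an empty meta-data file must sit alongside user-data for the datasource to be accepted):

```
# /var/lib/tftpboot/user-data
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: example-host
    username: ubuntu
    password: "$6$...crypted-password-hash..."
```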

    Configure DHCP

    Set the DHCP Options 66,67 according to the documentation for your DHCP server.

    Boot your server

    At this point, you should be able to boot your UEFI based server and perform a completely automatic install.

    • The server being installed requires over 2 GB of RAM. I ended up creating a VM with 3 GB for testing
    • The generated /var/log/installer/autoinstall-user-data file was broken in the following ways
      • There is no version property, which caused a validation failure. I added the property
      • The network section required another level of nesting. This bug is mentioned in the config reference
      • The preserve property on each item in storage config needed to be set to false. Otherwise curtin would not install on a blank disk
      • The keyboard property toggle was set to null, which caused a validation failure. I simply removed the property
    • When curtin installs on a UEFI device, it reorders the boot order so the current boot option is first in the list. The result is that network boot becomes the first option on the next reboot. So when the install is done and the reboot happens, you end up in the PXE environment again instead of booting from disk. I found an undocumented curtin option, reorder_uefi. Luckily, subiquity happens to pass this configuration to curtin
    • The apt config option geoip doesn’t seem to work. There were always logs for geoip requests
    • Using human readable values for partition sizes (e.g. size: 512M ) resulted in the size being stored as a float, leading to errors when sizing LVM volumes as a percentage. Avoiding human readable values seems to fix this

    I didn’t dig into these as much. They are based on what my preseed files would do. Most of them could probably be fixed with clever use of early-commands , late-commands , and cloud-init. I may have also missed something

    • A way to set the timezone
    • A way to set the root password
    • A way to configure an apt only proxy. I like to use apt-cacher-ng for apt, but it does not work as a general proxy. The installer assumes any proxy you configure is for everything
    • A way to pause at the end of the install instead of automatically rebooting. The workaround is to add a value to interactive-sections , but that results in 3 pauses
    • Allow direct curtin configuration. You have to create yaml for cloud-init to provide yaml to subiquity, which then generates yaml for curtin. It would provide more configuration flexibility to be able to provide the curtin yaml directly
    • Allow direct cloud-init configuration. You have to create yaml for cloud-init to provide yaml to subiquity, which then generates yaml for cloud-init on the installed machine. These files should be easy to modify with late-commands , but I did not try it
    • Ability to choose the kernel package. I found that the kernel image installed is based on what is written to /run/kernel-meta-package . This is hardcoded to linux-generic in the initramfs. I prefer to use the linux-virtual package for VMs. I was able to use the cloud-init configuration to overwrite the file

    The resulting /target/var/lib/cloud/seed/nocloud-net/user-data file is used by cloud-init during first boot. The replies indicate the lock-passwd property has a typo and may affect some users

    How to network boot (pxe) the ubuntu livecd

    May 18, 2020 · 5 min read

    This guide is dedicated to install Ubuntu 18.04 Server LTS via PXE.

    In a sandbox approach, two Virtual Machines with Ubuntu 18.04 Server were used. First, an internal network was established between the two machines to test proper DHCP behavior.

    A secondary network interface was created for internet access by NAT. This is only important for the PXE-Master VM.

    We will go through step-by-step what needs to be done on the PXE-Master.


    Netplan

    Since we have two interfaces in our VM (one for internal VM-to-VM networking, and one NAT device) we need to configure them.

    Check the interfaces with sudo ip link and make note of the names.
    Then open the configuration file for netplan.

    Here you need to add a static IP for the internal networking interface since here we will serve our DHCP server.

    Here we have chosen the 192.168.1.1 as IP for the DHCP Server.
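A netplan sketch matching the setup described; the interface names are assumptions, so check them against the output of sudo ip link:

```
# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    enp0s3:                       # NAT device, internet access
      dhcp4: true
    enp0s8:                       # internal VM-to-VM network
      addresses: [192.168.1.1/24]
```

Apply the configuration with sudo netplan apply.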

    DNSMASQ

    Next we need to install dnsmasq. This will be our DHCP server with PXE boot functionalities.

    Afterwards we edit the configuration file. Be sure to make a backup if you have custom settings.

    If this is a fresh install just add the following lines at the bottom:
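The lines themselves were lost in transfer; a sketch consistent with the rest of this guide (the interface name and tftp root are assumptions):

```
# /etc/dnsmasq.conf
interface=enp0s8
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-option=option:router,192.168.1.1
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot
```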

    Be sure to select the correct interface! The DHCP options adjust the range and gateway settings for devices that get their IP from this server.
    If you are planning to use this for a single system, it would be advisable to limit the leases to certain MAC addresses. For our sandbox VM example it won’t make any difference.

    Now the important part is that dnsmasq will also serve a TFTP server. This will be used by PXE to serve the resources required for network boot.
    The other points can be changed to your liking.

    After you saved the edit we need to create the serving folder for TFTP.

    Now we can start the dnsmasq service:

    and check the status:
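The commands were lost in transfer; assuming the tftp root chosen above and a systemd-based system, they would be:

```
sudo mkdir -p /var/lib/tftpboot       # serving folder for TFTP
sudo systemctl restart dnsmasq        # start the service
sudo systemctl status dnsmasq         # check the status
```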

    Ideally, it should be “active”. Now it is a good time to check the proper function of the DHCP.

    On our second machine, PXE-Client, we need to set the netplan settings of the internal network interface to dhcp4: true. After a reboot, both VMs should be able to see each other in the 192.168.1.X netrange. If this does not work you have issues with the DHCP.

    Hint: Check the correct network settings with your VM manager first. I used VirtualBox.

    Now we need to populate the TFTP to be able to boot from network. For this, we download the netboot image from Ubuntu.
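The download command was lost; for 18.04 (bionic) the netboot tarball lived at a path like the following on the Ubuntu archive (mirror paths change over time, so treat this as a sketch):

```
cd /var/lib/tftpboot
wget http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/netboot.tar.gz
tar -xzf netboot.tar.gz
```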

    This will provide us with netbooting capabilities. The pxelinux.0 config will already allow netbooting. You can check this if you reboot the client VM with the boot order set to: Network boot.

    Ideally, there should be a Ubuntu splash screen with the netboot install option.

    Apache

    At this point it would be possible to perform a network install from the minimal netboot image. However if we want to customize our install and use local resources and configurations we need to serve the OS image that we want to install.

    It is possible to serve these via HTTP or NFS. NFS seems to be the more current approach, but the HTTP approach was already working for me.

    First, we need to install Apache Webserver:

    Check the status with:

    Secondly, we download the ISO image for Ubuntu 18.04 Server:

    To copy all the files we mount the ISO file:

    And lastly, copy the content to the HTML root:
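The individual commands were lost in transfer; a sketch of the whole Apache part (the ISO filename and mirror path are assumptions):

```
# Install and check the Apache web server
sudo apt-get install -y apache2
systemctl status apache2

# Download the Ubuntu 18.04 Server ISO
wget http://cdimage.ubuntu.com/releases/18.04/release/ubuntu-18.04.4-server-amd64.iso

# Mount the ISO and copy its contents to the HTML root
sudo mount -o loop ubuntu-18.04.4-server-amd64.iso /mnt
sudo mkdir -p /var/www/html/ubuntu
sudo cp -a /mnt/. /var/www/html/ubuntu/
sudo umount /mnt
```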

    Now to use this custom install configuration we need to provide a so-called “preseed” file. In the simplest case it merely tells PXE where to find the correct resources:

    And put the following content into it:
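The content itself was lost; a minimal sketch that simply points the installer at the local HTTP mirror set up above (the values are assumptions; the repository linked below this section contains a full example):

```
# /var/www/html/preseed/local-sources.seed
d-i mirror/country string manual
d-i mirror/http/hostname string 192.168.1.1
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string
```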

    PS: Make sure that all the files are readable in the /var/www folders. PPS: The preseed configuration file can be used to fully automate your installation. More infos: https://github.com/trn84/pxe-install/blob/master/local-sources.seed

    Finally, we change the PXE config to account for our newly created preseed file.

    PXE Config

    In the netboot folder that is being served with TFTP some adjustments are required. For this we open the default config (remember the dnsmasq.conf file).

    This file should already has some content. Append at the end the following:

    Make sure to set the desired HOSTNAME and INTERFACE name. Also the “url” gives the path to the preseed cfg file.

    This is a good point to check the internet for other options. For example you can change the timeout value to 1 if you would like to automatically install the OS.

    In the append part of the config we added our config file. You can either add more configurations there or provide a so-called “preseed” file to more or less achieve an unattended installation.

    If you now restart your PXE-Client the system should automatically try to install Ubuntu 18.04 Server. It will not be completely unattended since we only provided a minimal config.

    All the relevant config files can be found in this repo:

How to network boot (PXE) the BitDefender Rescue CD

    We’ve already shown you how to use the BitDefender Rescue CD to clean your infected PC, but what if you wanted to achieve the same thing only without a CD over the network? In this guide, we’ll show you how.

    Prerequisites

    • It is assumed that you have already setup the FOG server as explained in our “What Is Network Booting (PXE) and How Can You Use It?” guide.
    • You will see the “VIM” program used as the editor, this is mainly because it is widely available on Linux platforms. You may use any other editor that you’d like.

    Overview

    In the The 10 Cleverest Ways to Use Linux to Fix Your Windows PC, one of the things we’ve shown, was that it is possible to install an antivirus and scan your computer from an Ubuntu LiveCD. With that said, what if you wanted to make absolutely sure that your computer is not infected by scanning it with another antivirus?

    To that end, you could use another antivirus rescue CD, and there are some out there that we have reviewed in the past like Kaspersky and Avira. The clever thing is, what if you wanted to add this additional tool to your PXE server, so you’d never again have to look for the CD of the utility?

We’ve done the legwork and found that, even though it requires some TLC post-boot, the BitDefender Rescue CD is by far the easiest of the above options to make PXE-bootable.

    In the “How to Setup Network Bootable Utility Discs Using PXE” guide, we’ve promised that we will give another example for the “Kernel + Initrd + NFS method” and we shall deliver. The principle here is just the same as for the How To Network Boot (PXE) The Ubuntu LiveCD.

    We will take the files off of the CD, make them available through an NFS share, and point the PXE client to this NFS share as its “root filesystem”.

    Server side setup

    What you would do is repeat the steps taken in the How To Network Boot (PXE) The Ubuntu LiveCD guide, which were:

• Download the latest ISO from BitDefender’s site and put it in “/tftpboot/howtogeek/utils/”.
    • Create the mount point:

    sudo mkdir -p /tftpboot/howtogeek/utils/bitdefender

• Add the ISO as a loopback mount by appending the following line to “/etc/fstab”:

/tftpboot/howtogeek/utils/bitdefender-rescue-cd.iso /tftpboot/howtogeek/utils/bitdefender udf,iso9660 user,loop 0 0

    Note: Despite representation, this is one unbroken line.
Mount it with “sudo mount -a”, then test that the mount point works by issuing:

    ls -lash /tftpboot/howtogeek/utils/bitdefender/

• Restart the NFS server so the new mount point is exported:

sudo /etc/init.d/nfs-kernel-server restart

• Open the utilities menu file for editing:

sudo vim /tftpboot/howtogeek/menus/utils.cfg

• Add the following boot entry (note: the server’s IP address belongs before the colon in “nfsroot=”; it was lost in this copy):

label BitDefender Rescue Live
kernel howtogeek/utils/bitdefender/casper/vmlinuz
append file=/cdrom/preseed/ubuntu.seed boot=casper initrd=howtogeek/utils/bitdefender/casper/initrd.gz splash vga=791 lang=us root=/dev/nfs netboot=nfs nfsroot= :/tftpboot/howtogeek/utils/bitdefender

    That is it on the server’s side, your client should be ready to boot into the rescue CD via PXE.

    Client side usage

As we said in the overview, this antivirus requires some intervention when you boot into it using PXE, compared to booting from the CD.

The problem is in the way the network is set up and detected when the rescue CD’s Linux boots, but the fix is rather simple.

When you boot into the rescue environment, you will be greeted by an update error.

    Click OK and close this message.

Next, click on the “Dog” icon to bring up the programs menu, and open a terminal.

Once in the terminal, bring up Midnight Commander with root privileges by issuing:

sudo mc

Once in Midnight Commander, go into “/etc/network” and edit (use F4) the “interfaces” file.

    Find the line which reads “iface eth0 inet manual”, and replace manual with “dhcp”.

    So that your end configuration should look something like:
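The original screenshot of the end result is missing from this copy; assuming the usual “auto eth0” line is already present, the relevant part of “/etc/network/interfaces” should read roughly:

```conf
auto eth0
iface eth0 inet dhcp
```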

    Quit “edit mode” while saving your changes by hitting “F10” and selecting “Yes” when prompted.

Restart the client’s networking by issuing:

sudo /etc/init.d/networking restart

    If all went well you should see that you obtained an IP address and now you can use the update function of the BitDefender application.

    From here on out, the instructions are the same as with the How to Use the BitDefender Rescue CD to Clean Your Infected PC guide.

It’s easy once you get the hang of it… and as always, enjoy your virus-free PC!

    The main image is by baronsquirrel, the rest were captured by Aviad Raviv.

Installing Ubuntu 18.04 Server LTS via PXE

    May 18, 2020 · 5 min read

This guide is dedicated to installing Ubuntu 18.04 Server LTS via PXE.

As a sandbox, two virtual machines running Ubuntu 18.04 Server were used. First, an internal network was established between the two machines to test proper DHCP behavior.

    A secondary network interface was created for internet access by NAT. This is only important for the PXE-Master VM.

    We will go through step-by-step what needs to be done on the PXE-Master.


    Netplan

Since we have two interfaces in our VM (one for internal VM-to-VM networking, and one NAT device), we need to configure them.

    Check the interfaces with sudo ip link and make note of the names.
    Then open the configuration file for netplan.

Here you need to add a static IP for the internal networking interface, since this machine will act as the DHCP server.

Here we have chosen 192.168.1.1 as the IP of the DHCP server.
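The netplan file itself was embedded externally in the original article; a plausible sketch, assuming the file name 01-netcfg.yaml and the interface names enp0s8 (internal) and enp0s3 (NAT), looks like this — substitute the names that `ip link` reported:

```yaml
# /etc/netplan/01-netcfg.yaml (file and interface names vary between installs)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s8:                        # internal VM-to-VM interface
      addresses: [192.168.1.1/24]  # static IP; this host will run the DHCP server
    enp0s3:                        # NAT interface for internet access
      dhcp4: true
```

Activate the settings with `sudo netplan apply`.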

    DNSMASQ

    Next we need to install dnsmasq. This will be our DHCP server with PXE boot functionalities.

Afterwards we edit the configuration file, /etc/dnsmasq.conf. Be sure to make a backup if you have custom settings.

    If this is a fresh install just add the following lines at the bottom:
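The exact lines were published with the article’s config files; a minimal sketch, assuming the internal interface enp0s8, the 192.168.1.x range from above, and /tftpboot as the TFTP root, would be:

```conf
# /etc/dnsmasq.conf additions
interface=enp0s8                       # serve DHCP only on the internal interface
bind-interfaces
dhcp-range=192.168.1.50,192.168.1.150,12h
dhcp-option=3,192.168.1.1              # gateway handed out to clients
dhcp-boot=pxelinux.0                   # boot file for PXE clients
enable-tftp                            # dnsmasq's built-in TFTP server
tftp-root=/tftpboot
```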

Be sure to select the correct interface! The DHCP options adjust the range and gateway settings for devices that get their IP from this server.
If you are planning to use this for a single system, it would be advisable to limit the leases to certain MAC addresses. For our sandbox VM example it won’t make any difference.

Now, the important part is that dnsmasq will also act as a TFTP server. This will be used by PXE to serve the resources required for network boot.
The other settings can be changed to your liking.

After you have saved the edit, we need to create the serving folder for TFTP.

    Now we can start the dnsmasq service:

    and check the status:

    Ideally, it should be “active”. Now it is a good time to check the proper function of the DHCP.
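The commands for these last few steps, assuming /tftpboot as the TFTP root (it must match the tftp-root setting in dnsmasq.conf), might look like:

```shell
# create the folder dnsmasq serves via TFTP
sudo mkdir -p /tftpboot
# start (or restart) the service and check that it is active
sudo systemctl restart dnsmasq
systemctl status dnsmasq
```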

On our second machine, PXE-Client, we need to set the netplan settings of the internal network interface to dhcp4: true. After a reboot, both VMs should be able to see each other in the 192.168.1.x range. If this does not work, you have a DHCP problem.

    Hint: Check the correct network settings with your VM manager first. I used VirtualBox.

    Now we need to populate the TFTP to be able to boot from network. For this, we download the netboot image from Ubuntu.
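A sketch of the download step; the URL below is the bionic (18.04) netboot tree as of writing — Ubuntu later moves aged installers under a legacy-images path, so adjust it if the file has moved:

```shell
cd /tftpboot
# fetch and unpack the netboot archive straight into the TFTP root
sudo wget http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/netboot.tar.gz
sudo tar -xzf netboot.tar.gz
```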

This gives us netbooting capability. The pxelinux.0 configuration shipped in the archive already allows a netboot; you can check this by rebooting the client VM with the boot order set to network boot.

Ideally, there should be an Ubuntu splash screen with the netboot install option.

    Apache

At this point it would be possible to perform a network install from the minimal netboot image. However, if we want to customize our install and use local resources and configurations, we need to serve the OS image that we want to install.

It is possible to serve these via HTTP or NFS. NFS seems to be the more modern approach, but the HTTP approach already worked for me.

    First, we need to install Apache Webserver:

    Check the status with:

    Secondly, we download the ISO image for Ubuntu 18.04 Server:

    To copy all the files we mount the ISO file:

    And lastly, copy the content to the HTML root:
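Put together, the Apache steps might look like the following; the exact ISO file name and the ubuntu1804 target folder are assumptions, not taken from the original:

```shell
# install and verify the web server
sudo apt install apache2
systemctl status apache2
# download an 18.04 server image (any point release works)
wget http://releases.ubuntu.com/18.04/ubuntu-18.04.6-live-server-amd64.iso
# loop-mount the ISO and copy its contents into the web root
sudo mount -o loop ubuntu-18.04.6-live-server-amd64.iso /mnt
sudo mkdir -p /var/www/html/ubuntu1804
sudo cp -rT /mnt /var/www/html/ubuntu1804
```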

Now, to use this custom install configuration, we need to provide a so-called “preseed” file. In the simplest case it merely tells the installer where to find the correct resources:

    And put the following content into it:
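The full file is local-sources.seed in the repo linked in the note below; a minimal sketch that only points the installer at the local HTTP mirror (the IP and directory are the examples used in this guide) would be:

```conf
# /var/www/html/preseed.cfg — file name and location are assumptions
d-i mirror/country string manual
d-i mirror/http/hostname string 192.168.1.1
d-i mirror/http/directory string /ubuntu1804
d-i mirror/http/proxy string
```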

PS: Make sure that all the files under /var/www are readable.
PPS: The preseed configuration file can be used to fully automate your installation. More info: https://github.com/trn84/pxe-install/blob/master/local-sources.seed

    Finally, we change the PXE config to account for our newly created preseed file.

    PXE Config

In the netboot folder served via TFTP, some adjustments are required. For this we open the default PXE config, pxelinux.cfg/default, under the tftp-root we set in dnsmasq.conf.

This file should already have some content. Append the following at the end:
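The appended entry was published with the article’s config files; a hedged reconstruction, using the debian-installer shorthand boot parameters and the bionic netboot paths (replace HOSTNAME and INTERFACE, and adjust the preseed URL to your server), looks roughly like:

```conf
label preseed-install
  menu label Install Ubuntu 18.04 Server (preseeded)
  kernel ubuntu-installer/amd64/linux
  append initrd=ubuntu-installer/amd64/initrd.gz auto=true priority=critical url=http://192.168.1.1/preseed.cfg hostname=HOSTNAME interface=INTERFACE
```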

Make sure to set the desired HOSTNAME and INTERFACE name. The “url” parameter gives the path to the preseed .cfg file.

This is a good point to check the internet for other options. For example, you can change the timeout value to 1 if you would like the install to start automatically.

In the append part of the config we referenced our preseed file. You can either add more boot parameters there or extend the preseed file to achieve a more or less unattended installation.

    If you now restart your PXE-Client the system should automatically try to install Ubuntu 18.04 Server. It will not be completely unattended since we only provided a minimal config.

All the relevant config files can be found in this repo: https://github.com/trn84/pxe-install

    Technology in Developing Regions


    Ubuntu Live CD/Network Boot

    April 29, 2009

    Live CDs are great. In particular, they’re a great way to try out software, knowing that the chances of damaging the host system are minimal and you can throw away the entire system if you want to.

    Sometimes you want to use a live CD environment without a CD. CDs are slow, get lost and scratched, and require a CD drive. If you’re going to use live environments a lot, you’d probably prefer to boot them over the network from a machine with a hard disk and a cache.

    Luckily, Ubuntu’s live CD includes all the necessary support to do this easily, if you know how to use it. Unfortunately, it’s not really documented as far as I can tell. Please correct me if I’m wrong about this.

    I managed to make the live CD boot over the network on a PXE client using the following steps.

    • set your DHCP server up to hand off to a TFTP server. For example, add the following lines to your subnet definition in /etc/dhcp3/dhcpd.conf:
    • get a copy of pxelinux.0 from the pxelinux package and put it in the tftproot of your TFTP server.
    • copy the casper directory off the CD and put it into your tftproot as well.
    • get an NFS server on your network to loopback-mount the Desktop ISO (e.g. ubuntu-8.04.2-desktop-i386.iso) and export the mount directory through NFS. Let’s say your NFS server is 1.2.3.4 and the ISO is mounted at /var/nfs/ubuntu/live . Edit /etc/exports on the server and export the mount directory to the world by adding the following line:
    • put the following section into your tftproot/pxelinux.cfg/default file:
    • test that the PXE client boots into the live CD environment
    • if it doesn’t, remove the “quiet splash” from the end of the “append” line and boot it again, to see where it gets stuck.
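The snippets referred to in the steps above were lost in this copy; plausible reconstructions, using the example server 1.2.3.4 and mount point from the text (exact option sets may differ), are:

```conf
# /etc/dhcp3/dhcpd.conf — inside the subnet definition
next-server 1.2.3.4;
filename "pxelinux.0";

# /etc/exports on the NFS server — export the loop-mounted ISO to the world
/var/nfs/ubuntu/live *(ro,no_subtree_check)

# tftproot/pxelinux.cfg/default
label live
  kernel casper/vmlinuz
  append boot=casper netboot=nfs nfsroot=1.2.3.4:/var/nfs/ubuntu/live initrd=casper/initrd.gz quiet splash
```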

    I hope this helps someone, and that NFS-booting a live environment will be properly documented (better than this!) one day.