
How to use pipes on Linux

I’m doing a course in Operating Systems and we’re supposed to learn how to use pipes to transfer data between processes.

We were given this simple piece of code which demonstrates how to use pipes, but I’m having difficulty understanding it.

What does the write function do? It seems to send data to the pipe and also print it to the screen (at least it seems like the second time the write function is called it does this).

Does anyone have any suggestions of good websites for learning about topics such as this, FIFOs, signals, and other basic Linux calls used in C?

3 Answers

The pipe() function creates a pipe and stores its endpoint file descriptors in pipefd[0] and pipefd[1]. Anything you write to one end can be read from the other and vice versa. The first write() call writes “hello world” to pipefd[1], and the read() call reads that same data from pipefd[0]. Then, the second write() call writes that data to file descriptor 1, which is STDOUT by default, which is why you see it on the screen.

Pipes can be confusing at first. As you read and write more code that uses them, they’ll become much easier to understand. I recommend W. Richard Stevens’ Advanced Programming in the UNIX Environment as a good book for understanding them. As I recall, it has good code examples.

The program creates a pipe via the pipe(2) call. The pipe has a file descriptor open for reading (pipefd[0]) and one open for writing (pipefd[1]). The program first writes “hello world\n” to the write end of the pipe and then reads the message out of the read end of the pipe. The message is then written out to the console (stdout) via the write(2) call to file descriptor 1.

The first argument to write() is the file descriptor to write to.

In the first call, the code is writing to one end of the pipe (pipefd[1]). In the second call, it is writing to file descriptor 1, which in POSIX-compliant systems is always standard output (the console). File descriptor 2 is standard error, for what it’s worth.



The tr (translate) command is used in Linux mainly for translating and deleting characters. It can be used to convert uppercase to lowercase, squeeze repeated characters, and delete characters.

The tr command requires two sets of characters for its transformations, and it can also be combined with other commands through Unix pipes for more advanced translations.

In this tutorial, we learn how to use the tr command on the Linux operating system through some examples.

tr command and syntax

The tr command uses the following syntax, which requires two sets of characters to act on.
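In its general form (as documented for GNU tr):

tr [OPTION]... SET1 [SET2]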

where each SET is a group of characters. You can also use interpreted sequences; some of them are listed below:

\NNN -> character with octal value NNN (1 to 3 octal digits)

\t -> horizontal tab

[:alnum:] -> all letters and digits

[:alpha:] -> all letters

[:blank:] -> all horizontal whitespace

[:cntrl:] -> all control characters

[:digit:] -> all digits

[:lower:] -> all lower case letters

[:upper:] -> all upper case letters

The following variations of tr syntax can be applied:

-c, -C, --complement -> Complement the set of characters in SET1.
-d, --delete -> Delete characters in SET1.
-s, --squeeze-repeats -> Replace each input sequence of a repeated character that is listed in SET1 with a single occurrence of that character.

1) Convert lower case to upper case

We can use tr for case conversion, i.e., to convert sentences or words from lower case to upper case or vice versa.

You can use either [:lower:] [:upper:] or 'a-z' 'A-Z' to convert lower case to upper case.

The following examples convert the characters from lower case to upper case and print results on the standard output.
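For instance, a quick pipeline like this (the sample text is just an illustration):

echo "welcome to linux" | tr '[:lower:]' '[:upper:]'
# output: WELCOME TO LINUX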

This example translates the contents of a file named ‘input.txt’ and prints the result only to the standard output (the console):
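tr '[:lower:]' '[:upper:]' < input.txt    # the file itself is left unchanged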

In the following, the contents of ‘input.txt’ will be converted to upper case and saved to a file called ‘output.txt’:
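tr '[:lower:]' '[:upper:]' < input.txt > output.txt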


Note: The sed command has a y option that works like tr; it replaces all occurrences of characters in ‘set1’ with the corresponding characters in ‘set2’.
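A small comparison, with illustrative character sets:

echo "abc" | sed 'y/abc/xyz/'
# output: xyz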

2) Remove characters

The -d option is used to remove all occurrences of the characters that have been specified. Let’s check the -d option with a few examples.

The following command will remove all occurrences of the characters ‘c’, ‘a’, ‘w’ and ‘e’ (the first set) from its input:
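echo "cat saw the crow" | tr -d 'cawe'
# output: t s th ro    (the input sentence is just an illustration)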

The following command will remove all digits from a sentence:
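echo "Phone: 555-1234" | tr -d '[:digit:]'
# output: Phone: -    (the sample string is only an illustration)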

Note [:digit:] stands for all digit characters.

The command below will remove newlines from a text file:
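tr -d '\n' < input.txt    # joins all the lines of input.txt into one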

3) Remove non-matching characters (complement)

With the -c option you can replace the non-matching characters with another set of characters.

In the following example, every character of ‘bc123d56E’ that does not match the first set is replaced with ‘t’.
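One way to read this, assuming the first set is the digits (note that the trailing newline produced by echo is also replaced):

echo 'bc123d56E' | tr -c '[:digit:]' 't'
# output: tt123t56tt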

Another real-world example is where we want to extract only the digits from a set of characters:
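echo "my phone number is 523-4567" | tr -cd '[:digit:]'
# output: 5234567    (the phone number is made up)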

4) Translate white space to tabs

We can use the tr command to translate all white space to tabs by applying tr with [:space:] and ‘\t’.

Check the following example
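echo "convert spaces to tabs" | tr '[:space:]' '\t'
# each whitespace character (including the final newline from echo) becomes a tab; the sample text is illustrative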

5) Squeeze repetition of characters

With the -s option we can squeeze repeated occurrences of characters.

In the following example, continuous repeated spaces are converted to a single space:
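echo "too    many     spaces" | tr -s ' '
# output: too many spaces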

Here we squeeze the continuous spaces and replace each resulting single space with the character ‘#’:
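echo "one  two   three" | tr -s ' ' '#'
# output: one#two#three    (the input text is illustrative)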

6) Translate to a single newline

The following translates each sequence of space into a single newline character.
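One way to do this, which also explains the note about [:alpha:] below, is to use the complement of the letter class so that every run of non-letters becomes a single newline:

echo "one two three" | tr -cs '[:alpha:]' '\n'    # prints one word per line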

Note [:alpha:] stands for all letters.

7) Generate a list of unique words from a file

This is a very useful practical example where we can use tr to generate unique words from a file.
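A typical recipe (the file name is illustrative) breaks the text into one word per line, sorts it, and drops the duplicates:

tr -cs '[:alpha:]' '\n' < input.txt | sort | uniq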

8) Encode letters using ROT

ROT (a Caesar cipher) is a type of encoding in which each letter is replaced by the letter a fixed number of positions further along in the alphabet; ROT13, used below, shifts by 13.

Let’s check how to use tr for encrypting.

In the following example each character in the first set will be replaced with the corresponding character in the second set.

The first set specified is [a-z], which means abcdefghijklmnopqrstuvwxyz. The second is [n-za-m], which expands to nopqrstuvwxyzabcdefghijklm.

A simple command to show the above in practice:
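echo "hello" | tr 'a-z' 'n-za-m'
# output: uryyb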

This can be useful when you need to encode an email. See the example below:
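A sketch that handles both upper and lower case, assuming the message is stored in a file called message.txt (a placeholder name):

tr 'a-zA-Z' 'n-za-mN-ZA-M' < message.txt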

Conclusion

tr is a very powerful Linux command, especially when combined with Unix pipes, and it is very commonly used in shell scripts.

You can always refer to the man page for more information about this command-line utility. If you have any questions or feedback, feel free to leave a comment.


Like most commands on Linux, SSH can be used with input/output redirection via | (the Unix pipe). The basic concept here is understanding how the Unix pipeline works.

When you understand the way pipes work, you can get seriously creative. This article covers what happens when you combine Unix pipes and SSH. It should be noted that since Unix pipes can connect just about anything, there are no doubt commands not on this list that would also be useful.

Understanding the Unix Pipeline

Pipes on Unix (and by extension, Linux) are used to chain programs together and make them work together. For example, using cat , you can show the contents of a file, but if you used a pipe, you could chain the cat command to the more command to make the file easier to read through.
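For instance, paging through a long file (any long text file will do; /etc/services is just a convenient example):

cat /etc/services | more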


The basic idea here is this: program1 fileX | program2 . It’s not just limited to one file and two programs, though. Piping can get about as advanced as you need it to be with as many modifiers as you can think of.

Note: Some forms of chaining don’t use the | at all; they use redirection with > instead.

5 Useful SSH pipe commands

Now that the Unix pipeline makes a little sense, let’s see what we can do with the SSH protocol and pipes. Here’s a list of some really great pipes that most will find useful when combining with SSH.

1. Compressed file transfer

Forget using scp to transfer files; you can do it with a simple SSH pipe command. No need to install anything.
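One common way to write this, with the remote user, host and paths as placeholders:

tar czf - /path/to/local/folder | ssh user@remote "tar xzf - -C /path/to/destination"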


This uses the tar program to compress your data locally, and the archive is then piped over SSH. From there, the remote machine receives the stream and extracts it to the folder you specified. You’ll never actually see a .tar archive, but it makes use of one.

2. Running a local script on a remote machine (or remote on local)

Got a script written on your computer and want to test it out really quickly? No need to push the file to it or anything like that. Just pipe your local file through SSH and run it this way instead!
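A minimal version, with the host and script names as placeholders:

ssh user@remote "bash -s" < local_script.sh

The local script is fed to the remote bash over the SSH connection’s standard input.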

With this command you remove the need to push files around to remote machines to execute shell scripts. It saves a lot of time in the long run.

3. Remote hard drive backup

Want to back up your computer to your remote machine without taking the hard drive out physically and hooking it up? It’s easy to do, and with an SSH pipe, no less. Here’s how it works:
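sudo dd if=/dev/sda | ssh user@remote "dd of=sda-backup.img"
# /dev/sda is the local drive described below; the host and image name are placeholders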

This makes use of the dd command. It uses your local drive (sda) as the source, and then it pipes the output over SSH to be written to a raw image file.


Note: The drive you want to back up might have a different device name. Use the lsblk command to figure out which drive you’re looking to back up. That command will tell you which /dev/ entry to use in the if= part of the command above.

4. Remote hard drive restoration

Want to restore that image you just backed up to your machine? It’s easy. This time the command works in reverse. Again, if the drive you are restoring to is named differently than what is listed in the example, use the lsblk command to find out what /dev/ it’s listed as.
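The reverse direction might look like this (same placeholder names as before):

ssh user@remote "dd if=sda-backup.img" | sudo dd of=/dev/sda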


Run this command, and the .img file you created will be restored over the network to the hard drive that you specify.

5. Send a file

Sending a single file over SSH is easy. Here’s how to do it with pipes.
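For example, with placeholder file names and host:

cat localfile.txt | ssh user@remote "cat > remotefile.txt"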

This command makes use of the cat command to send a file through a pipe. You can also retrieve that file with the following command:
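ssh user@remote "cat remotefile.txt" | cat > localfile.txt    # one pipe-based way to pull the file back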

Conclusion

Though it might not seem that impressive, pipes can simplify and transform the way you use commands on Linux. This list highlights some of the most useful combinations, but it’s only the tip of the iceberg. With how versatile the vertical bar is, the possibilities for piping things through SSH are endless.

Know any good SSH piping commands? Tell us below!

Derrik Diener is a freelance technology blogger.


4 comments

The last command is actually an alternate way of doing the previous command. To retrieve the file you would do:

ssh user@host "cat remote" > file

Yeah it’s same I guess

No it is not the same. Try it. 🙂

“The basic concept here is understanding how the Unix pipeline works.”

More exactly, the basic concept here is understanding ssh essentially creates two (encrypted) pipes between two machines:

1) The local standard input (stdin), i.e. the local terminal input stream that is executing the ssh command, is piped to the remote stdin of the terminal ssh opens on the remote machine.

2) The remote standard output and standard error (stdout and stderr) are piped back to the local terminal stdout and stderr.

Naturally any additional pipes or redirects (< and >) you set up apply to those streams. Getting the quotes right is important to distinguish what is local (unquoted) and what is remote (quoted).



Time for some Linux Basics. Because the most important thing in every field is: Have your Basics straight! So let’s talk about Pipes and Redirection in Linux.

What are Pipes and Redirection in Linux?

Redirection

Every single process in Linux has at least 3 communication channels available:

  • Standard Input – STDIN
  • Standard Output – STDOUT
  • Standard Error – STDERR

The Kernel itself sets up those channels on the behalf of the process. The process itself doesn’t necessarily know where they lead. Most Linux commands accept input from STDIN and write output in STDOUT. Error messages are written to STDERR. This allows you to connect commands together to create pipelines.

The shell uses symbols such as >, >> and < as instructions to reroute a command’s input or output to or from a file. The > and >> symbols redirect STDOUT: > replaces the file’s existing contents, while >> appends to them.

Let’s look at a couple of examples.

The following command would store the text you type between the ” ” in a file. If the file doesn’t exist, it will be created.
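echo "some text worth keeping" > notes.txt    # the file name and text are placeholders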

The next command would send an email with the contents of that file, so only the text, not the file itself, to the user Peter.
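Assuming a mail utility such as mailx is installed, and reusing the file from above:

mail -s "A note" peter < notes.txt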

An example with the find command

If we use the find command we get a nice demonstration of why you would want to handle STDOUT and STDERR separately. If we run the following command:
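find / -name "*.conf"    # the search pattern is just an illustration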

How to use pipes on linux

We usually get a lot of Permission Denied error messages. To discard all of those error messages you can run the following command instead:
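find / -name "*.conf" 2>/dev/null    # file descriptor 2 (STDERR) is redirected to /dev/null and discarded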

How to use pipes on linux

Which gives us a much cleaner result.

Pipes

If we want to connect certain commands, or more specifically the STDOUT of one command and the STDIN of another, we can use the Pipe symbol | to do that. Let’s do an example:
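ls -l | head -4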

How to use pipes on linux

What this does is connect the ls command to the head command through the pipe |. It runs ls and hands its output to head -4, which shows only the first four lines of the listing for that folder. You could also go ahead and pipe another command onto the end of this one.

If you want the second command to be executed only when the first one succeeds, you can use the && symbols for that. For example, a command along these lines:
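lpr testfile.txt && rm testfile.txt    # testfile.txt is a placeholder; lpr queues a file for printing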

Would only remove the test file if it first was successfully queued for printing.

On the other hand, the || operator would only execute the second command if the first command failed.
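For example (the hostname is illustrative):

ping -c 1 server1 || echo "server1 is not reachable"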

Conclusion

If you work with Linux on a regular basis, knowing how pipes and redirection work is very important. You will use them a lot if you need to work on the command line. I will make more of these shorter Linux Basics bits in the future; I don’t want the articles to get too long, so the information is easier to take in. Also, check out some other Linux & Open Source Tutorials!

In this chapter, we will discuss pipes and filters in Unix in detail. You can connect two commands together so that the output from one program becomes the input of the next program. Two or more commands connected in this way form a pipe.

To make a pipe, put a vertical bar (|) on the command line between two commands.
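For example, to count the number of users currently logged in:

who | wc -l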

When a program takes its input from another program, it performs some operation on that input, and writes the result to the standard output. It is referred to as a filter.

The grep Command

The grep command searches a file or files for lines that have a certain pattern. The syntax is −
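grep [options] pattern [file ...]    # reads the named files, or standard input if none are given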

The name “grep” comes from the ed (a Unix line editor) command g/re/p which means “globally search for a regular expression and print all lines containing it”.

A regular expression is either some plain text (a word, for example) and/or special characters used for pattern matching.

The simplest use of grep is to look for a pattern consisting of a single word. It can be used in a pipe so that only those lines of the input files containing a given string are sent to the standard output. If you don’t give grep a filename to read, it reads its standard input; that’s the way all filter programs work −
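ls -l | grep "Aug"    # keeps only the lines of the listing that contain the string Aug (an illustrative pattern)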

There are various options which you can use along with the grep command −

-v -> Prints all lines that do not match the pattern.

-n -> Prints the matched line and its line number.

-l -> Prints only the names of files with matching lines (letter “l”).

-c -> Prints only the count of matching lines.

-i -> Matches either upper or lowercase.

Let us now use a regular expression that tells grep to find lines with “carol”, followed by zero or more other characters (abbreviated in a regular expression as “.*”), then followed by “Aug” −
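ls -l | grep "carol.*Aug"    # one way to run it, against a long directory listing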

Here, we are using the -i option to have case insensitive search −
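ls -l | grep -i "carol.*aug"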

The sort Command

The sort command arranges lines of text alphabetically or numerically. The following example sorts the lines in the food file −
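sort food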

The sort command arranges lines of text alphabetically by default. There are many options that control the sorting −

-n -> Sorts numerically (example: 10 will sort after 2); ignores blanks and tabs.

-r -> Reverses the order of sort.

-f -> Sorts upper and lowercase together.

+x -> Ignores first x fields when sorting.

More than two commands may be linked up into a pipe. Taking a previous pipe example using grep, we can further sort the files modified in August by the order of size.

The following pipe consists of the commands ls, grep, and sort
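ls -l | grep "Aug" | sort +4n
# note: +4n is the old-style syntax explained below; modern GNU sort expects the equivalent -k 5n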

This pipe sorts all files in your directory modified in August by the order of size, and prints them on the terminal screen. The sort option +4n skips four fields (fields are separated by blanks) then sorts the lines in numeric order.

The pg and more Commands

A long output would normally zip by you on the screen, but if you run the text through more or use the pg command as a filter, the display stops once the screen is full of text.

Let’s assume that you have a long directory listing. To make it easier to read the sorted listing, pipe the output through more as follows −
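ls -l | grep "Aug" | sort +4n | more    # extending the earlier pipeline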

The screen will fill up with text consisting of lines sorted by the order of the file size. At the bottom of the screen is the more prompt, where you can type a command to move through the sorted text.

Once you’re done with this screen, you can use any of the commands listed in the discussion of the more program.



One of the fundamental features that makes Linux and other Unices useful is the “pipe”. Pipes allow separate processes to communicate without having been designed explicitly to work together. This allows tools quite narrow in their function to be combined in complex ways.

A simple example of using a pipe is the command:
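ls | grep x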

When bash examines the command line, it finds the vertical bar character | that separates the two commands. Bash and other shells run both commands, connecting the output of the first to the input of the second. The ls program produces a list of files in the current directory, while the grep program reads the output of ls and prints only those lines containing the letter x.

The above, familiar to most Unix users, is an example of an “unnamed pipe”. The pipe exists only inside the kernel and cannot be accessed by any process other than the one that created it (in this case, the bash shell) and its children. For those who don’t already know, a parent process is the first process started by a program; it in turn creates separate child processes that execute the program.

The other sort of pipe is a “named” pipe, which is sometimes called a FIFO. FIFO stands for “First In, First Out” and refers to the property that the order of bytes going in is the same coming out. The “name” of a named pipe is actually a file name within the file system. Pipes are shown by ls as any other file with a couple of differences:
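An illustrative listing (the owner, date and size will differ on your system; the trailing | indicator is added by ls -F or a colorized “modern” ls):

ls -lF fifo1
prw-r--r--   1 user    users           0 Jan 22 23:11 fifo1|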

The p in the leftmost column indicates that fifo1 is a pipe. The rest of the permission bits control who can read or write to the pipe, just like a regular file. On systems with a modern ls, the | character at the end of the file name is another clue, and on Linux systems with the color option enabled, fifo1| is printed in red by default.

On older Linux systems, named pipes are created by the mknod program, usually located in the /etc directory. On more modern systems, mkfifo is a standard utility. The mkfifo program takes one or more file names as arguments for this task and creates pipes with those names. For example, to create a named pipe with the name pipe1 give the command:
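mkfifo pipe1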

The simplest way to show how named pipes work is with an example. Suppose we’ve created pipe1 as shown above. In one virtual console, type:
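ls -l > pipe1    # any command whose output is redirected into the pipe will do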

and in another, type a command that reads from the pipe, such as the one shown below. Voila! The output of the command run on the first console shows up on the second console. Note that the order in which you run the commands doesn’t matter.
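cat < pipe1    # a matching reader for the pipe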

If you haven’t used virtual consoles before, see the article “Keyboards, Consoles and VT Cruising” by John M. Fisk in the November 1996 Linux Journal.

If you watch closely, you’ll notice that the first command you run appears to hang. This happens because the other end of the pipe is not yet connected, and so the kernel suspends the first process until the second process opens the pipe. In Unix jargon, the process is said to be “blocked”, since it is waiting for something to happen.

One very useful application of named pipes is to allow totally unrelated programs to communicate with each other. For example, a program that services requests of some sort (print files, access a database) could open the pipe for reading. Then, another process could make a request by opening the pipe and writing a command. That is, the “server” can perform a task on behalf of the “client”. Blocking can also happen if the client isn’t writing, or the server isn’t reading.

Create two named pipes, pipe1 and pipe2. Run the commands:
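One pair of commands consistent with the description below (note the & that puts the first one in the background):

echo -n x | cat - pipe1 > pipe2 &
cat < pipe2 > pipe1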

On screen, it will not appear that anything is happening, but if you run top (a command similar to ps for showing process status), you’ll see that both cat programs are running like crazy copying the letter x back and forth in an endless loop.

After you press ctrl-C to get out of the loop, you may receive the message “broken pipe”. This error occurs when a process is writing to a pipe and the process reading the pipe closes its end. Since the reader is gone, the data has no place to go. Normally, the writer will finish writing its data and close the pipe. At this point, the reader sees the EOF (end of file) and executes the request.

Whether or not the “broken pipe” message is issued depends on events at the exact instant the ctrl-C is pressed. If the second cat has just read the x, pressing ctrl-C stops the second cat, pipe1 is closed and the first cat stops quietly, i.e., without a message. On the other hand, if the second cat is waiting for the first to write the x, ctrl-C causes pipe2 to close before the first cat can write to it, and the error message is issued. This sort of random behavior is known as a “race condition”.

Bash uses named pipes in a really neat way. Recall that when you enclose a command in parentheses, the command is actually run in a “subshell”; that is, the shell clones itself and the clone interprets the command(s) within the parentheses. Since the outer shell is running only a single “command”, the output of a complete set of commands can be redirected as a unit. For example, a command along these lines:
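(ls; ls) > ls.out    # the grouped commands' combined output is redirected as one unit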

writes two copies of the current directory listing to the file ls.out.

Command substitution occurs when you put a < or > in front of the left parenthesis. For instance, typing the command:
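cat <(ls -l)    # cat is handed a temporary named pipe carrying the output of ls -l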

results in the command ls -l executing in a subshell as usual, but redirects the output to a temporary named pipe, which bash creates, names and later deletes. Therefore, cat has a valid file name to read from, and we see the output of ls -l, taking one more step than usual to do so. Similarly, giving >(commands) results in Bash naming a temporary pipe, which the commands inside the parenthesis read for input.

If you want to see whether two directories contain the same file names, run the single command:
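cmp <(ls dir1) <(ls dir2)    # dir1 and dir2 are placeholder directory names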

The compare program cmp will see the names of two files which it will read and compare.

Command substitution also makes the tee command (used to view and save the output of a command) much more useful in that you can cause a single stream of input to be read by multiple readers without resorting to temporary files—bash does all the work for you. The command:
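# one possible form; foo.count, bar.count and baz.count are illustrative file names
ls | tee >(grep foo | wc -l > foo.count) \
         >(grep bar | wc -l > bar.count) \
         >(grep baz | wc -l > baz.count) > /dev/null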

counts the number of occurrences of foo, bar and baz in the output of ls and writes this information to three separate files. Command substitutions can even be nested:
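cat <(cat <(ls -l))    # one substitution nested inside another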

works as a very roundabout way to list the current directory.

As you can see, while the unnamed pipes allow simple commands to be strung together, named pipes, with a little help from bash, allow whole trees of pipes to be created. The possibilities are limited only by your imagination.


Two powerful features of the Linux command line shell are redirection and pipes, which allow the output (or even input) of a program to be sent to a file or another program. You may have already used these features without being aware of it: whenever you have used the “>” sign or the “|” in a command, you have used redirection or a pipe, respectively.

On all Unix-like operating systems, like Linux and FreeBSD, the output from a command line program automatically goes to a place known as standard output (stdout). By default, standard out is the screen (the console) but that can be changed using pipes and redirection. Likewise the keyboard is considered the standard input (stdin) and as with standard out, it can be changed.

Pipes

Pipes allow you to funnel the output from one command into another where it will be used as the input. In other words, the standard output from one program becomes the standard input for another.

The more command takes the standard input and paginates it on the standard output (the screen). This means that if a command displays more information than can be shown on one screen, the more program will pause after the first screen full (page) and wait for the user to press SPACE to see the next page or RETURN to see the next line.

Here is an example which will list all the files, with details (-la), in the /dev directory and pipe the output to more. The /dev directory should have dozens of files, which ensures that more needs to paginate:
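ls -la /dev | more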


Notice the --More-- prompt at the bottom of the screen. Press SPACE to see the next page and keep pressing SPACE until the output is finished.

Here is another pipe example, this time using the “ wc ” (word count) tool.
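For example, counting how many entries are in /dev:

ls /dev | wc -l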

wc counts the number of lines, words and characters in the standard input. If you use the -l parameter it will display only the number of lines, which is a good way to see how many files are in a directory!
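Pipes also work with archiving tools. For instance, a sketch that streams a tar archive into the 7zr compressor (the directory and archive names are placeholders):

tar cf - Documents/ | 7zr a -si documents.tar.7z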

In this case the hyphen after the f option tells tar to send its output to the standard out and not to a file. The output from tar will be fed down the pipe into 7zr which is waiting for input from standard in due to the -si option.

Redirection

Redirection is similar to pipes except that it uses files rather than another program. The standard output for a program is the screen. Using the > (greater than) symbol, the output of a program can be sent to a file. Here is a directory listing of /dev again, but this time redirected to a file called listing.txt:
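ls -la /dev > listing.txt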

There won’t be anything displayed on the terminal as everything was sent to the file. You can take a look at the file using the cat command (which can be piped into more), or for convenience you can just use the more command on its own:
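more listing.txt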

If listing.txt had already existed, it would have been overwritten. But you can append to an existing file using >> like this:
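ls -la /tmp >> listing.txt    # the directory here is just an illustration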

The first redirection will overwrite the file listing.txt while the second will append to it.

The cat command can be used to create a file using redirection, for example:
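cat > atextfile.txt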

Now whatever text you type will be sent to the file atextfile.txt until you press Control-D, at which point the file will be closed and you will be returned to the command prompt. If you want to add more text to the file, use the same command but with two greater-than signs (>>).

Conclusion

Many Linux command line programs are designed to work with redirection and pipes; try experimenting with them and see how they interact. For example, the output of the ps command, which lists the current processes, can be piped into grep. See if you can work out how to list the processes owned by root.

Gary has been a technical writer, author and blogger since 2003. He is an expert in open source systems (including Linux), system administration, system security and networking protocols. He also knows several programming languages, as he was previously a software engineer for 10 years. He has a Bachelor of Science in business information systems from a UK University.

Learn how processes synchronize with each other in Linux.


This is the second article in a series about interprocess communication (IPC) in Linux. The first article focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a write end for writing bytes, and a read end for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.

Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at memory queues, which have fallen out of fashion—but undeservedly so.

The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The question naturally arises about safe concurrency for the channel-based IPC, which will be covered in this article. The code examples for pipes and memory queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.

Consider the man pages for the mq_open function, which belongs to the memory queue API. These pages include a section on Attributes with this small table:

Interface Attribute Value
mq_open() Thread safety MT-Safe

The value MT-Safe (with MT for multi-threaded) means that the mq_open function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the same process, such a condition cannot arise among threads in different processes. The MT-Safe attribute assures that a race condition does not arise in invocations of mq_open. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.

Unnamed pipes

Let’s start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar | represents an unnamed pipe at the command line. Assume % is the command line prompt, and consider this command:
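% sleep 5 | echo "Hello, world!"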

The sleep and echo utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting Hello, world! appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the sleep and echo processes have exited. What’s going on?

In the vertical-bar syntax from the command line, the process to the left (sleep) is the writer, and the process to the right (echo) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.


In the contrived example, the sleep process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the echo process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the sleep and echo processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.

Here is a more useful example using two unnamed pipes. Suppose that the file test.dat looks like this:
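As an illustration, suppose it holds a few short lines with one duplicate (the exact contents don’t matter, as long as one line is repeated):

monday
tuesday
monday
wednesday

Then the command:

% cat test.dat | sort | uniq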

pipes the output from the cat (concatenate) process into the sort process to produce sorted output, and then pipes the sorted output into the uniq process to eliminate duplicate records (in this case, the two occurrences of the duplicated line reduce to one).

The scene now is set for a program with two processes that communicate through an unnamed pipe.

Example 1. Two processes communicating through an unnamed pipe.

The pipeUN program above uses the system function fork to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function fork works:

  • The fork function, called in the parent process, returns -1 to the parent in case of failure. In the pipeUN example, the returned value is stored in the variable cpid, of integer type pid_t. (Every process has its own process ID, a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full process table, a structure that the system maintains to track processes. Zombie processes, clarified shortly, can cause a process table to fill if they are not harvested.
  • If the fork call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the same code that follows the call to fork. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to fork returns:
    • Zero to the child process
    • The child’s process ID to the parent
  • An if/else or equivalent construct typically is used after a successful fork call to segregate code meant for the parent from code meant for the child. In this example, the construct is:

If forking a child succeeds, the pipeUN program proceeds as follows. There is an integer array:

to hold two file descriptors, one for writing to the pipe and another for reading from the pipe. (The array element pipeFDs[0] is the file descriptor for the read end, and the array element pipeFDs[1] is the file descriptor for the write end.) A successful call to the system pipe function, made immediately before the call to fork, populates the array with the two file descriptors: