Any number of commands can be pipelined together.
command1 | command2
The above command creates a pipe: the standard output of command1 is connected to the standard input of command2.
Any command that can accept Standard Input and produce Standard Output is called a filter command.
This is functionally identical to
command1 > /tmp/foo; command2 < /tmp/foo
except that no temporary file is created, and both commands can run at the same time.
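To make the equivalence concrete, here is a small sketch using wc -l as the filter command (the file name /tmp/foo is just the placeholder from above):

```shell
# One pipeline: stdout of the first command feeds stdin of the second.
cat /etc/passwd | wc -l

# The same result with a temporary file, in two sequential steps:
cat /etc/passwd > /tmp/foo
wc -l < /tmp/foo
rm /tmp/foo        # clean up the temporary file
```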
Start a new script named pipes. Be sure to include the usual shebang.
Prompt the user for a user name, and capture that user name.
Use grep to check the file /etc/passwd and see whether that user is present.
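One possible starting point (the variable name username and the anchored pattern ^name: are this sketch's choices, not requirements of the exercise):

```shell
#!/bin/bash
# pipes -- check whether a supplied user name appears in /etc/passwd

read -p "Enter a user name: " username

# Anchoring the pattern with ^ and : matches only the login-name field,
# so "roo" will not match "root".
grep "^${username}:" /etc/passwd
```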
command1 && command2
Executes command1. Then, if it exited with a zero (true) exit status, executes command2.
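The behavior is easy to see with the built-in true and false commands, which always exit 0 and non-zero respectively:

```shell
# true exits 0, so the second command runs:
true && echo "this prints"

# false exits non-zero, so the second command is skipped:
false && echo "this never prints"
```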
Modify the pipes script to pipe the output of grep to wc -l.
Use && to echo a message back to the user if the user name they supplied is in the list.
Test the script with valid user names.
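Putting the two steps together, the script might now look like this (the variable name count and the message wording are this sketch's assumptions):

```shell
#!/bin/bash
# pipes -- report whether a supplied user name is in /etc/passwd

read -p "Enter a user name: " username

# wc -l counts the matching lines: 1 if the user exists, 0 if not.
count=$(grep "^${username}:" /etc/passwd | wc -l)

[ "$count" -gt 0 ] && echo "$username is in the list."
```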
command1 || command2
Executes command1. Then, if it exited with a non-zero (false) exit status, executes command2.
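Again demonstrated with true and false:

```shell
# false exits non-zero, so the second command runs:
false || echo "this prints"

# true exits 0, so the second command is skipped:
true || echo "this never prints"
```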
Modify the pipes script to echo a message back to the user if the user name they supplied is NOT in the list.
Test the script with invalid user names.
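Combining both conditionals, the script can report either outcome (a sketch; grep -q suppresses the matching line and leaves only the exit status):

```shell
#!/bin/bash
# pipes -- report whether a supplied user name is in /etc/passwd

read -p "Enter a user name: " username

grep -q "^${username}:" /etc/passwd \
  && echo "$username is in the list." \
  || echo "$username is NOT in the list."
```

One caveat with chaining && and || this way: if the command after && itself fails, the || branch runs too. With a simple echo that is harmless, but an if/else statement is safer for anything more complex.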
Need to “split up” Standard Output so you can send it to both Standard Output and a file? The tee command is for you.
cat /var/log/messages | tee newfile | less
Just pass stdout through tee: it writes a copy to the file of your choice and sends the same stdout on to the next command.
Develop a command that first creates some output, pipes it to tee, which writes it to a file and also back to the terminal display.
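A minimal example of that pattern (the file name /tmp/tee-demo.txt is arbitrary):

```shell
# echo creates the output; tee writes it to the file AND to stdout.
echo "hello from tee" | tee /tmp/tee-demo.txt

# The same text is now in the file:
cat /tmp/tee-demo.txt
```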
Modify the pipes script to echo a message back to the user if the user name they supplied is NOT in the list, and also write an error to an error log, errors.log.
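One way to add the logging, grouping the failure actions with { } so both run on the || branch (the timestamp format is this sketch's choice):

```shell
#!/bin/bash
# pipes -- report on a user name and log failed lookups

read -p "Enter a user name: " username

grep -q "^${username}:" /etc/passwd \
  && echo "$username is in the list." \
  || { echo "$username is NOT in the list."
       echo "$(date): user '$username' not found" >> errors.log; }
```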
Debug Hint: Whenever you can, log everything, especially during development: failures, successes, and, wherever possible, variable values and the actions taken.
The && and || conditionals make this much easier.