Shell by Example: Spawning Processes (POSIX + Bash)

Shell provides several ways to spawn and control processes. This covers subshells, command execution, and process management.

This example shows how to run one command on the results of another: find collects the matching files and hands them to ls via -exec.

#!/bin/sh
touch /tmp/hello.txt /tmp/world.txt
touch /tmp/hello_world.txt /tmp/hello_world_2.txt
find /tmp -maxdepth 1 -type f -exec ls {} + 2>/dev/null | head -3
Output:
/tmp/hello.txt
/tmp/hello_world.txt
/tmp/hello_world_2.txt
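
Command substitution with $(command) captures a command's standard output for use in the surrounding command. A minimal sketch:

```shell
#!/bin/sh
# $(command) is replaced by the command's standard output
current_dir=$(pwd)
echo "Current directory: $current_dir"
```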

Command substitution with $() can be nested to build more complex operations.

#!/bin/sh
echo "Nested: $(basename "$(dirname /usr/local/bin)")"
Output:
Nested: local

Commands wrapped in () are executed in a subshell, so changes to the environment (working directory, variables) are not reflected in the parent shell.

#!/bin/sh
cd /root || exit 1
echo "Subshell demo:"
(
    cd /tmp || exit 1
    echo "  In subshell: $(pwd)"
    x="subshell_value"
)
echo "  After subshell: $(pwd)"
echo "  x is: ${x:-unset}"
Output:
Subshell demo:
  In subshell: /tmp
  After subshell: /root
  x is: unset

Command groups with {} run in the current shell, which is useful when you want to treat multiple commands as a single unit without forking.

Variables defined in a command group are visible to the parent shell.

#!/bin/sh
echo "Command group demo:"
{
    y="group_value"
    echo "  In group"
}
echo "  y is: $y"
Output:
Command group demo:
  In group
  y is: group_value
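
Because the group runs in the current shell, it is also a convenient way to apply a single redirection to several commands at once. A minimal sketch (/tmp/group_out.txt is just a scratch path for the demo):

```shell
#!/bin/sh
# one redirection covers every command in the group
{
    echo "line 1"
    echo "line 2"
} >/tmp/group_out.txt
wc -l </tmp/group_out.txt
```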

exec replaces the current process with the new command.

If executed in a subshell, any remaining code in the subshell will not run.

#!/bin/sh
demo_exec() {
    (
        echo "Before exec"
        exec echo "This replaces the subshell"
        echo "This never runs"
    )
    echo "After subshell"
}
demo_exec
Output:
Before exec
This replaces the subshell
After subshell
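
exec has a second use: with no command at all, its redirections are applied permanently to the current shell, which is the standard way to open, move, and close file descriptors. A minimal sketch (the path is just a scratch file for the demo):

```shell
#!/bin/sh
# with no command, exec applies its redirections to the current shell
exec 3>/tmp/exec_demo.txt    # open file descriptor 3 for writing
echo "written via fd 3" >&3  # write through the new descriptor
exec 3>&-                    # close descriptor 3 again
cat /tmp/exec_demo.txt
```
Output:
written via fd 3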

Bash

$$ returns the process ID of the original shell that started the script, while $BASHPID returns the process ID of the current shell.

Only $$ is available in all shells, while $BASHPID is only available in Bash.

#!/bin/bash
echo "Script PID: $$"
echo "Script BASHPID: $BASHPID"

{
    echo "Command group PID: $$"
    echo "Command group BASHPID: $BASHPID"
}

(
    echo "Subshell PID: $$"
    echo "Subshell BASHPID: $BASHPID"
)
Output:
Script PID: 7
Script BASHPID: 7
Command group PID: 7
Command group BASHPID: 7
Subshell PID: 7
Subshell BASHPID: 8

$! is the PID of the last background command:

#!/bin/sh
sleep 1 &
echo "Background PID: $!"
wait
Output:
Background PID: 8

Bash

Pipes connect stdout to stdin. Note that this example is bash-specific due to the use of echo -e.

#!/bin/bash
echo "Pipeline:"
echo -e "cherry\napple\nbanana" | sort | head -2
Output:
Pipeline:
apple
banana

By default, a pipeline's exit status is the last command's status:

#!/bin/sh
echo "apple" | grep -q "banana"
echo "Pipeline exit: $?"
Output:
Pipeline exit: 1

Bash

Bash provides a PIPESTATUS array for all pipe exit codes:

#!/bin/bash
echo "test" | false | true
echo "PIPESTATUS: ${PIPESTATUS[*]}"
Output:
PIPESTATUS: 0 1 0
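
Bash also offers set -o pipefail, which makes the pipeline's status the last non-zero stage's status instead, so failures in the middle of a pipeline are not masked:

```shell
#!/bin/bash
# with pipefail, a failing middle stage fails the whole pipeline
set -o pipefail
echo "test" | false | true
echo "Exit with pipefail: $?"
```
Output:
Exit with pipefail: 1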

eval executes a command built from a string. Be careful with eval: it runs arbitrary code from data, so avoid it where possible.

#!/bin/sh
cmd="echo 'Hello from eval'"
eval "$cmd"
Output:
Hello from eval
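
The danger comes from eval performing a second round of expansion on its arguments, so data can turn into code:

```shell
#!/bin/sh
# the variable holds the literal five characters: $HOME
file='$HOME'
echo "$file"       # normal expansion: prints the literal $HOME
eval "echo $file"  # eval expands again: prints the home directory
```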

xargs is used for batch processing. With the -I flag, it replaces {} with each line of input read from stdin.

#!/bin/sh
printf "file1\nfile2\nfile3\n" | xargs -I {} echo "Processing {}"
Output:
Processing file1
Processing file2
Processing file3

xargs can also run commands in parallel. The -n1 flag passes one argument per command invocation, and the -P3 flag runs up to 3 jobs at the same time (note that -P is a common GNU/BSD extension, not part of POSIX).

Note that when processing in parallel, the order of the output is not guaranteed.

#!/bin/sh
printf "file1\nfile2\nfile3\n" | xargs -n1 -P3 -I {} echo "Processing {}"
Output:
Processing file1
Processing file2
Processing file3

Background jobs let you run commands asynchronously. This is shell’s simple form of concurrency.

Run a command in background with &. $! holds the PID of the last background command.

#!/bin/sh
echo "Starting background job..."
sleep 2 &
bg_pid=$!
echo "Background job started with PID: $bg_pid"
Output:
Starting background job...
Background job started with PID: 7

Waiting for a specific process can be done with the wait command.

The syntax is: wait [pid]

wait returns the exit status of the process it waited for (0 if the process succeeded).

#!/bin/sh
echo "Starting background job..."
sleep 2 &
bg_pid=$!
echo "Background job started with PID: $bg_pid"

echo "Waiting for job $bg_pid..."
wait $bg_pid
echo "Job finished with status: $?"
Output:
Starting background job...
Background job started with PID: 8
Waiting for job 8...
Job finished with status: 0
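
Because wait reports the job's own exit status, a failing background job can be detected after the fact. A minimal sketch using a subshell that fails with status 3:

```shell
#!/bin/sh
# a background job that exits with a non-zero status
(exit 3) &
wait $!
echo "Background job exited with: $?"
```
Output:
Background job exited with: 3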

If no PID is provided, wait waits for all background jobs.

#!/bin/sh
echo "Starting multiple background jobs:"
sleep 1 &
pid1=$!
sleep 1 &
pid2=$!
sleep 1 &
pid3=$!

echo "PIDs: $pid1, $pid2, $pid3"
echo "Waiting for all jobs..."
wait
echo "All jobs finished"
Output:
Starting multiple background jobs:
PIDs: 8, 9, 10
Waiting for all jobs...
All jobs finished

kill -0 checks whether a process is running without sending it a signal. If the process is running, the exit status is 0; otherwise it is non-zero.

#!/bin/sh
sleep 10 &
pid=$!
if kill -0 $pid 2>/dev/null; then
    echo "Process $pid is running"
    kill $pid # Clean up
fi
Output:
Process 8 is running

The jobs command lists the current shell's background jobs.

Note that running jobs -p inside command substitution $( ) will not work as expected: the substitution runs in a subshell with its own (empty) job list, not the parent shell's jobs.

#!/bin/sh
count_jobs() {
    sleep 5 &
    sleep 5 &
    sleep 5 &

    jobs -p >/tmp/job_pids.txt
    job_count="$(wc -l </tmp/job_pids.txt)"
    echo "Running jobs: $job_count"

    # Clean up: unquoted on purpose so each PID is passed as a separate argument
    kill $(cat /tmp/job_pids.txt) 2>/dev/null
    wait 2>/dev/null
}
count_jobs
Output:
Running jobs: 3

Subshells also have their own PIDs that can be waited for.

#!/bin/sh
(
    sleep 1
    echo "  Process 1 done"
) &
pid1=$!
(
    sleep 2
    echo "  Process 2 done"
) &
pid2=$!

echo "Waiting for process $pid1..."
wait $pid1
echo "Process $pid1 finished"

echo "Waiting for all, including process $pid2..."
wait
Output:
Waiting for process 7...
  Process 1 done
Process 7 finished
Waiting for all, including process 8...
  Process 2 done

Bash

A disowned job will keep running after the shell exits.

For job control in interactive shells:

  • ctrl+z suspends foreground job
  • jobs lists jobs
  • fg brings job to foreground
  • bg resumes job in background
  • fg %1 brings job 1 to foreground
#!/bin/bash
sleep 100 &
disown $!

nohup can be used to keep a command running even after the shell exits.

#!/bin/sh
nohup long_command >output.log 2>&1 &

This example shows how to run multiple tasks in parallel.

The & operator runs the command or function in the background, and the wait command waits for all background jobs to complete.

#!/bin/sh
echo "Parallel execution:"
parallel_demo() {
    task() {
        sleep 1
        echo "Task $1 complete"
    }

    # Start tasks in parallel
    for i in 1 2 3; do
        task "$i" &
    done

    # Wait for all to complete
    wait
    echo "All tasks done"
}
parallel_demo
Output:
Parallel execution:
Task 1 complete
Task 2 complete
Task 3 complete
All tasks done

Bash

Limiting concurrent jobs can be done with wait -n (available in bash 4.3 and later).

The syntax is: wait -n

wait -n blocks until any one background job finishes and returns that job's exit status.

This example shows how to limit concurrent jobs.

#!/bin/bash
echo "Limited concurrency:"
max_jobs=2
running=0

run_with_limit() {
    for i in 1 2 3 4 5; do
        # Wait if at max jobs
        while [ "$running" -ge "$max_jobs" ]; do
            wait -n 2>/dev/null || sleep 0.1
            running=$((running - 1))
        done

        # Start new job
        (
            sleep 1
            echo "  Job $i done"
        ) &
        running=$((running + 1))
    done
    wait
}
run_with_limit
Output:
Limited concurrency:
  Job 1 done
  Job 2 done
  Job 3 done
  Job 4 done
  Job 5 done

Subshells also have their own traps that can be used to clean up resources.

#!/bin/sh
cleanup_jobs() {
    (
        sleep 10 &
        sleep 10 &
        sleep 10 &
        jobs -p >/tmp/job_pids.txt
        trap 'cat /tmp/job_pids.txt' EXIT INT TERM

        echo "Started 3 background jobs"
        echo "The pids will be printed out on exit"
        echo "In the normal case, one would kill the pids instead"
        sleep 1
    )
    echo "Subshell finished"
}
cleanup_jobs
Output:
Started 3 background jobs
The pids will be printed out on exit
In the normal case, one would kill the pids instead
11
10
9
Subshell finished

Bash

Process substitution in bash can be performed via <(command) or >(command). This is usually done in order to avoid creating temporary files.

#!/bin/bash
touch /tmp/test.txt
touch /tmp/test2.txt

while read -r line; do
    echo "Line: $line"
done < <(ls /tmp)
Output:
Line: test.txt
Line: test2.txt
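
The >(command) form is the output-side counterpart: it expands to a file name, and whatever is written to that name becomes the command's stdin. A sketch (the short sleep is a crude way to let the substituted process finish before its result is read):

```shell
#!/bin/bash
# tee duplicates the stream; one copy feeds sort through >()
printf 'b\na\nc\n' | tee >(sort >/tmp/sorted.txt) >/dev/null
sleep 0.2  # crude synchronization with the substituted process
cat /tmp/sorted.txt
```
Output:
a
b
c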

Named pipes can be used for inter-process communication.

The mktemp -u command prints a unique temporary file name without creating the file, and the mkfifo command then creates a named pipe at that path. The subshell writes to the named pipe, and the main shell (or another process) reads from it.

#!/bin/sh
echo "Named pipe communication:"
fifo=$(mktemp -u)
mkfifo "$fifo"

# Producer
(echo "Hello from producer" >"$fifo") &
producer_pid=$!

# Consumer
message=$(cat "$fifo")
echo "Received: $message"

wait $producer_pid
rm "$fifo"
Output:
Named pipe communication:
Received: Hello from producer

This example shows how to use a named pipe to implement the producer-consumer pattern more generally. The same pattern used in the previous example is applied here.

#!/bin/sh
fifo=$(mktemp -u)
mkfifo "$fifo"
trap 'rm -f "$fifo"' EXIT

# Producer
(
    for i in 1 2 3; do
        echo "item$i"
        sleep 0.5
    done
) >"$fifo" &
producer=$!

# Consumer
while read -r item; do
    echo "Consumed: $item"
done <"$fifo"

wait $producer
Output:
Consumed: item1
Consumed: item2
Consumed: item3

An alternative to a named pipe is a temporary file.

A named pipe is the more efficient way to communicate between processes: the data is never written to disk and is consumed as it is read. A temporary file, by contrast, persists past the end of the script (if not cleaned up) and its data can be read by multiple processes.

#!/bin/sh
tmpfile=$(mktemp)
(
    sleep 1
    echo "Background result" >"$tmpfile"
) &
pid=$!

echo "Doing other work..."
sleep 0.5
echo "Still working..."

wait $pid
result=$(cat "$tmpfile")
rm "$tmpfile"
echo "Got result: $result"
Output:
Doing other work...
Still working...
Got result: Background result

You can also start background jobs from within a pipeline.

This example appends an empty line as a sentinel so the loop gets one final iteration. Note that the while loop is a pipeline stage and therefore runs in a subshell: the jobs it starts are invisible to the outer wait, and as the captured output shows, the last job's line can be lost when the script exits first.

#!/bin/sh
echo "Background in pipeline:"
{
    echo "one"
    echo "two"
    echo "three"
    echo ""
} | while read -r line; do
    if [ -n "$line" ]; then
        (echo "Processing: $line") &
    fi
done
wait
Output:
Background in pipeline:
Processing: one
Processing: two
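
In bash, the problem above can be avoided by feeding the loop through process substitution instead of a pipe: the loop then runs in the current shell, so the jobs it starts are visible to wait:

```shell
#!/bin/bash
# the loop is not a pipeline stage, so its jobs belong to this shell
while read -r line; do
    (echo "Processing: $line") &
done < <(printf 'one\ntwo\nthree\n')
wait  # now waits for all three jobs (output order is not guaranteed)
```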

Bash

In bash, you can use coproc to start a background job connected to the current shell by a pair of pipes. It avoids explicit named pipes, provides bidirectional communication with the process, and is otherwise similar to starting a command with & in the background.

The coproc keyword also sets a variable named after the coprocess with the suffix _PID. We can use this to wait for the process to finish.

#!/bin/bash
# start a calculator coprocess in the background
coproc calculator {
    bc -l
}

# send a computation to the calculator coprocess
echo "2 + 3" >&"${calculator[1]}"
# read the result from the calculator
read -u "${calculator[0]}" result

# optionally close stdin ([0]) and stdout ([1])
# to avoid the need to wait for the calculator coproc to finish
# note the special syntax for closing file descriptors
# that requires no spaces between the curly braces and
# the redirection operator
exec {calculator[0]}<&-
exec {calculator[1]}>&-

# wait for the calculator to finish
wait "$calculator_PID"
echo "Result: $result"
Output:
Result: 5
