Dry run
At my place of employment we have to build a bunch of different
software components to put together our product. Most of these
components use the configure && make && make install pattern but
require a lot of customization and environment setup. This is handled
with Bash scripts. I suspect I'm not the only one who is in a similar
situation.
One thing you generally want in the build logs is all the commands
being run. However, if you set -x (or set -o xtrace) in all your
scripts there is a lot of noise. You can avoid this by echo'ing all
the commands of interest before you run them, but this has a few
drawbacks.
- If you don't write a wrapper function, you have a lot of
  duplication: you get something like echo prog opt1 ... followed by
  prog opt1 ... in the code. If you have pipelines, it's even more
  annoying.
- Even if you write a wrapper function, there is the problem of
  getting the quoting correct, and it can be very difficult to get it
  right. To see why, consider this:
  arr=("a b c" 1 2 3); echo "${arr[@]}"
  If you execute that, the output will be a b c 1 2 3. If those are
  the arguments to a command, it will look like there are six but
  there are really only four. Ideally, you would like the command
  that is written out to be something you can cut and paste to run
  yourself, if needed. So in this case you'd like to see something
  like 'a b c' 1 2 3. If you echo everything, you have to deal with
  that.
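You can see the word boundaries getting lost for yourself. One hedged workaround (not the one this post ends up using) is printf's %q format, which re-quotes each word so the output can be pasted back into a shell; this sketch assumes bash:

```shell
# A four-element array whose first element contains spaces.
arr=("a b c" 1 2 3)

# echo flattens the words: the output looks like six arguments.
echo "${arr[@]}"            # prints: a b c 1 2 3

# printf %q escapes each word separately, preserving the boundaries.
printf '%q ' "${arr[@]}"    # prints: a\ b\ c 1 2 3
echo
```

The %q output uses backslash escaping rather than single quotes, but either form is safe to cut and paste.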
I use a technique based on metaprogramming that has proven to be pretty good in many cases so I thought I'd share it in case anyone wants to use it or riff on it.
Start from this core idea: there are some commands in your script that
perform an action and there are others that set up the environment or
arguments for that action. An action would be something that changes
the state of the system, like cp file1 file2. Setup might be
something along the lines of determining the actual names of file1
and file2.
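As a concrete sketch of the split (the file names and temp directory here are invented purely to make it runnable):

```shell
# Setup: determine the actual names of file1 and file2.
# These commands just compute values; they don't need to be logged.
workdir=$(mktemp -d)
src="$workdir/file1"
dest="$workdir/file2"
printf 'example\n' > "$src"   # scaffolding so the action has something to copy

# Action: this changes the state of the system and is what we
# want to see in the build log.
cp "$src" "$dest"
```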
What we want is to be able to run the script such that only these "action commands" are printed when run, with the option to show the command without actually running it. That is, a dry run mode. Additionally, in the dry run mode, we want the action commands to be printed such that we can clearly discern the arguments. If we want to copy them and run them directly, it should work.
These requirements rule out the use of echo because of the quoting
problems demonstrated earlier. We will need to use set -x.
But turning on set -x globally won't work since that will print out
everything. It will need to be targeted. This means a wrapper
function. Here's a first attempt.
function action { set -x; "$@"; set +x; }
This almost works, but has the problem that set +x will be printed
too. That's kind of annoying.
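Here is a runnable version of that wrapper showing the problem; the trace it writes to stderr includes the trailing set +x:

```shell
function action { set -x; "$@"; set +x; }

action echo hello
# stderr shows:
#   + echo hello
#   + set +x      <-- the extra line we'd rather not have in the log
```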
There is a way around this, but it requires being less efficient. Since efficiency is not much of a requirement here, we can use a feature of Bash functions that I don't see used very often: make the function run as a subshell.
function action ( set -o xtrace; "$@"; ) # Note the parens, not braces
Using a subshell means that the settings are local to the subshell and we don't have to worry about an extra command at the end. It's less efficient and you may have to be mindful of how you use environment variables, but it is simple and effective. And it supports pipelines when you use it for each command.
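Because each invocation is a self-contained subshell, pipelines work by wrapping each stage; a sketch, assuming bash:

```shell
# Subshell body: xtrace is scoped to the subshell, so no cleanup needed.
function action (
    set -o xtrace
    "$@"
)

# Each stage of a pipeline gets its own traced subshell,
# and no stray 'set +x' appears in the log.
action printf '%s\n' one two | action sort -r
```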
We're still missing the dry run option, though. And there is the issue of nesting.
First, look at dry run. Here's a way to handle it.
function action (
if [[ -n "${dryrun}" ]]; then
set -o xtrace
: "$@"
else
set -o xtrace
"$@"
fi
)
The : command is very useful here because it expands all of its
arguments but does nothing with them. With xtrace on, that is how you
get the correctly quoted expansion printed without the command being
run.
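Here is the whole thing as a runnable sketch, with a quick demonstration of the dry-run mode on the earlier cp example:

```shell
function action (
    if [[ -n "${dryrun}" ]]; then
        set -o xtrace
        : "$@"      # expand the arguments under xtrace, run nothing
    else
        set -o xtrace
        "$@"
    fi
)

# Dry run: the trace shows   + : cp 'a b c' file2
# and nothing is copied. Strip the leading ': ' to paste and run it.
dryrun=1 action cp "a b c" file2
```

Note how xtrace renders the argument containing spaces as 'a b c', exactly the cut-and-paste-friendly form we wanted.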