This is the kind of thing you don't need until you really do. Here's a scenario that recently came up:
Essentially, you need to “grep” the original file for the keys that failed. The
problem is that you might have thousands of keys and millions of entries.
Depending on the exact size of the data you are dealing with and the amount of
time available, you might be able to brute-force the solution. It might look
like this [1]:
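A minimal sketch of the brute-force loop, assuming the failed keys live one per line in `keys.txt` and the original entries in `data.log` (both file names are illustrative; the demo data at the top just makes the snippet self-contained):

```shell
# Hypothetical demo data -- in practice these files already exist.
printf 'alpha\ngamma\n' > keys.txt
printf 'alpha,1\nbeta,2\ngamma,3\n' > data.log

# Brute force: loop over the keys, spawning one grep process per key.
while read -r key; do
  grep "$key" data.log
done < keys.txt
```

With thousands of keys, the per-process startup cost of each `grep` adds up quickly.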
It spawns one grep process per key, but it's a one-liner. Compare with the
following, which accomplishes the same thing with a single process:
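A sketch of the single-process version, under the same assumptions (`keys.txt` and `data.log` are illustrative names): `grep -f` reads all the patterns from a file up front and scans the data once.

```shell
# Hypothetical demo data -- in practice these files already exist.
printf 'alpha\ngamma\n' > keys.txt
printf 'alpha,1\nbeta,2\ngamma,3\n' > data.log

# Single process: grep loads every pattern from keys.txt,
# then makes exactly one pass over data.log.
grep -f keys.txt data.log
```

If the keys are literal strings rather than regular expressions, adding `-F` (fixed-string matching) is usually both safer and faster.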
It is much faster, since the data file is scanned once rather than once per key.
Did I miss anything? How would you tackle this?
[1] Your grep might need qualifiers (-w, for example), but this will depend on your data.