I wanted to gather some stats on the activity of the Premium Consultants listings. It's easy enough to get the raw activity: each listing passes through my "conlinkp.pl" script (which just records a little bit and then does a "Location" redirect to send the visitor on to the real link). So to find out clicks on "conlinkp.pl", I can just "grep conlinkp.pl access_log"... well, except these aren't all clicks. A lot of them are search engines and other 'bots tracing their way through my pages.
I could easily stop the 'bots from accessing those links, but I really don't want to, as there may be value in their finding the sites. However, they are not people, so I'd like to filter them out for reporting.
Well, one way to do that would be to have a list of 'bot ip addresses, but that's a big, big list and it's constantly changing. A better way is to look at something 'bots don't usually care about: Javascript. Unfortunately that's not foolproof either. However, 'bots should look at "robots.txt", so if we also filter out anyone who fetched that, we should have what we want: real users (maybe).
However, that's not really my point here. In the process of playing with this, I constructed a fairly long command line and realized that breaking it down could be helpful for those of you just learning your way around Unixish shells. To make it easier to read, I broke it down into one command per line first:
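Something like this, give or take exact quoting ("-J" is the BSD xargs spelling - GNU xargs uses "-I" instead - and strictly speaking a sort in front of each uniq wouldn't hurt, since uniq only collapses adjacent duplicates):

grep conlinkp.pl logs/access_log |
sed 's/- .*//' |
uniq |
xargs -n 1 -J foo grep foo logs/access_log |
grep ".js HTTP" |
sed 's/- .*//' |
uniq |
xargs -n 1 -J foo grep foo logs/access_log |
grep conlinkp.pl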
Now let's look at that in detail. I'm going to walk through the pipeline one piece at a time so you can see what actually happens at each step of the way.
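The first step is just the grep itself (the "logs/access_log" path is the same one the later xargs steps use):

grep conlinkp.pl logs/access_log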
That's simple enough, right? The "grep" just pulls matching lines from our web access log. Nothing complicated there. In my case the output ran to hundreds of lines.
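Those lines then get boiled down to a plain list of ip's:

| sed 's/- .*//' | uniq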
The sed strips each line down to just its leading ip address, and running that through "uniq" cuts away the duplicates; the hundreds of lines from the previous step come down to about 150. So what do we have now? Just a list of ip addresses, each of which had looked at "conlinkp.pl" at some point. The next line starts to get interesting.
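That next line is the xargs invocation ("-J" as BSD xargs spells it):

| xargs -n 1 -J foo grep foo logs/access_log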
This line produces a lot of output and is a bit tricky to understand. What it's doing is finding every log entry for every ip address the previous commands produced - that is, every line in the log from any ip that had accessed "conlinkp.pl". How does it manage that from one big list of ip's? Well, there are several ways I could have done it, but here I used "xargs". Xargs is normally used to make commands more efficient; for examples see "How can I recursively grep through sub-directories?" and "Using xargs". Here, we're using it for a different purpose.
The first problem is to limit xargs to invoking grep with only one argument - normally it wants to use as many as possible. The "-n 1" tells it to do that. Next, we need to rearrange the command line a little: if we just used "xargs -n 1 grep logs/access_log", we'd end up with grep being called like this:
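(The addresses shown are just placeholders from the documentation range; the real output would use your visitors' ips.)

grep logs/access_log 192.0.2.1
grep logs/access_log 192.0.2.2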
and so on, and that won't work. The "-J foo" provides the magic we need. We can see it at work if we momentarily change our command to substitute "echo" for grep; the result would look something like this:
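(That is, "| xargs -n 1 -J foo echo foo logs/access_log" - each invocation just prints its arguments, so you can see exactly where each ip lands. Placeholder addresses again.)

192.0.2.1 logs/access_log
192.0.2.2 logs/access_log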
Another way to see what xargs would do is to use "-p" in the command line - xargs will echo each invocation and wait for you to confirm with "y" or "n" before proceeding.
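Here that just means adding -p to the same invocation:

| xargs -p -n 1 -J foo grep foo logs/access_log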
The choice of "foo" is arbitrary; you can use any word at all to act as a placeholder. The "foo" shows "xargs" where you want its input to appear in the command it builds: you are controlling that command line. This gives us what we need.
| grep ".js HTTP"
The next three lines filter this output back down to a smaller set of lines. We're looking for only those lines that have ".js HTTP". Let's review: we found the lines that referenced "conlinkp.pl", we used the ip addresses from those to find all accesses, and now we're grepping out only the ".js HTTP" lines. Couldn't we have saved a step here?
Well, yes, but the quoting gets difficult. We want something like this:
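Perhaps a single grep that looks for the ip and ".js HTTP" together - this is only a sketch of the idea:

| xargs -n 1 -J foo grep "foo.*js HTTP" logs/access_log

The catch is that "-J" only substitutes a stand-alone "foo" argument; it won't touch a "foo" buried inside that quoted pattern, so getting the ip embedded there takes more quoting gymnastics than it's worth.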
I could solve that by writing a script that reads stdin and constructs the command line I want, but this isn't about writing scripts, so we'll live with the inefficiency - after all, if I were really concerned about how long this takes, I wouldn't be using command line tools at all. I'd write a Perl script to do the whole task.
The next two lines should be understandable as they do just what we did before:
| sed 's/- .*//' | uniq
We're back to a simple list of ip's again, but now it has been filtered down to only those ip's that accessed "conlinkp.pl" and also accessed one or more Javascript programs. Finally we go back to the logs once more to extract the original lines:
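That's the same xargs trick again, followed by one more grep to pull back just the "conlinkp.pl" entries:

| xargs -n 1 -J foo grep foo logs/access_log
| grep conlinkp.pl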
This is just a repeat of what was done earlier, so you should understand it. If not, use "-p" with xargs to follow along. The end result is a listing of the actual "conlinkp.pl" lines where the originating ip had also accessed a Javascript file.
Now remember: this actually isn't a useful exercise. Some 'bots can and do access Javascript, and this pipeline would be very slow and clumsy to run. The purpose here is just to show how command lines can be manipulated with xargs, sed and uniq. To actually do this, I'll use a Perl script like this:
#!/usr/bin/perl
open(I, "logs/access_log") or die "access_log $!";
while (<I>) {
  chomp;
  $ip = $_;
  $ip =~ s/- .*//;
  $isconlink{$ip} = $_ if /conlinkp.pl/;
  $isrobots{$ip} = 1 if /robots.txt HTTP/;
  $isjavascript{$ip} = 1 if /.js HTTP/;
}
foreach (keys %isconlink) {
  next if $isrobots{$_};             # Not in robots..
  next if not $isjavascript{$_};     # and did get javascript..
  print "$isconlink{$_}\n";
}