If you run an Internet-facing web application, you probably want to log your visitors' hits to a database.
Typically, the log table will also include hits by web crawlers, and, depending on the number of users of your site, the crawlers will create far more log entries than your human visitors.
One way to cope with this scenario is to also log the ASP.Net SessionID (i.e. the value of Request.Cookies["ASP.NET_SessionId"]), and later delete the log entries whose SessionID occurs only once (robots typically do not reuse cookie data set by a web server).
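Such a cleanup could look roughly like this (a minimal sketch assuming an Entity Framework context with a LogEntries set and a SessionId column; all names are hypothetical):

using (var db = new LogContext())
{
    // Session ids that occur only once are assumed to be crawler hits,
    // since robots typically do not return the session cookie.
    var singleHitSessions = db.LogEntries
        .GroupBy(e => e.SessionId)
        .Where(g => g.Count() == 1)
        .Select(g => g.Key);

    var crawlerEntries = db.LogEntries
        .Where(e => singleHitSessions.Contains(e.SessionId));

    foreach (var entry in crawlerEntries.ToList())
        db.LogEntries.Remove(entry);

    db.SaveChanges();
}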
Rather than cleaning up huge log tables, we should only log the visits we are actually interested in. Since a crawler might fetch anything referenced by an HTML page, such as tracking pixels, and may even request referenced files long after fetching the original HTML, I thought about logging via AJAX requests instead.
In this solution, the master page contains a logging function such as
<script type="text/javascript">
    function Log(logData) {
        $.ajax('<%: Url.Action("Logger", "Log") %>', { type: 'POST', data: logData });
    }
</script>
which passes a JavaScript object to an MVC Logger action.
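The corresponding controller action could look like this (just a sketch; how and where the values are stored is up to you, and the parameter names simply match the properties of the posted JavaScript object):

using System.Web.Mvc;

public class LogController : Controller
{
    [HttpPost]
    public ActionResult Logger(string data, string referrer, string aspnetSessionId)
    {
        // The default model binder fills the parameters from the POSTed form fields.
        // Store them in your log table here (repository/ORM of your choice).

        return new EmptyResult();   // nothing needs to be rendered
    }
}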
Each view with logging functionality thus includes the following script:
<script type="text/javascript">
    $(function () {
        Log({
            data: [whatever you want to log],
            referrer: '<%: (Request.UrlReferrer != null) ? Request.UrlReferrer.OriginalString : null %>',
            aspnetSessionId: '<%: (Request.Cookies["ASP.NET_SessionId"] != null) ? (object)Request.Cookies["ASP.NET_SessionId"].Value : null %>'
        });
    });
</script>
This way, logging only happens if the browser executes JavaScript and issues the jQuery POST request, so simple crawlers do not end up in your logging database.