

I'm trying to build a transaction that has an optional leading starting event. The events I'm using don't have any helpful tracking fields, so I have to rely on the startswith and endswith expressions to establish the transaction boundaries. I think I can best explain this by example.

The events I'm trying to group into a transaction are for a backup job. I have the basic case working; however, I'm running into trouble when the backup job is incremental, in which case the event that indicates that the job is incremental appears before the standard starting event.

Example 1: Standard backup job

04/20 20:18:28(11140) - Backing up /mnt/snap4bak/splunk_var_run
04/20 20:19:17(11140) - 4,926 files 120020.89 KB written to DATA-DAILY4 225039.17 KB/min

Example 2: Incremental backup job (event numbers added)

(The event number prefix was added for reference and is not part of the actual log message.)

Before I realized that I sometimes had incremental backup jobs, I used this search:

eventtype=my-backup-job* | transaction fields="host,pid" startswith=("Backing up") endswith="files written KB/min"

I thought that I should be able to pull in the leading event simply by adding an OR to my startswith expression, like so:

eventtype=my-backup-job* | transaction fields="host,pid" startswith=("Backing up" OR "Include files modified") endswith="files written KB/min"

My transaction still contains just events 2 and 3, exactly as it did with my first search. I've tried playing with the different transaction options, but haven't found anything that works yet.

The best theory I have about why this isn't working is that transaction automatically discards non-closed transactions. If that were the case, though, adding keepevicted=t should output two transactions: the first containing just event #1 and marked with closed_txn=0, and the second containing events 2 and 3 just as before, marked with closed_txn=1. Adding keepevicted=t makes no difference.
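For completeness, here is a minimal sketch of how the eviction theory could be checked, assuming keepevicted=t really does emit the evicted group tagged with closed_txn=0; the table fields are just the ones I would look at:

eventtype=my-backup-job*
| transaction fields="host,pid" startswith=("Backing up" OR "Include files modified") endswith="files written KB/min" keepevicted=t
| table _time host pid closed_txn eventcount _raw

If the theory held, the evicted single-event transaction for event #1 would show up here with closed_txn=0 and eventcount=1; as noted above, it never does.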

Update: This still occurs in 4.1.x as of 4.1.2. I can now show that the issue is somehow related to the order of the events. The search below moves the incremental event (that is, the event with the text "Include files modified") forward in time by 1 second (these events almost always occur in the same second as the "Backing up" event, so 1 second seems to be enough). I then have to sort on _time to get the events back into the descending time order that transaction requires, and with that I get my expected transaction grouping. The one event now appears out of order within the transaction, but that's not the end of the world (I suppose I could fix that with rex mode=sed).

sourcetype="arcserv_uag" (eventtype=arcserv-job-start OR eventtype=arcserv-job-stats OR eventtype=arcserv-skip-fs OR eventtype=arcserv-job-incremental)
| convert num(files)
| eval _time=if(searchmatch("Include files modified"), _time+1, _time)
| sort -_time
| transaction fields="host,pid" startswith=("Backing up") endswith="files written KB/min" keepevicted=t
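As a rough check on the same-second claim above, a search along these lines should list the seconds in which an incremental event and a "Backing up" event land together (it reuses the eventtype placeholder from the earlier searches, and the marker field and stats output names are only illustrative):

eventtype=my-backup-job* ("Backing up" OR "Include files modified")
| eval marker=if(searchmatch("Include files modified"), "incremental", "start")
| stats values(marker) AS markers dc(marker) AS distinct_markers by _time host pid
| where distinct_markers > 1

Any rows returned are (host, pid, second) combinations where the two events share a timestamp, which is exactly the collision the 1-second shift works around.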
