In a nod to the big data gods, we have posted what I hope is the first of a series of code releases to help you push System Platform data out to Splunk.
A link to the repository is here.
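To give a flavor of what pushing a tag sample into Splunk can look like, here is a minimal sketch using Splunk's HTTP Event Collector (HEC). This is not the code from the repository; the endpoint URL, token, tag name, and the `build_event`/`send_event` helpers are all hypothetical placeholders you would substitute with your own.

```python
import json
import urllib.request

# Hypothetical HEC settings -- substitute your own Splunk host and token.
SPLUNK_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_event(tag, value, quality=192):
    """Format a single tag sample as a Splunk HEC event payload."""
    return {
        "sourcetype": "systemplatform:tag",
        "event": {"tag": tag, "value": value, "quality": quality},
    }

def send_event(payload):
    """POST one event to the HEC endpoint; returns the HTTP status code."""
    req = urllib.request.Request(
        SPLUNK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Build (but don't send) a sample event for a hypothetical tag.
payload = build_event("Reactor1.Temperature.PV", 72.4)
print(json.dumps(payload))
```

In a real collector you would batch these events and call `send_event` on a timer or on change, but the shape of the payload is the important part.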
Also, I have moved the collection of thoughts below from the README to this blog post, both to encourage more interaction and to keep the README focused on the technical side.
I feel like there are some huge opportunities for sending some of this data to Splunk in addition to a traditional Historian. For starters, and apologies in advance to Schneider/Invensys/Wonderware, the cost of Splunk Enterprise is a fraction of the cost of the Historian. Yes, the Historian may be more efficient at storing data, but frankly disk space is dirt cheap these days, so that's not a great differentiator anymore. Yes, you've got the Historian Client, but the out-of-the-box visualization tools that ship with Splunk are really stunning and far more powerful than anything Wonderware currently provides.

My general feeling on the topic is that the "rest of the world" has solved some really basic problems, and it's time the automation world recognized that others are doing some things much better, faster, and cheaper than the proprietary tools we have to slave away with. Yes, the Historian Client is a very nice, very powerful tool, but I suspect that with a little HTML5/CSS/jQuery magic, someone could produce a web-based interface to this Splunk data that does about 95% of what the current Historian Client does. Honestly, how many of your users use more than about 10% of the capabilities of Historian Client? Do they know how to use the rubber-band scaling? Have they ever fiddled with the retrieval methods?
Finally, it's not so much the cost as it is the cost model. Should I really pay the same price for a boolean tag that changes 10 times a day as for a floating point that changes every 250 ms? I think WW is on to something with the user-based pricing model for Historian Online, but it still seems odd. To be fair, it does give customers much more predictable pricing, and I can empathize. Something all of the industrial automation companies have got to sit up and recognize is that the pace of innovation in these new technologies is orders of magnitude faster than what we're seeing from our suppliers. It is my sincere hope that others will join me in democratizing access to the data we've already paid to collect and then paid again to store. Why should I have to pay a third time to get it back out?
If you want to know more about what Splunk is doing with industrial data, look up Brian Gilmore on Twitter. He's been a great resource for me and very supportive in making sure I was successful with this tiny proof of concept. I think this could definitely be a symbiotic relationship: we want better places to store and analyze our data, and they want more customers storing data in Splunk. Win-win if you ask me.
Now that I have a little win under my belt, I might be attacking log files next. Watch out for that one if I'm successful. Goodbye, crappy single-node log viewer; hello, global view across multiple platforms and galaxies. Frankly, I'm amazed we've put up with that for so long. Oooommpphh. That's me hopping off the soapbox.