Saturday, July 4, 2009

OPC UA info

I just ran into this article dated May of last year. Simone Massaro of Iconics describes the direction that they went with OPC UA development. It's a bit technical, but a good read.

Friday, June 12, 2009

Is anyone buying this? Really...?

I stumbled across this article that, quite honestly, at first pissed me off. After a little reflection, I can only laugh. It reminds me of someone trying too hard to sell those $150, short, gold-plated Monster digital audio cables - oxygen-free or whatever. (If the engineer inside you doesn't laugh, then cry for the sake of the suckers and read on.)

Maybe I read into it too far, but I see snake oil vendors gasping for air! The piece is an obvious response to Steve Hechtman's very different article on the same topic (hosted on Control Engineering) - he should be flattered. You see, the big vendors - GE Fanuc in this case, but the exact same applies to Wonderware and Rockwell - have long been committed to the concept of Historians: a glorified and expensive datalogger that includes, and is only meant to work with, a custom version of Microsoft SQL Server (of all product choices...). The problem is that now much cheaper products from companies like Inductive Automation and Software Toolbox can do a better job using any RDBMS (database) system. Being vendor-neutral, inexpensive add-on packages also do much better for things like trending, reporting, and data analysis. The biggest mistake of the current generation of Historians is that they tried to implement and include everything themselves - like making a giant Swiss Army knife with a spork, a USB memory stick, and a wine glass. Now they're caught with their pants down, desperately scrambling to recover their enormous sunk costs (my favorite business term).

  • Your data is special and requires "plantwide historian" treatment - their example query: “What was today’s hourly unit production average compared to where it was a year ago or two years ago?” I won't even comment...
  • Your database needs to speak specialized industrial protocols (OPC) - There's separation of function by design and for a reason. Besides - this doesn't even make sense.
  • Faster speeds and higher data compression - no way! The historian is wasting CPU cycles in both directions, which obfuscates your data (can no longer use external applications), to do something better achieved by a RDBMS system that supports it.
  • Robust redundancy for high availability - is this a joke? Maybe they should migrate their server farms over to GE-flavored SQL Server.
  • Enhanced data security - another losing battle for the historian. The white paper mentions SQL injection attacks - all platforms in question can use stored procedures, and all are subject to this sort of attack. When it comes to up-to-date patching, arguably the most common vulnerability, SCADA vendors have the absolute worst track record! IT keeps their servers patched as a matter of practice - they're typically afraid to touch the SCADA machines. Ultimately, the "do everything" approach provides many attack vectors.
I can't blame them for playing their hand. I just wonder - will anyone read this white paper and take it at face value?
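For what it's worth, the "plantwide historian" example query above is a one-liner in plain SQL. Here's a toy sketch using Python's built-in sqlite3 - the table, column names, and numbers are mine, not from any vendor's schema:

```python
import sqlite3

# Toy schema: one row per hour of unit production (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (t TIMESTAMP, units INTEGER)")
conn.executemany(
    "INSERT INTO production VALUES (?, ?)",
    [("2009-07-04 08:00", 110), ("2009-07-04 09:00", 130),
     ("2008-07-04 08:00", 90),  ("2008-07-04 09:00", 100)],
)

# "Today's hourly average compared to a year ago" - plain SQL, no historian:
row = conn.execute("""
    SELECT
      AVG(CASE WHEN date(t) = '2009-07-04' THEN units END) AS today,
      AVG(CASE WHEN date(t) = '2008-07-04' THEN units END) AS year_ago
    FROM production
""").fetchone()
print(row)  # (120.0, 95.0)
```

Any off-the-shelf RDBMS handles this kind of comparison natively.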

Friday, May 22, 2009

SCADA security and cyberspace threats

I probably sound like a broken record by now, but SCADA security is not going anywhere. This applies to almost anything electronic that is connected to a network.

Due to the nature of the systems, and the fact that they most often can't be easily patched, it's becoming increasingly important to choose standards-based products/technology and protect your network infrastructure. It's imperative to mitigate risk where you can!

In recent news - the military is arming for cyber warfare. And this blog post comments on hacking infrastructure.

Wednesday, May 20, 2009

Matrikon and Wurldtech Cooperative

Wurldtech Security Technologies has committed to apply their Achilles testing technology and certification methodology to Matrikon's OPC products. Successful completion will place the Matrikon OPC Tunneller and servers at the top of my recommendation list. This is a big plus for the world of SCADA security! Now, if only we could do something about our legacy systems...

Wurldtech blog announcement and press release.

Sunday, May 17, 2009

The Risks Digest

I stumbled upon the "Risks Digest - Forum On Risks To The Public In Computers And Related Systems" from another blog post. Some of the stories certainly made me laugh - I spent way too much time there!

Friday, May 15, 2009

An interesting conversation with a traveller in Bali

I had an interesting poolside conversation with "Jeff" at a resort in Bali. He works for Juniper Networks, setting up core networks for huge accounts overseas. He told me about the $400 million project in Malaysia that will go on for the next 3 years, and about his project in Brazil. He said that he's gotten used to the long flight back that he takes monthly, but hey, how bad could business class be? More importantly, he mentioned that Cisco's IOS is outdated - that engineers left there after being turned away with (then) cutting-edge ideas of using ASICs (specialized integrated circuits, as opposed to generalized processors) in routers. Cisco maintains a monstrous market share and provides lots of enterprise services (like voice and conferencing over IP). It was weird to hear him refer to enterprise accounts as the small ones (compared to major telecoms and infrastructure).

Poking into Jeff's past revealed a master's degree in Electrical Engineering, a CCIE certification from about 10 years ago (distinguish yourself - don't mess around with the little ones, he said), and "various others". I guess he did some defense contracting at the Pentagon earlier as well. But I got the usual "certs and education get you in the door...specialize and learn the industry to move up" explanation.

So why mention any of this here? It's all about infrastructure! First, it used to surprise me that I get a 100 megabit Internet connection at home in Korea. That's not fast in terms of network equipment, and Korea is on the cutting edge. Heck, I'm getting 700k at the resort in Bali! He was talking about OC-768 (40 gigabit) core routing equipment in Malaysia! There's plenty of fiber under the ocean! We in the US have piles of legacy equipment that we're still dealing with. The countries just coming online get to engineer their solutions properly and deal with the latest and greatest! Mr. Obama - if you're reading this, I think investing in our digital infrastructure would make a great part of the stimulus package! Our industrial control networks would benefit from such upgrades.

Friday, May 8, 2009

Getting the most out of your SCADA system

I literally visited Inductive Automation the day before Gary Mintchell did. I didn't get the opportunity to meet him, but I did get a glimpse of his insight. He described the company as a "database company" - as a foundation - which is an insightful perspective.

Here's what occurred to me - I've been involved in big projects and small projects; private sector, government, and military; running a variety of platforms. Does anybody have issues creating a tinker-toy HMI with a few setpoints and graphics that change color? I really doubt it. Which vendor would I recommend for that? Who cares - they all do it. That's what Walt means by the commoditization of HMIs.

So what's valuable, and where are we failing? My top picks, and they go together, are "customizability and interoperability" - something we tend to suck at. Suppose I asked, "How much power have we used so far this month?" I'd likely get, "I dunno - but you can figure it out if you keep a log of readings from that meter." Or, "What's the status on that shipment we sent out last week?" It's available on the FedEx/USPS web site. But why not on our information/SCADA system? Isn't that what web services are all about? Who are we kidding - we have enough issues migrating/tying in our legacy systems. I bundled "customizability and interoperability" together because the point is to be able to tie your system to others easily. Managers shouldn't have to buy hardware and large amounts of integration services to make their systems work for them.
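To make the power question concrete, here's a minimal sketch of what the answer looks like once meter readings land in an ordinary SQL table (sqlite3 here; the schema and readings are hypothetical):

```python
import sqlite3

# Hypothetical meter log: cumulative kWh readings sampled periodically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meter_log (t TIMESTAMP, kwh_total REAL)")
conn.executemany(
    "INSERT INTO meter_log VALUES (?, ?)",
    [("2009-05-01 00:00", 5000.0),
     ("2009-05-10 00:00", 5400.0),
     ("2009-05-15 00:00", 5750.0)],
)

# "How much power have we used so far this month?" - one aggregate query:
(used,) = conn.execute("""
    SELECT MAX(kwh_total) - MIN(kwh_total)
    FROM meter_log
    WHERE strftime('%Y-%m', t) = '2009-05'
""").fetchone()
print(used)  # 750.0
```

Once the log exists in a standard database, the "you can figure it out" answer becomes a single query.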

Great! So how does this work? The key is being able to pass data - through standardization. This is where technologies like OPC (UA) and web services come in. But another huge, often overlooked method is using SQL databases. Most applications, and nearly all business systems, use them natively. You want to know anything about your process - inventory or QA, for example - past or present? That should be available in your SCADA system. The database is a great connection point, provided that it's flexible, which is Inductive Automation's strength. Get that, SCADA vendors - hint, hint - step away from the custom Microsoft SQL Server implementations! The royalties are great, but nobody believes that you need them for performance. Besides, databases are useful for more than being a historian! It's not hard to support Oracle, MySQL, DB2, and others - just swallow your pride and old company lines.

How do you get your existing or legacy system to interoperate with others? Simple, OPC <-> SQL database bridges exist for that purpose.
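Conceptually, such a bridge is just a poll-and-insert loop. A rough sketch - the tag path is made up, and a stub stands in for the actual OPC read (a real bridge would call an OPC client library here):

```python
import sqlite3
import time

def read_tag(path):
    """Stand-in for a real OPC read. A real bridge would call an OPC
    client library here; this stub just returns a canned value."""
    return {"Tank1/Level": 72.5}[path]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_history (t REAL, tag TEXT, value REAL)")

# The whole "bridge" is a poll loop: read a tag, insert a row.
for _ in range(3):
    conn.execute("INSERT INTO tag_history VALUES (?, ?, ?)",
                 (time.time(), "Tank1/Level", read_tag("Tank1/Level")))

(count,) = conn.execute("SELECT COUNT(*) FROM tag_history").fetchone()
print(count)  # 3
```

Commercial bridges add buffering, store-and-forward, and transaction grouping on top, but the core idea is this simple.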

OPC Interoperability Conference, UA and Java

In the spirit of catching up with my backlogged blogging (recent personal Japan trip from January), I'll post about a few topics that I missed.

I had a chance to visit old friends at Inductive Automation. They gave me a demonstration of the working Java OPC UA stack that they unveiled back in the beginning of March at the North American OPC Interoperability conference. The "test program" was a slick AJAX web page that browsed, read, and wrote tags to an AB SLC with no noticeable delay.

The Java UA stack is significant for a number of reasons. First, the UA spec is notional. I'd guess that the OPC Foundation hoped, but didn't really expect, to see it implemented independently - at least not right off. (*A Java wrapper around their C/C++ implementation is planned, with a pure Java stack in the dreamy future.) (*Correction again - Randy Armstrong points out in a comment that a Java stack is currently available.) This leads to the second point, about Java being platform and operating system independent - everything supports the Java Virtual Machine these days. The point is that we have millions of users across continents and lots of reasons to seek Windows alternatives. I'd bet that there's a scattered army of programmers in the industrial space who are doing their own thing, but would jump on a standards-based bandwagon. That's really what our industry needs: efficiency, simplicity, and cost savings. The idea being that everything "speaks OPC UA", so historically dissimilar hardware, appliances, and applications can talk with ease - securely.

Which brings me to something I heard about at the conference. Reportedly, the UA guys were asked to go home the first day so that all the legacy apps could be set up. This makes me laugh and wince simultaneously! It's not uncommon for a room full of experts to spend an afternoon getting two nodes to talk to each other - it's all about Windows DCOM security, which is as painful as it is full of gaping vulnerabilities. And even once you're finally communicating with one friendly node, a third party still can't get in without the same ordeal.

New standards are a funny thing - everyone knows they're coming, everyone knows they'll benefit from them, but you're not ready to commit until the next guy has. Kudos to Inductive Automation for getting the ball rolling. Kudos to Kepware and Iconics for the same. Siemens has committed to an entire product line! Wonderware's been talking the talk, as has Rockwell (both since 2006). Here's to them coding away in their secret labs! Don't believe me? Here's a video of how great and mature OPC UA really is, compliments of Eric Murphy of Matrikon! It's a riot - I promise :)!

Tuesday, May 5, 2009

Using Open Standards in Water and Wastewater

I hadn't expected to be blown away by Inductive Automation's "Using Open Standards, web-based modern SCADA technology to manage your water operations" webinar, but the collection of speakers and content was phenomenal!

Don Pearson, the moderator, opened with a brief presentation on federal 'relief' monies specific to water and wastewater. Henry Palechek presented the transitions he's taken Helix Water District of San Diego through: from their $2 mil VAX system, to their $350k Wonderware system, to their existing FactoryPMI system. He had a lot of interesting insight into the decision process and business ramifications of his choices. The transcript can be read here (you can also listen to a recording of the interview from the webinar).

Patrick Callaghan of MCS Integrations then presented a system integrator's perspective. He showed off a live system (which, by the way, made me unnecessarily nervous) that he wrote for the City of Lago Vista in Texas. His setup was INCREDIBLE!!! Operators run around with tablet PCs, connected via VPN over the cell network. Everything looked sharp, and screens were linked together intuitively. He had screens where he could create groups of operators on the fly that receive alarms at different intervals until they're acknowledged. He showed generic tanks and valves that would display different values based on their types, but used the same objects/windows. This included animation based on setpoints (levels in a tank), an alarm history, trends, and even a custom note field that would record the operator and date/time and create an overlay icon on the main screen showing the existence of a note. The trend screen allowed you to create, save, and edit arbitrary groups of pens. He had all sorts of reports that operators needed auto-generated, and an integrated PDF library of the ones they had to fill out by hand. It's hard to fully describe how sweet his SCADA package is - it seemed more like the product of a well-written $1 mil custom application tailored to the client. I've never seen such a thing from Wonderware, Rockwell, GE, or the others.

That webinar was fantastic! I'll keep my eyes open for more.

Monday, May 4, 2009

Cheating in Online Poker

I just got back from a weekend trip to Las Vegas. The poker gods were kind at the cheapest ($1/$2) tables the Bellagio had to offer. My hourly return wasn't impressive, but I had a great time chatting with a variety of gamblers.

One particular story stuck with me. A player said that his "friend" got a chance to witness someone "win" $13k in one night cheating at online poker. I'm not too impressed by the usual tactics - running a background application to gather statistics on opponents, or even the recent Absolute Poker cheating scandal. This one caught my attention because it was so simple, yet nearly impossible to catch. You could multiply the benefit with automation/a program, but that's not necessary.
The scam involved playing 5 of the 6 seats at an online poker table simultaneously. It shouldn't take a superstar to see that you could easily squeeze out single unsuspecting victims. You could even use a program to obtain more accurate odds, since you see nearly 1/5 of the deck. The connection details seem obvious: I would use proxy services to route via different cities around the world, consistent for each account. The crux of the scam lies in the fact that you can easily create throwaway identities - violating the security principle of authentication: being able to verify that someone is who they claim to be. Online gaming sites do monitor IP addresses (defeated with proxies) and users who constantly collaborate. However, you'd be pretty hard to spot with a pool of accounts that get used for short periods of time.
What about the penalty if you do get caught? I can't imagine playing multiple online poker accounts getting you in as much trouble as stealing...
The best protection brings inconvenience: closely coupling user accounts with real people. That requires you to give up all the personal info that you don't want to share: valid ID, bank accounts, SSNs, etc. As an online player, I'd feel much safer if the site required heavy verification. Then again, I only play online for "points".
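To see how much the extra information in that scam is worth, consider a simple flush-draw calculation: knowing your collaborators' hole cards shrinks the pool of unseen cards, so the same nine outs are worth more. A back-of-the-envelope sketch (assuming none of the eight extra visible cards is a heart):

```python
from fractions import Fraction

# Flush draw after the flop: 9 hearts left among the cards you can't see.
outs = 9

# Honest player: 52 - 2 (own hand) - 3 (flop) = 47 unseen cards.
honest = Fraction(outs, 47)

# Colluder running 5 seats: 8 extra hole cards are visible. If none of
# them is a heart, the same 9 outs sit in only 39 unseen cards.
colluding = Fraction(outs, 47 - 8)

print(float(honest))     # ~0.191 per card
print(float(colluding))  # ~0.231 per card
```

A few percentage points per decision, compounded over thousands of hands, is a very real edge - and that's before the team starts signaling hand strength to each other.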

Tuesday, April 28, 2009

Doesn't Cyber Security Deserve a Stimulus? - Wurldtech

A perspective worth reading -

And another - Walt, I don't agree with every detail, but the message is spot on!

CISSP at last!

I've had Shon Harris' All-in-One CISSP book on my shelf for years. I've taken it on long drives and flights without so much as cracking the pages of that great volume. It finally took a week-long class and the discipline to study before I was ready to commit. I took the six-hour plunge in December and recently found out that I'm officially a "Certified Information Systems Security Professional". Yay!

The real significance is embodied by the standards organizations and my new professional community. It's all about the articles, networking, and even the new forum that I already spend too much time on. My goal is to continue to push good security practice in the industrial (SCADA, HMI, controls) space.

Sunday, April 5, 2009

Opinion: Do you need a $60,000 process historian to log data? (Control Engineering)

Interesting article featured on Control Engineering

-- Control Engineering, 3/26/2009
Steve Hechtman, Inductive Automation

I wish to register a complaint. There is a rumor that has been circulating for years that relational databases are too slow for fast process data and that only process historians are up to the job. Vendors of process historians will cite sluggish performance and the lack of data compression as the reasons standard off-the-shelf relational databases won’t work. Apparently the last time they used a SQL relational database was a few decades ago.

While there may be some specialized domains where process historians have a niche, they are not a practical choice for most industrial applications. In effect, historian vendors are saying your Toyota Camry is inappropriate transportation because it is incapable of going 180 mph or finishing the quarter mile in under 10 seconds.

The rumor denigrating relational databases for poor throughput is baseless. A standard, off-the-shelf Microsoft SQL Server coupled with FactorySQL can log in excess of 100,000 tags per second using a desktop machine. In all likelihood, other factors such as the industrial network would become bottlenecks before the database does. Furthermore, today’s generation of SQL relational databases are designed to scale gracefully to power high-volume Website traffic, whose load peaks dwarf those of industrial controls applications.
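(Editor's aside: no endorsement of any specific throughput figure, but the basic technique behind high-rate SQL logging - batching many samples into one transaction - is easy to sketch. A toy example with sqlite3, which makes no claim to reproduce the FactorySQL number:)

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_history (t REAL, tag_id INTEGER, value REAL)")

# Log 100,000 "tag samples" in one batched transaction - the same trick
# any SQL-based logger uses to reach high sustained insert rates.
rows = [(time.time(), i % 500, float(i)) for i in range(100_000)]
with conn:
    conn.executemany("INSERT INTO tag_history VALUES (?, ?, ?)", rows)

(count,) = conn.execute("SELECT COUNT(*) FROM tag_history").fetchone()
print(count)  # 100000
```

Even an embedded database on a desktop machine absorbs this without drama; row-at-a-time autocommit is what makes naive loggers slow.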

Data compression is an area where process historians do score a point. However, even this consideration can be handled with standard off-the-shelf SQL relational databases. Take a look at the MySQL 5.0 Archive Storage Engine which achieves on average a four to one compression ratio. Proprietary process historians may beat that, but let’s get back to the point of practicality. Hard disk space is so cheap these days that even considering this point is becoming an anachronism. For the rare application that demands it, table compression coupled with intelligent data logging allow databases to compete even in this regard.
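(Editor's aside: the compression point is easy to sanity-check. Slowly-changing time-series data is extremely repetitive, so even a generic compressor does well on it. A quick illustration using zlib over JSON-encoded samples - illustrative only, not the MySQL Archive engine's actual algorithm:)

```python
import json
import zlib

# A block of slowly-changing process samples - highly repetitive, which
# is exactly why time-series data compresses well in any storage engine.
samples = [{"tag": "Boiler1/Temp", "value": round(80 + (i % 10) * 0.1, 1)}
           for i in range(1000)]
raw = json.dumps(samples).encode()
packed = zlib.compress(raw)

print(len(raw), len(packed))  # compressed size is a small fraction of raw
```

The tag name and structure repeat on every row, so generic compression alone recovers most of the "historian advantage" here.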

One crucial question that process historian vendors omit is: what are IT departments willing to support? When I make initial contact with IT folks, I always ask which relational database they use. Then I assure them we’ll work with that. This generally makes them very happy. Believe me, you want IT on your side or your project will end up on a data island which is useless in an enterprise system. Think of it from their point view; they have the training and tools, generally, to support just one type of database. With these tools and training they can support the database with scheduled backups, tuning and other maintenance.

Okay, we’ve heard process historian rants about relational databases; let’s talk about the downside of process historians. Let’s start with support. Just check the Amazon bookstore for any one of the proprietary process historians and you’re likely to come up empty handed. On the other hand, check for “SQL configuration” and you’ll come up with hundreds of books. How about finding people to support these proprietary systems? Good luck.

Then there is the concern about supporting relational data with a process historian. Frankly, the middleware layer is all about relational data. Time-series data, which is what process historians deal with, is just a fraction of what is needed in the middleware layer. Correlating batches, shifts, inventory, orders, downtime, quality, etc., is purely relational in nature, and these are the features that today’s enterprise integration projects demand.

What about a cost comparison? The process historian is going to be ten to thirty times the cost of a relational database using a driver like FactorySQL depending on the number of tags required. The controls industry is still backwards on this point and prefers to price its software per tag as though the extra tags cost money to manufacture.

In summary, we’re talking about practical choices. The Ferrari may be great fun, but do you need a $500,000 vehicle to drive the kids to school or would the Camry suffice? Likewise, do you need a $60,000 process historian to log data? A relational database makes a great historian, but the reverse isn’t true. A process historian cannot process relational data. For the vast majority of systems, a relational database has more than enough power to service the historical and relational data requirements, making it not just the practical, but the wise choice.