Tuesday, February 19, 2008

The future is...Linux?

I just finished reading this year-old article entitled "Windows Vista: The best thing that ever happened to Linux?". Like many other pro-Linux (read: anti-Microsoft) pieces, it offers compelling arguments that I find myself agreeing with. In fact, my specific Vista complaints were very similar to theirs: all the hardware-intensive eye candy that OS X does better, and the dropping of important announced features - WinFS (a relational database based file system), PowerShell (advanced scripting), SecurID support (authentication for network resources), and PC-to-PC synchronization. They continue to enumerate Microsoft atrocities and go on to how Linux will dominate the future - think OLPC, "One Laptop per Child," the $100 PC.

I can't say what the future holds. I do know that every time I install Linux on one of my laptops (for anything other than programming school projects, back when I did that), it ends up being too pesky to reasonably use. I get wireless networking/printing, DVD playback, a word processor, and everything else that I "need" working - then never end up using it. But Linux always wins the theoretical argument - what could be better? I also know that well-written open source "appliances" work well. My m0n0wall router (FreeBSD-based, technically) served me well until I bought a QoS-enabled "gaming" router. We've been tossing around the idea of a CD-bootable, lightweight Linux image designed to run a FactoryPMI client - like Knoppix, maybe even based on it.

In the end I always find myself going back to Microsoft - it's disgusting! They ultimately steal, buy, or reinvent the better technology, and it works well for them. Remember when SQL Server used to suck? I do - but SQL Server 2005 is a great product. Thanks, Sybase! They're getting into virtualization, too. It sucks now, but mark my words, they'll be giving VMware a run for their money in a couple of years. What about Microsoft Office - do you really ever want to use anything else?

So this gets me to the recent official release of FactoryPMI Linux support. My first reaction was, "Who cares? You can already run FactoryPMI clients on Linux - they're Java based!" After thinking about it, though, it's the direction and commitment that matter. I align much better with Linux ideology, and the community is rapidly growing. Who else is tired of problems with every new major Microsoft release? Also, the Open Source community has a lot of software to bring to the table that's very powerful, if still a little rough around the edges. Microsoft makes good products that I like to use, but would rather not be stuck with. The percentage of HMI/SCADA vendors in bed with them makes me sick. Entrapment, not standardization, becomes the predicament for end users.

In the end will it be Linux? Microsoft? Who knows - let the greater community decide. For me, the most useful products win. I'd like to see Microsoft include the power features that they've been advertising since 2004, and Linux to get progressively easier to use for all levels of user. Kudos to the companies that let users decide on what platform they prefer.

Friday, February 8, 2008

Industry news update


Quick summary of industry news, compliments of InTech Editor Greg Hale's Blog. Thanks for the great info!


Wonderware - Predicting growth in Africa. Still working on India, China and Eastern Europe, but Africa's where it's at in the next 30 years or sooner. Full entry here.

Kepware - Partnering with Oracle and adding an OPC client to their flagship OPC server, KEPServerEX, to allow pass-through support for 3rd party OPC servers, including diagnostics. Exciting! Full entry here.

Rockwell/Cisco partnership - releasing Rockwell-branded Cisco Catalyst switches that offer easier configuration, configuration via RSLogix 5000, and pre-canned support for common (Rockwell-centric) industrial networking protocols such as CIP and EtherNet/IP (Industrial Protocol). It's about DAMNED TIME! I've always said that AB makes good PLCs. They're finally leveraging developed technology for industrial applications instead of reinventing the wheel over and over! That said, I'd fear the price tag! Full entry here.

Friday, February 1, 2008

Virtualization and SCADA, mini-SCADAs

Ever feel like a broken record? I get that feeling when "my" last 2 good post ideas came from following the crowd. Looking back, I haven't yet posted on SCADA security in response to the flurry of blog activity on the topic and the alleged "SCADA Internet attacks on the power grid," where the CIA keeps coming up - again and again. I've seen firsthand how the media quotes "the government": my 19-year-old Seaman Recruit sailor became "a Navy spokesperson." The reporter was attractive - he didn't stand a chance.

Well, this post is supposed to be about Virtualization, an old topic in computing with renewed vigor! Other bloggers are talkin' about it, so why shouldn't I? The basic idea behind virtualization in this context is to work on logical hardware in a bit of a sandbox. Another nice feature is working from images (snapshots) instead of entire hard drives and machines. Imagine building your HMI exactly how you want, then taking a snapshot. With virtualization, you can run multiple instances of this. Your SCADA installation is an image file that can be run on any computer! Maybe you want to consolidate hardware, or maybe you want a similar environment for your QA department, or for development. The concept of "create once, use many" applies here.

Unless you're a software developer or running a computer lab, it's probably your servers that have the most to gain from virtualization. Servers are notorious for being underutilized, and are often fickle - how many of you would be comfortable "cutting over" most of the services that any one of your servers provides to another machine? You might not mind installing something new on a server, but I doubt that you nonchalantly move things around on production machines.

Let me paint a picture. You're starting a sizable new plant from scratch. You decide to buy a single $50k server from Dell as the main workhorse. It will be running "8 servers" (domain controller, database, web, email, etc.), each with its own memory, IP address(es), and so on. Once the system is up, you decide that you don't want to install anything without testing it first. Fine - you run the same image on a different machine, install and test the software, then copy that image back onto ol' beefy. Suppose your email server needs more memory - you simply assign 4 gigs of the 32 total to that "instance" instead of 2. Now suppose that you're supporting something that's architecturally heavy, like Wonderware or RSView via thin clients on a Terminal Server. All of a sudden you need most of the processing power of your beast. Well, you can "move" instances to other servers. If we take this one step further, you can actually have a virtualized infrastructure that lets you add hardware without changing anything. This type of setup can be cheaper, more flexible, and more efficient than its traditional counterpart.
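The memory juggling above is really just bookkeeping on the host's part. Here's a minimal sketch of that idea - the class, names, and numbers are all invented for illustration (real hypervisors like VMware ESX or Xen expose this through their own management tools):

```python
# Hypothetical sketch: a host divvying up RAM among VM "instances".
# Not any real hypervisor's API - just the bookkeeping concept.

class Host:
    def __init__(self, total_ram_gb):
        self.total_ram_gb = total_ram_gb
        self.instances = {}  # instance name -> GB assigned

    def free_ram(self):
        # RAM not yet handed out to any instance.
        return self.total_ram_gb - sum(self.instances.values())

    def assign(self, name, ram_gb):
        # Can't hand out memory the physical box doesn't have.
        current = self.instances.get(name, 0)
        if ram_gb - current > self.free_ram():
            raise ValueError(f"only {self.free_ram()} GB free on host")
        self.instances[name] = ram_gb

# A few of the "8 servers" on one 32 GB box, 2 GB each:
beefy = Host(32)
for vm in ["domain_controller", "database", "web", "email"]:
    beefy.assign(vm, 2)

# Email server needs more memory? Bump it from 2 GB to 4 GB.
beefy.assign("email", 4)
print(beefy.free_ram())  # 32 - (2 + 2 + 2 + 4) = 22 GB left for more instances
```

The point is that a resize is a config change, not a trip to the server room - until an instance outgrows the box, at which point you "move" it to another host, as described above.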

So we've covered how virtualization helps with servers in general. It can also be a big help in supporting legacy HMI/SCADA technologies, and it's really good for programs that are tough to configure (ah choo-Linux setups-oo). It seems less important for FactorySQL and FactoryPMI - they're already pretty good about being easy to install or move and having a lightweight footprint, especially on the client end with Java Web Start. You could set up virtualized "production" and "testing" environments on the same computer, but this is pretty pointless since each installation would be better off separate, and each could support the entire network on a desktop PC. I could see bigger setups greatly benefiting from running virtualized instances, though.