Skynet is coming…


“Why process automation isn’t always a good thing.”

I’m *GOOD* at scripting and automation.  This is not to say I’m the best there is; I think I worked with him about 10 years ago in California…  I learned most of what I know about BASH from him.

There are some WONDERFUL uses for shell scripting.  Automating mind-searingly simple and repetitive processes.  Adding a little extra functionality and error-checking to a process, or augmenting the output of a program that probably could have been written better.

And yes, automating a complex task can save you HOURS in the long run.  (Though more often than not, you’re going to spend more time writing and debugging the script than you would have simply executing the commands manually.)

The best reason to do it is, for me at least, it’s fun.

Three Laws

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Ok, maybe fun isn’t the right word.  But when you can do with one simple command and a few arguments what would take you 14 different commands and a legal pad to do, it’s…an elegant solution to a convoluted problem.

The downside to process automation is that a script is only as good as the guy who wrote it, and in the absence of the guy who wrote it, it’s only as good as the guy running it.

When you write a script to do an advanced function, you can only plan ahead so far before something comes back to bite you.  Maybe you grep out the number 2140 and then a month later run into a situation where the output includes the number 52140.  (Hint: always include surrounding spaces in an awk statement, or, for beginning-of-line matches, use “^” to ensure nothing precedes the value you’re looking for.)
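To make the 2140 example concrete, here’s the pitfall and a couple of fixes.  (The file and its contents are made up, obviously.)

```shell
# A made-up bit of output where the value we want is "2140"
printf '%s\n' "port 2140 online" "port 52140 online" > /tmp/ports.txt

# Naive match: picks up 52140 too
grep 2140 /tmp/ports.txt

# Safer: match the whole word only
grep -w 2140 /tmp/ports.txt

# Same idea in awk: compare the field exactly instead of substring-matching
awk '$2 == "2140"' /tmp/ports.txt
```

A few characters of anchoring is the difference between a clean cutover and a 2 AM phone call.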

This is where it gets tricky, and where scripting isn’t always a good idea.

A script is never, EVER a replacement for actually knowing how to do the job.


“DARPA is apparently investing $20-million in a project to come up with bipedal combat robots that can be operated remotely or automatically.  Apparently the finest minds in the US Military have yet to learn ANYTHING from the finest minds in Hollywood.”

Here’s a real-world example:

I just wrapped up month 10 of a 12-month contract planning and implementing a datacenter move.  The move won’t be complete until LONG after I’m gone…  (It’s a slow, one-at-a-time process.)

So they commission a scripted move and I supply it for them.  Some beautiful scripts.  (In my opinion)

One set creates the SRDF pairings based on the source/target storage groups (choosing the next available RDF group, pairing up boot and data LUNs, etc.) and deletes them once a move is complete.

Another set does the graceful SRDF/A Establish/Split, checking and changing modes, etc.  (This set was made available to the end users.)
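For flavor, the establish/split wrapper boils down to something like the sketch below.  The device group name, the dry-run echo, and the whole function are mine, not the real scripts (which are far longer); symrdf establish, split and query are the actual symcli verbs, but don’t take my exact flags as gospel.

```shell
# Dry-run sketch of an SRDF/A establish/split wrapper -- NOT the real scripts.
RUN="echo"   # print the symrdf commands instead of executing them

srdf_ctl() {
  action="$1"
  dg="${2:-prod_dg}"                             # device group; name is made up
  case "$action" in
    establish)
      $RUN symrdf -g "$dg" establish -noprompt   # resume R1 -> R2 replication
      $RUN symrdf -g "$dg" query                 # confirm the pair state
      ;;
    split)
      $RUN symrdf -g "$dg" split -noprompt       # make the R2 side host-accessible
      ;;
    *)
      echo "usage: srdf_ctl {establish|split} [device_group]" >&2
      return 1
      ;;
  esac
}

srdf_ctl establish prod_dg
srdf_ctl split prod_dg
```

The real versions also check the current RDF mode and pair state before touching anything, which is exactly the part an untrained operator can’t improvise.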

Simple processes, right?

It is…to anyone who knows how to do it manually.

Here’s the catch: when we first started this process the proper disclaimers were made.  EMC will not support these scripts; they’re only temporary.  EMC’s official stance is they won’t support *ANY* scripts that don’t come out of the Cork, Ireland scripting team.  I don’t blame them.  They can’t be responsible for every Tom, Dick or…well…Jesse’s scripting, especially custom scripts that only apply to ONE environment.

Disclaimers were made to management, agreed to, settled.  Right?

Oops.  New management team comes in not even two months later.  One *MORE* month goes by and the Senior EMC guy leaves for another opportunity.

The new Management are very good people and seem to have a good understanding of what’s what, and all is fine.

Until I’m having a conversation with the new boss tonight, remind him that these scripts aren’t supported by EMC, and ask him if there is anyone on the Linux team I can sit down with and cross-train to take over the scripts?

Blank stare.  Um…not only isn’t there anyone *ON* the Linux team…there really isn’t a “Linux Team” per se.  (I figured that despite it being primarily a Windows shop there had to be SOMEONE there who knew Linux.  There was; see the part about the Senior EMC guy leaving…)

So I’m hip deep in editing my scripts.  I shall comment EVERY. SINGLE. LINE. so that if something breaks, they at least have a fighting chance.

Scripting is a wonderful thing when the scripts are used to automate processes using knowledge that your storage team already has.  When it’s used to REPLACE knowledge your storage team should have…well that’s a problem.

Don’t automate a process you don’t know how to do from memory.

I am not a PC…

The new member of the family…

So here it is.  I bought a MacBook.  After literally 10+ years of being a “Dell Guy,” Dell finally ran out of laptops that I found interesting.  (You should *NOT* hear the display creak when moving it during a video call.)

My last notebook, the Adamo 13 (which doesn’t exist anymore), was one of my favorites.  Ultra slim, solid-state just about everything.  Could do a 5-hour plane ride almost without issue.

But I needed something else.  After flipping back and forth between Linux and Windows I realized I needed something that could go both ways.  The more I thought about it, Apple seemed like the way to go.  OS X is built on a BSD-derived Unix foundation after all, has a proper Unix command line (if you know where to get to it) and pretty good compatibility.

So when I finally got it in my head to upgrade, well I went ahead and dropped the hammer on a 15″ MacBook Pro.  (So to speak; no actual hammers were involved.)

So far I’m pretty happy with my choice.  But when the first person at work saw me on it and asked me the idiot question I got pissy.

“Are you a Mac now?”

Under breath: “No idiot, I’m a person.  I’m *USING* a Mac.”

Let me break it down.  I have in my arsenal the following systems.

In my household and business I have:

3 desktop PCs running Windows 7
3 laptops running Windows 7
1 Dell 1850 running Windows 2003 Server (that, despite all my cajoling, refuses to survive a P2V)

4 VMWare ESXi hosts containing the following:

11 Windows 2008 Servers
2 Windows 2003 Servers
10 CentOS 5 Servers
5 CentOS 6 Servers
2 SUSE Linux Enterprise 11 servers

and now

1 15″ MacBook Pro

This is the thing.  I’m a technology pragmatist.  I use what works best and does what I need it to.  In the limited scope of a transportable computer, a Mac seems to do what I need nicely, and yes, it comes in an attractive and (so far) fairly durable package.

But I’m not a Mac.  Nor am I a PC.  I’m a *PERSON* who uses a computer.  (Several actually)

Religion has no place in technology.  Leave it in the church.

Oh, and I’m still not buying a #$!@!? iPhone.

Cloud Computing….

Ok, Chuck Hollis posted a great article on Cloud but I had to get my two-one-hundredths in.  Enough so that I’ve temporarily decided to come out of retirement.

Here goes.

The problem with “Cloud” is that most people don’t realize that while you might be gaining in areas of cost, possibly (but not likely) performance, and scalability, IMHO what you’re giving up is far worse.


Putting your application in “The Cloud” is the ultimate abdication of an IT Manager’s responsibility. It’s saying “If it breaks, I have someone else to blame.”

Cloud computing has been around for years. They just called it “Managed Services” or “Hosted Services” or any of a number of other marketing catch-words before.

I’ve told people myself that if you “…can’t point to the system that is hosting your application, it’s technically in the cloud.”  A simple two-node VMWare cluster, technology that’s been around for years, is suddenly a “Private Cloud.”

I’ve often said that Marketing is nothing but Sales without the ethics involved.  My wife (The marketing person) and I argue on this point regularly.  🙂

Cloud is ambiguity wrapped in uncertainty.  It’s a hope without proof that what you need will be there when you need it.

This is obviously an oversimplification but nonetheless it’s accurate.

Now don’t get me wrong – “Cloud” computing is perfect for small businesses who don’t want to host an IT staff or dedicate 250 square feet to a computer room. It’s good for medium businesses that are trying to control licensing costs or who expect random expansions/contractions of their user base.  Heck, even *I* provide similar services to a few local businesses simply because I’ve already got the disks spinning (and it helps pay the electric bill)

But if you have data-retention compliance requirements or any of a dozen other regulatory hurdles, or if you just plain want to *KNOW* where your data is, you’re often better off keeping the application in house.

*ESPECIALLY* if you already have a datacenter you’re using for other purposes.

As a consultant for a company that is experimenting with moving its corporate email system to Google, I’ve been asked a number of times what the backup policies are, what the retention policies are, and what the RTO and RPO are in the event of a failure.

The only answer I can (and will) give is “Well Google says it’s this, Google says it’s that.”

When they ask me what *I* think I simply tell them I not only don’t know, but that I can’t know.  It’s unknowable.  Since I don’t have control over the application as such I absolutely refuse to speculate as to the actions and abilities of others whom I don’t know and are not in my direct control.  I hope the managed services company has someone competent in their employ, but as most of them hire based on labor cost over skill-set (I’ve interviewed for the positions, I’ve seen the depth-charge offers they throw out) I won’t count on it.

Because the truth is, you don’t know. Sure you know what the marketing says, what the sales rep told you, what the tech-support person tells you when you call in. But if your hands aren’t the ones shuttling the tapes from the library to the vault, you don’t actually *KNOW*, you suspect.

Finally, “Cloud” services are fine until there is a failure. The problem with a failure, as last year’s EC2 failure illustrated, is that when there *IS* an enterprise failure, whether it be due to the lack of planning, infrastructure, or just a plain, old-fashioned act of god, you’re not in control.

You have to wait on the guys at Amazon to fix the problem. And if you’re one of a thousand customers affected by an outage, odds are pretty strong your application isn’t the first one that’s being worked on. And no amount of yelling or screaming is going to change that…

Personally, I would always prefer to be in a position to make a single phone call and get someone out of bed whose sole job it is to get *MY* Exchange server back online, or handle a failover, etc.  If I measure downtime in thousands of dollars per minute, I want to *KNOW* that my sites/applications are being worked on first.  The only way to know that is to sign the paycheck of the guy who is actually hands-on.  (Or in my case to *BE* the guy who is actually hands-on.)

Again, Just my .02 cents.

Signing off….

I started this blog in September, 2006. I was working as a storage administrator for “Loan To Learn” a small student loan company in Sterling, Virginia. It was an amazing challenge. We built an enterprise environment from the ground up in amazing time. Overcoming odds, battling beasts, etc. It was great fun.

It’s a pity that the pilot of that particular airliner didn’t see the mountain looming and smacked straight into the side of it without blinking…

The blog was a great place to vent, to talk about the discoveries and problems in running a day-to-day environment. (Something I had only done once before, in 1997 when I was just starting in the industry)

I received a tremendous amount of help from all of you, and I’m hoping the information contained within these “pages” was/is helpful.

Long story short, I think I’ve come to the point where this blog has outlived its usefulness, at least as a day-to-day diary, both to me and everyone around me.  It’s been difficult for me of late because I don’t feel it’s fair to blog about my current clients.  They haven’t consented to such public favor (or ridicule, as the case sometimes is.)

My last engagement, a government agency, hurt my career.  Spending 2+ years working with a group that has a deep-seated fear of new technologies, or in fact of doing ANYTHING new, kept me well behind the curve, technically.  It’s only through sheer luck (and a few people on the west coast who still know my name) that I got this engagement, where I actually got my hands on my first VMAX…only one, a year later than just about everyone else I know.  Talked them into their first NAS device (a Celerra) while I was there.

Sadly, these days I’m still not in the ‘cutting edge’ mix of things, but I’m diving in wherever I can, and I still have my home projects to keep my skills up.  I’m trying to get into the design work wherever I can…  But all in all, I don’t have a lot of time to play with the newest and bestest things; it’s a lot of day-to-day crap.

That and I’m realizing that while I’m a good engineer…I kinda suck as a writer. 😉

I think the biggest thing is…I just don’t have that much to rant about anymore.  That’s good, right?

Keep in touch, I can be reached at jg (at) 50micron dot com or via twitter @50micron

Should you virtualize a single host?

Um…yes?  Kind of an obvious answer in the grand scheme of things…  In fact I’m really surprised it’s not done more often.

Why wouldn’t a small business use something that’s FREE and gives them the ability to maximize their (usually minimal) hardware investment?

I have a customer, one of those little “side” gigs we all take on, a friend of a friend said “I have someone who needs a new server.” and it all went from there.

So I sell them an old Dell 2U (PowerEdge 2650) I’ve got floating around and migrate them off their antique HP ML160.  Their single server provided domain services, file sharing, SQL and a number of other minor services.

Now that it’s time to upgrade, should we spend hours and hours installing a new server, migrating files and SQL, and changing all of the ODBC connectors on every workstation to point to the new server?

Or P2V the old server to the new server and be done with it?

VMWare offers a lot of options over and above the obvious.  Once the P2V is done, boot, and you’re done.  Then you can build a second domain controller, separate off some of the minor functions, maybe even separate off the fileserver and SQL servers.

From an admin standpoint you get dual network connections, thinly provisioned LUNs, and most importantly the ability to power-cycle a server from 2,700 miles away without having to wonder if it’s going to come back up, plus the ability to remotely mount ISO images as CD-ROMs for upgrades.

How easy would it be to do remote software upgrades or install a new Windows server if you could insert a CD remotely?

VMWare ESXi is *FREE* people. 🙂

*MY* 9/11 post….

I’m not going to go into the details of what I was doing on the day it happened.  I’ve been there, and it doesn’t do anyone any good to wallow in their misery.

It amazes me that we choose to re-traumatize ourselves every year on the anniversary…complete with graphic visuals and droning, repetitive descriptions of what happened that day.

You don’t tell a rape victim to re-live the experience every year, do you?  Ask any medical professional; there is an inherent danger in repeatedly re-opening old wounds.

I think remembering is important; it’s just a question of how we choose to remember.  I like to think about the strength, courage and caring so many people displayed during a devastating crisis.  No victim forgets, but they can come to see themselves as a survivor.

Sadly, that’s not what our media is portraying. Gazing into the smoking wound again and again and again in high-definition video doesn’t help us heal, it only serves to inflame anger and keep hatred alive unnecessarily.

Again to use the analogy, it’s like a rape victim being forced to watch a video of the attack again and again in some misguided hope that it will somehow harden them to it. It doesn’t and it never will.

We were violated.  It’s true.  But there comes a point when you  have to start moving towards the future, because nothing can be gained from constantly staring into the past.

An argument for unions…

Next time you want to slam unions remember this: “A rising tide lifts all boats.”

Even if your job is NOT union, remember that the business has to stay competitive with the union jobs in order to hire people. So a business has to offer a decent salary just to get people to sign on.

Part of what makes America great is the freedom for our workers to collectively bargain for better conditions, thereby raising the bar for other businesses if they want to hire, because the best and the brightest will take the jobs with the better working conditions.

This is also why big business LOVES high unemployment.  It makes it easier for business to say “take it or leave it” and offer crap wages and working conditions.  An easy example: I’ve seen contract rates fall by 10-15% in the last year alone.

The desperate can be easily pressured to “take it or leave it.”

My geekliness knows no bounds…

This is what happens when a computer-geek spends too much time in one hotel.

Do I really have to say anything to go along with this picture, or does that pretty much cover it?

Dear Congress..

Dear Congress –

I’m a small business owner. All of the tax cuts in the WORLD aren’t going to change the fact that until demand comes up I’m not in a position to hire someone, no matter how much I want to.

If I hired someone today they would be sitting on their ass waiting for work.

Demand isn’t going to come up until the consumers start spending money.

They aren’t going to start spending money until they have more in their pockets.

So a hint: leave me the hell out of your equation and give the tax cuts to those who need them, so they can spend them, so I can hire someone and start growing again.

Love, me.

To Cloud, or not to Cloud…

It really does seem to be the question…  the sad part is how many people I talk to in my travels don’t really understand what cloud even is, let alone what the pros and cons are of moving your applications into it.

Background: a company is considering moving probably 3,000-5,000+ users to Gmail as a ‘corporate’ email system…  They are currently running Exchange…

Apparently, they don’t read the news and have missed out on the multiple spectacular failures of services like Google, Amazon and the like.

Cloud services are GREAT if you are running a small business, don’t want to / can’t afford an IT budget, or just plain don’t want to deal with it.

If you’re a billion-dollar corporation with a multi-million-dollar IT infrastructure already in place, outsourcing email seems a bit…odd.

Granted, if you are this company, you are obviously going to get the top-of-the-line service, dedicated support personnel, etc.  You’re also buying plausible deniability should data loss put you in jeopardy under subpoena.  (While “I disposed of the data” is bad, “The company I was outsourcing to lost it” is not as bad.)

“Honest your honor, we had the emails but Google deleted them by accident.”

*DISCLAIMER: I’m not implying that Google would ever do something like this on purpose; I’m using them as a generic, like Xerox.

** It’s Google’s fault…they’re big enough to have become the verb.

***Does anyone actually own a Xerox branded machine anymore?

So if you’re SuperMegaCorp, LLC…you pay for the real service.  You get dedicated support staff, a private line to call, etc.  But to be honest, you might as well keep it in house because hey, you already have the staff, the datacenter, the VMWare farm, etc.  At that point you’re talking a few dollars in licensing and you’ve got email addresses for your thousands of employees for pennies each.  (Ok, yes, add in replication, backup, etc. and it gets a bit higher, but the point is you’ve already commoditized it.  (Is too a word.))
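Back-of-the-napkin, with every number below pulled out of thin air for illustration, the in-house math looks something like this:

```shell
# Hypothetical in-house cost per mailbox -- every figure here is invented.
licensing=15000    # annual Exchange licensing (assumed)
infra=5000         # slice of the already-sunk VMWare/storage/backup spend
employees=5000

awk -v l="$licensing" -v i="$infra" -v e="$employees" \
    'BEGIN { printf "~$%.2f per mailbox per year (about %.0f cents a month)\n",
             (l + i) / e, (l + i) / e / 12 * 100 }'
```

Swap in your own numbers; the point is that the marginal cost of one more mailbox on infrastructure you already own rounds to pocket change.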

But think about it this way.  The company you’re contracting to has to pay for the same things *YOU* have to pay for.  *PLUS* they have to make enough of a profit to keep their shareholders off their backs.  They do get a bit of a discount for bulk licensing, hardware, etc…

But what you GET for hosting it in house is immeasurable.  You get control.

At my last gig I heard the following phrase over and over again.  “I want one neck to choke.” (Oddly enough it was the argument given for moving AWAY from their previously preferred vendor, but you get the idea.)

When the email admin works for you, you have one neck to choke.  You get immediate results. Or you get the pleasure of firing someone.  (Can be fun in the right circumstances, ask The Donald.)

Now say you hosted with Amazon, just for grins.

Not only are your hosts down, potentially THOUSANDS of other hosts are down as well.  Now while we would like to believe they have a thousand techs on staff to give each customer equal time…let’s face it: it’s not going to happen.  They have, EXTREMELY generously, 10 technicians per thousand customers.  The techs will bring hosts up as soon as they can…

In an egalitarian society, the odds are quite simply about 999:1 against your site being the first one brought up, and no better against it being the second, etc.  See where I’m going?  Eventually they’ll get around to it, but unless they’ve figured out time travel and can loop back and do them all at the same point in time…you’re out of luck.  Yes, you’ve probably got a 99.999% uptime guarantee…but read the small print of your contract…  Their liability to you cannot exceed the cost of the hosting, if that, or some similar legalese that limits their liability for downtime and, god forbid, data loss.
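The queue math is simple enough to sketch, using the generous 10-techs-per-thousand-customers figure from above.  (The one-hour-per-restore number is my own guess.)

```shell
# Fair-queue restore math.  Customer and tech counts are from the post;
# the hour-per-restore figure is an assumption.
customers=1000
techs=10

# Odds of being first out of the gate: 1 in $customers.
echo "first-up odds: 1 in $customers"

# On average you sit in the middle of the queue, and the techs
# work through it in parallel batches.
awk -v n="$customers" -v k="$techs" -v h=1 \
    'BEGIN { printf "expected wait: ~%.0f hours\n", ((n + 1) / 2) / k * h }'
```

Two days of average downtime, and that’s assuming nobody bigger than you jumps the line.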

But this is not an egalitarian society…  Pure capitalism and “he who has the most gold gets their email back first.” If you’re with Amazon, well they host some PRETTY big sites…including their own.  Netflix comes to mind.  So in a downtime event if it comes down to bringing Joe the Plumber’s CRM app or Netflix’s east-coast streaming…which one do you think is going to get priority?


I have one neck to choke…  50Micron is hosted by Catbytes…the company that I do my consulting through.  The reason being that I maintain the lab anyway for “play” (officially: self-education and training) purposes, it’s easy for me to spin up an extra VM and put Exchange on it, a couple of CentOS MailScanners, a few webservers, etc., even off-site replication of backups over a 10Mbit link to a “DR” site that happens to be in my basement.  (If someone wants to donate another CX3-20i or a couple of FCIP bridges I’ll have block-level replication. 😉 )

When Amazon EC2 had their issues, suspiciously I had a pretty major crash as well… (As did the customer I was working for at the time, don’t get me started on my paranoid theories.)

But when my stuff breaks… It’s my fault, it’s my responsibility, and *I* am the only one in line.  If I had hosted with Google or Amazon I might have been down for weeks…

I was back up in about 2 hours.  The time it took me to cycle the environment remotely. 🙂

Yes…building an IT infrastructure from scratch can be pricey…  But paying someone else for hosting when you already HAVE an IT infrastructure just plain doesn’t make sense.

P.S. The funniest part is I’m now hosting about a half-dozen servers for friends/family (not free, I’m ugly, not stupid; and co-lo cages are NOT cheap) and about 40-50 websites that I’ve gotten via friends and word-of-mouth…

Of course my guarantee is as follows:

“Best effort, and you have to realize I have a day job that by its very nature comes first.”  🙂