On Cisco’s Cloud Announcement

Cisco announced their entry into the public cloud market this past Monday. After chewing on it for a few days, I’ve come to the following conclusions:

  • While some people have said that a $1B investment over 2 years isn’t enough, I have a different opinion. Over the last few years, Cisco has invested a lot of money into cloud architecture via the acquisition of Cloupia, the development of ACI, and enhancements to the UCS ecosystem. $1B to take assets that are already under active development and deploy them as a multi-tenant cloud provider service sounds appropriate – especially if it is over and above current R&D spending.
  • Cisco’s Intercloud includes several features that are targeted towards Enterprise Cloud Laggards – specifically the IT Service Management Services and the Compliance and Configuration Management Services. These types of features address many common cloud concerns among large, conservative enterprises.
  • Cisco has a large reseller community – never underestimate existing sales relationships (which continue to be a weakness for Amazon and Google).
  • Cisco is a core component of the two largest Converged Infrastructure offerings: VCE’s Vblock & NetApp’s Flexpod. I expect to see some really good hybrid solutions from those two organizations, increasing the value of Cisco as a hybrid cloud enabler.

My primary concern currently is cost – public clouds are very cost competitive, with frequent cost reductions by competitors. Will Cisco be able to stay competitive with Public Cloud costs while providing enough value over and above commodity IaaS / PaaS to justify a higher cost?

Interesting Links – Devops, Pivotal, Testing

Some links I’ve enjoyed reading and considering over the last few weeks:

Organization Anti-Pattern – Release Testing: Testing only at the moment you’re committed and headed to production is risky and should be avoided as much as possible – via test-driven development and automated testing.

You’re Not a Beautiful and Unique Snowflake: Extremely good read on challenging the status quo in Enterprise shops with existing auditing requirements.

“Wrapping things like Source Control Management, and Test Driven Development around your automation allows you to 1) have tested infrastructure code, 2) audit what is changing in your environment, 3) have an audit trail of who changed things, and 4) know exactly when it changed [...] Attempting to conform the organizational change to the organization just leaves you with the same organization you had in the first place.”

DevOps – The Future of DIY IT?: Money quote on the long term ramifications of avoiding current infrastructure stewards:

“Application developers may be getting what they want from NoSQL now, but cutting out the primary data stewards will result in long-term data quality and information governance challenges for the larger enterprise.”

Big Data Revenue and Market Forecast: Pivotal’s Big Data revenue is larger than Microsoft’s, Amazon’s, NetApp’s, EMC’s, Hortonworks’, and Cloudera’s.

How Cheap Can You Fail?

Adrian Cockcroft presented on CloudOps as done at Netflix during this year's FlowCon – Velocity and Volume (or Speed Wins). The portion I found most relevant was the Innovation Cycle taken from Mainframe Operations, through Client/Server and Commodity Operations, to Cloud Operations (or continuous innovation).

Over time, technology gets cheaper, which allows for faster iteration at lower risk – lowering the amount of work-in-progress for IT organizations and bringing benefit to the business more quickly. This presentation made me realize something else, though: when proprietary, costly software is chosen over open source software, your agility and speed of delivery are reduced, since the cost of a failed implementation goes up. Not to mention, most license models are not “cloud friendly.”

This is an important aspect of any implementation – how cheap can you make failure? How fast can you bring business value?

Amazon Workspaces – Not Just DaaS

A few weeks ago, Amazon announced their VDI solution running in EC2 – Amazon Workspaces. Gunnar compared Workspaces against the current Desktop as a Service providers in this Gartner post.

This announcement is bigger than just virtual desktops. An IT organization’s shift from software to services will not happen overnight… For years, existing thick client Enterprise crapplications will need to be provided to an organization’s users. Due to data gravity and latency requirements of many of these types of applications, the clients need to be hosted close to the applications until they are fully decommissioned and replaced.

For organizations that want to fully migrate from on-premises data centers to the Amazon cloud, this provides a key capability to allow certain types of enterprise workloads to land on EC2 – the capability to run user desktops near the “legacy” backend, all while enabling the mobility, flexibility, and security of centralized desktops. As the last applications get migrated to “cloud first” architectures, Enterprises will be able to quickly decommission the managed desktops rather than wait for their desktop leases/amortizations to complete. I would advise potential users of this service to evaluate the desktop services in conjunction with Office 365 to determine what combination provides the most business value to their users.

Innovation Requires Self Worth

You can’t open Business Insider or LinkedIn without seeing several articles on improving innovation.  Innovation is seen as a silver bullet for everything from poor customer experience (always accompanied by a mandatory Zappos article) to products that are priced too high (again, citing Zappos’ retail pricing model and Apple’s profit margins).

Today’s article that came across my feed recommended establishing “Innovation Officers” as a way of creating a culture of innovation.  While the notion of changing a culture by filling a few Chief Innovation roles seems laughable, this article does hit on something very important – culture is one of the most important parts of healthy organizations and plays a large role in innovation.

Most innovation requires risk taking.  Risk taking and the chance of being wrong require self confidence and a culture that embraces failed attempts.  If your organization is filled with people who question their contribution, their employability, and the currency of their skill sets – innovation will suffer.  Managers don’t exist to validate employees – they exist to make sure that the organization’s needs are being met while ensuring that their employees are at their optimal contribution point (which naturally means happiness, work/life balance, training, and role) – an environment where managers spend large amounts of time propping up employee self esteem is not one that leads to innovation.  Notably, articles on successful lean management organizations always emphasize autonomous, confident, and skilled workforces.

How does an organization shift a “scared” culture to a “confident” one?  Executives should be transparent and vocal about the skillsets they see the organization needing in the next 3-5 years – and then fund training for staff who want to adjust their skills, so that they’re as valuable to the organization of tomorrow as they are to the organization of today.  Additionally, each employee should understand what their industry is hiring for and target their training towards those skillsets.  Employees who don’t feel “held hostage” in their jobs will be more innovative and risk-taking.  Companies need to emphasize an environment of continual education and mentorship.

Apple is often cited as the premier example of how innovation can change a company, with Steve Jobs shown as the innovative genius behind the iPhone’s and iPod’s success.  Standing back, those products alone didn’t shift the course of the industry – it was a smaller move, when Apple opened up the iPod to the majority of PC users by developing iTunes for Windows.  Without the ability to sell those devices to Windows users, it is very unlikely Apple would have become as dominant as they are today.  Developing iTunes for Windows, as it turns out, was a decision that Steve Jobs was against.

As noted in the book Design Crazy, then-CEO Steve Jobs was adamantly opposed to the idea of Apple’s software spreading to Windows, despite the protests of marketing VP Phil Schiller and then-VP of hardware engineering Jon Rubenstein.
“We argued with Steve a bunch [about putting iTunes on Windows], and he said no,” Rubenstein recalls. “Finally, Phil Schiller and I said ‘we’re going to do it.’ And Steve said, ‘F#@k you guys, do whatever you want. You’re responsible.’ And he stormed out of the room.”

If Phil and Jon hadn’t had the self confidence to push forward regardless, where would Apple be today?

Photo: Janet T – Some Rights Reserved

Configuring DHCP on AD for CentOS

After getting the separate network segment set up yesterday and installing Active Directory, it was time to configure the DHCP server to hand out leases for that subnet and automatically update DNS.  On the Windows 2012 Server, install the DHCP role and management tools.  Launch the DHCP management tool and configure a new scope that includes the following:

  • Set the scope to a range on vmnet2 that doesn’t include the router VM and the Domain Controller.
  • Set the options for DNS Server (the IP address of the Domain Controller), the Router (the IP address of the Router VM), and the domain suffix for the Active Directory.
  • Right click on the scope and select Properties.  Under the DNS tab, make sure the settings allow DHCP to update DNS records.


As you build out your CentOS VMs, use vmnet2 (Host-only) as their network adapter and, provided the router and Domain Controller VMs are up, they should receive the appropriate IP information and be able to route to the internet for software updates.  However, by default CentOS will not forward its hostname to the DHCP server for the DNS update.  To enable that, on each VM edit /etc/sysconfig/network-scripts/ifcfg-<interface> and add the following line:
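For CentOS-era network scripts, the option that sends the VM’s hostname along with its DHCP request is DHCP_HOSTNAME.  A minimal sketch – the hostname below is a placeholder, so substitute each VM’s actual name:

```shell
# Appended to /etc/sysconfig/network-scripts/ifcfg-eth0
# Sends this hostname to the DHCP server so it can register the DNS record
DHCP_HOSTNAME=centos-vm01
```

Restart networking (service network restart) for the change to take effect.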


Simple Router in VMware Workstation

When building a test Active Directory environment in VMware Workstation, it’s a good idea to have the environment on an isolated network segment with a single VM serving as a proxy to the internet (for updates and software installs).  I typically use CentOS for this role since it is lightweight and does not expire after 180 days.

First of all, create a non-DHCP-enabled Host-only network in the Virtual Network Editor.  This will allow you to use Windows DHCP for automatic DNS enrollment on this subnet (among other things).  In the screenshot below, this network is labeled vmnet2.


Next, create a CentOS VM with two network interfaces – the primary one should be ‘Bridged’ and the secondary one should be the Host-only one you just created.  Install CentOS (a minimal install is all that is required):


Once CentOS is installed, you’ll need to modify the scripts to bring up the network interfaces.  These are the settings I’ve had success with:

In /etc/sysconfig/network-scripts/ifcfg-eth0:
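Something along these lines, assuming the bridged network hands out addresses via DHCP:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
# Bridged, internet-facing interface - gets its address and default route via DHCP
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
```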


In /etc/sysconfig/network-scripts/ifcfg-eth1:
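The Host-only interface gets a static address, which will serve as the gateway for the vmnet2 subnet.  The 192.168.100.0/24 addressing below is a placeholder – substitute whatever subnet you assigned to vmnet2.  Note there is deliberately no GATEWAY entry here, so the default route stays on eth0:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1
# Host-only (vmnet2) interface - static address, acts as the gateway for the subnet
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.1
NETMASK=255.255.255.0
```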


After this, you’ll need to configure IP NAT and forwarding in CentOS.  First, edit /etc/sysctl.conf and set the following line so forwarding persists across reboots:

net.ipv4.ip_forward = 1

Then run the following command to enable forwarding on the currently booted system:

echo 1 > /proc/sys/net/ipv4/ip_forward

Next, you’ll need to configure iptables to masquerade connections arriving on eth1 out through eth0.  This will drop the current iptables configuration and is not the most secure configuration, but it works well (and can be tightened down later once it is functionally working):

iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables --table nat --delete-chain
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth1 -j ACCEPT
/etc/init.d/iptables save

Finally, create a Windows VM with a single interface on vmnet2.  After it is installed, statically define the network interface with the information from vmnet2, using the CentOS VM’s eth1 address as the gateway.  This should allow you to get online from the isolated network segment for updates and license authorizations.


Powershell to Install PHP on Windows

Fairly recently, I inherited the support responsibilities for quite a few PHP and WordPress-on-Windows installs across the server population.  After creating the initial documentation, I started automating as much of PHP maintenance as possible, including the ability to perform side-by-side PHP upgrades with rollback to previous runtimes via Microsoft’s excellent PHP Manager.  The script is out on GitHub, but I wanted to offer a bit of instruction here as well.

  1. Download the required version of PHP.  Make sure to use the non-thread-safe version, and pick the zip file rather than the installer.  Unzip the contents to a consistent directory accessible from the PowerShell script (for example, php-5.4.19).
  2. Modify the following PowerShell variable to point to the location from step 1: $phpInstallMedia
  3. Modify the following PowerShell variable to indicate the PHP version: $php_version
  4. Modify the following PowerShell variables to install PHP and configure logs and temp folders for your environment: $php_install, $php_log, $php_temp
  5. Download the appropriate version of PHP Manager.  Unzip it to a consistent directory accessible from the PowerShell script.
  6. Modify the following PowerShell variable to indicate where the PHP Manager MSI is placed: $phpmgrInstallMedia
  7. Download WinCache, unzip it, and place it into a folder named the same as the PHP version above.
  8. Modify the following PowerShell variables to configure default folders for IIS in your environment: $web_root, $web_log

After those variables are set to match your environment, the script should install PHP and WinCache and reconfigure the default WWW root and WWW log folders.  If you get an error about execution policy, using Set-ExecutionPolicy in PowerShell to adjust the security settings should resolve it.

Maintaining VCOPS Idle Thresholds

Periodically reviewing thresholds in VCOPS is a good idea to make sure the reporting matches what is seen in production – as platform changes occur in your environment, thresholds in VCOPS may become outdated.  This can increase the cost of your environment, since “idle” VMs may go unreported – especially if new software, such as security or monitoring agents, is rolled out across the entire environment.  It is a good idea to maintain a single VM running nothing other than the standard platform stack to gauge where thresholds should be set to find truly “idle” VMs.

Copyright © 2014 da5is. All rights reserved.