There’s a lot of talk about Dynamic Data Centres and Dynamic Infrastructures, mostly in a cloudy context and mostly as some over-arching, vendor-focused architectural vision. At times, I wonder whether, when a vendor talks about a ‘Dynamic Infrastructure’, they actually mean: you can use as much of OUR infrastructure as you like; you can flex up and down on OUR infrastructure.
This is rather limiting from an end-user IT consumer’s point of view, because you still find yourself locked into a vendor or a group of vendors. So it’s only dynamic within constraints. Actually, I think Amazon got it right in their naming: it’s Elastic, but not truly Dynamic.
So, as a good architect/designer/bodge-it-and-scarper-type person, you should be asking these questions every time: if I do this, can I get out? What is my exit plan? Can I change any key component of the stack without major process/capability impact? Is the lock-in that comes with any unique feature worth it?
And when I say any component, I mean all the way up to the application. So the non-functional requirements of any application should include:

- Data Export/Import standards
- Archival standards

And these should not just be defined but actually implemented, and tested. This goes for any off-the-shelf application as well (see the sketch below).
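To make that concrete, here is a minimal, hypothetical sketch in Python (the record data and file names are made up for illustration) of what treating export/import as a non-functional requirement might look like in practice: the application’s data is written out to open, vendor-neutral formats (JSON and CSV), and the import path is exercised in the same routine so the exit plan is tested rather than assumed.

```python
import csv
import json
from pathlib import Path

# Hypothetical record set -- a stand-in for whatever the application stores.
records = [
    {"id": 1, "name": "Invoice-0001", "amount": 120.50},
    {"id": 2, "name": "Invoice-0002", "amount": 89.99},
]

def export_records(records, out_dir):
    """Write the same data in two open formats so a future migration
    (or archival retrieval) does not depend on the original application
    or its vendor still being around."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # JSON: preserves types and nesting, easy to re-import programmatically.
    (out / "records.json").write_text(json.dumps(records, indent=2))

    # CSV: lowest common denominator, readable by almost anything.
    with open(out / "records.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

def import_records(out_dir):
    """Round-trip check: the exit plan only counts if the import works too."""
    return json.loads((Path(out_dir) / "records.json").read_text())

if __name__ == "__main__":
    export_records(records, "archive")
    assert import_records("archive") == records  # prove we can get back out
```

The point of the round-trip assertion is the whole argument in miniature: an export nobody has ever re-imported is not an exit plan, it’s a hope.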
For Cloud to truly change the way IT is done and delivered, this has to happen; otherwise the only way forward is vertically integrated stacks, which ultimately lead to long-term lock-in. There are still mainframes in existence, not only because they are the right platform for some workloads, but also because people are struggling to unpick the complex interdependencies that exist.