The Cloud Metastructure Hubbub

How Infrastructure 2.0 might leverage publish-subscribe technology like PubSubHubbub to enable portability of applications

Tower of Babel by Pieter Bruegel the Elder.
One of the topics surrounding cloud computing that continues to rear its ugly head is the problem of portability across clouds. Avoiding vendor lock-in has been problematic since the day the first line of proprietary code was written, and cloud computing does nothing to address it. If anything, cloud makes this worse, because one of its premises is that users (that’s you, IT staff) need not concern themselves with the underlying infrastructure. It’s a service, right? You just use it and don’t worry about it.

Let’s assume for a moment that you can easily move applications from data center to cloud to cloud. Plenty of folks are working on that, but very few of them address the “rest of the story”: the metastructure.

Metastructure contains the metadata that describes the network, application network, and security infrastructure providing all those “don’t worry about” services cloud providers offer. Load balancing, firewalls, IPS, IDS, application acceleration, secure remote access. If you’ve spent time with your cloud provider tweaking those services – or configuring them yourself – then moving to a new cloud provider is not only a huge investment in time, it’s actually going to be painful because you’re essentially going to have to recreate every metastructure configuration again.

Yes, you’ve done this inside your own data center for years. Every forklift replacement or upgrade of infrastructure has come with its own load of baggage in the configuration arena. Switching out vendor equipment – especially core components – can be extremely painful, especially when configurations need to essentially be “translated” between them. But cloud makes this worse because technically speaking you don’t even have access to the existing configurations. You can’t see them, you can’t have them, and you can’t run them through whatever “upgrade” or “migration” script your new vendor offers to ease the process.

Are you depressed yet?

There’s been some talk of including metastructure data with the virtual machine, but the problem with this is that it almost always requires that the metadata be wrapped up using a proprietary API, such as is provided by VMware. That’s okay if you restrict yourself to only cloud providers that use the same virtualization technology, but not okay if you want to be able to make a move from one technology to another. It also assumes that the metadata is specific to the infrastructure, which is even more unlikely when moving between cloud providers.


HOW ABOUT A CLOUD-BASED CMDB (Configuration Management Database)?


There are several ongoing efforts to address this very scenario because it is so painful. Most of them would, if adopted, require vendors to implement support for a specific standard so that configurations can be managed and exchanged in that standard format. That makes sense; that’s how we’ve always handled translation of data between disparate systems that don’t speak the same language. In the application world we call the process of mapping one format to another “integration,” and you can easily evoke a look of terror on a co-worker’s face just by saying the word within their range of hearing. Go ahead, try it. Just make sure they aren’t carrying anything heavy that can be easily thrown at you when you do.

CMDB (Configuration Management Database) technology is another method of addressing the problem of, well, managing configurations. These solutions store the configuration of a wide variety of infrastructure solutions – from routers and switches to web and application servers to application delivery controllers. They do a great job of managing configuration and can even “push” configuration out to devices if so desired. But the configurations stored and managed in a CMDB are product-specific, not generic, so they cannot today adequately address the problem of portability.

You can probably see where this is going: a cross between CMDB and a nice, industry-wide standard would probably do the trick, wouldn’t it? And if it were public (in the sense that any application or service is public on the network – that is, accessible via the Internet to any cloud provider or customer site) then cloud providers and organizations alike could take advantage of that configuration management mechanism and use it to their advantage. Portability becomes possible rather than fantasy.


PUBSUBHUBUB


Cloud providers and organizations alike are likely to stop right there. Sharing configuration of infrastructure and core components is just asking for trouble. If ever such a cloud-based CMDB were compromised, well…let’s just say it would be A Very Bad Thing.

But what if the actual metadata, the configuration information, were stored either in the enterprise or the cloud provider (or both), and merely pushed and pulled via a public mechanism on-demand? Configuration isn’t changed all that often, and if an organization is moving between clouds they certainly know when they’re doing it. If there were some mechanism through which metastructure could be published and to which infrastructure could subscribe, then when changes were made or providers changed, that metastructure data could be easily grabbed from the public cloud-CMDB system (cloud catalog, anyone?) and interpreted into product-specific configuration by the products themselves.
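The publish-and-subscribe flow described above can be sketched in miniature. Everything in this sketch – the topic name, the hub API, and the shape of the “metastructure” payload – is an illustrative assumption, not part of PubSubHubbub or any standard:

```python
# Toy in-process hub: providers subscribe to a customer's topic;
# the customer publishes vendor-neutral metastructure documents to it.
# All names and payload shapes here are hypothetical.

class MetastructureHub:
    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, metastructure):
        # Push the document to every subscriber of this customer's topic.
        for callback in self._subscribers.get(topic, []):
            callback(metastructure)


hub = MetastructureHub()
received = []

# A cloud provider subscribes to the customer's isolated channel...
hub.subscribe("customer-42/metastructure", received.append)

# ...and the customer publishes its configuration needs on demand.
hub.publish("customer-42/metastructure",
            {"load-balancing": {"algorithm": "round-robin", "port": 80}})

print(received[0]["load-balancing"]["algorithm"])  # round-robin
```

In a real deployment the hub would of course be a network service with authenticated, isolated channels rather than an in-process object, but the publish/subscribe contract is the same.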

Think of it like SOA clients pulling WSDL (Web Services Description Language) from a UDDI (Universal Description, Discovery, and Integration) server. The SOA client pulls the WSDL, which describes the service(s), configures itself appropriately, and then is able to make use of those services. The intent of introducing UDDI was a service catalog that could be polled on-demand to provide the latest information about the service and describe it in an abstract, vendor-neutral way such that any client could access any service, regardless of implementation language or environment. Sounds a lot like what we want for infrastructure portability, doesn’t it?

That’s where PubSubHubbub comes in. While this draft standard for a publish-subscribe system is generally being leveraged by software developers to enable faster sharing of information across the Internet, it is also a fine example of a system that could be used by Infrastructure 2.0 solutions to share metastructure. Consider the existence of a public PubSubHubbub hub, like Google’s, and how it might be leveraged to share metastructure between clouds, or between the organization and the cloud.

Note that XMPP is used today by at least one cloud provider to enable distributed cloud management in a manner very similar to that of PubSubHubbub.

In any case, the specific implementation of the configuration “hub” is relatively unimportant; what’s important is that (a) customers can publish a vendor-neutral metastructure to an isolated channel that communicates their specific infrastructure needs and (b) providers can subscribe, at will, to customer topics and retrieve metastructure in a way that allows their infrastructure to in turn configure itself (or be configured by the provider’s system, as required by the provider’s implementation).

Early on it would be necessary for the cloud provider to provide the “translation” and configuration services, simply because even if a metastructure standard existed today (and it doesn’t) it would take months and possibly years before all the possible infrastructure vendors were able to update their systems to interpret the standard. If the provider implements a configuration “gateway”, however, it can immediately take advantage of such a standard and apply the skills and knowledge gained from automating and orchestrating its cloud to configure the infrastructure appropriately based on the metastructure. This has the added advantage of “hiding” the infrastructure implementation from the outside world, which for some providers is a very important thing to do.
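A configuration “gateway” of the kind described above might, in miniature, look like the following sketch. Both the vendor-neutral metastructure format and the product-specific output are invented for illustration; no such standard exists:

```python
# Sketch of a provider-side configuration "gateway": it interprets a
# (hypothetical) vendor-neutral metastructure document and emits
# (equally hypothetical) product-specific configuration directives.

def gateway_translate(metastructure):
    """Map generic metastructure keys onto one imaginary load
    balancer's configuration syntax."""
    generic_to_vendor = {
        "round-robin": "method rr",
        "least-connections": "method lc",
    }
    lb = metastructure["load-balancing"]
    return [
        "pool app_pool {",
        "    %s" % generic_to_vendor[lb["algorithm"]],
        "    port %d" % lb["port"],
        "}",
    ]


config = gateway_translate(
    {"load-balancing": {"algorithm": "least-connections", "port": 443}})
print("\n".join(config))
```

The point of the gateway is exactly this one-way mapping: the customer never sees (or needs to see) the vendor-specific syntax on the right-hand side.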


SOME CONFIGURATIONS ARE INHERENTLY VENDOR SPECIFIC


That’s okay, for two reasons. First, we ensure that the metadata description is XML-based, because XML is extensible. Second, if we build into the standard a natural way to extend it, such as XML provides, the interpreters (configuration “gateways”) can either (a) translate an extension if they can or (b) ignore it.
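The translate-or-ignore behavior can be sketched with Python’s standard xml.etree.ElementTree; the element names, the core vocabulary, and the vendor namespace below are all hypothetical:

```python
# "Translate if you can, ignore if you can't": a gateway walks an
# extensible XML metastructure document and simply skips any element
# outside the core vocabulary. Names and namespace are invented.
import xml.etree.ElementTree as ET

DOC = """
<metastructure xmlns:acme="urn:example:acme">
  <load-balancing algorithm="round-robin"/>
  <acme:turbo-mode level="11"/>
</metastructure>
"""

KNOWN = {"load-balancing"}  # the gateway's core vocabulary


def interpret(xml_text):
    understood, ignored = [], []
    for element in ET.fromstring(xml_text):
        # Namespaced tags come back as "{urn:example:acme}turbo-mode";
        # strip the namespace and sort by whether we recognize the name.
        name = element.tag.split("}")[-1]
        (understood if name in KNOWN else ignored).append(name)
    return understood, ignored


understood, ignored = interpret(DOC)
print(understood)  # ['load-balancing']
print(ignored)     # ['turbo-mode']
```

A vendor extension thus degrades gracefully: a gateway that understands it gets extra fidelity, and one that doesn’t still produces a working baseline configuration.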

Consider the use of OVF (Open Virtualization Format) to further describe what is called a Virtual Machine Contract (VMC):

For each virtual system, the associated metadata is described in a set of specific sections. The VirtualHardwareSection describes the virtual hardware required including the amount of memory, number of CPUs, information about network interfaces, etc. The OperatingSystemSection describes the guest operating system that will run in the virtual system. The ProductSection provides basic information such as the name and vendor of the appliance and can also specify a set of properties that can be used to customize the appliance.
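A deliberately simplified (and not schema-valid) illustration of the sections quoted above, and of how a consumer might read them, could look like this:

```python
# Simplified, OVF-flavored snippet showing the shape of the metadata
# described above. Real OVF uses DMTF namespaces and a richer schema;
# this stripped-down version is only for illustration.
import xml.etree.ElementTree as ET

OVF_LIKE = """
<Envelope>
  <VirtualSystem id="web-app">
    <VirtualHardwareSection memoryMB="2048" cpus="2" network="public"/>
    <OperatingSystemSection description="Linux"/>
    <ProductSection product="ExampleApp" vendor="ExampleCo"/>
  </VirtualSystem>
</Envelope>
"""

root = ET.fromstring(OVF_LIKE)
hw = root.find("./VirtualSystem/VirtualHardwareSection")
print(hw.get("memoryMB"), hw.get("cpus"))  # 2048 2
```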

While VMC is very basic at this point, it’s a good start at providing the foundation for building out a more complete, standards-based description of the metastructure necessary to configure an infrastructure to deploy a specific application in a virtual machine format. Using this as the basis for metadata exchange – when fully described – via a public hub could alleviate most of the issues with sharing infrastructure metadata (metastructure) across clouds in a generally vendor-neutral manner. In other words, portability of both the virtual machine and the specific infrastructure configurations necessary to optimally execute and deliver the application to the end user in the fastest and most secure manner possible.

We’re nowhere near this point, by the way. VMC needs to be fleshed out as far as standard infrastructure metadata goes (perhaps a good chore for the SRI Infrastructure 2.0 Working Group), and vendors would need to adopt and extend the ProductSection of VMC for product-specific configuration that isn’t included in the base format. And PubSubHubbub would need to be proven a secure method of exchanging the metastructure across clouds. What is likely is that, as we move forward trying to extend the plateau of collaboration down the stack toward the core infrastructure, a new set of tools, products, solutions, and services will emerge to fill the unavoidable gaps in the standards, e.g. a service-based cloud configuration hub offering translation of one vendor’s proprietary metastructure data to another’s.

Perhaps there’s a better way overall, and OVF/VMC and PubSubHubbub will simply remain in our memories as the catalyst and template for a different set of standards providing portability across clouds. But there is a way to provide this level of portability and collaboration across clouds, across the infrastructure and the application. The need – and perhaps more importantly the belief that it’s necessary to address the need – is growing.

UPDATE: Christofer Hoff pointed out that vCloud has been submitted to the DMTF for standardization, technically making it "open" rather than "proprietary." It is still only implemented by VMware technologies, so for the time being it might as well be proprietary, but this may change in the future.



More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.


