Cloud Architectures – Storage in the Cloud


Brief

Cloud technology is deployed across a wide variety of industries and applications. The term ‘Cloud’ itself has become so widely prevalent that we’ve devised additional terms in an effort to describe what type of cloud we’re talking about. What’s your flavor? IaaS, PaaS or SaaS? Or perhaps it’s Public, Private or Hybrid?

Regardless of the type of cloud you’re using or planning to implement, there’s no denying that storage is an essential component of every cloud architecture that simply cannot be overlooked. In this post, we will look into some of the most common uses of storage in the cloud and peel back the layers to discover exactly what makes them tick. Our goal is to come up with a yardstick for measuring storage design.

Drivers towards Cloud Storage adoption


What do Dropbox and Box Inc have in common? Both companies are less than 5 years old, offer services predominantly centered around cloud storage and file sharing, and have been able to attract significant amounts of capital from investors. In fact, Dropbox raised $250 million at a $4 billion valuation, with Box Inc raising another $125 million in mid-2012. It looks like Silicon Valley sees cloud storage services as a key piece in the future of cloud. So why is there such tremendous interest around cloud storage? Consumers are drawn to a number of benefits of using cloud:

  • Redundancy: Large clouds incorporate redundancy at every level. Your data is stored in multiple copies on multiple hard drives on multiple servers in multiple data centers in multiple locations (you get the picture).
  • Geographical Diversity: With a global audience and a global demand for your content, we can place data physically closer to consumers by storing it at facilities in their country or region. This dramatically reduces round-trip latency, a common cause of sluggish Internet performance.
  • Performance: Storage solutions in the cloud are designed to scale up dramatically to support events that may see thousands or millions more consumers accessing content over a short period of time. Many services back this up with guarantees on data throughput and transfer rates.
  • Security & Privacy: Cloud storage solutions incorporate sophisticated data lifecycle management and security features that enable companies to fulfill their compliance requirements. More and more cloud providers also offer HIPAA-compliant services.
  • Cost: As clouds get larger, the per-unit costs of storage go down, primarily due to economies of scale. Many service providers choose to pass on these cost savings to consumers as lower prices.
  • Flexibility: The pay-as-you-use model takes away concerns about capacity planning and the waste of resources caused by cyclical variations in usage.

It should be noted that a draft opinion paper released by the EU Data Protection Working Party, while not explicitly discouraging cloud adoption, recommended that public-sector agencies perform a thorough risk analysis prior to migrating to the cloud. You can read the report here.

Storage Applications for the Cloud

We’ve listed some of the most common applications for cloud storage in this section:

  • Backup: The cloud is perceived to be a viable replacement for traditional backup solutions, boasting greater redundancy and opportunities for cost savings. Cloud backup is hotly contested in both the consumer and enterprise markets.
    • In the consumer market, cloud backup services like Dropbox, Microsoft SkyDrive and Google Drive sync part of your local hard drive with the cloud. These pay-for-use services are on the rise, with Dropbox hosting data for in excess of 100 million users within four years of launching their service.
    • In the enterprise space, Gartner’s Magic Quadrant for enterprise backup solutions featured several pure-play cloud backup providers including Asigra, Acronis and i365. Even leading providers such as CommVault and IBM have launched cloud-based backup solutions. Amazon’s recently launched Glacier service provides a cost-effective backup tier for around $0.01 per gigabyte per month.
      [Figure: Gartner Magic Quadrant for enterprise backup]
  • File Sharing: File sharing services allow users to post files online and then share them with other users via a combination of Web links or apps. Services like MediaFire, Dropbox and Box offer a basic cloud backup solution that provides collaboration and link-sharing features. On the other end of the spectrum, full-blown collaboration suites such as Microsoft’s Office 365 and Google Apps feature real-time document editing and annotation services.
  • Data Synchronization (between devices): Data synchronization providers such as Apple’s iCloud, as well as a host of applications including the productivity app Evernote, allow users to keep files, photos and even music automatically synchronized across an array of devices (desktop, phone, tablet, etc.).
  • Content Distribution: Cloud content distribution network (CDN) services are large networks of servers distributed across datacenters over the Internet. At one point or another, we’ve all used CDNs such as Akamai to enhance our Web browsing experience. Cloud providers such as Microsoft (the Windows Azure CDN) and Amazon (CloudFront) offer affordable CDN services for serving everything from static files and images to streaming media to a global audience.
  • Enterprise Content Management: Companies are gradually turning to the cloud to manage organizational compliance requirements such as eDiscovery and search. Vendors such as HP Autonomy and EMC provide services that feature secure encryption and de-duplication of data assets as well as data lifecycle management.
  • Cloud Application Storage: The trend towards hosting applications in the cloud is driving innovations in how we consume and utilize storage. Leading the charge are large cloud service providers such as Amazon and Microsoft, who have developed cloud storage services to meet specific application needs.
    • Application Storage Services: Products like Amazon Simple Storage Service (S3) and Microsoft Windows Azure Storage accounts support storage in a variety of formats (blob, queue and table data) and scale to very large sizes (up to 100 TB volumes). Storage services are redundant (at least 3 copies of each bit stored) and can be accessed directly via HTTP, XML or a number of other supported protocols; a short sketch of this kind of access follows this list. Storage services also support encryption on disk.
    • Performance Enhanced Storage: Performance-enhanced storage emulates storage running on a SAN. Products like Amazon Elastic Block Store provide persistent, block-level network-attached storage that can be attached to running virtual machines; in some cases VMs can even boot directly from these volumes. Users can also provision performance for these volumes in terms of IOPS.
    • Data Analytics Support: Innovative distributed file systems that support super-fast processing of data have been adapted to the cloud. For example, the Hadoop Distributed File System (HDFS) manages and replicates large blocks of data across a network of computing nodes, to facilitate the parallel processing of Big Data. The Cloud is uniquely positioned to serve this process, with the ability to provision thousands of nodes, perform compute processes on each node and then tear down the nodes rapidly, thus saving huge amounts of resources. Read how the NASA Mars Rover project used Hadoop on Amazon’s AWS cloud here.
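
As a concrete illustration of these application storage services, here is a minimal sketch of writing and reading an object in Amazon S3 using Python’s boto3 SDK. The bucket name and object key are hypothetical placeholders, and AWS credentials are assumed to be configured in the environment.

```python
import boto3

# Minimal sketch of object storage access through S3's API via the boto3 SDK.
# The bucket name and key are hypothetical placeholders; behind this single
# call, the service stores the object redundantly across multiple facilities.
s3 = boto3.client("s3")

# Upload a local file as an object.
s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")

# Read the object back over HTTPS.
response = s3.get_object(Bucket="my-example-bucket", Key="backups/report.pdf")
data = response["Body"].read()
print(len(data), "bytes retrieved")
```

The same object can also be addressed directly over HTTP(S), which is what makes these services straightforward to consume from web applications and CDNs alike.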

Storage Architecture Basics

[Figure: Generic storage architecture]

So how do these cloud-based services run? If we were to peek under the hood, we would see a basic architecture pretty similar to the diagram above. All storage architectures comprise a number of layers that work together to provide users with a seamless storage service. The different layers of a cloud storage architecture are listed below:

  • Front End: This layer faces end users and typically exposes APIs that provide access to the storage. New protocols are constantly being introduced to increase the supportability of cloud systems, including Web Service front-ends using REST principles, file-based front ends and even iSCSI support. For example, a user can use an app running on their desktop to perform basic functions such as creating folders, uploading and modifying files, as well as defining permissions and sharing data with other users. Examples of access methods and sample providers are listed below:
    • REST APIs: REST, or Representational State Transfer, is a stateless Web architecture model built upon communications between clients and servers. Examples include Microsoft Windows Azure Storage and Amazon Web Services Simple Storage Service (S3).
    • File-based Protocols: Protocols such as NFS and CIFS are supported by vendors like Nirvanix, Cleversafe and Zetta*.
  • Middleware: The middleware or Storage Logic layer supports a number of functions, including data deduplication and reduction, as well as the placement and replication of data across geographical regions.
  • Back End: The back end layer is where the actual physical hardware sits; read and write instructions are passed down to it through the Hardware Abstraction Layer.
  • Additional Layers: Depending on the purpose of the technology, there may be a number of additional layers:
    • Management Layer: This may provide scripting and reporting capabilities to enhance automation and provisioning of storage.
    • Backup Layer: The cloud back end layer can be exposed directly to API calls from snapshot and backup services. For example, Amazon’s Elastic Block Store (EBS) service supports an incremental snapshot feature (a short sketch follows this list).
    • DR (Virtualization) Layer: DR service providers can attach cloud storage to a hypervisor, enabling the data to be accessed by virtual hosts that are activated in a DR scenario. For example, the i365 cloud storage service automates the process of converting backups of server snapshots into a virtual DR environment in minutes.
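
Because the backup layer is exposed through the same public APIs as the rest of the stack, a few lines of code are enough to drive it. Below is a minimal sketch of requesting an EBS snapshot with Python’s boto3 SDK; the volume ID is a hypothetical placeholder and credentials are assumed to be configured in the environment.

```python
import boto3

# Minimal sketch of the backup layer's API surface: requesting an EBS snapshot.
# The volume ID is a hypothetical placeholder.
ec2 = boto3.client("ec2")

# EBS snapshots are incremental: only blocks that changed since the previous
# snapshot of this volume are stored.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly snapshot of application data volume",
)
print(snapshot["SnapshotId"], snapshot["State"])
```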

Conclusion:

This brief post provided a quick snapshot of cloud storage, its various uses and a number of common applications for storage in the cloud. If you’d like to read more, please visit some of the links provided below.

Roadchimp, signing out! Ook!

References:

* Research Paper on Cloud Storage Architectures here.
Read a TechCrunch article on the growth of Dropbox here.
Informationweek Article on Online Backup vs. Cloud Backup here.
Read more about IBM Cloud backup solutions here.
Read about Commvault Simpana cloud backup solutions.

Technology in Government – Cloud Computing

Executive Brief

A number of governments have implemented roadmaps and strategies that ultimately require their ministries, departments and agencies to default to Cloud computing solutions first when evaluating IT implementations. In this article, we evaluate the adoption of cloud computing in government and discuss some of the positive and negative implications of moving government IT onto the cloud.

Latest Trends

In this section, we look at a number of cloud initiatives that have been gaining traction in the public sector:

  • Office Productivity Services – The New Zealand Government has identified office productivity services as the first set of cloud-based services to be deployed across government agencies. Considered to be low-hanging fruit, and fueled by successes in migrating perimeter services like anti-spam onto the cloud, many organizations see email and collaboration as a natural next step in cloud adoption. Vendors leading the charge include Microsoft, whose Office 365 for Government counts successful deployments among federal agencies like the USDA, Veterans Affairs, FAA and the EPA as well as the Cities of Chicago, New York and Shanghai. Other vendor solutions include Google Apps for Government, which supports the US Department of the Interior.
  • Government Cloud Marketplaces – A number of governments have signaled the need to establish cloud marketplaces, where a federated marketplace of cloud service providers can support a broad range of users and partner organizations. The UK government called for the development of a government-wide Appstore, as did the New Zealand Government in a separate cabinet paper on cloud computing in August 2012. The US government has plans to establish a number of cloud services marketplaces, including the GSA’s info.apps.gov and the DOE’s YOURcloud, a secure cloud services brokerage built on Amazon’s EC2 offering. (link) The image below shows the initial design for the UK government App Store.
    [Image: Initial design for the UK government App Store]
  • Making Data publicly available – The UK Government is readily exploiting opportunities to make available the terabytes of public data that can be used to develop useful applications. A recent example is the release of Met Office UK weather information to the public via Microsoft Azure’s cloud hosting platform. (link)
  • Government Security Certification – A 2012 Government Cloud Survey conducted by KPMG listed security as governments’ greatest concern when it comes to cloud adoption, and noted that governments are taking measures to manage it. For example, the US General Services Administration subjects each successful cloud vendor to a battery of tests that includes an assessment of access controls.


Canadian Government Cloud Architectural Components

Strategic Value

The strategic value of cloud computing in government can be summed up in a number of key elements. We’ve listed a few that appear at the top of our list:

  • Enhancing agility of government – Cited as a significant factor in cloud adoption, cloud computing promises rapid provisioning and elasticity of resources, reducing turnaround times on projects.
  • Supporting government policies for the environment – Reduced data center spending and lower energy consumption for cooling yield tangible environmental benefits in terms of reduced greenhouse gas emissions and potential reductions in carbon credit allocations.
  • Enhancing Transparency of government – Cloud allows the development of initiatives that can make government records accessible to the public, opening up tremendous opportunities for innovation and advancement.
  • Efficient utilization of resources – By adopting a pay-for-use approach towards computing, stakeholders are encouraged to architect their applications to be more cost effective. This means that unused resources are freed up to the common pool of computing resources.
  • Reduction in spending – Our research indicated that technology decision makers do not consider this element a significant driver of the move to cloud computing; however, some of the numbers being bandied about in terms of cost savings are significant (billions of dollars) and can appeal to any constituency.

Positive Implications

We’ve listed a number of positive points towards cloud adoption. These may not be relevant in every use case, but worthwhile for a quick read:

  • Resource Pooling – leads to enhanced efficiency, reduced energy consumption and cost savings from economies of scale
  • Scalability – Unconstrained capacity allows for more agile enterprises that are scalable, flexible and responsive to change
  • Reallocation of human resources – Freed up IT resources can focus on R&D, designing new solutions that are optimized in cloud environments and decoupling applications from existing infrastructures.
  • Cost containment – Cloud computing requires the adoption of a ‘you pay for what you use’ model, which encourages thrift and efficiency. The transfer of CAPEX to OPEX also smooths out cash-flow concerns in an environment of tight budgets.
  • Reduce duplication and encourage re-use – Services designed to meet interoperability standards can be advertised in a cloud marketplace and become building blocks that can be used by different departments to construct applications
  • Availability – Cloud architecture is designed to be independent of the underlying hardware infrastructure and promotes scalability and availability paradigms such as homogeneity and decoupling
  • Resiliency – The failure of one node of a cloud computing environment has no overall effect on information availability

Negative Implications

A sound study should also include a review of the negative implications of cloud computing:

  • Bureaucratic hindrances – when transitioning from legacy systems, data migration and change management can slow down the “on demand” adoption of cloud computing.
  • Cloud Gaps – Applications and services with specific requirements that cannot be met by the cloud need to be planned for, to ensure that they do not become obsolete.
  • Risks of confidentiality – Isolation has been a long-practiced strategy for securing disparate networks. If you’re not connected to a network, there’s no risk of threats getting in. A common cloud infrastructure runs the risk of exploitation that can be pervasive since all applications and tenants are connected via a common underlying infrastructure.
  • Cost savings do not materialize – The cloud is not a silver bullet for cost savings. We need to develop cloud-aligned approaches towards IT provisioning, operations and management. Applications need to be decoupled and re-architected for the cloud. Common services should be used in order to exploit economies of scale; applications and their underlying systems need to be tweaked and optimized.


Security was cited as a major concern (KPMG)

Where to start?

There is considerable research that indicates government adoption of cloud computing will accelerate in coming years. But to walk the fine line of success, what steps can be taken? We’ve distilled a number of best practices into the following list:

[Figure: US Government cloud computing roadmap]

  1. Develop Roadmaps: Before governments can reap all of the benefits that cloud computing has to offer, they must first move along a continuum towards adoption. For that very purpose, a number of governments have developed roadmaps to chart a course of progression towards the cloud. Successful roadmaps featured the following components:
    • A technology vision of what cloud computing strategy success looks like
    • Frameworks to support seamless implementation of federated community cloud environments
    • Confidence in Security Capabilities – Demonstration that cloud services can handle the required levels of security across stakeholder constituencies in order to build and establish levels of trust.
    • Harmonization of Security requirements – Differing security standards will impede and obstruct large-scale interoperability and mobility in a multi-tenanted cloud environment; therefore, a common overarching security standard must be developed.
    • Management of Cloud outliers – Identify gaps where Cloud cannot provide adequate levels of service or specialization for specific technologies and applications, and identify strategies to deal with these outliers.
    • Definition of unique mission/sector/business Requirements (e.g. 508 compliance, e-discovery, record retention)
    • Development of cloud service metrics such as common units of measurement in order to track consumption across different units of government and allow the incorporation of common metrics into SLAs.
    • Implementation of Audit standards to promote transparency and gain confidence
  2. Create Centers of Excellence: Cloud computing reference architectures (e.g. the NIST Reference Architecture), business case templates and best practices should be developed, and cloud service vendors should map their offerings to them so that it is easier to compare services.
  3. Cloud First policies: Implement policies that mandate that all departments across government consider cloud options first when planning new IT projects.

Conclusion

The adoption of cloud services holds great promise, but because objectives such as economies of scale can only be achieved through wide-spread adoption, a comprehensive plan combined with standardization and transparency becomes an essential element of success.

We hope this brief has been useful. Ook!

Useful Links

Microsoft’s Cloud Computing in Government page
Cisco’s Government Cloud Computing page
Amazon AWS Cloud Computing page
Redhat cloud computing roadmap for government pdf
US Government Cloud Computing Roadmap Vol 1.
Software and Information Industry updates on NIST Roadmap
New Zealand Government Cloud Computing Strategy link
Australian Government Cloud Computing Strategic Direction paper
Canadian Government Cloud Computing Roadmap
UK Government Cloud Strategy Paper
GCN – A portal for Cloud in Government
Study – State of Cloud Computing in the public sector

Technological Transformation in Government

[Photo: Obama inauguration]

Photo (c) A/P Sandy Huffaker

Foreword

We live at an exciting juncture, when the world is undergoing massive and visible transformation. The Internet has given us instant access to information and it has affected how we do things on a global scale. Our children go to school and interact with knowledge in ways that we could never have imagined before, while demand and supply interact within virtual, global marketplaces where consumers are informed and empowered and suppliers are intelligent and efficient. Yet there is no place where the impacts of technology are more visibly felt than in the Public Sector, where technology may be deployed to serve an informed electorate with high expectations, demanding services and efficiency at an ever-accelerating pace.

Brief

In this series of articles, I will explore a number of contemporary issues that technology decision makers in government are concerned with, and also look into innovative, viable solutions that have been successfully implemented in a number of countries to solve or address these concerns.

  • Cloud Computing – While cloud technology promises to deliver significant cost savings from economies of scale and cut down on deployment costs, cloud has traditionally been shunned by governments for a number of reasons, including security and confidentiality. In recent years, a number of vendors have developed Government Clouds that are designed to integrate with existing government networks and systems, while meeting government needs for compliance and security.
  • Big Data – Big Data refers to data sets that are so large that they become difficult to manage using traditional tools. With the proliferation of e-government initiatives, governments worldwide face significant challenges in managing vast repositories of information.
  • Open Source and Interoperability – We examine government’s ability to adopt and enhance open standards that encourage interoperability between different systems and establish an environment of equal opportunity among technology vendors, partners and end-users.
  • Digital Access – The Internet has redefined access to knowledge and learning and it is a priority for governments to ensure that students from all walks of life are not limited in opportunity due to poor access to the web. Here we explore how technology is transforming big cities and communities alike in accessing the web.
  • Mobility and Telecommuting – Governments worldwide are embracing telecommuting and flex-time work policies as a viable long-term solution to reducing costs and energy consumption. We explore technologies that foster collaboration and productivity for a mobile workforce.
  • Cyber Security – With the call for increased vigilance against acts of cyber terrorism, we explore the extent that governments are prepared to do in order to maintain Confidentiality, Integrity and Availability amidst an increasingly connected ecosystem of public-sector employees, vendors, contractors and other stakeholders.
  • Open Government – Governments are heeding the call for greater transparency, public participation and collaboration by making information more readily available on government websites and also providing the public with greater access for providing feedback and commentary. This has led to the adoption of new technologies and innovations to ensure that confidentiality is not sacrificed in the light of new policies.
  • Connected Health and Human Services – Case management, health records management and health benefits administration are but a few components of government services that many lives depend on to function effectively and efficiently. We will explore technologies that are transforming these services.
  • Accessibility – In an age of information workers, support for differently abled employees has become a source of competitive advantage, enabling governments to tap into additional segments of the workforce.
  • Defense and Intelligence – Technology has long played a vital role in ensuring that vital battlefield decisions can be made with timely access to information; communications occur unimpeded in times of emergency; and cost efficiencies can be maximized in times of tightening budgets.

Dimensions of Exploration

Essential to any well-thought-out study is the consideration of important attributes such as long-term implications, return on investment and practicality of implementation. Therefore, for each of the issues listed above, our analysis will include the following components:

  • Executive Brief
  • Latest Trends
  • Strategic Value
  • Positive Implications
  • Negative Implications
  • Proposed Solutions
  • Reference Implementations
  • Useful Links

Topics

An individual article has been dedicated to each of the following topics; please click on each one for further reading:

  • Cloud Computing
  • Big Data
  • Open Source and Interoperability
  • Digital Access
  • Mobility and Telecommuting
  • Cyber Security
  • Open Government
  • Connected Health and Human Services
  • Accessibility
  • Defense and Intelligence

* This series is a work in progress, and does not support a particular thesis or ideal. It simply reflects research of the solutions that have been devised to solve frequently unique problems and does not reflect an endorsement of a particular technology or ideal.

Why write about Government?

I’ve spent a significant amount of time consulting for government and in truth, nothing has given me greater pleasure than to see the benefits of technology impact my selfless friends and colleagues who have made the altruistic decision to stay in government in order to serve the greater good. These unsung heroes maintain the systems that support our health, education, defense, civil, social and legal infrastructure and many other essential functions of government, which many lives may depend on.

Cloud Architectures: The Strategy behind homogeneity

Extract

So what is homogeneity? Homogeneity refers to keeping things identical or non-differentiable from one another on purpose. For example, Cloud Service Providers create homogeneous infrastructure on a massive scale by deploying commodity hardware in their datacenters, enjoying lower incremental costs which they pass on to the consumer via lower prices. Technology architects can similarly design their applications and services to run over these homogeneous computing nodes or building blocks, facilitating the horizontal scaling capability commonly referred to as elastic computing. In this article, we will explore how homogeneity has been in use since long before the era of modern technology and derive meaningful takeaways from each particular adaptation of homogeneity.

Agility – Military Tactics

From the dawn of the greatest ancient civilizations, battles were fought and won by ambitious generals relying on the strength of their strategies and their ability to quickly adapt tactics to changing battlefield situations. The conquering Romans provide a fascinating study in the implementation of homogeneity and standardization at the fundamental unit level. A Roman Legion was subdivided into 10 units or cohorts, with each cohort further subdivided into six centuries, numbering 80 men on average.


While each century served as an individual unit in battle, with autonomy of movement under the command of a centurion, the true tactical advantage of the Roman Legion was seen in the combination of these units on the battlefield, known as formations. Each formation presented a different pattern of deploying military units, optimized for various scenarios and terrains, including troop movement and battle formations depending on the relative strength of one’s army and that of opposing forces. For example, the Wedge formation highlighted below drew the strongest elements of the force into the center, which could be used to drive forward through opposing forces. Similarly, the Strong Right Flank formation provided tactical strength versus an opposing army under the principle that a strong right flank could quickly overrun an opposing force’s left flank, since the enemy’s left-hand side would be encumbered by shields and less agile against sideways strikes.

[Image: Roman Legion formations]

Under the watchful eye of a skilled and experienced commander, the Roman Legion was a force to be reckoned with, but this ultimately relied on the ability of each individual unit to be deployed rapidly in battle and to act reliably in performing its role. The Romans dominated the battlefield, besting even the famed Greek Phalanx, due to the agility with which the Legion could change formations in the heat of battle. Our takeaway here is that homogeneous building blocks provide scalability and flexibility in response to rapidly changing situations.

Operational Efficiency – Southwest Airlines


Southwest Airlines has achieved great commercial success in part due to its strategy of operating a homogeneous fleet of Boeing 737 aircraft. The utilization of a single build of aircraft greatly enhances operational efficiencies in terms of technical support, training, holding spare parts and even route planning, since passenger and baggage loads are fairly consistent across each plane. It’s also less complicated to plan for fleet growth and to allocate resources in anticipation of future demand with a homogeneous unit. The task of staffing flights is also greatly simplified, since aircrews are trained to operate one type of aircraft across several variants, and scheduling crews to support multiple routes along Southwest’s point-to-point system becomes a much simpler endeavor. Southwest Chairman Gary Kelly announced plans in April 2012 to acquire 74 Boeing 737-800s by 2013 in order to augment the existing fleet of 737-700s. This expansion represents an incremental upgrade to existing fleet capacity, in response to greater demand across Southwest’s expanding network and also to capture greater economy amidst rising fuel costs. An important takeaway from Southwest Airlines is that homogenization leads to operational efficiencies and also drives competitive advantage, especially in highly price-sensitive markets. Most importantly, homogenization has tangible benefits by directly enhancing profits.

Build Standardization – McDonald’s Fast Food Restaurants


Probably one of the most classic examples of enhanced operational efficiency derived from developing a homogeneous product, this iconic fast-food provider was founded on the principles of consistency, homogeneity and ease of preparation. While in recent years McDonald’s has stepped up its efforts to localize its menu to suit local palates, over 30% of revenues are driven by sales of core items that include the Big Mac, hamburger, cheeseburger, Chicken McNuggets and the world-famous french fries, as stated by CEO Jim Skinner during a 2011 earnings call. Sticking to a shorter, standardized menu was part of the company’s push to adhere to stringent quality standards and became a crucial factor in building operational efficiencies in resource procurement during the company’s international expansion in the 1970s. McDonald’s was specific and exacting in its standards when expanding into new markets, to the point of defining the genetic variety of potatoes to use in its french fries. This is an important takeaway, particularly from the perspective of a business that is expanding into international markets and whose products are not exactly exportable from one country to another. Standardization and homogenization of a product catalog makes it easier to decide early on whether to source materials locally or to bring your own.

Resource Sharing – Automobile Twinning


A long-perceived benefit of consolidating automobile manufacturers into several large corporations that govern the production of numerous brands, such as America’s General Motors Group and Ford Motor Company as well as Germany’s Volkswagen Group, lies in the ability to utilize a common drivetrain or chassis across different brands or makes of automobiles. For example, the Ford Escape, Mercury Mariner and Mazda Tribute are all built on the same chassis and share a large proportion of their components. The differences in these cars tend to be aesthetic in nature, since they’re designed to appeal to different consumer segments; nevertheless, the financial benefits to the automobile manufacturer are far more tangible. By getting design teams across various brands to collaborate at an early stage in a car’s development, manufacturers can assemble a common design platform whose elements can later be customized to fulfill individual brand attributes. The result is that manufacturers are able to reap huge benefits from doubling or in some cases tripling their economies of scale, as well as accelerating returns on their R&D investment dollars. Our takeaway here is that common components or build elements can be identified and jointly developed as shared resources to prevent duplication of effort and wastage of resources. Service Oriented Architectures that subscribe to a shared services model are a great example of this philosophy in action.

Conclusion

In this article, we explored how the fundamental cloud architecture principle of homogenization provided agility to the Roman Legions, operating efficiencies to Southwest Airlines, standardization to McDonald’s restaurants, and cost savings and improved ROI for major automotive manufacturers. Examples abound in the world that showcase the rational concepts behind building applications in the Cloud, and for this very reason the Cloud is an exciting and fundamentally compelling evolution of computing for our generation: it makes sense! Thank you for reading.

About RoadChimp

RoadChimp is a published author and trainer who travels globally, writing and speaking about technology and how we can borrow paradigms from other industries to build upon our understanding of rapidly emerging technologies. The Chimp started off his technology career by building one of the first dot-com startups of its type in Asia and subsequently went on to gain expertise in large-scale computing in North America, the Caribbean and Europe. Chimp is a graduate of Columbia Business School and

References

Article on Roman Military Tactics (http://romanmilitary.net/strategy/legform/)

Blog Article on Southwest Airlines (http://blog.kir.com/archives/2010/03/the_genius_of_s.asp)

McDonald’s menu breakdown of profitability (http://adage.com/article/news/mcdonald-s-brings-u-s-sales-years/232319/)

FTC study on McDonald’s international expansion (http://www.ftc.gov/be/seminardocs/04beyondentry.pdf)

Report on Automotive Twinning (http://www.edmunds.com/car-buying/twinned-vehicles-same-cars-different-brands.html#chart)

Cloud Architectures: Session Handling

Introduction

Deploying applications to the cloud requires a critical rethink of how applications should be architected and designed to take advantage of the bounty that the cloud has to offer. In many cases, this requires a paradigm shift in how we design the components of our applications to interact with each other. In this post, we shall explore how web applications typically manage session state and how cloud services can be leveraged to provide greater scalability.

Web Application Tiers

It is common practice to design and deploy scalable web applications in a 3-tiered configuration, as follows:

1. Web Tier: This tier consists of anywhere from a single web server to a large number of identically configured ones, primarily responsible for authenticating and managing requests from users as well as coordinating requests to subsequent tiers in the web architecture. Cloud-enabled web servers commonly utilize the HTTP protocol and the SOAP or REST styles to facilitate communication with the Service Tier (a minimal sketch of this interaction follows the list below).

2. Service Tier: This tier is responsible for managing business logic and business processing. The Service Tier comprises a number of identical stateless nodes that host services responsible for performing a specific set of routines or processes.

3. Data Tier: The data tier hosts business data in a number of structured or unstructured formats; most cloud providers support a variety of storage options, including relational databases, NoSQL and simple file storage, commonly known as BLOBs (Binary Large OBjects).

4. Load Balancing (Optional): An optional tier of load balancers can be deployed on the perimeter of the Web Tier to balance requests from users and distribute load among its servers.
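
To ground this tiering, here is a minimal sketch of a web-tier handler that fields a user request and delegates business processing to a stateless service tier over REST, using only Python’s standard library. The internal service URL is a hypothetical placeholder.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical internal address of the stateless service tier.
SERVICE_TIER_URL = "http://service-tier.example.internal:9000"


class WebTierHandler(BaseHTTPRequestHandler):
    """Web tier node: accepts the user's request, then delegates business
    processing to the service tier via a REST call."""

    def do_GET(self):
        # Forward the request path to the service tier and relay its reply.
        with urllib.request.urlopen(SERVICE_TIER_URL + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Any number of identically configured copies of this node can run
    # behind the load balancing tier.
    HTTPServer(("0.0.0.0", 8080), WebTierHandler).serve_forever()
```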

Managing Session State

Any web application that serves users in a unique way needs an efficient and secure method of keeping track of each user’s requests during an active session. For example, an e-commerce shopping site that provides each user with a unique shopping cart needs to be able to track the individual items in each user’s cart for the duration of their active web session. More importantly, the web application that serves the user needs to be designed to be resilient to failures and potential loss of session data. In a Web Services architecture, there are a number of methods that can be employed to manage the session state of a user. We shall explore the most common methods below:

  • Web Tier Stateful (Sticky) Sessions: A web application can be designed such that the active web server node that a user gets redirected to stores the session information locally, and all future requests from the user are served by that node. This means that the server node becomes stateful in nature. Several disadvantages of this design are that the node serving the user becomes a single point of failure, and that any new nodes added to the collection of web servers can only share the load of subsequently established sessions, since existing sessions continue to be maintained on existing nodes. This severely limits the scalability of the design and its ability to evenly distribute load.
  • Web Tier Stateless Session Management: This design solves several limitations of the previous design by storing user session state externally, where it can be referenced by any of the connected web server nodes. An efficient way to keep track of small amounts of session data is a small cookie that stores a Session ID for the individual user. This Session ID can serve as a pointer for any inbound request between the user and the web application. Cloud service providers offer various types of storage to host session state data, including NoSQL, persistent cloud storage and distributed web cache storage. For example, a web-tier request could use AJAX to call REST services that in turn pull JSON data relating to an individual user’s session state (a minimal sketch follows this list).
  • Service Tier Stateless Session Management: In most web architectures, the Service Tier is designed to be insulated from user requests. Instead, the Web Tier acts as a proxy or intermediary, allowing the Service Tier to run in a trusted environment. Services running in this tier do not require state information and are completely stateless. Due to this statelessness, the service tier enjoys the benefits of loose coupling, which allows individual services to be written in different languages such as Java, Python, C#, F# or Node.js, based on the proficiency of the development teams, while still being able to communicate with each other.
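
To make stateless session management concrete, here is a minimal sketch that keeps session state in an external Redis cache, keyed by the Session ID carried in the user’s cookie. The host name, key format and TTL are illustrative assumptions, not any specific provider’s API.

```python
import json
import uuid

import redis

# Session state lives in an external store (Redis here), so any web-tier node
# can serve any request. Host, key format and TTL are illustrative assumptions.
store = redis.Redis(host="session-cache.example.internal", port=6379)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes


def create_session(cart_items):
    """Start a session; the returned ID is what goes into the user's cookie."""
    session_id = uuid.uuid4().hex
    state = {"cart": cart_items}
    store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(state))
    return session_id


def load_session(session_id):
    """Any stateless web node can rebuild the user's state from the store."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```

Because no node holds session data locally, web servers can be added or removed freely, and the failure of any single node does not lose a user’s cart.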

Summary

Stateless session management allows us to build scalable compute nodes within a web application architecture that are easy to deploy and manage. It reduces single points of failure and takes advantage of the scalability and resiliency offered by cloud service providers to host session state data.

Exchange 2013 Sample Architecture Part 2: High-level Architectural Design Document

Overview:

This section provides an introduction to the key elements of the Exchange 2013 architectural solution. It provides a high-level solution overview and is suitable for all technical project stakeholders. Excerpts of the final design document are listed under this post and the full High Level Design document can be downloaded here: RoadChimp Sample Architectural Doc (ADOC) v1.1

1. Messaging Infrastructure function

The Messaging Infrastructure serves the primary function of providing electronic mail (E-mail) functionality to Chimp Corporation. The messaging infrastructure supports E-mail access from network-connected computers and workstations as well as mobile devices. E-mail is a mission-critical application for Chimp Corp and serves as an invaluable communications tool that increases efficiency and productivity, both internally to the organization and externally to a variety of audiences. As a result, it is of paramount importance for Chimp Corp to maintain a robust infrastructure that will meet present and future messaging needs.

Key requirements of the messaging infrastructure are as follows:

  • Accommodate service availability requirements
  • Satisfy archiving requirements
  • Satisfy growth requirements
  • Provide required business continuity and disaster recovery capabilities

1.A. About IT Services (ITS)

The IT Services Organization is responsible for managing the IT environment for the Chimp Corp as well as ensuring adherence to published standards and operational compliance targets throughout the enterprise.

1.B. Service Level Agreement (SLA)

The Email infrastructure is considered mission critical and, therefore, has an SLA requirement of 99.99% availability.
The full SLA for the messaging environment can be found in the document <Link to SharePoint: Messaging SLA> 

1.C.  Locations

The messaging infrastructure is hosted from two separate datacenters:

  • Datacenter A (DCA)
    Chimp Center Prime
    1 Cyber Road,
    Big City
  • Datacenter B (DCB)
    Chimp Center Omega
    10 Jungle Way,
    Banana Town

The messaging infrastructure is supported by the IT Services Support Organization located at:

  • Chimp Corp Headquarters
    Chimp Center Prime
    Bldg 22, 1 Cyber Road,
    Big City

1.D.     E-mail User Classifications

The primary users of the messaging system are Chimp Corp employees. The user base is divided into two groups as follows:

  •     Exec: users performing Senior or Critical corporate functions
  •     Normal: the rest of the user population

2. Existing Platform

This section of the Asset document provides an overview of the present state of the asset, as well as a chronological view of changes based on organizational or technological factors.

2.A.     Existing Exchange 2003 design

A third-party consulting company performed the initial implementation of the messaging environment in 2000. The messaging platform was Microsoft Exchange 2003 and Windows 2003 Active Directory. The diagram below provides a representation of the existing design blueprint.


Fig. 1 Existing Messaging Environment

A single unified Active Directory Domain namespace chimpcorp.com was implemented in a Single Domain, single Forest design.

2.B. Change History

Over the years, the Chimp Corp messaging environment has undergone various changes to maintain service levels and improve functionality. The timeline below shows the changes over time.

Fig. 2 Chimp Corp Messaging Infrastructure Timeline

2.B.1   Initial Implementation

The Exchange 2003 messaging infrastructure was implemented by IT Services in 2005 and the entire user base was successfully migrated over to Exchange 2003 by September 2005.

2.B.2   Linux Virtual Appliances deployed for Message Hygiene

A decision was made by IT to deploy a message hygiene environment for the company on Linux-based virtual appliances. This change was scheduled as maintenance and was executed in early 2009.

2.B.3   Additional Datacenter Site (Omega)

In order to improve infrastructure availability and to support additional growth of the corporate environment, a second datacenter site, codenamed Omega was commissioned and fully completed by March of 2009.

2.B.4   Two Exchange Mailbox Clusters (A/P) deployed in the Omega Datacenter Site

To improve the availability of e-mail for users and also to meet plans for storage and user growth, two additional Exchange Mailbox Servers were deployed in Datacenter Omega (DCB).

2.B.5   Third-party archiving solution

A third party archiving solution was deployed by IT Services in 2010 as part of efforts to mitigate growth of the Exchange Information Stores, based on recommendations from their primary technology vendor. The archiving solution incorporates a process known as e-mail stubbing to replace messages in the Exchange Information Stores with XML headers.

2.B.6   Acquisition by Chimp Corp

After the company was acquired by Chimp Corp in September 2011, immediate plans were laid out to perform a technology refresh across the entire IT infrastructure.

2.B.7   Active Directory 2008 R2 Upgrade

The Windows Active Directory Domain was updated to version 2008 R2 in native mode in anticipation of impending upgrades to the Messaging infrastructure. The replacement Domain Controllers were implemented as Virtual Machines hosted in the Enterprise Virtual Server environment running VMWare vSphere 5. This change was completed in March 2012.

2.C. Existing Hardware Configuration

The current hardware used in the messaging platform consists of the following elements:

2.C.1   Servers

Existing server systems comprising the messaging environment include:

    • 12 x HP DL380 G4 servers at DCA with 2–4 GB of RAM
    • 10 x HP DL380 G4 servers at DCB with 2–4 GB of RAM

2.C.2   Storage characteristics

Exchange storage used for databases, backups, transaction logs and public folders has been provisioned on:

    • 2 TB of FC/SAN-attached storage provisioned for 5 Exchange Storage Groups with 21 databases and transaction logs
    • 2 TB of iSCSI/SAN-attached storage for archiving

2.D. Network Infrastructure

The Chimp Corp email infrastructure network has two main physical locations at the DCA and DCB datacenter sites. These are currently connected via the Chimp Corp LAN/WAN. The core switches interconnecting all hardware are Cisco 6500 Series Enterprise class switches.

2.E.  Present Software Configuration

Software and licenses presently in use include:

  • Microsoft Windows 2003 Standard
  • Microsoft Windows 2003 Enterprise
  • Microsoft Exchange 2003 Standard
  • Microsoft Exchange 2003 Enterprise
  • Third Party SMTP Appliances
  • A Stub-based third-party Email Archiving Tool

3. Messaging Infrastructure Requirements

The design requirements for the Exchange 2013 messaging environment have been obtained from the project goals and objectives, as listed in the Project Charter for the E13MAIL Project.

The primary objective for the E13MAIL Project is to ensure continued reliability and efficient delivery of messaging services to users and applications connecting to Chimp Corp from a variety of locations. Stated design goals are to increase performance and stability and to align the operational capabilities of the messaging environment with industry best practices.

The requirements/objectives for the messaging infrastructure are:

  • Redundant messaging solution deployed across two datacenter locations: DCA and DCB.
  • Support for audit and compliance requirements
  • High Availability (99.99%)
  • Monitoring of services and components
  • Accurate configuration management for ongoing support
  • Adherence to Industry Best Practices for optimal support by vendors and service delivery organizations
  • Reliable Disaster-Recoverable backups, with object level recovery options
  • Message Archiving functionality with a maximum retention period of 7 years

4. Design Components

The primary messaging solution is to deploy a new Exchange 2013 environment that spans Chimp Corp’s physical data center locations and extends into Microsoft’s Office 365 cloud to take advantage of the latest user productivity and collaboration features of Microsoft Office 2013.

The main goals for this solution are:

  • Minimize end-user impact: Minimizing the end-user impact is a key goal for Chimp Corp. Significant effort must be made to ensure that the transition of all e-mail related services is seamless to the end-user.
  • Reliable delivery of services: The messaging environment is a mission-critical component of Chimp Corp’s IT infrastructure and adheres to strict Change Management practices. The solution must be able to integrate with existing operational and change processes.
  • Longevity of solution: The new messaging solution must endure beyond the initial implementation as it evolves into a production state. This requires the necessary attention to ensuring that operational knowledge is transferred to IT Services technical teams such that they can maintain uptime requirements.

The individual design components were subjected to a stringent evaluation process that included the following design criteria:

  •     Costs of Ownership
  •     Technological engineering quality
  •     Scalability
  •     Fault Tolerance / Reliability
  •     Industry best practices
  •     Supportability
  •     Ease of administration
  •     Compatibility with existing systems
  •     Vendor specifications

4.A. Hardware technology

IT Services researched the server solutions from a number of hardware vendors and made a final decision in favor of HP, Brocade and Cisco vendor equipment.

4.A.1   Server hardware

The server platform used is the eighth-generation (G8) HP Blade 400-series server, an Intel-based server system. The CPUs in these systems are standardized on Intel Xeon E5-2640 processors; these are hex-core processors running at 2.5 GHz. The servers are equipped with 128 GB of memory to accommodate their specific functions. The blade servers are provisioned in HP BladeSystem c7000 enclosures.

4.A.2   Storage hardware

To accommodate the storage requirements, two storage arrays are implemented. The primary array is an HP EVA 6400 class Storage Area Network, equipped with 25 TB of RAW storage and used for on-line, active data. The secondary array is an HP P2000 G3 MSA class storage area network, equipped with 15 TB of RAW storage and used for secondary storage such as archives and backups.

4.A.3   Interconnect technology

HP’s Virtual Connect technology is used to accommodate network connectivity to both the storage network and the data networks. The virtual connect technology acts as a virtual patch panel between uplink ports to the core switching infrastructure and the blade modules. The virtual connect backplane will connect the network connections into a Cisco based core network. The storage area network is interconnected via a Brocade switch fabric.

4.A.4   Server Operating Systems technology

The majority of the messaging infrastructure components will be deployed onto the Microsoft Windows Server 2012 operating system platform, licensed at the Enterprise edition. For systems that do not support Windows Server 2012, Windows Server 2008/R2 will be utilized.

4.A.5   Messaging platform technology

A pristine Microsoft Exchange Server 2013 environment will be implemented in a hybrid configuration, featuring two major components:

  • On-premise Exchange 2013: The on-premise environment supports core business functions that cannot be moved to the cloud for compliance reasons.
  • Office 365: All non-compliance restricted users will be migrated onto the Office 365 cloud.

The hybrid deployment will feature full interoperability between on-premise and cloud-based users, featuring single sign-on, sharing of calendar Free/busy information and a single unified OWA login address.

4.A.6   Back-end Database technology

Microsoft SQL Server 2012 was selected as the database platform to support all non-Exchange application requirements. The selection criterion for this product was partly dictated by the usage of technologies that depend on the SQL server back-end. As part of simplification and unification, it is preferred to keep all back-end databases in the messaging infrastructure on the same database platform.

4.A.7   Systems Management Solution

Due to the diversity of software applications and hardware in this infrastructure, a mix of management tools and products are used to manage all aspects of the messaging infrastructure. Major components are listed below:

(a)    Server hardware management: Vendor-provided HP Systems Insight Manager hardware tools are used in combination with Microsoft System Center Operations Manager (SCOM) to provide hardware-level monitoring and alerting.

(b)    Server event management: Microsoft System Center Operations Manager (SCOM) 2012 is used for server event consolidation, management and alerting.

(c)     Server Applications management: Server software management comprises systems patch management and server provisioning.

    • Systems patch management: Windows Server Update Services (WSUS), integrated into System Center Configuration Manager (SCCM), provides patch management of all Windows Server operating systems in the messaging environment.
    • Server Provisioning: Server provisioning for both bare-metal and virtual server deployments is managed via the HP Rapid Deployment Pack (HP RDP).

4.A.8   Message Security and Protection technology

The following Security and Protection products have been selected:

  • Server Virus protection: McAfee Antivirus has been selected to protect the server operating system.
  • Message hygiene: Microsoft Exchange Online Protection (EOP) will be used for message hygiene and protection.
  • Security events auditing: Microsoft SCOM has been selected to capture information such as security auditing and alerting events that are generated from the server platforms.

4.B. Functional Blueprint

The blueprint below illustrates the desired messaging infrastructure:


Figure 3 Chimp Corp Functional Messaging Design 

Conclusion

In the next section we will cover more detailed aspects of the Exchange 2013 design, as well as Server Virtualization Considerations for deploying Exchange 2013.

For the next part of this post, please click here.

Exchange 2013 – 70-342 (Advanced Solutions of Microsoft Exchange Server 2013)

Hi all,

I took the Exchange 70-342 exam – Advanced Solutions of Microsoft Exchange Server 2013 – while it was still in beta in November 2012, and there weren’t many resources available to assist with preparation, so I ended up reading through most of TechNet.

As a friendly sort of chimp, I decided that many of you out there would benefit from some short, summarized info docs explaining the various features, components and commands for the knowledge areas tested in Exchange 2013. So here’s what I’ve done: I’ve posted up a number of short articles, called Road Chimp’s Exchange 2013 Briefs.

You can reach them from the menu section at the top of this blog. I’ve also decided to post links below. Put a like on this post if you’ve found it useful. 🙂

1. Exchange Unified Messaging
2. Site Resilience
3. Information Rights Management
4. Mailbox and Administrative Auditing
5. In-Place Archiving
6. Data Loss Prevention
7. Message Records Management
8. In-place eDiscovery
9. In-place Hold
10. Coexistence with Exchange Online (Hybrid)
11. Coexistence with Legacy Systems
12. Cross-Forest Coexistence
13. Exchange Federation