Technology in Government – Cloud Computing

Executive Brief

A number of governments have implemented roadmaps and strategies that ultimately require their ministries, departments and agencies to consider cloud computing solutions first when evaluating IT implementations. In this article, we evaluate the adoption of cloud computing in government and discuss some of the positive and negative implications of moving government IT onto the cloud.

Latest Trends

In this section, we look at a number of cloud initiatives that have been gaining traction in the public sector:

  • Office Productivity Services – The New Zealand Government has identified office productivity services as the first set of cloud-based services to be deployed across government agencies. Email and collaboration are considered low-hanging fruit, and fueled by successes in migrating perimeter services such as anti-spam onto the cloud, many organizations see them as a natural next step in cloud adoption. Vendors leading the charge include Microsoft with Office 365 for Government, with successful deployments at federal agencies such as the USDA, Veterans Affairs, the FAA and the EPA, as well as the Cities of Chicago, New York and Shanghai. Other vendor solutions include Google Apps for Government, which supports the US Department of the Interior.
  • Government Cloud Marketplaces – A number of governments have signified the need to establish cloud marketplaces, where a federated marketplace of cloud service providers can support a broad range of users and partner organizations. The UK government called for the development of a government-wide Appstore, as did the New Zealand Government in a separate cabinet paper on cloud computing in August 2012. The US government has plans to establish a number of cloud services marketplaces, including the GSA’s info.apps.gov and the DOE’s YOURcloud, a secure cloud services brokerage built on Amazon’s EC2 offering. The image below shows the initial design for the UK government App Store.
    UK Government App Store (initial design)
  • Making Data Publicly Available – The UK Government is readily exploiting opportunities to make available the terabytes of public data that can be used to develop useful applications. A recent example is the release of Met Office UK weather information to the public via Microsoft Azure’s cloud hosting platform.
  • Government Security Certification – A 2012 Government Cloud Survey conducted by KPMG identified security as governments’ greatest concern around cloud adoption, and noted that governments are taking measures to manage it. For example, the US General Services Administration subjects each successful cloud vendor to a battery of tests that includes an assessment of access controls.


Canadian Government Cloud Architectural Components

Strategic Value

The strategic value of cloud computing in government can be summed up in a number of key elements. We’ve listed a few that sit at the top of our list:

  • Enhancing agility of government – Frequently cited as a significant factor in cloud adoption, cloud computing promises rapid provisioning and elasticity of resources, reducing turnaround times on projects.
  • Supporting government policies for the environment – Reduced data center spending and lower energy consumption for cooling bring tangible environmental benefits in terms of reduced greenhouse gas emissions and potential reductions in allocations of carbon credits.
  • Enhancing transparency of government – Cloud enables the development of initiatives that make government records accessible to the public, opening up tremendous opportunities for innovation and advancement.
  • Efficient utilization of resources – By adopting a pay-for-use approach towards computing, stakeholders are encouraged to architect their applications to be more cost effective. This means that unused resources are freed up to the common pool of computing resources.
  • Reduction in spending – Our research indicates that technology decision makers do not consider this a significant factor in moving to cloud computing; however, some of the cost-savings figures being bandied about are substantial (billions of dollars) and can appeal to any constituency.

Positive Implications

We’ve listed a number of points in favor of cloud adoption. These may not be relevant in every use case, but they are worth a quick read:

  • Resource Pooling – leads to enhanced efficiency, reduced energy consumption and cost savings from economies of scale
  • Scalability – Unconstrained capacity allows for more agile enterprises that are scalable, flexible and responsive to change
  • Reallocation of human resources – Freed-up IT resources can focus on R&D, designing new solutions that are optimized for cloud environments and decoupling applications from existing infrastructures.
  • Cost containment – Cloud computing requires the adoption of a ‘you pay for what you use’ model, which encourages thrift and efficiency. The transfer of CAPEX to OPEX also smooths out cash-flow concerns in an environment of tight budgets.
  • Reduce duplication and encourage re-use – Services designed to meet interoperability standards can be advertised in a cloud marketplace and become building blocks that can be used by different departments to construct applications
  • Availability – Cloud architecture is designed to be independent of the underlying hardware infrastructure and promotes scalability and availability paradigms such as homogeneity and decoupling
  • Resiliency – The failure of one node of a cloud computing environment has no overall effect on information availability

Negative Implications

A sound study should also include a review of the negative implications of cloud computing:

  • Bureaucratic hindrances – when transitioning from legacy systems, data migration and change management can slow down the “on demand” adoption of cloud computing.
  • Cloud Gaps – Applications and services with specific requirements that cannot be met by the cloud must be planned for, to ensure that they do not become obsolete.
  • Confidentiality risks – Isolation has long been a strategy for securing disparate networks: if you’re not connected to a network, there’s no risk of threats getting in. A shared cloud infrastructure runs the risk of pervasive exploitation, since all applications and tenants are connected via a common underlying infrastructure.
  • Cost savings do not materialize – The cloud is not a silver bullet for cost savings. We need to develop cloud-aligned approaches towards IT provisioning, operations and management. Applications need to be decoupled and re-architected for the cloud. Common services should be used in order to exploit economies of scale; applications and their underlying systems need to be tweaked and optimized.


Security was cited as a major concern (KPMG)

Where to start?

There is considerable research indicating that government adoption of cloud computing will accelerate in coming years. But what steps can be taken to ensure success? We’ve distilled a number of best practices into the following list:

US Government Cloud Computing Roadmap

  1. Develop Roadmaps: Before governments can reap all of the benefits that cloud computing has to offer, they must first move along a continuum towards adoption. To that end, a number of governments have developed roadmaps to chart a course of progression towards the cloud. Successful roadmaps featured the following components:
    • A technology vision of what cloud computing strategy success looks like
    • Frameworks to support seamless implementation of federated community cloud environments
    • Confidence in Security Capabilities – Demonstration that cloud services can handle the required levels of security across stakeholder constituencies in order to build and establish levels of trust.
    • Harmonization of Security requirements – Differing security standards will impede and obstruct large-scale interoperability and mobility in a multi-tenanted cloud environment, therefore a common overarching security standard must be developed.
    • Management of Cloud outliers – Identify gaps where the cloud cannot provide adequate levels of service or specialization for specific technologies and applications, and identify strategies to deal with these outliers.
    • Definition of unique mission/sector/business Requirements (e.g. 508 compliance, e-discovery, record retention)
    • Development of cloud service metrics such as common units of measurement in order to track consumption across different units of government and allow the incorporation of common metrics into SLAs.
    • Implementation of Audit standards to promote transparency and gain confidence
  2. Create Centers of Excellence: Cloud computing reference architectures (e.g. the NIST Reference Architecture), business case templates and best practices should be developed so that cloud service vendors can map their offerings against them, making it easier to compare services.
  3. Cloud First policies: Implement policies that mandate that all departments across government consider cloud options first when planning new IT projects.

Conclusion

The adoption of cloud services holds great promise, but because objectives such as economies of scale can only be achieved through wide-spread adoption with far-reaching consequences, a comprehensive plan, together with standardization and transparency, becomes an essential element of success.

We hope this brief has been useful. Ook!

Useful Links

Microsoft’s Cloud Computing in Government page
Cisco’s Government Cloud Computing page
Amazon AWS Cloud Computing page
Redhat cloud computing roadmap for government pdf
US Government Cloud Computing Roadmap Vol 1.
Software and Information Industry updates on NIST Roadmap
New Zealand Government Cloud Computing Strategy
Australian Government Cloud Computing Strategic Direction paper
Canadian Government Cloud Computing Roadmap
UK Government Cloud Strategy Paper
GCN – A portal for Cloud in Government
Study – State of Cloud Computing in the public sector

Exchange 2013 Sample Architecture Part 3: Design Feature Overview and Virtualization Considerations

Overview

In this part of the Sample Architecture Series, we will home in on several elements of the Exchange solution design, namely a description of the overall Exchange 2013 solution design, followed by some basic system configuration parameters as well as virtualization considerations.

Design Features

Exchange 2013 Design

The Exchange 2013 Environment for Chimp Corp features the following design elements:

  • Internal Client Access: Internal clients can automatically locate and connect to available CAS Servers through the Availability and Autodiscover services. CAS Servers are configured in arrays for high-availability and the locations of the CAS servers are published through Service Connection Points (SCPs) in Active Directory.
  • External Client Access:  External clients can connect to Exchange via Outlook Web App (OWA), Outlook Anywhere and Exchange ActiveSync. Exchange 2013 now supports Layer 4 load balancing for stateless failover of connections between CAS servers in the same array. Client traffic arrives at the network load balancer, which distributes the load across the CAS servers; the CAS servers then proxy connections to the appropriate Mailbox servers.
  • Single Domain Name URL: Exchange 2013 relies on a feature in the TCP/IP stack of client computers that supports caching multiple IP addresses resolved from DNS for the same name. In the event of an individual site failure, the IP address corresponding to the CAS array in that site becomes unresponsive, and clients automatically connect to the next cached IP address to reestablish their connections. This IP address corresponds to the CAS servers in the alternate site, so failover occurs without any intervention (a minimal sketch of this client-side failover pattern follows this list).
  • Mailbox High Availability: This feature is provided by implementing Database Availability Groups (DAGs). A single DAG will be configured to protect the messaging service. It is preferred to deploy a larger number of smaller mailbox databases in order to reduce mailbox restoration and reseed times in the event of the failure of a database copy.
  • Message Routing: All external SMTP traffic will be routed securely via Microsoft’s Exchange Online Protection (EOP) cloud-based service and the Internet. Inter-site messages between on-premise and online users will also be routed via EOP. Internal messages between on-premise users in either datacenter site will be routed automatically via the transport service on the on-premise Mailbox servers.
  • Hybrid Deployment: The Exchange 2013 environment will be deployed in tandem with an Exchange Online organization. The purpose of the Exchange Online organization is to host mailbox accounts that have been flagged as not compliance-sensitive, reducing the costs of the on-premises deployment. The hybrid implementation will provide a seamless experience between users in the on-premise and online environments, including single sign-on through the configuration of trusts between the Microsoft Online ID and the on-premises Active Directory forest; unified GAL access and the ability for online and on-premise users to share free/busy information through the configuration of a Federation Trust with the Microsoft Federation Gateway; and secure message transport between the on-premise and online environments, authenticated and encrypted via Transport Layer Security (TLS).
  • Message Archiving: All messages will be archived via the Exchange Online Archiving service. The existing on-premises archiving solution will be decommissioned after existing message archives are ingested into the Exchange Online Archive.
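
To make the single-namespace failover behavior described above more concrete, here is a minimal Python sketch of the client-side pattern: resolve one DNS name to all of its cached IP addresses and fall back to the next address when a connection attempt fails. The host name and port are hypothetical, and in practice Outlook relies on the operating system’s TCP/IP stack for this behavior rather than application code.

```python
import socket

def connect_with_failover(host, port, timeout=5):
    """Resolve a single DNS name to all of its A records and try each in turn."""
    addresses = []
    for info in socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM):
        ip = info[4][0]
        if ip not in addresses:          # keep DNS ordering, drop duplicates
            addresses.append(ip)
    for ip in addresses:
        try:
            sock = socket.create_connection((ip, port), timeout=timeout)
            print(f"Connected to {host} via {ip}")
            return sock
        except OSError:
            # The CAS array behind this address is unresponsive; try the next cached IP
            print(f"{ip} unreachable, trying next address")
    raise ConnectionError(f"No reachable endpoints for {host}")

# Hypothetical namespace published in DNS with one IP address per datacenter site
# connect_with_failover("mail.chimpcorp.com", 443)
```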

Exchange 2013 Virtualization

All Exchange 2013 server roles are fully supported for virtualization by Microsoft. Virtualization can help an organization consolidate its computing workload and benefit from cost reduction and efficient hardware resource utilization. According to Microsoft’s recommended best practices, load calculations for Exchange deployments in a virtual environment must account for the additional overhead of the virtualization hypervisor. This solution design has therefore factored in an additional resource overhead of 12% to accommodate virtualization.
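
As a rough illustration of how the 12% overhead feeds into sizing, the short Python sketch below inflates a baseline physical resource requirement before virtual machine specifications are chosen. The baseline figures are hypothetical and would normally come from a proper sizing exercise (for example, the Exchange Server Role Requirements Calculator).

```python
# Hypothetical baseline requirements calculated for a physical deployment
required_megacycles = 48_000      # total mailbox CPU demand (megacycles)
required_memory_gb = 96           # total mailbox memory demand (GB)

VIRTUALIZATION_OVERHEAD = 0.12    # 12% hypervisor overhead assumed in this design

# Inflate the physical requirement to account for the hypervisor
virtual_megacycles = required_megacycles * (1 + VIRTUALIZATION_OVERHEAD)
virtual_memory_gb = required_memory_gb * (1 + VIRTUALIZATION_OVERHEAD)

print(f"Provision for {virtual_megacycles:,.0f} megacycles "
      f"and {virtual_memory_gb:.0f} GB of RAM across the virtual hosts")
```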

The following server roles will be virtualized:

  •     Exchange 2013 Mailbox Servers
  •     Exchange 2013 CAS Servers

Microsoft provides further guidance on implementing Exchange Server 2013 in a virtualized environment. Relevant factors have been listed below:

  1. Exchange servers may be combined with host-based failover clustering and migration technology, provided that the virtual machines are configured not to save and restore disk state when moved or taken offline. Host-based failover must result in a cold boot when the virtual machine is activated on the target node.
  2. The root machine should be free of all applications save the virtual hypervisor and management software.
  3. Microsoft does not support taking a snapshot of an Exchange virtual machine.
  4. Exchange supports a virtual-processor-to-physical-processor ratio of no greater than 2:1, and Microsoft recommends an ideal ratio of 1:1. Furthermore, virtual CPUs required to run the host OS should be included in the processor ratio count.
  5. The disk allocated to each Exchange virtual machine must be at least 15 GB plus the amount of virtual memory allocated to that virtual machine (a quick validation sketch of points 4 and 5 follows this list).
  6. The storage allocated for Exchange data can be virtual storage of a fixed size, such as fixed Virtual Hard Disks (VHDs), SCSI pass-through storage or iSCSI storage.
  7. Exchange 2013 does not support NAS storage. However, fixed VHDs that are provisioned on block level storage and accessed via SMB 3.0 on Windows Server 2012 Hyper-V are supported.
  8. Exchange 2013 is designed to make optimal use of memory allocations; as such, dynamic memory features are not supported for Exchange.
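
The processor-ratio and disk-size rules above (points 4 and 5) lend themselves to a quick sanity check. The sketch below is a minimal validation of a candidate configuration against those two rules; the host and VM figures are made-up examples.

```python
def validate_vm_host(physical_cores, total_vcpus, vm_memory_gb, vm_os_disk_gb):
    """Check a host/VM layout against the virtualization guidance above."""
    issues = []

    # Rule 4: virtual-to-physical processor ratio must not exceed 2:1 (1:1 ideal)
    ratio = total_vcpus / physical_cores
    if ratio > 2:
        issues.append(f"vCPU:pCPU ratio {ratio:.1f}:1 exceeds the supported 2:1 maximum")

    # Rule 5: guest OS disk must be at least 15 GB plus the VM's allocated memory
    minimum_disk_gb = 15 + vm_memory_gb
    if vm_os_disk_gb < minimum_disk_gb:
        issues.append(f"OS disk {vm_os_disk_gb} GB is below the {minimum_disk_gb} GB minimum")

    return issues or ["Configuration looks consistent with the guidance"]

# Hypothetical host: 2 x 6-core sockets, 20 vCPUs allocated, 32 GB VM with a 40 GB OS disk
for message in validate_vm_host(physical_cores=12, total_vcpus=20,
                                vm_memory_gb=32, vm_os_disk_gb=40):
    print(message)
```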

Conclusion

Subsequent sections of this series will focus on the Exchange Mailbox Design and CAS Design, as well as the Hybrid Implementation and additional features.

Please click here for the next part: Exchange 2013 Mailbox Server Role Design.

Exchange 2013 Sample Architecture Part 2: High-level Architectural Design Document

Overview:

This section provides an introduction to the key elements of the Exchange 2013 architectural solution. It provides a high-level solution overview and is suitable for all technical project stakeholders. Excerpts of the final design document are listed under this post, and the full High Level Design document can be downloaded here: RoadChimp Sample Architectural Doc (ADOC) v1.1

1. Messaging Infrastructure function

The Messaging Infrastructure serves the primary function of providing electronic mail (E-mail) functionality to the Chimp Corporation. The messaging infrastructure supports E-mail access from network-connected computers and workstations as well as mobile devices. E-mail is a mission critical application for Chimp Corp, serving as an invaluable communications tool that increases efficiency and productivity, both internally within the organization and externally to a variety of audiences. As a result, it is of paramount importance for Chimp Corp to maintain a robust infrastructure that will meet present and future messaging needs.

Key requirements of the messaging infrastructure are as follows:

  • Accommodate service availability requirements
  • Satisfy archiving requirements
  • Satisfy growth requirements
  • Provide required business continuity and disaster recovery capabilities

1.A. About IT Services (ITS)

The IT Services Organization is responsible for managing the IT environment for the Chimp Corp as well as ensuring adherence to published standards and operational compliance targets throughout the enterprise.

1.B. Service Level Agreement (SLA)

The Email infrastructure is considered mission critical and, therefore, has an SLA requirement of 99.99% availability.
The full SLA for the messaging environment can be found in the document <Link to SharePoint: Messaging SLA> 
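
To put the 99.99% target in perspective, the downtime budget it implies can be computed directly:

```python
sla = 0.9999  # 99.99% availability target

minutes_per_year = 365 * 24 * 60
minutes_per_month = minutes_per_year / 12

print(f"Allowed downtime per year:  {(1 - sla) * minutes_per_year:.1f} minutes")
print(f"Allowed downtime per month: {(1 - sla) * minutes_per_month:.1f} minutes")
```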

1.C.  Locations

The messaging infrastructure is hosted in two separate datacenters:

  • Datacenter A (DCA)
    Chimp Center Prime
    1 Cyber Road,
    Big City
  • Datacenter B (DCB)
    Chimp Center Omega
    10 Jungle Way,
    Banana Town

The messaging infrastructure is supported by the IT Services Support Organization located at:

  • Chimp Corp Headquarters
    Chimp Center Prime
    Bldg 22, 1 Cyber Road,
    Big City

1.D.     E-mail User Classifications

The primary users of the messaging system are Chimp Corp employees. The user base is divided into two groups as follows:

  •     Exec: users performing Senior or Critical corporate functions
  •     Normal: the rest of the user population

2. Existing Platform

This section of the Asset document provides an overview of the present state of the asset, as well as a chronological view of changes based on organizational or technological factors.

2.A.     Existing Exchange 2003 design

A third-party consulting company performed the initial implementation of the messaging environment in 2000. The messaging platform was Microsoft Exchange 2003 and Windows 2003 Active Directory. The diagram below provides a representation of the existing design blueprint.


Fig. 1 Existing Messaging Environment

A single unified Active Directory Domain namespace chimpcorp.com was implemented in a Single Domain, single Forest design.

2.B. Change History

Over the years the Chimp Corp messaging environment has undergone various changes to maintain service level and improve functionality. The timeline below shows the changes over time.

Fig. 2 Chimp Corp Messaging Infrastructure Timeline

2.B.1   Initial Implementation

The Exchange 2003 messaging infrastructure was implemented by IT Services in 2005 and the entire user base was successfully migrated over to Exchange 2003 by September 2005.

2.B.2   Linux Virtual Appliances deployed for Message Hygiene

A decision was made by IT to deploy a message hygiene environment for the company using Linux-based virtual appliances. This change was scheduled as maintenance and was executed in early 2009.

2.B.3   Additional Datacenter Site (Omega)

In order to improve infrastructure availability and to support additional growth of the corporate environment, a second datacenter site, codenamed Omega was commissioned and fully completed by March of 2009.

2.B.4   Two Exchange Mailbox Clusters (A/P) deployed in the Omega Datacenter Site

To improve the availability of e-mail for users and also to meet plans for storage and user growth, two additional Exchange Mailbox Servers were deployed in Datacenter Omega (DCB).

2.B.5   Third-party archiving solution

A third party archiving solution was deployed by IT Services in 2010 as part of efforts to mitigate growth of the Exchange Information Stores, based on recommendations from their primary technology vendor. The archiving solution incorporates a process known as e-mail stubbing to replace messages in the Exchange Information Stores with XML headers.

2.B.6   Acquisition by Chimp Corp

After being acquired by Chimp Corp in September 2011, immediate plans were laid out to perform a technology refresh across the entire IT infrastructure.

2.B.7   Active Directory 2008 R2 Upgrade

The Windows Active Directory domain was updated to version 2008 R2 in native mode in anticipation of impending upgrades to the messaging infrastructure. The replacement Domain Controllers were implemented as virtual machines hosted in the enterprise virtual server environment running VMware vSphere 5. This change was completed in March 2012.

2.C. Existing Hardware Configuration

The current hardware used in the messaging platform consists of the following elements:

2.C.1   Servers

Existing server systems comprising the messaging environment include:

    • 12 x HP DL 380 G4 servers at DCA, each with 2–4 GB of RAM
    • 10 x HP DL 380 G4 servers at DCB, each with 2–4 GB of RAM

2.C.2   Storage characteristics

Exchange storage used for databases, backups, transaction logs and public folders has been provisioned on:

    • 2 TB of FC/SAN-attached storage provisioned for 5 Exchange Storage Groups and 21 databases and transaction logs
    • 2 TB of iSCSI/SAN-attached storage for archiving

2.D. Network Infrastructure

The Chimp Corp email infrastructure network has two main physical locations at the DCA and DCB datacenter sites. These are currently connected via the Chimp Corp LAN/WAN. The core switches interconnecting all hardware are Cisco 6500 Series Enterprise class switches.

2.E.  Present Software Configuration

Software and licenses presently in use include:

  • Microsoft Windows 2003 Standard
  • Microsoft Windows 2003 Enterprise
  • Microsoft Exchange 2003 Standard
  • Microsoft Exchange 2003 Enterprise
  • Third Party SMTP Appliances
  • A Stub-based third-party Email Archiving Tool

3. Messaging Infrastructure Requirements

The design requirements for the Exchange 2013 messaging environment have been obtained from the project goals and objectives, as listed in the Project Charter for the E13MAIL Project.

The primary objective of the E13MAIL Project is to ensure continued reliability and efficient delivery of messaging services to users and applications connecting to Chimp Corp from a variety of locations. Stated design goals are to increase performance and stability and to align the operational capabilities of the messaging environment with industry best practices.

The requirements/objectives for the messaging infrastructure are:

  • Redundant messaging solution deployed across 2 datacenter locations; DCA and DCB.
  • Support for audit and compliance requirements
  • High Availability (99.99%)
  • Monitoring of services and components
  • Accurate configuration management for ongoing support
  • Adherence to Industry Best Practices for optimal support by vendors and service delivery organizations
  • Reliable Disaster-Recoverable backups, with object level recovery options
  • Message Archiving functionality with a maximum retention period of 7 years

4. Design Components

The primary messaging solution is to deploy a new Exchange 2013 environment that spans Chimp Corp’s physical data center locations and extends into Microsoft’s Office 365 cloud to take advantage of the latest user productivity and collaboration features of Microsoft Office 2013.

The main goals for this solution are:

  • Minimize end-user impact: Minimizing the end-user impact is a key goal for Chimp Corp. Significant effort must be made to ensure that the transition of all e-mail related services is seamless to the end-user.
  • Reliable delivery of services: The messaging environment is a mission critical component of Chimp Corp’s IT infrastructure and adheres to strict Change Management practices. The solution must be able to integrate with existing operational and change processes.
  • Longevity of solution: The new messaging solution must endure beyond the initial implementation as it evolves into a production state. This requires the necessary attention to ensuring that operational knowledge is transferred to IT Services technical teams such that they can maintain uptime requirements.

The individual design components were subjected to a stringent evaluation process that included the following design criteria:

  •     Costs of Ownership
  •     Technological engineering quality
  •     Scalability
  •     Fault Tolerance / Reliability
  •     Industry best practices
  •     Supportability
  •     Ease of administration
  •     Compatibility with existing systems
  •     Reliability
  •     Vendor specifications

4.A. Hardware technology

IT Services researched the server solutions from a number of hardware vendors and made a final decision in favor of HP, Brocade and Cisco vendor equipment.

4.A.1   Server hardware

The server platform used is the eighth-generation (G8) HP Blade 400-series server. This is an Intel-based server system. The CPUs in these systems are standardized on Intel Xeon E5-2640 processors; these are six-core processors running at 2.5 GHz. The servers are equipped with 128 GB of memory to accommodate their specific functions. The blade servers are provisioned in HP BladeSystem c7000-class enclosures.

4.A.2   Storage hardware

To accommodate the storage requirements, two storage arrays are implemented. The primary array is an HP EVA 6400-class Storage Area Network. This array is equipped with 25 TB of raw storage and is used for online, active data. The secondary array is an HP P2000 G3 MSA-class storage area network. This array is equipped with 15 TB of raw storage and is used for secondary storage such as archives and backups.

4.A.3   Interconnect technology

HP’s Virtual Connect technology is used to provide network connectivity to both the storage network and the data networks. The Virtual Connect technology acts as a virtual patch panel between the uplink ports to the core switching infrastructure and the blade modules. The Virtual Connect backplane connects the network connections into a Cisco-based core network. The storage area network is interconnected via a Brocade switch fabric.

4.A.4   Server Operating Systems technology

The majority of the messaging infrastructure components will be deployed onto the Microsoft Windows Server 2012 operating system platform, licensed to the Enterprise version of the operating system. For systems that do not support Windows Server 2012, Windows Server 2008/R2 will be utilized.

4.A.5   Messaging platform technology

A pristine Microsoft Exchange Server 2013 environment will be implemented in a hybrid configuration, featuring two major components:

  • On-premise Exchange 2013: The on-premise environment supports core business functions that cannot be moved to the cloud for compliance reasons.
  • Office 365: All non-compliance restricted users will be migrated onto the Office 365 cloud.

The hybrid deployment will feature full interoperability between on-premise and cloud-based users, featuring single sign-on, sharing of calendar Free/busy information and a single unified OWA login address.

4.A.6   Back-end Database technology

Microsoft SQL Server 2012 was selected as the database platform to support all non-Exchange application requirements. The selection criteria for this product were partly dictated by the use of technologies that depend on a SQL Server back end. As part of simplification and unification, it is preferred to keep all back-end databases in the messaging infrastructure on the same database platform.

4.A.7   Systems Management Solution

Due to the diversity of software applications and hardware in this infrastructure, a mix of management tools and products are used to manage all aspects of the messaging infrastructure. Major components are listed below:

(a)    Server hardware management: Vendor provided HP System Insight Manager hardware tools are used in combination with Microsoft System Center Operations Manager (SCOM) to provide hardware-level monitoring and alerting.

(b)    Server event management: Microsoft Systems Center Operations Manager (SCOM) 2012 is used for server event consolidation, management and alerting.

(c)    Server Applications management: Server software management comprises systems patch management and server provisioning.

    • Systems patch management: Windows Server Update Services (WSUS), integrated into System Center Configuration Manager (SCCM), provides patch management of all Windows Server operating systems in the messaging environment.
    • Server Provisioning: Server provisioning for both bare-metal and virtual server deployments is managed via the HP Rapid Deployment Pack (HP RDP).

4.A.8   Message Security and Protection technology

The following Security and Protection products have been selected:

  • Server Virus protection: McAfee Antivirus has been selected to protect the server operating system.
  • Message hygiene: Microsoft Exchange Online Protection (EOP) will be used for message hygiene and protection.
  • Security events auditing: Microsoft SCOM has been selected to capture information such as security auditing and alerting events that are generated from the server platforms.

4.B. Functional Blueprint

The blueprint below illustrates the desired messaging infrastructure:


Figure 3 Chimp Corp Functional Messaging Design 

Conclusion

In the next section we will cover more detailed aspects of the Exchange 2013 design, as well as Server Virtualization Considerations for deploying Exchange 2013.

For the next part of this post, please click here.

Exchange 2013 Architecture Samples

I’m posting a set of sample architectural design documents that were adapted from a real-world multi-site hybrid Exchange deployment. The documents are built entirely on best practices, and I’ve taken the liberty of updating elements of the design to reflect changes in the Exchange 2013 architecture (and, of course, to remove any sensitive or confidential information).

This was a fairly large implementation, and it took the combined efforts of a large team of engineers, who inspired me and continue to do so, to complete all of the deliverables. You may not need all of these document components, but it’s good to see how a large messaging environment can be broken down into its constituent components and architected in detail.

Read First: These design documents were derived from detailed research and consultation with expert engineers. Feel free to use them as a reference, but always verify the requirements of your project against the data in these guides. Roadchimp takes no responsibility for any implementation issues that you encounter. Make sure that you implement licensed copies of Microsoft software with valid Microsoft support in place prior to making any changes to a production environment. Furthermore, make sure that you consult Microsoft’s resources to ensure that your hardware is fully supported by Microsoft for deploying Exchange 2013, Windows Active Directory and other architectural components.

I will start posting links to these templates here:

Exchange 2013 Configuration Guides

A warm Ook and hello from your banana loving primate friend! I’ve decided to put up a list of configuration guides for Exchange 2013 in an easy to access part of this blog. The configuration guides will help you to perform (hopefully) some tasks that you may find useful. I will post links to various guides on this page.

1. Exchange 2013 in Windows Azure

2. Configuring a Hybrid Exchange 2013 Deployment

 

I hope to get more posts out there. Thanks for all your comments and likes!

Road Chimp saying Ook!

 

 

Exchange 2013 Architecture Series – Part 3: Mailbox and Storage Design

Hello all! In this third section on Exchange 2013 architecture, we will look into the Exchange storage subsystem and build on some best practices for Exchange 2013 design.

Exchange Storage Types

Before you start provisioning your Exchange Server 2013 environment, it’s useful to think about the different types of storage that a healthy Exchange environment uses. Each storage classification has its own unique performance requirements, and a well-architected Exchange storage architecture will be designed in order to support these needs.

  • Operating System and Page File Volumes: At the most fundamental level, all Exchange Servers run on top of an Operating System. In addition to storing the OS Kernel, the Operating System volume manages all I/O operations on the Exchange Server, as well as memory allocation and disk management.
  • Exchange Binaries: These volumes contain the actual application files that Exchange needs to run and follow the path <Drive>:\Program Files\Microsoft\Exchange Server\V15. Microsoft requires at least 30 GB of free space on the drive on which you wish to install the Exchange binaries.
  • Exchange Database Volumes: These volumes store the actual Exchange database files, which use the ‘.edb’ extension. Enhancements in the Exchange database engine have reduced disk resource requirements; however, database volumes should still be optimized for high performance and commonly use RAID striped volumes with parity to support high IOPS.
  • Exchange Database Log Volumes: Exchange 2013 uses a relational database technology that utilizes transaction logs to record changes to the Exchange Databases. The database log volumes are write intensive in nature.
  • Exchange Transport Database Volumes: Changes to how Exchange 2013 manages mail flow have resulted in several new features known as Shadow Redundancy and Safety Net. Read my previous post for more information on these features. With Shadow Redundancy, the transport server makes a redundant copy of any message it receives before it acknowledges successful receipt back to the sending server. The Safety Net feature is an updated version of the Transport Dumpster and retains copies of messages that have already been processed in a database for a default of 2 days, controlled by the SafetyNetHoldTime parameter. You should design your storage to accommodate two full days of additional e-mail within a high-availability boundary (a rough sizing sketch follows this list).
  • Backup/Restore Volumes: With the database replication and resiliency features of Exchange now providing fault tolerance and high availability, backup and restore services are less crucial. However, they must still be considered for restoring historical or archived data. Most organizations consider less expensive storage types, such as WORM (write once, read many) media, for this purpose.
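
As a rough way to size the transport database volume for the two days of Safety Net retention mentioned above, something like the following back-of-the-envelope calculation can be used; the daily message volume, average message size and headroom factor are hypothetical placeholders for your own measurements.

```python
# Hypothetical daily message profile for the high-availability boundary
messages_per_day = 250_000
average_message_kb = 75
safety_net_hold_days = 2          # default SafetyNetHoldTime
overhead_factor = 1.2             # headroom for shadow copies and growth (assumption)

required_gb = (messages_per_day * average_message_kb * safety_net_hold_days
               * overhead_factor) / (1024 ** 2)
print(f"Provision roughly {required_gb:.1f} GB for the transport database volume")
```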

Storage hardware technologies

Exchange 2013 supports the following storage hardware technologies:

  • Serial ATA (SATA): SATA disks are cost-effective, high-capacity storage options that come in a variety of form factors. Microsoft recommends that you do not store your Exchange databases across a spanned volume comprising multiple SATA drives.
  • Serial-attached SCSI: SCSI is a mature disk access technology that supports higher performance than SATA, but at a higher cost.
  • Fibre Channel (FC): Fibre Channel (note the spelling) disks support high performance and more complex configuration options, such as connectivity to a SAN, at a higher cost. FC disks are typically used in a collection of disks known as an array and support the high-speed transmission of data (up to 16 Gbps and 3200 MBps), but potentially require expensive Fibre Channel infrastructure (known as switch fabric) supporting single-mode or multi-mode fiber cables. The advantage of Fibre Channel is that the disks can be located a significant distance away from the actual servers (hundreds of meters up to kilometers) without any loss in performance, which means that an organization can consolidate the disks used by numerous applications into one part of the datacenter and configure them optimally with high-redundancy features. This is the basic justification for a SAN.
  • Solid-state Drive (SSD): SSD drives use flash memory to store data and have a number of advantages over conventional SATA and SCSI disk technologies, which still employ rotating mechanical platters to read and write data. SSD drives are a relatively new technology and currently offer lower capacities, but they feature very high performance, boasting low access times (0.1 ms compared with 4–12 ms for SCSI). Due to their high cost, it is common for organizations to build servers with a combination of disk types, using SSDs for operating system partitions as well as volumes with write-heavy access patterns, such as transaction log volumes for relational database systems.

Logical Storage Architectures

Exchange 2013 has changed how it addresses storage architecture from the ground up. Since the Information Store (now the Managed Store) was rewritten in managed code and optimized for multi-threading, the storage design for Exchange has had to change as well to keep up. Microsoft provides the following recommendations with respect to the following storage architectures:

  • Direct/Locally Attached Storage (DAS): DAS architecture features disks and arrays that are locally attached to the Exchange server. It is commonly used with and supported by Microsoft Exchange 2013 and includes Serial ATA (SATA) and SCSI hardware architectures.
  • Storage Area Networks (SANs): SAN architecture is fully supported by Microsoft Exchange 2013 over both Fibre Channel and iSCSI interfaces.
    • Microsoft recommends that you allocate dedicated volumes (spindles) to Exchange Server and not share the same underlying physical disks with other applications on your SAN. This recommendation helps ensure reliable and predictable storage performance for Exchange, free of the resource contention and bottlenecks that can be common in poorly designed or managed SAN environments.
    • Fibre Channel over Ethernet (FCoE): Microsoft has yet to release any design information pertaining to whether Exchange 2013 supports implementations of FCoE. While FCoE should technically have no issues supporting the latency requirements and frame sizes of an Exchange 2013 environment, I would recommend that you proceed with caution when implementing any Microsoft technology on a non-supported environment. In the event of a support incident, Microsoft support has the right to declare that your environment is non-supportable due to inconsistencies with their Hardware Compatibility List (HCL).
  • Network Attached Storage (NAS): At this point in time, Exchange Server 2013 does not support the use of NAS-based storage devices for either physical or virtual implementations of Exchange.

Allocating Storage

Let’s start with the basic storage requirements of Exchange Server 2013. Microsoft indicates that Exchange 2013 requires the following amounts of storage space for each storage type:

  • Exchange System Volumes: At least 200 MB of available disk space on the system drive
  • Exchange binaries: At least 30 GB of free space on the drive on which you wish to install the Exchange binaries, plus an additional 500 MB of available disk space for each Unified Messaging (UM) language pack that you plan to install
  • Exchange Database Volumes: The amount of storage required varies depending on a number of factors, including the number and size of mailboxes in your organization, the mailbox throughput (average number of emails sent and received), high-availability features (database copies), and email retention policies (how long you need to store emails). These factors determine an optimal number of mailboxes per database, the number of databases and database copies, and finally the amount of storage allocated for growth.
  • Exchange Database Log Volumes: You should provision sufficient storage to handle the transaction log generation volume of your organization. Factors that affect the rate of transaction log generation include message throughput (number of sends and receives), the size of message payloads and high-availability features such as database copies. If you plan to move mailboxes on a regular basis, or need to accommodate large numbers of mailbox migrations (import/export to Exchange), this will result in a higher number of transaction logs being generated. If you implement lagged database copies, you also need to provision additional storage on the transaction log volume for the number of days of lag you have configured. The following requirements exist for log file truncation when lagged copies are configured (a small sketch of these checks follows this list):
    • The log file must be below the checkpoint for the database.
    • The log file must be older than ReplayLagTime + TruncationLagTime.
    • The log file must have been truncated on the active copy.
  • Message Queue Database: A hard disk that stores the message queue database, with at least 500 MB of free space.
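
The lagged-copy truncation conditions listed above can be expressed as a simple predicate. The sketch below is purely illustrative; the generation numbers and timestamps are hypothetical stand-ins for state that Exchange tracks internally.

```python
from datetime import datetime, timedelta

def can_truncate_log(log_generation, checkpoint_generation, log_time,
                     replay_lag, truncation_lag, truncated_on_active):
    """Return True if a log file on a lagged copy is eligible for truncation."""
    is_below_checkpoint = log_generation < checkpoint_generation
    is_old_enough = datetime.utcnow() - log_time > (replay_lag + truncation_lag)
    return is_below_checkpoint and is_old_enough and truncated_on_active

# Hypothetical example: 3-day replay lag, no truncation lag
print(can_truncate_log(
    log_generation=1_024,
    checkpoint_generation=2_000,
    log_time=datetime.utcnow() - timedelta(days=5),
    replay_lag=timedelta(days=3),
    truncation_lag=timedelta(hours=0),
    truncated_on_active=True,
))
```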

Volume Configuration

Microsoft recommends the following configuration settings for each volume that hosts Exchange-related data:

  • Partitioning: GPT is a newer partitioning technology than the traditional MBR format and accommodates much larger disk sizes (up to 256 TB). While MBR is supported, Microsoft recommends that you use GPT partitions to deploy Exchange. Partitions should also be aligned to 1 MB.
  • File System: NTFS is the only file system supported by Exchange 2013. Microsoft recommends an allocation unit size of 64 KB for both Exchange database and log file volumes. NTFS features such as NTFS compression, defragmentation and Encrypting File System (EFS) are not supported by Exchange 2013.
  • Windows BitLocker: BitLocker is a form of drive encryption supported in newer versions of Microsoft Windows. BitLocker is supported for all Exchange database and log file volumes; however, there is limited support for Windows BitLocker on Windows failover clusters.
  • SMB 3.0: SMB is a network file sharing protocol over TCP/IP, with the latest version available in Windows Server 2012. SMB 3.0 is only supported in limited deployment configurations of Exchange 2013, where fixed virtual hard disks (VHDs) are provisioned via SMB 3.0 on Windows Server 2012 Hyper-V or a later version. Direct storage of Exchange data on SMB shares is not supported.

Storage Best Practices

The following best practices offer useful guidelines on how to configure your Exchange 2013 environment:

  • Large Disk Sector Sizes: With disk capacities increasing beyond 3 TB, hardware manufacturers have introduced a new physical media format known as Advanced Format that increases physical sector sizes. Recent versions of the Windows operating system, including Windows Vista, Windows 7 and Windows Server 2008, 2008 R2 and 2012 (with certain patches applied), support a form of logical emulation known as 512e, which presents a logical sector size of 512 bytes while the physical hardware actually reads and writes larger physical sectors (typically 4 KB).
  • Replication homogeneity: Microsoft does not support Exchange database copies that are stored across different disk types. For example, if you store one copy of an Exchange database on a 512-byte sector disk, you should not deploy database copies to volumes on another Exchange server that are configured with a different sector size (commonly 4 KB).
  • Storage Infrastructure Redundancy: If you choose externally attached storage such as a SAN solution, Microsoft recommends that you implement multi-pathing or other forms of path redundancy in order to ensure that the Exchange server’s access to the storage network remains resilient to single points of failure in the hardware and interconnecting infrastructure (Ethernet for iSCSI and Fibre Channel for conventional SCSI-based SANs).
  • Drive Fault Tolerance: While RAID is not a requirement for Exchange 2013, Microsoft still recommends RAID deployments especially for Stand-alone Exchange servers. The following RAID configurations are recommended based on the type of storage:
    • OS/System or Pagefile Volume: RAID 1/10 is recommended, with dedicated LUNs being provisioned for the System and Page file volumes.
    • Exchange Database Volume: For standalone (non-replicating) servers, Microsoft recommends deploying RAID 5 with a maximum array size of 7 disks and surface scanning enabled. For larger array sizes, Microsoft recommends deploying RAID 6 for added redundancy. For high-availability configurations with database replication, redundancy is provided by deploying more than one copy of any single Exchange database, so Microsoft’s hardware redundancy requirements are less stringent. You should have two or more lagged copies of each database residing on separate servers, and if you can deploy three or more database copies, you should have sufficient database redundancy to rely on JBOD (Just a Bunch of Disks) storage. The same RAID 5/6 recommendations apply to high-availability configurations. In both standalone and high-availability configurations that use slower disks (5,400–7,200 rpm), Microsoft recommends deploying disks in RAID 1/10 for better performance.
    • Exchange Mailbox Log Volumes: For all implementations, Microsoft supports all RAID types, but recommends RAID 1/10 as a best practice, with JBOD storage only being used if you have at least three or more replica copies of an Exchange Database. If deploying lagged database copies, you should implement JBOD storage only if you have two or more copies of a database.
    • RAID configuration parameters: Furthermore, Microsoft recommends that any RAID array be configured with a block size of 256KB or greater and with storage array cache settings configured for 75% write cache and 25% read cache. Physical disk write caching should be disabled when a UPS is not in use.
  • Database volume size and placement: Microsoft Exchange Server 2013 supports database sizes of up to 16 TB; however, for optimal database seeding, replication and restore operations, Microsoft recommends that you limit each database to 200 GB and provision each volume to accommodate 120% of the maximum database size (a quick sizing calculation is sketched after this list).
  • Transaction Log volume size and placement: Microsoft recommends implementing Database and Log file isolation by deploying your database files and transaction log files in separate volumes. This is actually a good practice, since the performance requirements of databases and transaction logs differ.
  • Basic and Dynamic Disk types: The Windows operating system supports a form of disk initialization known as Dynamic Disks, which allows you to configure options such as software-based RAID and dynamic volume sizes. While Dynamic Disks are a supported storage type in Exchange 2013, Microsoft recommends that you deploy Exchange on the default Basic Disk storage.
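
A quick calculation for the database volume sizing guidance above (a 200 GB database cap and volumes provisioned at 120% of the maximum database size) might look like the following sketch; the mailbox count and per-mailbox quota are hypothetical.

```python
import math

mailboxes = 5_000
mailbox_quota_gb = 2              # hypothetical average mailbox size at quota
max_database_size_gb = 200        # recommended cap for each database
volume_headroom = 1.2             # provision 120% of the maximum database size

total_data_gb = mailboxes * mailbox_quota_gb
databases_needed = math.ceil(total_data_gb / max_database_size_gb)
volume_size_gb = max_database_size_gb * volume_headroom

print(f"{databases_needed} databases, each on a volume of at least {volume_size_gb:.0f} GB")
```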

Conclusion:

In this section, we explored the various storage components supported by Microsoft Exchange Server 2013 and reviewed some deployment best practices for implementing Exchange. There are a number of different types of storage that the various components of Exchange utilize, and a well-architected storage solution should seek to optimize the performance of each of these components.

Reference:

Link to Microsoft Technet article on Storage Configuration Options.

Article on MSDN site on Windows support for Advanced Format disk types

Link to Microsoft Technet article on Virtualization Support

PMP Exam Prep – Part 8: Project Scope Management

In this section we will learn how to define, measure and control the amount of work to be performed in order to achieve the goals or objectives of a project.

Project Scope Management involves some of the earliest activities that a PM will manage on a project. Logically, you first need to figure out the total amount of work required to complete a project before you can calculate how long the project will take and how expensive it will be. You might also recall from earlier sections that scope is a component of the Triple Constraint, and it is the first of the triple constraints that we focus on.

Scope Activities throughout the Project Lifecycle

There are five processes that relate to scope when it comes to the Project Lifecycle:

  • Collect Requirements
  • Define Scope (Planning)
  • Create WBS
  • Verify Scope
  • Control Scope

PMI wants us to be comfortable with each of these processes and the role that they play throughout the Project Lifecycle.

Collecting Requirements

PMI’s approach towards Project Scope Management is to start off by collecting requirements. Here, we first perform a needs analysis as well as some initial data gathering. The focus here is on starting some of the very preliminary processes of the project. Some of the activities performed include:

  1. Performing the initial Risk assessment.
  2. Conducting Focus Groups and Workshops
  3. Working through Questionnaires and Surveys
  4. Evaluating Prototypes

At this point in the project, we’re still trying to identify what the requirements are and how we can measure the success of the project. It’s important to note that we should have a copy of the Project Charter to refer to at this point, as the Charter is listed as an input to this process.

Define Scope

Here we go through a process known as decomposition. We start with some of the preliminary information that we have assembled on the project to date, such as the Project Charter, Statement of Work and business plans, and we break the requirements down into a greater level of detail. In other words, we are trying to build a detailed description of the project and its final deliverables.

Tools and techniques

  1. Product Analysis
  2. Alternatives Identification
  3. Facilitated Workshops

The output of this process is the Project Scope Statement. The scope statement is a written document and it contains a project justification; the product or end result of the project; as well as the overall objectives of the project being undertaken. The Scope Statement is often an attachment to the Project Charter and not part of the Project Charter itself.

The Project Scope Statement commonly contains the following components:

  • Project Scope Description
  • Acceptance Criteria or what must be completed in order for the project to be considered a success
  • Deliverables which can be thought of as the end result of the project
  • Exclusions which typically identify the areas that are out of scope for the project
  • Constraints which are externally imposed restrictions on the project, such as deadlines, budgets and limited resources
  • Assumptions relating to the scope of the project and the potential impact of these assumptions if they are not valid.

Create WBS

According to PMI, the WBS is a product-oriented (no longer task-oriented) family tree of activities. The US military was responsible for many advances in project management, including the development of the WBS as well as the PERT technique (a concept we will cover in the section on Project Time Management), which was developed during the Polaris submarine missile program.

Decomposition and the 100% rule

Decomposition is the process of breaking down project deliverables into smaller, more manageable components. The WBS is constructed in a hierarchical fashion and gets into progressively greater detail as we move from the upper levels of the WBS down to the lowest level, also known as the work package level.

The 100% rule states that the WBS should capture all of the deliverables, both internal and external to the project. This follows the concepts of MBO, which were highlighted in the section on Project Integration Management. MBO, or Management By Objectives, defines an approach where all of the effort in a project is directed solely towards the achievement of project objectives, and absolutely no effort is spent on tasks that are superfluous to the project.
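
As a simple illustration of the 100% rule, the sketch below models a two-level WBS fragment and checks that each parent’s estimate equals the sum of its child work packages. The task names and hours are invented, and the second entry deliberately violates the rule so the check has something to flag.

```python
# Hypothetical WBS fragment: parent deliverables and their child work packages (hours)
wbs = {
    "1.1 Hardware Build-out": {"estimate": 200,
                               "children": {"1.1.1 Requirements": 40,
                                            "1.1.2 Procurement": 80,
                                            "1.1.3 Assembly": 80}},
    "1.2 Product Training":   {"estimate": 120,
                               "children": {"1.2.1 Training Requirements": 40,
                                            "1.2.2 Scheduling and Logistics": 60}},
}

for name, node in wbs.items():
    child_total = sum(node["children"].values())
    status = "OK" if child_total == node["estimate"] else "violates the 100% rule"
    print(f"{name}: parent {node['estimate']}h vs children {child_total}h -> {status}")
```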

WBS Coding Scheme

You should be familiar with the WBS coding scheme for the exam. A coding scheme refers to the numbering format that is attached to the various levels of the WBS. An example of the WBS scheme is listed below:

152.1.1   Hardware Build-out

152.1.1.1  Requirements Definition

152.1.1.2  Scheduling and Procurement

152.1.1.3  Assembly

152.1.1.4  Closeout

152.1.2   Product Training

152.1.2.1  Training Requirements

152.1.2.2  Scheduling and Logistics

Cost Account – Work Package Relationship

The cost account is a term used when analyzing or constructing the WBS and is deemed to be just one level up from the lowest level, also known as the work package level in the WBS. The cost account is considered to be a summary activity with the work package as its child.

Exam Hint – Distractor answers in the exam. You will be presented with several options that are similar to “Cost Account”. For example, Code of Accounts: Defined in the WBS as any numbering system that is used to uniquely identify each WBS element. Chart of Accounts: Defined as any numbering system used to identify project costs by category and does not appear on the WBS. You might be asked to distinguish between these terms on the exam.

 

80 Hour Rule

This is a generally accepted rule when it comes to assembling the WBS: no discrete activity or series of activities should consume more than 80 hours of effort to accomplish a deliverable. This is equivalent to two 40-hour work weeks, and it is a common practice especially in environments where reporting periods occur once every two weeks. Note that this rule defines a level of work effort, as opposed to the duration of a particular activity. For example, you can get 80 hours of work completed in one day if you hire enough people.
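
The effort-versus-duration distinction is easy to show with a line of arithmetic (the figures are, of course, illustrative):

```python
effort_hours = 80                 # total work effort for the work package
team_size = 10
hours_per_person_per_day = 8

duration_days = effort_hours / (team_size * hours_per_person_per_day)
print(f"{effort_hours}h of effort completes in {duration_days:.0f} day(s) with {team_size} people")
```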

WBS Benefits

The WBS can provide many benefits to a project, we have listed several below:

  • Team Building
  • Creating a Framework
  • Clarifies Responsibility
  • Clarifies Objectives

In addition, the WBS can be used to help with all of the configuration management processes, including planning; budgeting; funding; estimating and scheduling.

Other Breakdown Structures

For the exam, you will be required to distinguish between the WBS and other breakdown structures. Several common breakdown structures have been listed below:

  • CWBS or contractual work breakdown structure: This is the customer’s perspective of the work breakdown structure.
  • OBS or organizational breakdown structure: The work breakdown tied into the organizational hierarchy. We look at the individual elements of the WBS and tie them to the organization, referring each task to the department that should be performing the work.
  • RBS or resource breakdown structure: We break down the tasks at the resource level.
  • PBS or project breakdown structure: This is simply another name for the WBS.

Scope Baseline

The WBS lays down the scope baseline for the project: if a task does not appear in the WBS, it is not part of the project. We can have multiple baselines in a project, including a quality baseline, a cost baseline (budget) and a time baseline (schedule), but the WBS is still considered to be the primary baseline.

Verify Scope

The scope verification process involves formalizing the acceptance of the project scope by stakeholders. Before we commence a project, it makes good sense to make sure that everyone agrees on the objectives defined by the project scope before we start investing resources such as time and money.

Similarly, as we complete our work, we also need to obtain acceptance of our work results. Whether for the project as a whole or for each individual phase in the project life cycle, we need to continually obtain acceptance before we move on.

In simple terms, we perform verification to ensure that what we have done so far is close to what we had initially planned, and by doing so we minimize our level of risk. This matters because as the complexity of a project increases, so does the degree of risk involved in the project.

A good example would be trying to take a shortcut that you’re not familiar with as you’re driving toward a destination. As you turn off the highway, you realize that you might encounter construction, get lost or run into bad traffic. The complexity increases as you select this alternative route, and hence the risk of affecting the outcome of the journey increases.

Conclusion

In this section, we reviewed several concepts relating to Project Scope Management. We reviewed the need to collect requirements and define our scope through a Project Scope Statement, and we also looked into the concept of decomposition, where we break down information into its component parts and seek to explain or describe a task in greater detail. We looked at the WBS and examined some of its structural components.

In the next section, we will look at Project Time Management, another element of the triple constraint.

Hope you found this article interesting. As always, show some love by leaving your comments or likes.

Road Chimp signing out.

PMP Exam Prep – Part 2: PMP Exam Application Tips

Ook!

To a lot of exam candidates, submitting their PMP examination application can be a very daunting task. I’ve built up quite a lot of experience helping my students and colleagues with their PMP Exam Applications and so I decided to write this article in order to provide some tips and tricks that might end up saving you time.

Estimated duration: 20+ hours (no kidding!)
The average applicant takes a minimum of 20 hours to put together their examination application materials, format their experience into projects, quantify their experience across the project process groups and finally click on the submit button. Click here to read up on the certification requirements in a previous post.

Why this much effort?

To understand the amount of effort required to put in a good application, you first need to understand PMI’s rationale. As a non-profit organization, PMI’s major sources of revenue are membership and examination fees. But since most candidates only take an examination once in their lifetime, the bulk of PMI’s revenues comes from membership fees. In other words, PMI must be able to sustain its operations and growth strategies primarily from its pool of members.

It’s also a little-known secret that the PMP examination is not difficult to pass; the fact that there are close to 400,000 certified PMPs globally can attest to that. However, the important thing that PMI has to constantly manage is the validity of its credential. In other words, what quality benchmarks attest to the PMP certification?

By implementing a rigorous screening process, peppered with blind audits, PMI has been able to maintain a successful certification program whose quality can withstand scrutiny from external auditors.

So what does this boil down to?

  • The PMP examination is not difficult to pass. It reflects knowledge of a framework that is based on industry best practice. If you have the prerequisite amount of experience, you will have encountered situations similar to those tested in the exam.
  • PMI puts in place rigorous certification requirements in order to defend the validity of the certification. As a PMP, I take pride in holding this credential, because I know that holders must have met rigorous experiential requirements before they could even take the exam.
  • PMI wants you to pass the screening stage. They need more people to take their exams, pass and become lifelong members. That’s how they can grow their organization and keep adding tremendous value to the greater project management profession at large.

Conclusion: The exam application process may take time and effort, but it is totally worth it. Also, if 400,000 other people could do this, then you most certainly can as well. The key here is strategy and planning, and I’ve put down some steps below to guide you through the process.

PMP Application Steps:

  1. Update your resume
  2. Start looking for previous work experience that qualifies as projects
    1. Definition: Temporary/Unique Deliverable
    2. Role: Show career progression > role in earlier projects as a contributor and progressing to role of PM in more recent projects
    3. Identify 7-10 projects
    4. Each project should involve a minimum of 300 hours of effort (3 months – 1 yr)
    5. Each project requires a description:
      1. Scope/scale of project: x customers, y sites, z users
      2. Unique deliverable: what tangible product/service/result was the customer left with? Reports, documentation, application/site, process
      3. Role: What did you do?
      4. Responsibilities:
        1. Early career: Technical deliverables, design, contributing
        2. Lead to PM career: Reports, creating plans, identifying scope/risk
  3. Project hourly breakdown
    1. Out of 100% of hours in a project
      1. 10-15% initiating
      2. 15-25% planning
      3. 25 – 35% executing
      4. 10% monitoring & controlling
      5. 5%  closing

Let’s go through this in greater detail:

1. Update your resume:

You should make sure that your resume is up to date. Pay attention to dates and company information, as you will need this information when you upload all of your examination data to the PMI website.

2. Look for projects:

Now that you have an accurate copy of your resume, start listing out 7-10 sizeable projects from the most recent 5 or 8 years, depending on which category you fit into. (See my previous post for more information.) PMI places no limit on the number of projects that you can list in your application, but for the sake of expedience, most candidates find that 7-10 projects are a manageable number that they can work with effectively.

Each project should fall under PMI’s definition of a project. That is to say, the project should have a definite start date and end date (and not carry on indefinitely), and the project should have a unique end result or deliverable.

For each project, you will need to obtain the following information that I have listed in this example:

Project Title: Global Infrastructure Migration
Company Name: Monkey Business Inc.
(Company Address and phone number)
Project Duration: 13 months
Hours: 1500 hours
Project Start and Finish: December 2005 – January 2007
Project Description:
1. Scope: 20 locations globally, 3000 users, 350 servers. Team size:  30 IT team members globally. New York was one of six network hub locations within the global network.  I was responsible for the network sites in the North-East region (New York, New Hampshire, Boston).
2. Deliverables: Deliverables included the consolidation and migration of three Windows 2003 Active Directory Domains across 20 sites, 3000 users and 350 servers spanning a global network with offices in North America, Europe and Asia; integration of Exchange 2000 and Exchange 2003 messaging platforms; restructuring of network addressing and routing at offices in the North-East region; deployment of VPN connectivity to branch offices; collaborating with corporate engineers at several key network locations within the global IT infrastructure.  Responsible for updating project team during Weekly status update meetings with IT team, authoring technical elements of project plan, providing duration and cost estimates to project manager.

3. Project Durations

PMI categorizes project effort into five process groups: Initiating, Planning, Executing, Monitoring and Controlling, and Closing. I’ve provided some guidelines above on how you could break down your project effort into the different process groups. Depending on the phase and scale of the individual project you were working on, as well as your role, these proportions will vary. I’ve listed two examples below:

Example 1:

Role: Project contributor.
> I was an engineer and performed a lot of implementation work.
Hours: 1500
Initiating: (5%) 75 hours > I attended the kick-off meeting
Planning: (5%) 75 hours > I was involved in some of the initial design and planning work
Executing: (55%) 825 hours > I was heavily involved in the implementation of all sites
Monitoring and Controlling: (20%) 300 hours > I attended all weekly status meetings and provided regular status updates to the PM over the 13 month project.
Closing: (15%) 225 hours > I was involved in the final user acceptance testing and project closeout activities.

Example 2:

Role: Project Manager.
> I was responsible for the deliverables, reporting and risks for this project
Hours: 1500
Initiating: (15%) 225 hours > I attended the kick-off meeting
Planning: (30%) 450 hours > I managed all aspects of the project planning deliverables. I developed the project Budgets, Scope and Timeline and finalized the project risk matrix.
Executing: (10%) 150 hours > I performed Quality Assurance and audits
Monitoring and Controlling: (25%) 375 hours > I was responsible for monitoring project status, compiling progress reports and providing executive summaries to sponsors.
Closing: (20%) 300 hours > I was responsible for final delivery of the project and contractual signoff of all deliverables.

Conclusion:

This seems like a lot of information to put together for each project, but by helping the application reviewers understand the scale and scope of your projects, as well as what your specific responsibilities were, you make it easier for them to gauge whether you’re qualified to pass the screening. Don’t forget that the reviewers may not share your industry-specific experience, so try to stay away from industry terminology and nomenclature.

Exchange 2013 Brief – Mailbox Audit Logging

Executive Overview

Because e-mail is so widespread and messages may contain sensitive information with a high business impact, or personal information, many IT departments need to be able to track access to mailboxes. Mailbox audit logging enables an organization to identify mailbox access by mailbox owners, delegates and administrators.

Notable Features

  • Mailbox Audit Logon Types
  • Mailbox Audit Log

Architecture/Components

  • Mailbox Audit Logon Types: In Exchange 2013, you can distinguish between three classes of users when they access a mailbox. These classes are:
    • Mailbox Owners: The account designated to access the mailbox. (Primarily Users)
    • Mailbox Delegates: Alternate accounts that have been granted permissions to access a mailbox
    • Administrators: Administrators typically access a mailbox in three instances: first, when In-Place eDiscovery is used to search a mailbox; second, when the New-MailboxExportRequest cmdlet is used to export a mailbox; and third, when the Microsoft Exchange Server MAPI Editor is used to access a mailbox.
  • Mailbox Audit Logs: Mailbox audit logs are generated for each mailbox that has mailbox audit logging enabled. Log entries are retained by default for 90 days in the Audits subfolder of the audited mailbox’s Recoverable Items folder (see the short sketch after this list for adjusting the retention period). Mailbox audit logs allow you to specify what types of information should be logged for a specific logon type. These include:
    • User Actions (Accessing, copying, creating, moving or deleting a message)
    • Performing SendAs or SendOnBehalf actions
    • Reading or previewing a message
    • Client IP address
    • Client Host name
    • Process that client used to access the mailbox
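
As a minimal sketch of the retention point above (the mailbox name is a placeholder), the following commands extend the default 90-day retention of audit entries to 180 days and then list which owner, delegate and admin actions are currently being captured:

# Keep audit entries for 180 days instead of the default 90
Set-Mailbox -Identity "Road Chimp" -AuditLogAgeLimit 180.00:00:00
# Review the current audit configuration for the mailbox
Get-Mailbox -Identity "Road Chimp" | Format-List Audit*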

Common Administrative Tasks

  1. Enabling or Disabling Mailbox Audit Logging: via EAC or PowerShell
    Set-Mailbox -Identity “Road Chimp” -AuditEnabled $true to enable &
    Set-Mailbox -Identity “Road Chimp” -AuditEnabled $false to disable
  2. Enabling/Disabling Mailbox Audit Logging for various logon types:
    Set-Mailbox -Identity “Road Chimp” -AuditOwner <actions> or
    Set-Mailbox -Identity “Road Chimp” -AuditDelegate <actions> or
    Set-Mailbox -Identity “Road Chimp” -AuditAdmin <actions>
    (each parameter takes a comma-separated list of actions to audit, for example HardDelete,SoftDelete)
  3. Verify Mailbox Audit Logging was configured: via Powershell
    Get-Mailbox “Road Chimp” | Format-List *Audit*
  4. Create a Mailbox Audit Log Search: via EAC or PowerShell
    New-MailboxAuditLogSearch “Admin and Delegate Access” -Mailboxes “Road Chimp”,”Chief Peeler” -LogonTypes Admin,Delegate -StartDate 1/1/2012 -EndDate 12/01/2012 -StatusMailRecipients “auditors@chimpcorp.com”
  5. Searching Mailbox Audit Log for a specific search term: via EAC or PowerShell
    Search-MailboxAuditLog -Identity “Road Chimp” -LogonTypes Admin,Delegate -StartDate 1/1/2012 -EndDate 12/31/2012 -ResultSize 2000
  6. Bypass a User Account from Mailbox Audit Logging: via EAC or Powershell
    Set-MailboxAuditBypassAssociation -Identity “Road Chimp” -AuditBypassEnabled $true
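
For day-to-day use, the individual tasks above can be combined. Here is a minimal sketch (mailbox names, audited actions and dates are placeholders) that bulk-enables auditing on all user mailboxes and then pulls a month of delegate activity for one mailbox:

# Enable audit logging on every user mailbox and capture delegate send and delete actions
Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
    Set-Mailbox -AuditEnabled $true -AuditDelegate SendAs,SendOnBehalf,HardDelete
# Search one month of audit entries for delegate access to a single mailbox
Search-MailboxAuditLog -Identity "Road Chimp" -LogonTypes Delegate -StartDate 11/1/2012 -EndDate 12/1/2012 -ShowDetails -ResultSize 1000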

Top PowerShell Commands/Tools:

– Set-Mailbox -AuditEnabled
– Set-Mailbox -AuditDelegate |AuditAdmin | AuditOwner
– Get-Mailbox

References/Links

Technet: Article on Mailbox Audit Logging
Cmdlets: For Mailbox Audit Logging

Exchange 2013 Brief – Data Loss Prevention (DLP)

Executive Overview

DLP capabilities help you protect your sensitive data and inform users of your policies and regulations. DLP can also help you prevent users from mistakenly sending sensitive information to unauthorized people. When you configure DLP policies, you can identify and protect sensitive data by analyzing the content of your messaging system, which includes numerous associated file types. The DLP policy templates supplied in Exchange 2013 are based on regulatory standards, such as those covering personally identifiable information (PII) and the Payment Card Industry Data Security Standard (PCI-DSS). DLP is extensible, which allows you to include other policies that are important to your organization. Additionally, the new Policy Tips capability allows you to inform users about policy violations before sensitive data is sent.

Notable Features

  • DLP Policies
  • Sensitive Information Types
  • Policy Detection and Reporting
  • Policy Tips

Architecture/Components

The transport rule agent (TRA) is used in Exchange 2013 to invoke deep message content scanning and also to apply policies defined as part of Exchange Transport Rules.

  • DLP Policies: These policies contain sets of conditions, comprising transport rules, actions and exceptions. Conditions can be configured from scratch or modified from pre-existing policy templates in Exchange 2013. There are three supported methods to create DLP policies:
    • Create a DLP policy from an existing policy template: At the time of writing, Exchange 2013 supplies over 40 policy templates to support a number of compliance requirements from various countries and jurisdictions, such as GLB and PCI-DSS.
    • Import a pre-built policy file from outside your organization: Exchange 2013 allows organizations to use DLP policies created by independent software vendors by importing these policies directly into Exchange as XML files. To define your own DLP policy template files, you must first define an XML schema (read here); then you can define sensitive information rule types (read here).
    • Create a custom policy from scratch: Exchange 2013 provides the granularity to define a DLP policy to match an organization’s requirements for monitoring certain types of data.
  • Sensitive Information Types: DLP now has the ability to perform deep content analysis via keyword matches, dictionary matches, regular expression evaluation, and other content examination to detect content that violates organizational DLP policies. Sensitive information rule types augment the existing transport rules framework and allow you to apply messaging policies to email messages that flow through the transport pipeline in the Transport service on Mailbox servers and on Edge Transport servers. Read my article on Exchange Transport architecture.
  • Policy Detection and Reporting: Exchange 2013 provides availability and access to information that identifies policy violations occurring within the DLP environment. This information is made available via the Message Tracking Logs. The AgentInfo Event is used to add DLP related entries in the message tracking log. A single AgentInfo event will be logged per message describing the DLP processing applied to the message. An incident report can be created for each DLP policy rule set via the Generate Incident Report feature in the EAC.
  • Policy Tips: Enable you to notify email senders that they are about to violate one of the DLP policies before they send the offending message. Click here for more information.
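
To make the sensitive information type mechanism above more concrete, here is a minimal sketch (the policy name, rule name and recipient address are placeholders): it lists the built-in policy templates, creates a custom policy in audit mode, and then adds a transport rule that detects credit card numbers and generates an incident report:

# List the DLP policy templates shipped with Exchange 2013
Get-DlpPolicyTemplate | Format-Table Name
# Create a custom DLP policy in audit-only mode
New-DlpPolicy -Name "Payment Card Data" -Mode Audit
# Transport rule that fires on the built-in "Credit Card Number" sensitive information type
New-TransportRule -Name "Flag credit card numbers" -MessageContainsDataClassifications @{Name="Credit Card Number"} -GenerateIncidentReport "auditors@chimpcorp.com" -IncidentReportContent Sender,Subject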

Common Administrative Tasks

  1. Create a DLP policy from a Template: To use the built-in templates, the policy must be configured via the EAC. Read this article.
  2. Import a DLP policy from a File: Via EAC or PowerShell
    Import-DlpPolicyCollection -FileData ([Byte[]]$(Get-Content -Path “C:\Doc\DLP Backup.xml” -Encoding Byte -ReadCount 0))
  3. Create a custom DLP policy without any rules: This must be configured via EAC
  4. Export a DLP policy:  Via EAC or PowerShell
    Export-DlpPolicyCollection
  5. Create a custom DLP policy: Via EAC or PowerShell
    New-DlpPolicy “Employee IDs”
  6. View details of an existing DLP policy: Via EAC or PowerShell
    Get-DlpPolicy “Employee IDs” | Format-List
  7. Change a DLP policy: Via EAC or PowerShell
    Set-DlpPolicy “Employee IDs” -Mode (Audit|AuditAndNotify|Enforce)
  8. Delete a DLP policy: Via EAC or PowerShell
    Remove-DlpPolicy “Employee IDs”
  9. Import/Export a DLP policy: Via EAC or PowerShell
  10. Manage Policy Tips: Via EAC, for more information click here.
  11. Create a New Classification Rule Collection: via PowerShell
    New-ClassificationRuleCollection -FileData ([Byte[]]$(Get-Content -Path “C:\Doc\External Classification Rule Collection.xml” -Encoding Byte -ReadCount 0))
    † This action overwrites all pre-existing DLP policies that were defined in your organization, so make sure you back up your current DLP policy information first.
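
Tying the backup warning above to the export task, here is a minimal sketch (the file path is a placeholder) that saves the current DLP policy collection to disk before any import, and then switches an individual policy into full enforcement:

# Export the existing DLP policy collection and write it to an XML file
$backup = Export-DlpPolicyCollection
Set-Content -Path "C:\Doc\DLP Backup.xml" -Value $backup.FileData -Encoding Byte
# Move a policy from audit-only to enforcement once you are confident in its rules
Set-DlpPolicy "Employee IDs" -Mode Enforce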

Top PowerShell Commands/Tools:

– Set|Get|New|Remove -DlpPolicy
– Set|Get|New|Remove -ClassificationRuleCollection
– Export|Import -DlpPolicyCollection

References/Links

Command Reference for DLP
Microsoft Technet page on DLP in Exchange 2013