Cloud technology is deployed across a wide variety of industries and applications. The term ‘cloud’ has become so prevalent that we’ve devised additional terms to describe exactly which type of cloud we’re talking about. What’s your flavor? IaaS, PaaS or SaaS? Or perhaps Public, Private or Hybrid?
Regardless of the type of cloud you’re using or planning to implement, there’s no denying that storage is an essential component of every cloud architecture, one that simply cannot be overlooked. In this post, we will look at some of the most common uses of storage in the cloud and peel back the layers to discover exactly what makes them tick. Our goal is to come up with a yardstick for measuring storage design.
Drivers towards Cloud Storage adoption
What do Dropbox and Box Inc. have in common? Both companies are less than five years old, offer services predominantly centered around cloud storage and file sharing, and have been able to attract significant amounts of capital from investors. In fact, Dropbox raised $250 million at a $4 billion valuation, with Box Inc. raising another $125 million in mid-2012. It looks like Silicon Valley sees cloud storage services as a key piece in the future of cloud. So why is there such tremendous interest around cloud storage? Consumers are drawn to a number of benefits of using the cloud:
- Redundancy: Large clouds incorporate redundancy at every level. Your data is stored in multiple copies on multiple hard drives on multiple servers in multiple data centers in multiple locations (you get the picture).
- Geographical Diversity: With a global audience and global demand for your content, data can be placed physically closer to consumers by storing it at facilities in their country or region. This dramatically reduces round-trip latency, a common cause of sluggish Internet performance.
- Performance: Storage solutions in the cloud are designed to scale up dramatically to support events that may see thousands or millions more consumers accessing content over a short period of time. Many services provide guarantees on data throughput and transfer rates.
- Security & Privacy: Cloud storage solutions incorporate sophisticated data lifecycle management and security features that enable companies to fulfill their compliance requirements. More and more cloud providers are also providing services that are HIPAA compliant.†
- Cost: As clouds get larger, the per unit costs of storage go down, primarily due to Economies of Scale. Many service providers choose to pass on these cost savings to consumers as lower prices.
- Flexibility: The pay-as-you-use model removes concerns about capacity planning and resources wasted due to cyclical variations in usage.
†It should be noted that a draft opinion paper released by the EU Data Protection Working Party, while not explicitly discouraging cloud adoption, recommended that public sector agencies perform a thorough risk analysis prior to migrating to the cloud. You can read the report here.
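To make the flexibility and cost points concrete, here is a toy cost model, in Python, comparing provision-for-peak storage against pay-as-you-use billing under cyclical demand. All prices and demand figures below are purely illustrative assumptions, not vendor quotes:

```python
# Toy cost model: fixed provision-for-peak capacity vs. pay-as-you-use
# cloud storage. Prices and demand figures are illustrative only.

def fixed_cost(peak_gb, price_per_gb_month, months):
    """On-premises style: you must provision for peak demand up front."""
    return peak_gb * price_per_gb_month * months

def cloud_cost(monthly_usage_gb, price_per_gb_month):
    """Cloud style: you pay only for what you actually store each month."""
    return sum(usage * price_per_gb_month for usage in monthly_usage_gb)

# Cyclical demand: the busy season needs 1000 GB, quiet months far less.
usage = [200, 250, 300, 1000, 1000, 300, 200, 200, 250, 900, 1000, 400]

on_prem = fixed_cost(peak_gb=1000, price_per_gb_month=0.10, months=12)
cloud = cloud_cost(usage, price_per_gb_month=0.10)
print(f"Provision-for-peak: ${on_prem:.2f}")
print(f"Pay-as-you-use:     ${cloud:.2f}")
```

Under this toy demand curve, paying only for actual usage costs roughly half of provisioning for peak, which is exactly the wastage the pay-as-you-use model avoids.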
Storage Applications for the Cloud
We’ve listed some of the most common applications for cloud storage in this section:
- Backup: The cloud is perceived to be a viable replacement for traditional backup solutions, boasting greater redundancy and opportunities for cost savings. The Cloud backup market is hotly contested in both the consumer and enterprise markets.
- In the consumer market, cloud backup services like Dropbox, Microsoft SkyDrive and Google Drive take part of your local hard drive and sync it with the cloud. These pay-for-use services are on the rise, with Dropbox hosting data for in excess of 100 million users within four years of launching its service.
- In the enterprise space, Gartner’s Magic Quadrant for enterprise backup solutions featured several pure-play cloud backup providers, including Asigra, Acronis and i365. Even leading providers such as CommVault and IBM have launched cloud-based backup solutions. Amazon’s recently launched Glacier service provides a cost-effective backup tier for around $0.01 per gigabyte per month.
- File Sharing: File sharing services allow users to post files online and then share them with other users via a combination of Web links and apps. Services like MediaFire, Dropbox and Box offer a basic cloud backup solution with collaboration and link-sharing features. On the other end of the spectrum, full-blown collaboration suites such as Microsoft’s Office 365 and Google Apps feature real-time document editing and annotation services.
- Data Synchronization (between devices): Providers such as Apple’s iCloud, as well as a host of applications including the productivity app Evernote, allow users to keep files, photos and even music synchronized across an array of devices (desktop, phone, tablet, etc.), automatically propagating changes.
- Content Distribution: Content distribution networks (CDNs) are large networks of servers distributed across datacenters over the Internet. At one point or another, we’ve all used CDNs such as Akamai to enhance our Web browsing experience. Cloud providers such as the Windows Azure CDN and Amazon CloudFront offer affordable CDN services for serving everything from static files and images to streaming media to a global audience.
- Enterprise Content Management: Companies are gradually turning to the cloud to manage organizational compliance requirements such as eDiscovery and search. Vendors such as HP Autonomy and EMC provide services that feature secure encryption and de-duplication of data assets as well as data lifecycle management.
- Cloud Application Storage: The trend towards hosting applications in the cloud is driving innovations in how we consume and utilize storage. At the forefront are large cloud service providers such as Amazon and Microsoft, who have developed cloud storage services to meet specific application needs.
- Application Storage Services: Products like Amazon Simple Storage Service (S3) and Microsoft Windows Azure storage accounts support storage in a variety of formats (blob, queue and table data) and scale to very large sizes (up to 100 TB volumes). Storage services are redundant (at least three copies of each bit are stored) and can be accessed directly via HTTP, XML or a number of other supported protocols. Storage services also support encryption on disk.
- Performance-Enhanced Storage: Performance-enhanced storage emulates storage running on a SAN. Products like Amazon Elastic Block Store provide persistent, block-level network-attached storage that can be attached to running virtual machines; in some cases, VMs can even boot directly from these volumes. Users can provision performance for these volumes in terms of IOPS.
- Data Analytics Support: Innovative distributed file systems that support super-fast processing of data have been adapted to the cloud. For example, the Hadoop Distributed File System (HDFS) manages and replicates large blocks of data across a network of computing nodes, to facilitate the parallel processing of Big Data. The Cloud is uniquely positioned to serve this process, with the ability to provision thousands of nodes, perform compute processes on each node and then tear down the nodes rapidly, thus saving huge amounts of resources. Read how the NASA Mars Rover project used Hadoop on Amazon’s AWS cloud here.
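As a back-of-envelope illustration of the backup cost argument, the Glacier-style rate of roughly $0.01 per gigabyte per month quoted above works out as follows. Retrieval and request fees are deliberately ignored here, so treat this as a lower bound rather than a quote:

```python
# Back-of-envelope archive cost at the ~$0.01/GB-month rate quoted above
# for Amazon Glacier. Retrieval and request fees are ignored, so this is
# a lower bound, not a pricing quote.

def archive_cost(total_tb, price_per_gb_month=0.01, months=12):
    """Annual storage-only cost of keeping total_tb archived."""
    gb = total_tb * 1024
    return gb * price_per_gb_month * months

# Archiving 10 TB for a year:
print(f"${archive_cost(10):,.2f} per year")
```

At that rate, parking 10 TB of backups costs on the order of $1,200 a year in storage fees, which is why cold-archive tiers are attractive for data you rarely touch.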
Storage Architecture Basics
So how do these cloud-based services run? If we were to peek under the hood, we would see a basic architecture pretty similar to the diagram above. All storage architectures comprise a number of layers that work together to provide users with a seamless storage service. The different layers of a cloud architecture are listed below:
- Front End: This layer is exposed to end users and typically presents APIs that allow access to the storage. New protocols are constantly being introduced to increase the supportability of cloud systems, including Web service front ends built on REST principles, file-based front ends and even iSCSI support. So, for example, a user can use an app running on their desktop to perform basic functions such as creating folders, uploading and modifying files, defining permissions and sharing data with other users. Examples of access methods and sample providers are listed below:
- REST APIs: REST, or Representational State Transfer, is a stateless Web architecture model built upon communications between clients and servers. Examples include Microsoft Windows Azure storage and Amazon Simple Storage Service (S3).
- File-based Protocols: Protocols such as NFS and CIFS are supported by vendors like Nirvanix, Cleversafe and Zetta*.
- Middleware: The middleware or storage logic layer supports a number of functions, including data deduplication and reduction, as well as the placement and replication of data across geographical regions.
- Back End: The back-end layer is where the actual physical hardware lives; read and write instructions are issued to it through a hardware abstraction layer.
- Additional Layers: Depending on the purpose of the technology, there may be a number of additional layers
- Management Layer: This layer may support scripting and reporting capabilities to enhance the automation and provisioning of storage.
- Backup Layer: The cloud back-end layer can be exposed directly to API calls from snapshot and backup services. For example, Amazon’s Elastic Block Store (EBS) service supports an incremental snapshot feature.
- DR (Virtualization) Layer: DR service providers can attach storage to a hypervisor, enabling cloud storage data to be accessed by virtual hosts that are activated in a DR scenario. For example, the i365 cloud storage service automates the process of converting backups of server snapshots into a virtual DR environment in minutes.
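To make the front-end layer concrete, here is a minimal sketch of the REST idea: object storage operations map onto HTTP verbs against a URL that names the bucket and object key. The endpoint and bucket names below are hypothetical, the request is only constructed (never sent), and a real service such as S3 or Azure storage would additionally require authentication headers, e.g. a request signature:

```python
# Minimal sketch of a REST front end for object storage: a PUT against a
# URL that addresses the bucket and key. Endpoint and bucket names are
# hypothetical; a real service also requires signed auth headers.
import urllib.request

def build_put_request(endpoint, bucket, key, data: bytes):
    """Build (but do not send) a PUT request that uploads one object."""
    url = f"https://{bucket}.{endpoint}/{key}"
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Content-Type", "application/octet-stream")
    return req

req = build_put_request("storage.example.com", "my-bucket",
                        "reports/2013/q1.pdf", b"...file bytes...")
print(req.get_method(), req.full_url)
```

GET, DELETE and HEAD requests against the same URL scheme would retrieve, remove and stat the object, which is what makes REST front ends so easy to consume from any HTTP-capable client.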
This brief post provided a simple snapshot of cloud storage, its architecture and a number of common applications for storage in the cloud. If you’d like to read more, please visit some of the links provided below.
Roadchimp, signing out! Ook!
* Research Paper on Cloud Storage Architectures here.
Read a Techcrunch article on the growth of Dropbox here.
Informationweek Article on Online Backup vs. Cloud Backup here.
Read more about IBM Cloud backup solutions here.
Read about Commvault Simpana cloud backup solutions.
A number of governments have implemented roadmaps and strategies that ultimately require their ministries, departments and agencies to default to Cloud computing solutions first when evaluating IT implementations. In this article, we evaluate the adoption of cloud computing in government and discuss some of the positive and negative implications of moving government IT onto the cloud.
In this section, we look at a number of cloud initiatives that have been gaining traction in the public sector:
- Office Productivity Services – The New Zealand Government has identified office productivity services as the first set of cloud-based services to be deployed across government agencies. Considered low-hanging fruit, and fueled by successes in migrating perimeter services like anti-spam onto the cloud, many organizations see email and collaboration as a natural next step of cloud adoption. Vendors leading the charge include Microsoft with Office 365 for Government, with successful deployments at federal agencies like the USDA, Veterans Affairs, the FAA and the EPA, as well as the cities of Chicago, New York and Shanghai. Other vendor solutions include Google Apps for Government, which supports the US Department of the Interior.
- Government Cloud Marketplaces – A number of governments have signaled the need to establish cloud marketplaces, where a federated marketplace of cloud service providers can support a broad range of users and partner organizations. The UK government called for the development of a government-wide app store, as did the New Zealand Government in a separate cabinet paper on cloud computing in August 2012. The US government plans to establish a number of cloud services marketplaces, including the GSA’s info.apps.gov and the DOE’s YOURcloud, a secure cloud services brokerage built on Amazon’s EC2 offering. (link) The image below shows the initial design for the UK government app store.
- Making Data Publicly Available – The UK Government is readily exploiting opportunities to make available the terabytes of public data that can be used to develop useful applications. A recent example is the release of Met Office UK weather information to the public via Microsoft Azure’s cloud hosting platform. (link)
- Government Security Certification – A 2012 Government Cloud Survey conducted by KPMG listed security as the greatest concern for governments when it comes to cloud adoption, and noted that governments are taking measures to manage security concerns. For example, the US General Services Administration subjects each successful cloud vendor to a battery of tests that include an assessment of access controls.
Canadian Government Cloud Architectural Components
The strategic value of cloud computing can be summed up into a number of key elements in government. We’ve listed a few that appear on the top of our list:
- Enhancing agility of government – Cited as a significant factor in cloud adoption, cloud computing promises rapid provisioning and elasticity of resources, reducing turnaround times on projects.
- Supporting government policies for the environment – Reduced data center footprints and lower energy consumption for cooling bring tangible environmental benefits in terms of reduced greenhouse gas emissions and potential reductions in allocations of carbon credits.
- Enhancing transparency of government – Cloud allows the development of initiatives that can make government records accessible to the public, opening up tremendous opportunities for innovation and advancement.
- Efficient utilization of resources – By adopting a pay-for-use approach towards computing, stakeholders are encouraged to architect their applications to be more cost effective. This means that unused resources are freed up to the common pool of computing resources.
- Reduction in spending – Our research indicates that technology decision makers do not consider this particular element a significant driver of the move to cloud computing; however, some of the numbers being bandied about in terms of cost savings are significant (billions of dollars) and can appeal to any constituency.
We’ve listed a number of positive points towards cloud adoption. These may not be relevant in every use case, but worthwhile for a quick read:
- Resource Pooling – leads to enhanced efficiency, reduced energy consumption and cost savings from economies of scale
- Scalability – Unconstrained capacity allows for more agile enterprises that are scalable, flexible and responsive to change
- Reallocation of human resources – Freed up IT resources can focus on R&D, designing new solutions that are optimized in cloud environments and decoupling applications from existing infrastructures.
- Cost containment – Cloud computing requires the adoption of a ‘you pay for what you use’ model, which encourages thrift and efficiency. The transfer of CAPEX to OPEX also smoothes out cash-flow concerns in an environment of tight budgets.
- Reduce duplication and encourage re-use – Services designed to meet interoperability standards can be advertised in a cloud marketplace and become building blocks that can be used by different departments to construct applications
- Availability – Cloud architecture is designed to be independent of the underlying hardware infrastructure and promotes scalability and availability paradigms such as homogeneity and decoupling
- Resiliency – The failure of one node of a cloud computing environment has no overall effect on information availability
A sound study should also include a review of the negative implications of cloud computing:
- Bureaucratic hindrances – when transitioning from legacy systems, data migration and change management can slow down the “on demand” adoption of cloud computing.
- Cloud Gaps – Applications and services that have specific requirements which are unable to be met by the cloud need to be planned for to ensure that they do not become obsolete.
- Risks of confidentiality – Isolation has been a long-practiced strategy for securing disparate networks. If you’re not connected to a network, there’s no risk of threats getting in. A common cloud infrastructure runs the risk of exploitation that can be pervasive since all applications and tenants are connected via a common underlying infrastructure.
- Cost savings do not materialize – The cloud is not a silver bullet for cost savings. We need to develop cloud-aligned approaches towards IT provisioning, operations and management. Applications need to be decoupled and re-architected for the cloud. Common services should be used in order to exploit economies of scale; applications and their underlying systems need to be tweaked and optimized.
Security was cited as a major concern (KPMG)
Where to start?
There is considerable research that indicates government adoption of cloud computing will accelerate in coming years. But to walk the fine line of success, what steps can be taken? We’ve distilled a number of best practices into the following list:
- Develop Roadmaps: Before governments can reap all of the benefits that cloud computing has to offer, they must first move along a continuum towards adoption. For that very purpose, a number of governments have developed roadmaps to chart a course of progression towards the cloud. Successful roadmaps featured the following components:
- A technology vision of Cloud Computing Strategy success
- Frameworks to support seamless implementation of federated community cloud environments
- Confidence in Security Capabilities – Demonstration that cloud services can handle the required levels of security across stakeholder constituencies in order to build and establish levels of trust.
- Harmonization of Security requirements – Differing security standards will impede and obstruct large-scale interoperability and mobility in a multi-tenanted cloud environment, therefore a common overarching security standard must be developed.
- Management of Cloud outliers – Identify gaps where Cloud cannot provide adequate levels of service or specialization for specific technologies and application and identify strategies to deal with these outliers.
- Definition of unique mission/sector/business Requirements (e.g. 508 compliance, e-discovery, record retention)
- Development of cloud service metrics such as common units of measurement in order to track consumption across different units of government and allow the incorporation of common metrics into SLAs.
- Implementation of Audit standards to promote transparency and gain confidence
- Create Centers of Excellence: Cloud computing reference architectures (e.g. the NIST Reference Architecture), business case templates and best practices should be developed so that cloud service vendors can map their offerings to them, making it easier to compare services.
- Cloud First Policies: Implement policies that mandate that all departments across government consider cloud options first when planning new IT projects.
The adoption of cloud services holds great promise. But because objectives such as economies of scale can only be achieved through widespread adoption, with all its far-reaching consequences, a comprehensive plan, combined with standardization and transparency, becomes an essential element of success.
We hope this brief has been useful. Ook!
Microsoft’s Cloud Computing in Government page
Cisco’s Government Cloud Computing page
Amazon AWS Cloud Computing page
Redhat cloud computing roadmap for government pdf
US Government Cloud Computing Roadmap Vol 1.
Software and Information Industry updates on NIST Roadmap
New Zealand Government Cloud Computing Strategy link
Australian Government Cloud Computing Strategic Direction paper
Canadian Government Cloud Computing Roadmap
UK Government Cloud Strategy Paper
GCN – A portal for Cloud in Government
Study – State of Cloud Computing in the public sector
The cloud offers consumers more options for deploying their applications and is attractive from the perspective of predictable costs, reliability and scalability. However, not every component of an organization’s environment may be fully suited to the cloud, for a variety of reasons including confidentiality and compliance. As organizations increasingly move parts of their IT onto the cloud while retaining core aspects of their business within their own datacenters, it becomes important to understand how Exchange 2013 interoperates between on-premises and cloud environments. Exchange 2013 is designed from the ground up to support coexistence with the cloud. From both the administrator’s and the end user’s perspective, Exchange 2013 and Office 365 provide a seamless and feature-rich experience. We will explore some of these features in this post.
- Secure mail routing
- Mail routing with the same domain space
- Unified GAL and Free/Busy sharing
- Centralized Egress of Messages
- Unified OWA login
- Centralized Management
- Mailbox Migrations
- Cloud-based Message Archiving
- Architecture Components: A hybrid Exchange 2013 environment comprises the following components.
- Exchange servers: You may have a combination of Exchange 2013, Exchange 2010 or earlier Exchange Servers and roles deployed on-premises. You will need a minimum of one Exchange 2013 Client Access and one Exchange 2013 Mailbox Server if you deploy Exchange 2013 on-premises in your organization.
- Microsoft Office 365: This is Microsoft’s feature-rich cloud-based service that includes cloud-based email, instant messaging and online conferencing, Office Web Apps (including Word, Excel, PowerPoint and OneNote) and email archiving. You will need the Midsize Business and Enterprise Plan (E3) in order to configure Active Directory synchronization with your on-premises environment. You will also need to configure an Exchange Online organization to enable hybrid deployments.
- Exchange Online Protection (EOP): EOP is included in all Office 365 Enterprise tenant subscriptions. EOP enables secure message delivery between cloud and on-premises Exchange Organizations and can also be configured to manage message routing between the Internet and your on-premises Exchange Organization.
- Hybrid Configuration Wizard: The Hybrid Configuration Wizard is used to manage the hybrid configuration through the Exchange Administration Center (EAC). The wizard first performs prerequisite and topology checks, tests account credentials between the on-premises and Exchange Online organizations, and then performs the necessary configuration changes to create and enable the hybrid deployment; this includes adding the HybridConfiguration object to the on-premises Active Directory environment.
- Microsoft Federation Gateway: On-premises Exchange organizations must configure a federation trust with the Microsoft Federation Gateway before they can enable a hybrid configuration with an Exchange Online organization. The Microsoft Federation Gateway acts as a trust broker between the on-premises Exchange and Exchange Online organizations, and federation trusts can be configured manually or via the Hybrid Configuration Wizard. A federation trust is necessary for your online and on-premises users to be able to share free/busy information.
- Active Directory Synchronization: AD synchronization enables a unified GAL across online and on-premises users in your Exchange deployment. The AD Sync feature requires you to download and install the tool on a separate server (physical or virtual) in your on-premises environment. Note that the default limit of 20,000 objects that can be replicated between on-premises Active Directory and the online organization can be increased by contacting the Microsoft Online Services team.
- Active Directory Federation Services (Optional): An AD FS implementation enables users in your organization to use their existing network credentials to log on to the on-premises and Exchange Online organizations using single sign-on. This is facilitated by configuring trusts between the on-premises Active Directory forest and the Microsoft Online ID.
- Certificates: To support secure communications between the on-premises and Online environments, Microsoft recommends that you purchase a Subject Alternative Name (SAN) SSL certificate that can be used to secure access to the following services:
- Primary shared SMTP domain: This is your primary email domain and needs to be installed on local Client Access and Mailbox servers, e.g. chimpcorp.com
- Autodiscover: The Autodiscover service supports the configuration of remote clients (Outlook and Exchange ActiveSync), is installed on your CAS servers, and should be provisioned according to the external Autodiscover FQDN of your Exchange 2013 CAS server, e.g. autodiscover.chimpcorp.com
- Transport: This is installed on your Exchange 2010 SP3 Edge Transport servers and matches the external FQDN of your edge transport servers, e.g. edge.chimpcorp.com
- AD FS (optional): A certificate is required to establish trust between web clients and federation server proxies and to sign and decrypt security tokens.
- Exchange Federation: A self-signed certificate is required to establish a secure connection between the on-premises Exchange 2013 servers and the Microsoft Federation Gateway.
- Client Access: An SSL certificate is required for use by clients such as OWA, Exchange ActiveSync and Outlook Anywhere, e.g. webmail.chimpcorp.com
- Message Transport: Messages between the on-premises and online organizations are encrypted, authenticated and transferred via Transport Layer Security (TLS). Depending on how you choose to configure your hybrid environment, messages can flow in one of the following ways:
- Centralized Mail Transport: All Internet-bound email is delivered via the on-premises Exchange Organization. The Exchange on-premises organization is responsible for message transport and relays all Internet messages from the Exchange Online organization. This configuration is preferable if your organization has compliance or regulatory requirements and must monitor a single point of egress for all messages outside of your organization. Ensure that you provision sufficient bandwidth between the on-premises and online environments to process all outbound messages.
- Online-centric Transport: All Internet-bound email in the Organization is delivered via the Exchange Online organization. In this case, all external outbound messages from the on-premises Exchange Organization are relayed to servers in the Exchange Online organization. This is preferable if you wish to use Microsoft’s Exchange Archiving and Exchange Online Protection (EOP) solutions, as it supports the most efficient flow of messaging traffic.
- Independent Message Routing: All Internet-bound email from recipients in the Exchange Online organization is delivered directly to the Internet, taking an independent path from your on-premises Exchange 2013 organization.
- Edge Routing: The on-premises endpoint for mail flow between the Exchange and Exchange Online organizations must be an Exchange 2013 CAS server or an Exchange 2010 SP3 Edge Transport server. Communications between Exchange Online and older versions of Exchange, SMTP hosts or appliances are not supported.
- Client Access: In Exchange 2013 client access is supported from Outlook via RPC/HTTP and Outlook Web App. Clients connecting to the on-premises Client Access server are redirected to either the on-premises Exchange 2013 Mailbox Server or provided with a link to logon to the Exchange Online organization.
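The three mail-flow options above can be summarized in a small sketch. The mode names here are ours, for illustration only; they are not Exchange configuration values:

```python
# Toy model of the three hybrid mail-flow options described above.
# Mode names are illustrative labels, not Exchange configuration values.

def internet_egress(mode, sender_location):
    """Return which organization relays an Internet-bound message."""
    if mode == "centralized":      # all mail exits via the on-premises org
        return "on-premises"
    if mode == "online-centric":   # all mail exits via Exchange Online
        return "online"
    if mode == "independent":      # each side sends its own mail directly
        return sender_location
    raise ValueError(f"unknown mode: {mode}")

# An Exchange Online sender under centralized transport still egresses
# through the on-premises organization:
print(internet_egress("centralized", "online"))
print(internet_egress("independent", "online"))
```

The sketch makes the compliance trade-off visible: only the centralized mode guarantees a single on-premises egress point for every message, at the cost of extra bandwidth between the two organizations.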
Common Administrative Tasks
- Set up an Office 365 account: Via the Office 365 online portal here.
- Enabling a Hybrid Deployment: Use the Hybrid Deployment Wizard in the EAC.
- Configure or modify the Hybrid Deployment Options: Via the Hybrid Deployment Wizard in the EAC or Powershell
Set-HybridConfiguration -Features OnlineArchive,MailTips,OWARedirection,FreeBusy,MessageTracking
- Verify the configuration was successful: Via PowerShell, using Get-HybridConfiguration
- Sharing Free/Busy information: Steps on how to configure Federation Trusts
- Configuring Active Directory Synchronization: Steps to download the AD Synchronization tool from the Office 365 portal.
Top PowerShell Commands/Tools:
Click here to read more briefs on Exchange 2013.
PowerShell Command Reference for Hybrid Configuration
Technet: Article on the Hybrid Configuration Wizard
Technet: Article on Hybrid Certificate Requirements
Technet: Article on configuring message routing
Labs on AD Synchronization
Because e-mail is so prevalent, and because messages may contain sensitive business information or personal data of high impact, many IT departments need to be able to track access to mailboxes. Mailbox audit logging enables an organization to identify mailbox access by mailbox owners, delegates and administrators.
- Mailbox Audit Logon Types
- Mailbox Audit Log
- Mailbox Audit Logon Types: In Exchange 2013, you can distinguish between three classes of users when they access a mailbox. These classes are:
- Mailbox Owners: The account designated to access the mailbox. (Primarily Users)
- Mailbox Delegates: Alternate accounts that have been granted permissions to access a mailbox
- Administrators: Administrators typically access a mailbox in three instances: first, when In-Place eDiscovery is used to search a mailbox; second, when the New-MailboxExportRequest cmdlet is used to export a mailbox; and third, when the Microsoft Exchange Server MAPI Editor is used to access a mailbox.
- Mailbox Audit Logs: Mailbox audit logs are generated for each mailbox that has mailbox audit logging enabled. Log entries are retained in the mailbox by default for 90 days, in the Audits subfolder of the audited mailbox’s Recoverable Items folder. Mailbox audit logs allow you to specify what types of information should be logged for a specific logon type. These include:
- User Actions (Accessing, copying, creating, moving or deleting a message)
- Performing SendAs or SendOnBehalf actions
- Reading or previewing a message
- Client IP address
- Client host name
- Process the client used to access the mailbox
Common Administrative Tasks
- Enabling or Disabling Mailbox Audit Logging: via EAC or PowerShell
Set-Mailbox -Identity "Road Chimp" -AuditEnabled $true to enable, and
Set-Mailbox -Identity "Road Chimp" -AuditEnabled $false to disable
- Enabling/Disabling Mailbox Audit Logging for various logon types:
Set-Mailbox -Identity "Road Chimp" -AuditOwner <actions>,
Set-Mailbox -Identity "Road Chimp" -AuditDelegate <actions> or
Set-Mailbox -Identity "Road Chimp" -AuditAdmin <actions>, where <actions> is a comma-separated list of the operations to audit (e.g. Update,SoftDelete,HardDelete)
- Verify Mailbox Audit Logging was configured: via Powershell
Get-Mailbox "Road Chimp" | Format-List *audit*
- Create a Mailbox Audit Log Search: via EAC or PowerShell
New-MailboxAuditLogSearch "Admin and Delegate Access" -Mailboxes "Road Chimp","Chief Peeler" -LogonTypes Admin,Delegate -StartDate 1/1/2012 -EndDate 12/31/2012 -StatusMailRecipients "firstname.lastname@example.org"
- Searching Mailbox Audit Log for a specific search term: via EAC or PowerShell
Search-MailboxAuditLog -Identity “Road Chimp” -LogonTypes Admin,Delegate -StartDate 1/1/2012 -EndDate 12/31/2012 -ResultSize 2000
- Bypass a User Account from Mailbox Audit Logging: via EAC or Powershell
Set-MailboxAuditBypassAssociation -Identity “Road Chimp” -AuditBypassEnabled $true
Top PowerShell Commands/Tools:
– Set-Mailbox -AuditEnabled
– Set-Mailbox -AuditDelegate |AuditAdmin | AuditOwner
Technet: Article on Mailbox Audit Logging
Cmdlets: For Mailbox Audit Logging
DLP capabilities help you protect your sensitive data and inform users of your policies and regulations. DLP can also help you prevent users from mistakenly sending sensitive information to unauthorized people. When you configure DLP policies, you can identify and protect sensitive data by analyzing the content of your messaging system, which includes numerous associated file types. The DLP policy templates supplied in Exchange 2013 are based on regulatory standards such as those covering personally identifiable information (PII) and the Payment Card Industry Data Security Standard (PCI-DSS). DLP is extensible, which allows you to include other policies that are important to your organization. Additionally, the new Policy Tips capability allows you to inform users about policy violations before sensitive data is sent.
- DLP Policies
- Sensitive Information Types
- Policy Detection and Reporting
- Policy Tips
The transport rule agent (TRA) is used in Exchange 2013 to invoke deep message content scanning and also to apply policies defined as part of Exchange Transport Rules.
- DLP Policies: These policies are sets of transport rules comprising conditions, actions, and exceptions. Rules can be configured from scratch or modified from pre-existing policy templates in Exchange 2013. There are three supported methods to create DLP policies:
- Create a DLP policy from an existing policy template: At the time of writing, Exchange 2013 supports over 40 policy templates covering compliance requirements from various countries and jurisdictions, such as GLB and PCI-DSS.
- Import a pre-built policy file from outside your organization: Exchange 2013 allows organizations to use DLP policies created by independent software vendors by importing these policies directly into Exchange as XML files. To define your own DLP policy template files, you must first define an XML schema (read here); then you can define sensitive information rule types (read here).
- Create a custom policy from scratch: Exchange 2013 provides the granularity to define a DLP policy to match an organization’s requirements for monitoring certain types of data.
- Sensitive Information Types: DLP now has the ability to perform deep content analysis via keyword matches, dictionary matches, regular expression evaluation, and other content examination to detect content that violates organizational DLP policies. Sensitive information rule types augment the existing transport rules framework and allow you to apply messaging policies to email messages that flow through the transport pipeline in the Transport service on Mailbox servers and on Edge Transport servers. Read my article on Exchange Transport architecture.
- Policy Detection and Reporting: Exchange 2013 provides availability and access to information that identifies policy violations occurring within the DLP environment. This information is made available via the Message Tracking Logs. The AgentInfo Event is used to add DLP related entries in the message tracking log. A single AgentInfo event will be logged per message describing the DLP processing applied to the message. An incident report can be created for each DLP policy rule set via the Generate Incident Report feature in the EAC.
- Policy Tips: enable you to notify email senders that they are about to violate one of the DLP policies before they send the offending message. Click here for more information.
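Sensitive information types and Policy Tips surface through ordinary transport rules. A sketch, in which the rule name, report mailbox, and thresholds are illustrative ("Credit Card Number" is one of the built-in sensitive information types):

```powershell
# Flag outbound mail containing at least one credit card number,
# show the sender a Policy Tip, and generate an incident report
New-TransportRule -Name "PCI-DSS outbound check" `
    -SentToScope NotInOrganization `
    -MessageContainsDataClassifications @{Name="Credit Card Number"; MinCount="1"} `
    -NotifySender NotifyOnly `
    -GenerateIncidentReport "compliance@chimpcorp.com" `
    -IncidentReportContent Sender,Recipients,Subject
```

Switching -NotifySender to one of the Reject* values would block the message instead of merely warning the sender.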
Common Administrative Tasks
- Create a DLP policy from a Template: To use existing templates, the DLP must be configured via the EAC. Read this article.
- Import a DLP policy from a File†: Via EAC or PowerShell
Import-DlpPolicyCollection -FileData ([Byte[]]$(Get-Content -Path "C:\Doc\DLP Backup.xml" -Encoding Byte -ReadCount 0))
- Create a custom DLP policy without any rules: This must be configured via EAC
- Export a DLP policy: Via EAC or PowerShell
- Create a custom DLP policy: Via EAC or PowerShell
New-DlpPolicy "Employee IDs"
- View details of an existing DLP policy: Via EAC or PowerShell
Get-DlpPolicy "Employee IDs" | Format-List
- Change a DLP policy: Via EAC or PowerShell
Set-DlpPolicy "Employee IDs" -Mode (Audit|AuditAndNotify|Enforce)
- Delete a DLP policy: Via EAC or PowerShell
Remove-DlpPolicy "Employee IDs"
- Import/Export a DLP policy: Via EAC or PowerShell
- Manage Policy Tips: Via EAC, for more information click here.
- Create a New Classification Rule Collection: via PowerShell
New-ClassificationRuleCollection -FileData ([Byte[]]$(Get-Content -Path "C:\Doc\External Classification Rule Collection.xml" -Encoding Byte -ReadCount 0))
† This action overwrites all pre-existing DLP policies that were defined in your organization, so make sure you back up your current DLP policy information first.
Top PowerShell Commands/Tools:
– Set|Get|New|Remove -DlpPolicy
– Set|Get|New|Remove -ClassificationRuleCollection
– Export|Import -DlpPolicyCollection
Command Reference for DLP
Microsoft Technet page on DLP in Exchange 2013
In-Place eDiscovery allows you to search mailbox data across your Exchange organization, preview search results, and copy them to a Discovery mailbox. Users in the Discovery Management role group can be delegated access to perform discovery searches without the need to grant them elevated privileges.
- Exchange Search and Keyword Query Language (KQL)
- Discovery Management Role group
- Discovery Mailboxes
- Discovery Search Actions
- eDiscovery Center
In-Place eDiscovery in Exchange 2013 supports the following:
- Exchange Search and Keyword Query Language (KQL): The content indexing feature of Exchange Search has been redesigned to provide greater integration with Microsoft Search Foundation and Microsoft Sharepoint 2013. By exposing the powerful federated search capabilities included in Sharepoint 2013, users can easily structure complex and efficient search queries. This article explains the Keyword Query Language (KQL) capabilities and syntax of Sharepoint 2013.
- Discovery Management Role group: This group consists of two management roles; the Mailbox Search Role, which allows a user to perform an In-place eDiscovery search; and the Legal Hold Role, which allows a user to place a mailbox in In-place hold or Litigation hold.
- Discovery mailboxes: These are used as target mailboxes during In-Place eDiscovery searches; the results of a search can be copied to these mailboxes. Discovery mailboxes cannot be repurposed as other types of mailboxes.
- Discovery Search Actions: Users can perform the following actions during a discovery search:
- Estimate search results: Obtain an estimate of the total size and number of items that will be returned by the search based on search criteria. Estimates are displayed in the details pane.
- Preview search results: Preview the results of a search by displaying messages returned from each mailbox searched.
- Copy search results: Copy messages returned in search results to a Discovery mailbox.
- eDiscovery Center: The eDiscovery Center site collection is part of SharePoint 2013 and provides features to help with the first half of the eDiscovery Reference Model (EDRM)—identification, preservation, collection, processing, and analysis; and is available on-premises or in the cloud. Using the eDiscovery Center, you can perform searches across SharePoint, Exchange and Lync content archived into Exchange. Click here for a great article on eDiscovery in Sharepoint.
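Tying these together: KQL keywords and property restrictions go straight into the -SearchQuery parameter of an In-Place eDiscovery search. A sketch, with illustrative names, addresses, and dates:

```powershell
# KQL property restrictions (from:, subject:) combined with boolean operators,
# scoped by date range and copied to a Discovery mailbox
New-MailboxSearch "Discovery-CaseID002" `
    -SourceMailboxes "DG-Finance" `
    -TargetMailbox "SearchResults01" `
    -StartDate "01/01/2012" -EndDate "12/31/2012" `
    -SearchQuery 'from:"chief.peeler@chimpcorp.com" AND (subject:"Bananas" OR "Peel")'
```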
Common Administrative Tasks
- Add a user to the Discovery Management Role Group: In EAC or PowerShell
Add-RoleGroupMember -Identity "Discovery Management" -Member "Road Chimp"
This can be verified via the command: Get-RoleGroupMember -Identity "Discovery Management"
- Create a Discovery Mailbox via the command:
New-Mailbox SearchResults01 -Discovery -UserPrincipalName SearchResults01@roadchimp.com
- Create an In-place eDiscovery Search: In EAC or PowerShell
New-MailboxSearch "Discovery-CaseID001" -StartDate "01/01/2012" -EndDate "12/01/2012" -SourceMailboxes "DG-Finance" -TargetMailbox SearchResults01 -SearchQuery '"Bananas" AND "Peel"'
- Preview an In-place eDiscovery Search: In EAC or PowerShell
New-MailboxSearch -EstimateOnly ….
- Start/Stop an In-place eDiscovery Search: In EAC or PowerShell
Start-MailboxSearch -Identity "Discovery-CaseID001" to start &
Stop-MailboxSearch -Identity "Discovery-CaseID001" to stop
- Retrieve the status of an In-place eDiscovery Search: In EAC or PowerShell
- Modify an In-place eDiscovery Search: In EAC or PowerShell
Set-MailboxSearch -Identity "Discovery-CaseID001" -SourceMailboxes "DG-Executives"
- Remove an In-place eDiscovery Search: In EAC or PowerShell
Remove-MailboxSearch -Identity "Discovery-CaseID001"
- Re-create the Discovery System Mailbox: Click here for more information.
- Configure Exchange for Sharepoint eDiscovery Center: Click here for steps.
Top PowerShell Commands/Tools
– Stop-MailboxSearch
– Get-MailboxSearch
– Set-MailboxSearch
Command Reference for eDiscovery Search
Microsoft Technet page on eDiscovery
Article on Keyword Query Language
Technet blog writeup on eDiscovery Search
In the event that potential litigation may occur, an organization is required to preserve any electronically stored information (ESI), including email that’s relevant to the case. In-Place Hold enables an administrator to search and preserve messages matching query parameters. Messages are protected from deletion, modification, and tampering and can be preserved indefinitely or for a specified period.
- Users can be placed on one or multiple holds
- Preserve deleted items
- Query-based searches
- Transparent to users
In-place Hold enables an organization to configure a number of granular policies depending on the needs of a particular situation:
- Indefinite hold: This is intended to preserve mailbox items so you can meet eDiscovery requirements. During the period of litigation or investigation, items are never deleted. The duration is not known in advance, so no end date is configured. To hold all mail items indefinitely, you must not specify any query parameters or time duration when creating an In-Place Hold.
- Query-based hold: If your organization requires that only items matching query parameters be preserved either indefinitely or for a specified duration, you can use a query-based In-Place Hold. You can specify query parameters such as keywords, start and end dates, sender and recipient addresses and message types. After you create a query-based In-Place Hold, all existing mailbox items matching the query and items created in the future, including messages received at a later date that match query parameters are preserved.
- Time-based hold: A time-based In-Place Hold allows you to specify a duration of time for which to hold items. The duration is calculated from the date a mailbox item is received or created.
- Recoverable Items Folder: The Recoverable Items folder is a location in the user's mailbox where deleted items are kept until they are permanently ('hard') deleted. This folder contains the following subfolders:
Deletions – Contains items removed from the Deleted Items folder or soft deleted from other folders and are visible to the user when using the Recover Deleted Items feature in Outlook and Outlook Web App. By default, items reside in this folder until the deleted item retention period configured for the mailbox database or the mailbox expires.
Purges – When a user deletes an item from the Recoverable Items folder (by using the Recover Deleted Items tool in Outlook or Outlook Web App), the item is moved to the Purges folder. Items that exceed the deleted item retention period configured on the mailbox database or the mailbox are also moved to the Purges folder. Items in this folder aren't visible to users who use the Recover Deleted Items tool. When the mailbox assistant processes the mailbox, items in the Purges folder are purged from the mailbox database. When you place the mailbox user on litigation hold, the mailbox assistant doesn't purge items in this folder.
DiscoveryHold – If a user is placed on an In-Place Hold, deleted items are moved to this folder. When the mailbox assistant processes the mailbox, it evaluates messages in this folder. Items matching the In-Place Hold query are retained until the hold period specified in the query. If no hold period is specified, items are held indefinitely or until the user is removed from the hold.
Versions – When a user is placed on In-Place Hold or litigation hold, mailbox items must be protected from tampering or modification by the user or a process. This is accomplished using a copy-on-write process. When a user or a process changes specific properties of a mailbox item, a copy of the original item is saved in the Versions folder before the change is committed. The process is repeated for subsequent changes. Items captured in the Versions folder are also indexed and returned in In-Place eDiscovery searches. After the hold is removed, copies in the Versions folder are removed by the Managed Folder Assistant.
- Multiple hold behavior: A user can be placed on multiple holds at the same time. Exchange handles this condition by combining the search parameters of all In-Place Holds with a logical OR operator. As a special case, if a user is placed on more than five In-Place Holds, all items are held automatically, which is more efficient than evaluating each query separately.
- User notification: Depending on your organization's policies, a user may need to be informed when they are placed on hold. Exchange 2013 allows you to redirect the user to a web page via a URL. Outlook 2010 displays this information in the backstage area.
- Monitoring Mailbox Quotas: In Exchange 2013, the Recoverable Items folder has its own quota, so items in the Recoverable Items folder aren't counted toward the user's mailbox quota. When a user exceeds the warning quota on the Recoverable Items folder (the RecoverableItemsWarningQuota parameter, set to 20 GB by default), an event is logged in the Application event log of the Mailbox server. Once the hard quota is reached (RecoverableItemsQuota, set to 30 GB by default), users won't be able to empty the Deleted Items folder or permanently delete mailbox items, nor will copy-on-write be able to create copies of modified items. It is therefore crucial to monitor the Recoverable Items quotas for mailbox users placed on In-Place Hold.
- Archived Lync Content: Exchange 2013 allows you to archive Lync Server 2013 content in Exchange, removing the requirement of a separate SQL Server database to store archived Lync content. When you place an Exchange 2013 mailbox on In-Place Hold or litigation hold, Microsoft Lync 2013 content such as instant messaging conversations and files shared in an online meeting are archived in the mailbox. If you search the mailbox using the eDiscovery Center in Microsoft SharePoint 2013 or In-Place eDiscovery in Exchange 2013, any archived Lync content matching the search query is also returned in the search results. To enable archiving of Lync content in an Exchange 2013 mailbox, you must configure Lync 2013 integration with Exchange 2013.
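The hold variants above map directly onto New-MailboxSearch parameters. A sketch, in which the mailbox, query, and 365-day hold period are illustrative:

```powershell
# Time-based, query-based hold: keep matching items for one year from receipt
New-MailboxSearch "Hold-CaseId002" `
    -SourceMailboxes "email@example.com" `
    -SearchQuery '"Bananas" AND "Peel"' `
    -InPlaceHoldEnabled $true `
    -ItemHoldPeriod 365

# Watch the Recoverable Items folder size for mailboxes on hold
Get-MailboxFolderStatistics "email@example.com" -FolderScope RecoverableItems |
    Select-Object Name, FolderSize, ItemsInFolder
```

Omitting both -SearchQuery and -ItemHoldPeriod would instead produce an indefinite hold on all items.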
Common Administrative Tasks
- Authorize users: Add users to the Discovery Management Role Based Access Control Group.
- Place a mailbox in hold: EAC or Powershell
New-MailboxSearch "Hold-CaseId001" -SourceMailboxes "email@example.com" -InPlaceHoldEnabled $true
- Remove an In-place hold:
Set-MailboxSearch "Hold-CaseId001" -InPlaceHoldEnabled $false
- Notify a user who has been placed on hold:
Place a notification message in the mailbox user's RetentionComment property and use the RetentionUrl property to link to a web page.
- Set a quota and warning quota for the Recoverable Items sub-folder.
For an entire database: Set-MailboxDatabase -RecoverableItemsWarningQuota -RecoverableItemsQuota
For a single mailbox: Set-Mailbox -RecoverableItemsWarningQuota -RecoverableItemsQuota
Top PowerShell Commands/Tools
Technet article on In-place hold
Information Rights Management (IRM) features in Exchange 2013 are used to prevent information leakage or loss of potentially sensitive information, which can be costly to an organization and include financial loss, erosion of competitive advantage and damage to image and credibility.
- Active Directory Rights Management Services (RMS)
- AD RMS Rights Policy Templates
- Outlook/Transport Protection Rules
- E-mail/ OWA & ActiveSync support
- In-place eDiscovery support
- Hybrid and Cross-forest deployments
IRM features are deployed in conjunction with Microsoft Active Directory Rights Management Services (AD RMS). Using policy templates, an administrator can quickly deploy a wide array of policies to protect and secure potentially-sensitive data across a variety of client access methods (Outlook/OWA/ActiveSync), while still providing full support for eDiscovery and Journaling processes.
- AD RMS rights policy templates: RMS rights policy templates are XrML documents that contain a predefined usage policy that can be applied to protect an item of content. Templates can contain the following information:
- A template name and description.
- Users and groups that can be granted content licenses.
- The rights and associated conditions granted to the users.
- The content expiration policy.
- A set of extended policies.
- The template revocation policy.
- A revocation list.
- A revocation list refresh interval.
- A public key file for the revocation list.
- IRM Agents: IRM is implemented in Exchange 2013 using transport agents in the Transport service on a Mailbox server. Agents include the following: RMS Decryption Agent | Transport Rules Agent | RMS Encryption Agent | Prelicensing Agent | Journal Report Decryption Agent.
- Transport Protection Rules: A transport protection rule is used to apply persistent rights protection to messages based on properties such as sender, recipient, message subject, and content.
- Outlook Protection Rules: An AD RMS template can be applied to Outlook 2010 or other RMS-enabled applications in order to protect messages before they are sent.
- Transport Decryption: This feature allows the Transport Service to inspect the content of an IRM protected message in order to apply policies or rules to the message.
- In-place eDiscovery: You can configure IRM to allow Exchange Search to index IRM-protected messages, in order to support an In-place eDiscovery search that is performed by members of the Discovery Management role group.
- Journal report Decryption: This allows the Journaling agent to attach a decrypted copy of a rights-protected message to the journal report. This requires the Federated Delivery mailbox to be added to the super users group on the AD RMS server.
- IRM in OWA: The following IRM functionality is available from OWA: Send/read IRM-protected messages | Send IRM-protected attachments | WebReady Document Viewing.
- IRM in Exchange ActiveSync: Organizations can use Information Rights Management (IRM) to apply persistent protection to messaging content when accessed from mobile devices. Mobile device users can create/read/reply to and forward IRM-protected messages.
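After AD RMS is wired up, the resulting IRM state can be inspected and exercised from the shell; a sketch (the sender address is illustrative):

```powershell
# Review the current IRM configuration
# (internal licensing, transport decryption, eDiscovery support, etc.)
Get-IRMConfiguration

# Verify end-to-end that licenses can be acquired for a given sender
Test-IRMConfiguration -Sender "roadchimp@chimpcorp.com"
```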
Common Administrative Tasks:
- Configuring IRM: via Set-IRMConfiguration, e.g. Set-IRMConfiguration -InternalLicensingEnabled $true
- Create a Transport Protection Rule: via EAC or Cmdlet
Retrieve all RMS templates: Get-RMSTemplate | Format-List
Create rule: New-TransportRule -Name "New rule" -SubjectContainsWords "Dirty Bananas" -ApplyRightsProtectionTemplate "Do Not Forward"
- Create an Outlook Protection Rule: New-OutlookProtectionRule -Name "Project Bananasplit" -SentTo "DL-BananasplitRnD@chimpcorp.com" -ApplyRightsProtectionTemplate "Business Critical"
- Add the Federation System Mailbox to AD RMS Super Users Group :
Create a dedicated Super Users group: New-DistributionGroup -Name "ADRMS Super Users" -Alias "ADRMSSuperUsers"
Add the Federated system mailbox to the group: Add-DistributionGroupMember "ADRMSSuperUsers" -Member FederatedEmail.4c1f4d8b-8179-4148-93bf-00a95fa1e042
- Enable/Disable Transport Decryption: Set-IRMConfiguration -TransportDecryptionSetting Mandatory
- Enable IRM to support In-place eDiscovery:
Enable Exchange Search: Set-IRMConfiguration -SearchEnabled $true
Enable eDiscovery: Set-IRMConfiguration -EDiscoverySuperUserEnabled $true
- Enable/Disable Journal Report Decryption: Set-IRMConfiguration -JournalReportDecryptionEnabled $true
- Enable/Disable IRM OWA support:
Configure on each OWA Virtual Directory: Set-OWAVirtualDirectory -IRMEnabled $true
or Configure on each OWA Mailbox Policy: Set-OWAMailboxPolicy -IRMEnabled $true
- Enable/Disable IRM Exchange ActiveSync support:
Add the Federation System Mailbox to AD RMS Super Users Group (Step 4)
Top PowerShell Commands/Tools:
– New/Get-TransportRule (ApplyRightsProtectionTemplate)
Technet: Information Rights Management
Technet: Common IRM tasks
Technet: Configure permissions
Cmdlets: Messaging policy and compliance
Reference: AD RMS Rights Policy Templates
List of supported file types covered by IRM policies when attached to messages
Unified Messaging in Exchange allows an administrator to integrate Microsoft Exchange 2013 with an Organization’s voice infrastructure. Exchange can extend voicemail and auto-attendant functionality that integrates directly with a user’s mailbox and calendar.
- Collection and Transcription of Voicemails
- Caller ID
- Contact Store
- Outlook Voice Access (OVA)
- Auto attendant
In Exchange 2013, the Unified Messaging platform is no longer deployed as a separate server role. UM processes now run on the Client Access server and Mailbox server roles.
- Microsoft Exchange Unified Messaging Call Router service (Microsoft.Exchange.UM.CallRouter.exe): Runs on a Client Access server and proxies Session Initiation Protocol (SIP) traffic generated by incoming calls to the Mailbox server.
- Microsoft Exchange Unified Messaging service (umservice.exe): Runs on a Mailbox server; sets up a media channel, plays voice mail greetings, processes call answering rules, and invites the caller to leave a voice message. The Mailbox server then records the voice message, creates a transcription of the message, and deposits it in the user's mailbox.
- UM Worker Process (UMWorkerProcess.exe): Runs on a Mailbox Server and is used to interact with all incoming and outgoing requests received by the Unified Messaging service and is managed by the UM Worker Process manager.
- UM Dial Plans: (Parameters: Dial Codes | Outlook Voice Access | Dialing Rules (In-country and International) | Dialing Authorization | Transfer and Search)
- UM Mailbox Policies: Links UM-enabled mailboxes with a UM dial plan (Parameters: User Features | Message Text | PIN policies | Dialing Authorization | Protected Voice Mail)
- UM Autoattendant: Call answering and menu navigation features (Parameters: Language | Greetings | Business Hours | Menu Navigation | Address book and Operator Access | Dialing authorization)
- UM hunt group: Determines which UM IP gateways to accept incoming calls from for a particular UM dial plan
- UM Performance Counters
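The components above come together when UM-enabling a user. A hedged sketch in which the dial plan name, extension, and PIN are illustrative (Exchange creates a "<DialPlanName> Default Policy" UM mailbox policy along with the dial plan):

```powershell
# Create a telephone-extension dial plan with 5-digit extensions
New-UMDialPlan -Name "ChimpCorp-DP" -CountryOrRegionCode 1 `
    -NumberOfDigitsInExtension 5 -URIType TelExtn

# UM-enable a mailbox against the dial plan's default policy;
# force a PIN change at first logon to Outlook Voice Access
Enable-UMMailbox -Identity "Road Chimp" `
    -UMMailboxPolicy "ChimpCorp-DP Default Policy" `
    -Extensions 12345 -PIN 135791 -PINExpired $true
```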
Common Administrative Tasks:
- Create a Dial Plan: New-UMDialPlan
- Assign servers to a Dial Plan:
Mailbox Server: Set-UMService -Identity Server -DialPlans DialPlan
CAS Server: Set-UMCallRouterSettings -Server Server -DialPlans DialPlan
- Set Startup mode to dual or TLS:
Mailbox Server: Set-UMService -Identity Server -UMStartupMode Dual
CAS Server: Set-UMCallRouterSettings -Server Server -UMStartupMode Dual
- Install SSL Certificate:
Mailbox Server: Enable-ExchangeCertificate -Thumbprint 'thumbprint' -Services UM
CAS Server: Enable-ExchangeCertificate -Server 'SERVER' -Thumbprint 'thumbprint' -Services UMCallRouter
- Restart Services:
Mailbox Server: Restart-Service MSExchangeUM
CAS Server: Restart-Service MSExchangeUMCR
- Configure SIP Ports on CAS Server: Set-UMCallRouterSettings
- Configure Speech transcription: Set-UMMailbox -VoiceMailAnalysisEnabled $true
Top PowerShell Commands/Tools:
– The exchucutil.ps1 script is used to create UM IP gateways and UM hunt groups
Exchange Server 2013 Voice Architecture here.
Blogpost: Lync Integration with Exchange 2013 here.
Technet: Powershell Commands.
Tutorial: Configuring UM with 3CX
Legacy Exchange 2010 UM Architecture here.