Managing Accessibility of OST files through Cloud-based Platforms

Implementing Cloud-based Platform Migration

Cloud-based migration is a next-generation option that reduces the chances of corruption and makes files easy to access. OST files, too, can be migrated to a cloud platform, which turns them into an always-available backup of the required data. Moreover, the cloud can be accessed from anywhere at any time and supports bulk migration.

Thus, the required OST files become safe and accessible in a few clicks. However, a few questions arise when we think about OST files: What happens if an OST file gets corrupted? Will it affect the existing PST file? If so, what are the factors to consider?

To be clear, the initial focus is to rescue the content of damaged OST files while keeping the data hierarchy unchanged. To be prepared for such unexpected circumstances, it is necessary to take technical assistance from a reliable third-party tool.

Managing spoilt OST files

There is a high probability of OST files getting corrupted. A full discussion of the severity of OST file corruption is beyond the scope of this topic, but it can have a massive effect on the entire file structure.

image

Figure 1 Dialog Box presenting technical issue.

You may have come across a dialog box like this, displaying an inaccessibility message. It is not the only one; there are plenty of others that convey the same message in different ways.

However, assistance from a third-party tool proves helpful here. Kernel for OST to PST is a convenient, user-friendly third-party tool that is often recommended for this purpose. It scans damaged/corrupt OST files and repairs them while keeping the OST file hierarchy and data structure unchanged. Moreover, the tool provides an option either to migrate the OST files directly to cloud-based platforms or to convert them to PST and other file formats such as DBX, MBOX, MSG, EML, TXT, RTF, HTML, MHTML, and PDF.

What does it take to restructure lost OST files?

Kernel for OST to PST uses its inbuilt QFSCI algorithm to regain the lost file structure of OST files, including the recovery of OST file content. Restructuring damaged OST files involves a few essential steps: selecting the concerned OST files, previewing them after conversion, and migrating or converting them as per the user's choice. The entire process takes only a few clicks, and conversion versus migration depends entirely on the user's choice.

Cloud-based Migration – The User’s Choice

Being a next-generation choice, cloud-based migration is widely used and recommended. Kernel for OST to PST provides this option in addition to the conventional methods of saving OST files in other formats. The screenshot of the tool shows how it handles OST data both through conventional conversion and through cloud-based migration involving email servers, webmail services, and Office 365.

 

image

Figure 2 Screenshot of Kernel for OST to PST presenting different options.

 

About Kernel for OST to PST

Kernel Data Recovery has designed a more secure approach than conventional methods, which were risky and time consuming. A dedicated tool, Kernel for OST to PST, has been crafted for this purpose. It converts OST files to other file formats securely, showing users that OST files can be saved in formats other than PST with the same precision. If the resulting files are large, they can be split into files of the required size. For lost OST files, the tool provides a ‘Search’ option, and the ‘Preview’ option lets users confirm that the entire conversion has taken place correctly by previewing the converted items.

The descriptive figures below illustrate the exact functioning of the tool.

image

Figure 3. Making file selection and uploading.

image

Figure 4. Details of concerned files and different saving options.

image

Figure 5. Saving path of the desired file.

For the conversion process to run smoothly, the user's system must have a Pentium-class processor, a minimum of 64 MB RAM, 50 MB of disk space for software installation, and some space to save the results. The tool supports all versions of MS Exchange Server, MS Outlook, Outlook Express, Windows Server, and Windows OS.

Securing OST Files – The Ultimate Aim

For secure OST migration to cloud-based platforms, it is recommended to take assistance from a reliable third-party tool like Kernel for OST to PST, which provides secure migration and conversion. Since cloud-based migration provides effective and convenient access to OST files, it is regarded as the way forward for MS Outlook users.

You can download a copy from the location below:

http://www.nucleustechnologies.com/exchange-ost-recovery.html

PowerShell Script to copy Exchange GUID from Office 365 to Exchange On-prem User.

When users are migrated from on-premises to Office 365 using a third-party tool, the on-prem user object's Exchange GUID gets reset to "00000000-0000-0000-0000-000000000000". This causes problems when we need to move the mailbox back on-premises for some reason. Below is a script that finds the on-prem remote mailboxes whose Exchange GUID was reset and copies the Exchange GUID (and X500 address) back from the online mailbox to the Exchange on-prem user.

Set-ADServerSettings -ViewEntireForest $true

# CSV header; this column name is read back later as $csv.remotemailbox
"remotemailbox" > c:\temp\myremotemailbox.csv

# Collect remote mailboxes that have an Office 365 X500 address but an empty Exchange GUID
Get-RemoteMailbox -ResultSize Unlimited | ForEach-Object {
    $upn = $_.UserPrincipalName
    $proxy = $_.EmailAddresses.ProxyAddressString
    $exchGuid = $_.ExchangeGuid

    $found = $false
    foreach ($pro in $proxy)
    {
        if ($pro -like "X500:/o=ExchangeLabs/*")
        {
            $found = $true
        }
    }
    # Record the user once, only when the mailbox exists online and the GUID was reset
    if ($found -and $exchGuid -eq "00000000-0000-0000-0000-000000000000")
    {
        $upn >> c:\temp\myremotemailbox.csv
    }
}

# Connect to Exchange Online remote PowerShell
$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session -AllowClobber

# Look up each flagged user's online mailbox and export its GUID and legacyExchangeDN
$csvimport = Import-Csv C:\temp\myremotemailbox.csv
"userprincipalname,legacyExchangeDN,ExchangeGuid" > c:\temp\rmbresult.csv
foreach ($csv in $csvimport)
{
    $mailbox = Get-Mailbox $csv.remotemailbox | Select-Object UserPrincipalName, LegacyExchangeDN, ExchangeGuid
    $mailbox.UserPrincipalName + "," + $mailbox.LegacyExchangeDN + "," + $mailbox.ExchangeGuid >> c:\temp\rmbresult.csv
}

Remove-PSSession $Session

# Stamp the online Exchange GUID and X500 address back onto the on-prem remote mailbox
$finalRM = Import-Csv C:\temp\rmbresult.csv
foreach ($final in $finalRM)
{
    $upn = $final.userprincipalname
    $eguid = $final.ExchangeGuid
    $x = "X500:" + $final.legacyExchangeDN

    if ($upn -ne "")
    {
        Get-RemoteMailbox $upn | Set-RemoteMailbox -ExchangeGuid $eguid -CustomAttribute3 "Account Verified for X500-GUID" -EmailAddresses @{Add = $x}
    }
}

Configuring LoadMaster Global Balancing for Exchange 2013 – Part 3

In Part 1 and Part 2 of the article series, we deployed Exchange 2013 servers in each AD site, deployed the Kemp LoadMaster, and configured it for Exchange services in each AD site.

In this final part of the article series, we will configure the LoadMaster with Global Balancing, so that if the Dallas AD site goes down, client requests (internal and external) are routed to the Exchange 2013 servers in the Pittsburg AD site, and vice versa. Global Balancing provides redundancy for both Exchange and the LoadMaster itself. Figure 3.1 below is the current diagram of the Exchange 2013 lab using the Kemp Free LoadMaster.

image

Figure 3.1 Exchange 2013 and Kemp LoadMaster LAB configuration.

Configure Static Routes on Kemp LoadMaster

As shown in Figure 3.1 above, the Dallas LoadMaster Eth0 interface is on the 192.168.1.0/24 network, and the lab router's DHCP has assigned the DNS server and default gateway for this interface. Using this DNS server and default gateway, the Eth0 interface can reach any external network, including the Pittsburg Eth0 interface.

image

Figure 3.2 DNS Name Server IP address

However, the Dallas LoadMaster Eth1 interface is on 10.10.10.0/24 and is not configured with a DNS server or default gateway, since two network interfaces (NICs) on the same machine cannot be configured with two different DNS servers and gateways. Hence, the Eth1 interface has no information on how to reach the Eth1 network (20.20.20.0/24) of the Pittsburg LoadMaster, or any other internal network. To achieve this, we need to add manual static routes on the Dallas LoadMaster. Below are the steps to configure this:

1. Connect to Dallas LoadMaster using Internet Explorer

2. Expand System Configuration -> Additional Routes

3. Add a route to reach the 20.20.20.0/24 network using gateway 10.10.10.101 (the gateway on the 10.10.10.0/24 network that reaches the Pittsburg network). Figure 3.3 below is the reference image.

image

Figure 3.3 Adding new Fixed Static Routes on Dallas LoadMaster

Similarly, the Pittsburg LoadMaster Eth1 interface is on 20.20.20.0/24 and is not configured with a DNS server or default gateway. It has no information on how to reach the Dallas Eth1 network (10.10.10.0/24) or any other network. Hence, we need to add static routes on the Pittsburg LoadMaster to reach Dallas Eth1. Below are the steps to configure this:

1. Connect to the Pittsburg LoadMaster using Internet Explorer

2. From the left menu, expand System Configuration -> Additional Routes

3. Add a route to reach the 10.10.10.0/24 network using gateway 20.20.20.101 (the gateway on the 20.20.20.0/24 network that reaches the Dallas network). Below is the reference image.

image

Figure 3.4 Adding new Fixed Static Routes on Pittsburg LoadMaster

Configuring the LoadMasters to Synchronize Configuration with Each Other:

Synchronization replicates configuration changes or additions made on one LoadMaster to the other. Below are the steps to synchronize the two LoadMasters on the Dallas and Pittsburg networks.

1. Connect to the Dallas LoadMaster from the browser using the IP address https://192.168.1.100

2. From the main menu, expand System Configuration -> Remote Access

3. Under GEO Settings, specify the GEO LoadMaster Partner's IP address and click on Set GEO LoadMaster Partners. In our case, it is the Pittsburg LoadMaster Eth0 interface IP address – 192.168.1.101

image

Figure 3.5 Configuring GEO LoadMaster Partner Settings at Dallas

4. Now, connect to Pittsburg LoadMaster from the browser using the IP address https://192.168.1.101

5. From the main menu, expand System Configuration -> Remote Access

6. Under GEO Settings, specify the GEO LoadMaster Partner's IP address – 192.168.1.100 – and click on Set GEO LoadMaster Partners. In our case, it is the Dallas LoadMaster Eth0 interface IP address

image

Figure 3.6 Configuring GEO LoadMaster Partner Settings at Pittsburg

7. Now that we have configured synchronization between the Dallas and Pittsburg LoadMasters, we can make configuration changes on either LoadMaster and they will be replicated to the other.

Configuring Global Balancing for FQDN – mail.happy.com

1. Connect to the Dallas LoadMaster and, from the main menu, expand Global Balancing -> Manage FQDNs

2. Input the new FQDN – mail.happy.com – and click on Add FQDN

image

Figure 3.7 Configure Mail.happy.com FQDN at LoadMaster

3. Enter the Dallas LoadMaster external Virtual IP address – 192.168.1.90 – and click on Add Address

image

Figure 3.8 Configure Mail.happy.com FQDN with Dallas External Virtual IP Address

4. Similarly, add the Pittsburg LoadMaster external Virtual IP address – 192.168.1.91 – and click on Add Address

image

Figure 3.9 Configure Mail.happy.com FQDN with Pittsburg External Virtual IP Address

5. Finally, add the Dallas LoadMaster internal Virtual IP – 10.10.10.90 – and then the Pittsburg LoadMaster internal Virtual IP – 20.20.20.91

image

Figure 3.10 Mail.happy.com FQDN updated with Dallas and Pittsburg External and Internal Virtual IP Address

6. To provide a better health check for the HTTPS services, change the checker from ICMP Ping to TCP Connect on port 443 for the Virtual IP addresses, then click on Set Addr

image

Figure 3.11 Configure Mail.happy.com FQDN with Health settings to determine the availability of the services.

7. We can now see that all the servers are available, healthy, and ready to take connections for mail.happy.com
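As a quick sanity check from any Windows machine in the lab, you can mimic the TCP Connect checker yourself. This is a sketch only; the VIPs are the lab values used above:

```powershell
# Probe TCP 443 on each Virtual IP - the same test the TCP Connect checker performs
$vips = "192.168.1.90", "192.168.1.91", "10.10.10.90", "20.20.20.91"
foreach ($vip in $vips) {
    $result = Test-NetConnection -ComputerName $vip -Port 443
    "{0} : TCP 443 reachable = {1}" -f $vip, $result.TcpTestSucceeded
}
```

If any VIP reports False here while the LoadMaster shows it as up, suspect a routing or firewall issue between the client and the VIP rather than the Exchange service itself.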

Configuring Global Balancing for FQDN – autodiscover.happy.com

Since we have one Virtual IP for all the Exchange HTTPS services on each LoadMaster, we need to create another FQDN, autodiscover.happy.com, and follow the same instructions, using the same IP addresses and port number used for the FQDN mail.happy.com.

image

Figure 3.12 Configure autodiscover.happy.com FQDN with Pittsburg External and internal Virtual IP Address

Configuring Global Balancing for FQDN – smtp.happy.com

1. Connect to the Dallas LoadMaster using Internet Explorer

2. Expand Global Balancing -> Manage FQDNs

3. Add the FQDN smtp.happy.com and click Add FQDN

image

Figure 3.13 Creating new FQDN smtp.happy.com

4. Input each of the internal and external Virtual IP (VIP) addresses of both the Dallas and Pittsburg LoadMasters and click Add Address. Then, make sure the checker is set to TCP Connect for port 25.

image

Figure 3.14 Adding External VIPs for smtp.happy.com FQDN

Since the Dallas and Pittsburg LoadMasters are configured to sync with each other, we should be able to see the configuration synced from the Dallas LoadMaster to the Pittsburg LoadMaster in real time. To validate this, connect to the Pittsburg LoadMaster and navigate to Global Balancing -> Manage FQDNs.

image

Figure 3.15 Validating Global Balancing synchronization at the Pittsburg LoadMaster.

DNS Configuration:

We are almost done with the LoadMaster configuration at both the Dallas and Pittsburg AD sites. Now we need to configure internal and external DNS with a delegated subdomain for mail.happy.com and autodiscover.happy.com pointing to the LoadMaster Virtual IP addresses defined in the table below.

image

To accept SMTP email from the internet for happy.com, configure MX records on the external DNS to point to the external SMTP VIPs of both the Dallas and Pittsburg LoadMasters. Below are the details.

image
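If the zones are hosted on Windows DNS servers, records like those above could be created with the DnsServer module. This is only a sketch of the cmdlets: the zone, host names, and VIPs assume the lab values, and the article's GSLB design actually delegates mail/autodiscover to the LoadMasters rather than using plain A records:

```powershell
# External zone: A record for the SMTP host at each site's external VIP
Add-DnsServerResourceRecordA -ZoneName "happy.com" -Name "smtp" -IPv4Address 192.168.1.103
Add-DnsServerResourceRecordA -ZoneName "happy.com" -Name "smtp" -IPv4Address 192.168.1.104

# MX record directing internet mail for happy.com to that host
Add-DnsServerResourceRecord -Mx -ZoneName "happy.com" -Name "." -MailExchange "smtp.happy.com" -Preference 10
```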

Configure the Exchange send connector with the option ‘Route mail through smart host’ and specify the LoadMaster SMTP internal VIP addresses – 10.10.10.103 and 20.20.20.104.
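The same send connector change can be made from the Exchange Management Shell. A sketch, where "Outbound to Internet" is a hypothetical connector name:

```powershell
# Route outbound internet mail through the LoadMaster SMTP internal VIPs
Set-SendConnector "Outbound to Internet" `
    -SmartHosts "10.10.10.103","20.20.20.104" `
    -DNSRoutingEnabled $false
```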

This configuration allows clients to connect to all the HTTPS services and enables mail flow between the internal network and the internet.

Below is the final diagram with the complete IP address, DNS, LoadMaster, and Exchange server details.

image

Figure 3.16: Final LAB diagram

Finally, we are at the end of the article series, having installed and configured the Kemp Free LoadMaster in both AD sites and configured Global Balancing between the sites. The same steps can be followed to implement the LoadMaster in a production environment, but we need public IP addresses NATed to the DMZ VIPs to communicate with external domains.

Configure LoadMaster for Exchange 2013 Services in LAB – Part 2

In Part 1 of the article series, we got Exchange 2013 configured, configured the Hyper-V networks, installed the LoadMaster in both AD sites, and finally configured it with Two-Arm networks. In this part of the article series, we will configure the LoadMaster for the Exchange HTTPS and SMTP services. Figure 2.1 below shows the current lab setup with the IP address configuration.

image

Figure 2.1 Current lab setup with IP address configuration.

Importing Exchange Kemp Templates into the LoadMaster

Kemp offers free templates for Exchange 2013 with preconfigured settings. These templates are based on Microsoft best practices and help keep our configuration simple and quick. The configurations can be further tweaked to suit complex environments and business requirements.

1. Download the Exchange 2013 Core Services template from the Kemp LoadMaster documentation page onto the Hyper-V host machine:

https://kemptechnologies.com/loadmaster-documentation/.

image

Figure 2.2 Downloading Exchange 2013 Core Services template.

2. The Core Services template helps administrators configure all the Exchange 2013 HTTPS, SMTP, and MAPI protocols easily, with minimal configuration steps.

3. Connect to the Dallas LoadMaster from the host machine browser using the IP Address – https://192.168.1.100

4. Click on Virtual services -> Manage Templates

5. Click on the Browse button to select the template file ‘Exchange2013Core.tmpl’ from the local machine and click on the Add New Template button to import it.

image

Figure 2.3 Importing Exchange 2013 Template

6. Once imported, it will display the details of all the imported templates

image

Figure 2.4 Exchange 2013 Templates after importing the downloaded template file

Perform steps 1-5 above to import the Exchange 2013 Core Services template on the Pittsburg LoadMaster.

Creating and Configuring HTTPS Virtual Services

In this part, we will configure one Virtual IP for all the Exchange 2013 HTTPS virtual services. The HTTPS virtual services include OWA, EAC, ActiveSync, Outlook Anywhere, and EWS. Alternatively, we can configure one Virtual IP per Exchange service; this is more complex to configure but provides better redundancy for each of the Exchange services.

Follow the below steps to configure Dallas LoadMaster with one Virtual IP address for all the Exchange HTTPS services.

1. Connect to the Dallas LoadMaster from the browser using the IP Address – https://192.168.1.100

2. Expand Virtual Services -> click Add new

3. To allow external clients to connect to Exchange, specify VIP 192.168.1.90 on port 443, select the template Exchange 2013 HTTPS, and click on Add this Virtual Service.

image

Figure 2.5 Adding Virtual IP Address for Exchange 2013 HTTPS

4. It then redirects to the properties page of the Virtual IP (VIP) address

5. Under Basic Properties, specify the Alternative Address as 10.10.10.90, which is from the Dallas internal network segment.

image

Figure 2.6 Exchange 2013 HTTPS Basic properties configuration.

6. Keep the Standard Options, SSL Properties, Advanced Properties, and ESP Options as default.

image

Figure 2.7 Exchange 2013 HTTPS Standard Options, SSL Properties, Advanced Properties, and ESP configuration.

7. Under the Real Servers properties, click on the Add New button to add the Dallas Exchange 2013 server

image

Figure 2.8 Exchange 2013 Real Servers Properties

8. Specify the Dallas Exchange 2013 IP address – 10.10.10.2 – and click Add This Real Server

image

Figure 2.9 Specifying Exchange 2013 Server Address for Real Servers options.

9. Validate the addition of the Exchange 2013 server under Real Servers.

image

Figure 2.10 Validating Addition of new Exchange 2013 Real Servers Properties

10. Finally, click on View/Modify Services from the main menu to confirm the new HTTPS Virtual IP address and that the service status is UP.

image

Figure 2.11 Validating HTTPS Virtual IP Addresses and services status

Perform steps 1-10 above on the Pittsburg LoadMaster to configure the external Virtual IP address 192.168.1.91 and the internal alternative Virtual IP 20.20.20.91. Make sure to add the internal Pittsburg Exchange 2013 server IP address 20.20.20.2 under Real Servers.
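Exchange 2013 exposes a healthcheck.htm page per protocol (used by Managed Availability), which makes a convenient end-to-end test through the new VIP. A sketch, assuming name resolution points mail.happy.com at a VIP and the certificate is trusted by the client:

```powershell
# Request OWA's health page through the LoadMaster VIP
# A 200 status means the client -> VIP -> Exchange path works end to end
$response = Invoke-WebRequest -Uri "https://mail.happy.com/owa/healthcheck.htm" -UseBasicParsing
$response.StatusCode
```

The same check can be repeated for other virtual directories (e.g. /ews/healthcheck.htm) to confirm each HTTPS workload behind the single VIP.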

Creating and Configuring SMTP Virtual Services

SMTP virtual services help route email between the internal and external networks. Internet MX records must point to these external Virtual Addresses so that internet email is delivered to them. The LoadMaster processes internet email and forwards it to the internal Exchange servers. Similarly, outbound email from the internal network is accepted by the LoadMaster and delivered externally. Below are the steps to configure this:

1. Connect to the Dallas LoadMaster using browser – https://192.168.1.100

2. From the main menu, expand Virtual Services -> select Add new

3. Input the Virtual Address 192.168.1.103, select the template Exchange 2013 SMTP, and click on Add This Virtual Service

image

Figure 2.12 creating new Virtual IP Address for Exchange 2013 SMTP services.

4. It then redirects to the properties page

5. Specify the Alternative Address – 10.10.10.103 – from the internal network subnet

image

Figure 2.13 Configuring Exchange 2013 SMTP basic properties.

6. Keep Standard Options, SSL Properties, Advanced Properties and ESP Options as default

7. Click on the Add New button under the Real Servers options to add the Dallas Exchange 2013 server.

image

Figure 2.14 Configuring Real Servers properties.

8. Specify the Exchange 2013 IP address – 10.10.10.2 – and click on Add This Real Server

image

Figure 2.15 Adding Exchange 2013 Server under Real Server.

9. Validate the Exchange 2013 server IP address and port under Real Servers.

image

Figure 2.16 Validating Exchange 2013 Server under Real Server.

10. Click on View/Modify Services to confirm the new SMTP Virtual IP address and that the service status is UP

image

Figure 2.17 Validating new Exchange SMTP Virtual Service.

Perform steps 1 to 10 above on the Pittsburg LoadMaster to configure the external SMTP Virtual IP address 192.168.1.104 and the internal alternative Virtual IP 20.20.20.104. Finally, make sure to add the internal Exchange server IP address – 20.20.20.2 – under Real Servers and validate it.
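Once the SMTP virtual services are up, mail flow through the VIP can be exercised directly. A sketch only; the sender and recipient addresses are placeholders for real lab mailboxes:

```powershell
# Confirm the Dallas SMTP VIP answers on port 25
Test-NetConnection -ComputerName 10.10.10.103 -Port 25

# Push a test message through the VIP to an internal mailbox
Send-MailMessage -SmtpServer 10.10.10.103 `
    -From "test@happy.com" -To "administrator@happy.com" `
    -Subject "LoadMaster SMTP VIP test" -Body "Test via Dallas SMTP VIP"
```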

We are almost done with the configuration of the LoadMaster in the lab, and Figure 2.18 below shows the final Exchange 2013 LAB using the Kemp Free LoadMaster. It has all the necessary VIP addresses for client connections.

image

Figure 2.18 Exchange 2013 LAB using Kemp Free LoadMaster

Importing Exchange 2013 Certificate into the LoadMaster

Currently, the LoadMaster is not configured with SSL offloading. SSL offloading terminates the client SSL connection at the LoadMaster and generates a new connection to the Exchange server in the backend. This improves security and performance for client connections. It is an optional setting, and below are the steps to configure it:

1. Export the SAN certificate from the Exchange 2013 server with the private key, in PFX format with a password.

2. Connect to the Dallas LoadMaster through internet Explorer

3. From the Main Menu, click Certificates -> SSL Certificate, then click on Import Certificate

image

Figure 2.19 SSL Certificate Import option on LoadMaster.

4. Specify the Exchange certificate file path, the Pass Phrase (the password applied during the export), and a Certificate Identifier. Click on Save to import the certificate into the LoadMaster

image

Figure 2.20 Importing SSL Certificate into the LoadMaster

5. Modify the Exchange HTTPS virtual Service and expand SSL Properties

6. Enable the SSL Acceleration and Reencrypt options. Then select the available Exchange certificate and move it to Assigned Certificates. Lastly, select Best Practices under Cipher Set and click on Modify Cipher Set.

image

Figure 2.21 Configuring SSL Offloading and assigning Exchange certificate on the LoadMaster

Follow steps 1-6 above on the Pittsburg LoadMaster to import the Exchange certificate and configure SSL offloading.
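Step 1's certificate export can also be done from the Exchange Management Shell rather than the EAC. A sketch; the thumbprint is a placeholder for your SAN certificate's actual thumbprint:

```powershell
# Export the Exchange SAN certificate with its private key as a password-protected PFX
$password = Read-Host "PFX password" -AsSecureString
$cert = Export-ExchangeCertificate -Thumbprint "<SAN-cert-thumbprint>" `
    -BinaryEncoded:$true -Password $password
Set-Content -Path "C:\temp\exchange-san.pfx" -Value $cert.FileData -Encoding Byte
```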

We are almost at the end of Part 2 of the article series, having configured the LoadMaster for the Exchange 2013 HTTPS and SMTP services. In the next and final part of the article series, we will configure geo redundancy, which allows clients to connect to the available Exchange servers if any of the Exchange servers, services, or AD sites goes down.

Deploying a Free LoadMaster at Your Exchange 2013 lab – Part 1

In this article series, we will walk through step-by-step instructions to deploy the Kemp LoadMaster for Exchange Server 2013 services in a multi-site (Dallas and Pittsburg) lab environment and configure geo-redundancy between the two AD sites. The Kemp LoadMaster load balances client requests (from the internal network and the internet) within each AD site, and also routes client requests automatically to the available Exchange 2013 servers in the other site when one AD site goes down.

Current LAB Setup

The current lab is built on Microsoft Hyper-V and is configured with two AD sites: Dallas (10.10.10.0/24 network) and Pittsburg (20.20.20.0/24 network). A domain controller is installed in each site with the domain name happy.com. In each AD site, one Exchange Server 2013 (multirole) is installed, with a Database Availability Group (DAG01) configured between them. Figure 1.1 below has the details of the AD sites, domain controllers, Exchange nodes, and DAG.

image

Figure 1.1 Exchange 2013 deployed in the lab environment.
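For reference, a two-node DAG like DAG01 can be configured from the Exchange Management Shell along these lines. This is a sketch only: the witness server, witness directory, and server names are assumptions for this lab, not values from the article:

```powershell
# Create the DAG with a file share witness on a domain controller (lab shortcut)
New-DatabaseAvailabilityGroup -Name "DAG01" `
    -WitnessServer "DALDC01" -WitnessDirectory "C:\DAG01FSW"

# Add the multirole Exchange 2013 server from each site
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "DALEXCH01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG01" -MailboxServer "PITEXCH01"
```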

 

LAB Hyper-V Virtual Network Configuration Requirement:

The LoadMaster interfaces with both the internal network and the external/internet network. Hence, Hyper-V needs to be configured with two virtual networks: a DMZ Network and an Internal Network.

1. DMZ Network: Create a new DMZ Network virtual network of type External; it should connect to the host machine's network interface card (NIC) and communicate with the external world. Make sure to select Allow management operating system to share this network adapter. Figure 1.2 has the details. The host machine's NIC should be connected to the internet.

image

Figure 1.2 DMZ Network configuration

2. Internal Network: Create a new Internal Network virtual network of type Internal. An internal network is an isolated network in which VMs can communicate among themselves. We will configure the NICs of all Exchange Server 2013 guest machines to use the internal network.

image

Figure 1.3 Hyper-V Internal Network Configuration.

A Windows VM is configured as a router with two NICs pointing to the internal network. This Windows router will route traffic between the two network segments, Dallas (10.10.10.x) and Pittsburg (20.20.20.x), within the internal network.
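The two virtual networks described above can also be created with the Hyper-V PowerShell module. A sketch; "Ethernet" stands in for your host NIC's actual adapter name:

```powershell
# External switch bound to the host NIC, shared with the management OS (the DMZ network)
New-VMSwitch -Name "DMZ Network" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Isolated switch for the Dallas/Pittsburg lab segments (the internal network)
New-VMSwitch -Name "Internal Network" -SwitchType Internal
```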

Configuring Kemp LoadMaster in the Exchange 2013 lab

In this part, we will install and configure the LoadMaster in both AD sites in a Two-Arm deployment, as defined in Figure 1.4 below, with one NIC pointing to the DMZ network and the other NIC pointing to the internal network.

image

Figure 1.4 Kemp LoadMaster deployment and IP address configuration plan

Configuring LoadMaster for Dallas Network

1. Register a new Kemp ID at http://freeloadbalancer.com and download the latest version of the free Kemp LoadMaster for Hyper-V

image

Figure 1.5 Free Kemp LoadMaster Website

2. Extract the Kemp LoadMaster virtual machine (VM) files on the Hyper-V server.

3. Start Hyper-V Manager, click on Import Virtual Machine from the Actions menu, then click Next on the Welcome screen.

image

Figure 1.6 Importing LoadMaster Virtual Machine into Hyper-V

4. On the Locate Folder page, click on the Browse button to specify the folder containing the Kemp LoadMaster virtual machine and click Next. On the Select Virtual Machine page, keep the default settings and click Next

image

Figure 1.7 Specify the Folder containing virtual machine to import

5. Choose the option Copy the virtual machine (create a new unique ID) to make a copy of the VM with a new unique ID and click Next. (This lets us create multiple copies of the downloaded LoadMaster image.)

image

Figure 1.8 Choose the virtual machine Import type

6. Choose the folder paths to store the new copy of the LoadMaster VM and click Next.

image

Figure 1.9 Choose Virtual Machines files path

7. Then choose the storage folder path for the new VM and click Next

8. Validate the Summary page and click on Finish to import the virtual machine (VM) into the Hyper-V console.

image

Figure 1.10 Completing Import Wizard.

9. To identify the Dallas LoadMaster in Hyper-V, rename the newly imported LoadMaster VM to DalKemp.

image

Figure 1.11 Renaming LoadMaster VM in Hyper-V Manager

10. To configure the virtual networks on the DalKemp VM, right-click the LoadMaster VM and select Settings. Select DMZ Network for the first VM-Bus network adapter and Internal Network for the second VM-Bus network adapter, then click on Apply. Figure 1.12 below has the reference details.

image

Figure 1.12 Configure Network Adapter on DalKemp VM
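Steps 3-10 can equally be scripted with the Hyper-V module. A sketch; the configuration file path, destination folders, and switch names assume the lab layout above:

```powershell
# Import a copy of the downloaded LoadMaster image with a new VM ID
$vm = Import-VM -Path "C:\LoadMaster\Virtual Machines\<config>.xml" `
    -Copy -GenerateNewId `
    -VirtualMachinePath "C:\VMs\DalKemp" -VhdDestinationPath "C:\VMs\DalKemp"

# Rename the VM and attach its two adapters to the DMZ and internal switches
Rename-VM -VM $vm -NewName "DalKemp"
$adapters = Get-VMNetworkAdapter -VMName "DalKemp"
Connect-VMNetworkAdapter -VMNetworkAdapter $adapters[0] -SwitchName "DMZ Network"
Connect-VMNetworkAdapter -VMNetworkAdapter $adapters[1] -SwitchName "Internal Network"
```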

Connecting to LoadMaster and Activating Free License

1. Boot the DalKemp LoadMaster VM from Hyper-V console

2. After booting, it displays the IP address of the appliance (in our case, 192.168.1.100). It was assigned by the lab router via the external DMZ network, since this network shares the host NIC and is connected to the lab router with DHCP configured.

Note: The Default Username/Password: bal /1fourall

image

Figure 1.13 DalKemp LoadMaster connection information.

3. To configure the DalKemp LoadMaster, start Internet Explorer and connect to it using the IP address assigned – https://192.168.1.100

4. Provide the Default credentials Username/Password: bal /1fourall

5. Click on Agree to accept the End User Agreement

image

Figure 1.14 Accepting EULA

6. Select the license type Free LoadMaster and click on Allow to connect back to Kemp for license activation.

image

Figure 1.15 Selecting License Type

7. Use the registered KEMP ID and activate free LoadMaster license.

image

Figure 1.16 Activating Free LoadMaster License

8. Once activated, it will prompt you to reset the default user (bal) password. Once the password is changed, log back in to the VM using the new password.

Configuring LoadMaster Network Interface

1. Log in to the Kemp LoadMaster from Internet Explorer and, under the main menu, expand System Configuration -> Interfaces

2. Select eth0 (Network Interface 0) and validate the IP address – 192.168.1.100/24. It was assigned by DHCP; we can use it as-is on the interface or change it if required. Make sure Use for GEO Responses and Requests is checked; this interface will be used to communicate with the Pittsburg LoadMaster for geo redundancy.

image

Figure 1.17 Configuring LoadMaster eth0 Interface

3. Select eth1 (Network Interface 1), assign the IP address 10.10.10.9/24 from the Dallas internal network segment, and click on Set Address. This interface will be used to communicate with the internal Exchange 2013 servers in the Dallas and Pittsburg AD sites.

image

Figure 1.18 Configuring LoadMaster eth1 Interface

Follow the instructions above (Configuring LoadMaster for Dallas Network, Connecting to LoadMaster and Activating Free License, and Configuring LoadMaster Network Interface) to import and configure the Pittsburg LoadMaster. Configure interface Eth0 with IP address 192.168.1.101/24 (Pittsburg DMZ network) and Eth1 with 20.20.20.9/24 (Pittsburg internal network).

With this, we are at the end of Part 1 of the article series, with Exchange Server 2013 installed and configured, the Hyper-V networks configured, and the LoadMaster installed and configured in both AD sites. In the next part, we will configure the LoadMaster for the Exchange HTTPS/SMTP services and validate them.

Publishing Exchange and ADFS Server for Office 365 using IIS ARR Server

I found this article series extremely helpful when you want to publish Exchange, and also ADFS servers for Office 365 SSO, using the free IIS Application Request Routing (ARR) proxy server.

 

Part 1 : Reverse Proxy for Exchange Server 2013 using IIS ARR
Part 2: Reverse Proxy for Exchange Server 2013 using IIS ARR
Part 3: Reverse Proxy for Exchange Server 2013 using IIS ARR
Part 4: IIS ARR as a Reverse Proxy and Load balancing solution for O365 Exchange Online in a Hybrid Configuration

 

Happy reverse proxying!

Exchange 2013 Design Factors

Exchange 2013 design plays a major role in a successful deployment and in running Exchange long-term without issues. The main objective before designing the solution is to understand the technical and business requirements. These requirements must be understood, reviewed, and documented thoroughly. Given below are the business and technical requirements that need to be considered before designing a new solution. They vary from customer to customer depending on the type of business, country regulations, infrastructure, budget, and so on.

Business Requirements

Total Cost of Ownership

Total cost of ownership covers both the direct and indirect costs and benefits of implementing the new solution. It includes the purchase of hardware, licenses, power, maintenance, engineering staff, hidden costs, and so on.

Reduction in Implementation Time

Implementing and managing Exchange involves a great deal of work, and management always looks for an automated process to deploy the new solution. Projects are often bound by tight timelines, and automated deployment and configuration are the only way to meet them. Automation reduces human effort, time, and errors.

Service Uptime

Uptime of the server and uptime of the service are two different things: a server can be up while its services are down, and server uptime means nothing when the services are unavailable. Service uptime is measured as a percentage, and the business expects very minimal downtime. Providing 99.999 percent uptime comes at a huge cost.
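To make the uptime percentages concrete, a small calculation shows how quickly the yearly downtime budget shrinks as you add nines. This is an illustrative sketch (it assumes a non-leap 365-day year):

```python
# Translate an uptime SLA percentage into the maximum downtime
# allowed per year (assuming a non-leap 365-day year).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def max_downtime_minutes(uptime_percent: float) -> float:
    """Return the yearly downtime budget, in minutes, for a given SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99, 99.999):
    print(f"{sla}% uptime -> {max_downtime_minutes(sla):.1f} min/year")
```

At 99.999 percent ("five nines"), the whole year's downtime budget is a little over five minutes, which is why that level of service demands redundant, and expensive, infrastructure.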

No or Minimum User Impact

Migration of users to the new environment should have minimal or no impact. Users should be able to continue sending and receiving email, with continued access to calendars, shared mailboxes, and delegate mailboxes, during the migration.

Compliance and Legal Requirements

The new solution must comply with the compliance and legal requirements of the organization. It should support legal hold, eDiscovery, role-based access control, and similar features to meet those needs.

Supportable and Expandable

Many dependent applications integrate with Exchange. The new version of Exchange should be fully supported by both in-house and vendor applications. It should also be scalable to accommodate the organization's growth.

Security

The solution should offer strong encryption and protection from security threats and breaches. Threats such as spoofing, phishing, and spamming can be very unhealthy for the organization and cause damage in terms of reputation and money. Mobile devices are an easy source of security threats; enforcing encryption through an ActiveSync policy helps secure them.

Data Retention and Recovery

Companies have different retention policies for different types of email. Some email needs to be retained forever, some for 7 years, and some for just a year. On the other side, retained email should be available for recovery as the requirement demands, whether that is restoring an accidentally deleted message or retrieving email from several years back for a legal dispute.

Exchange Recovery Time Objective (RTO) and Recovery Point Objective (RPO)

RTO is defined as part of the disaster recovery and business continuity plan. The Exchange recovery time objective is the acceptable amount of time taken to restore service after a disaster or service disruption occurs. The RTO varies with the criticality of the service, and Exchange, being one of the most critical applications, needs an RTO as low as possible. It can be specified in seconds, minutes, hours, or days. For example, if the RTO is around 4 hours, you need to invest a large amount of money in redundant infrastructure; if the RTO is a day or two, there is more time to restore the service at a reduced infrastructure investment.

RPO is also defined as part of the business continuity and disaster recovery plan. It is the maximum acceptable level of data loss after any disaster or catastrophe, representing the point in time to which data must be recovered to resume normal operations. It is also specified in seconds, minutes, hours, or days. If the RPO is 5 hours, Exchange data must be backed up at least once every 5 hours. The lower the RPO, the higher the infrastructure investment cost, and vice versa.
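The RPO-to-backup-frequency relationship described above is simple arithmetic; a small sketch makes it explicit (the function name is illustrative, not from any backup product):

```python
import math

def min_backups_per_day(rpo_hours: float) -> int:
    """Smallest whole number of evenly spaced daily backups that keeps
    the worst-case data loss at or below the RPO."""
    return math.ceil(24 / rpo_hours)

# A 5-hour RPO needs 5 backups a day (one every 4.8 hours), so no
# failure can ever lose more than 5 hours of data.
print(min_backups_per_day(5))
# A 30-minute RPO needs 48 backups a day, illustrating why lower RPOs
# drive up infrastructure and operational cost.
print(min_backups_per_day(0.5))
```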

Technical Requirements

Easy Administration and Implementation

The solution should be easy to manage and implement, the interface should be easy to use, and it should provide remote PowerShell management. It should also offer scope for automation to reduce management and administration effort.

Bigger Mailbox size

Users never want to delete anything from their mailbox; they want to keep everything. To address this, the new Exchange solution should support bigger mailboxes.

Bigger Database

Bigger database support reduces the number of databases in the organization, which in turn reduces maintenance and management effort. Newer disks provide larger storage space and can accommodate bigger databases.

Client Support

It should support rich clients such as Outlook and Outlook Web App, various mobile devices such as BlackBerry, ActiveSync clients, and mobile device management solutions.

High Availability and Disaster Recovery

High availability (HA) and disaster recovery (DR) are very important for a business. Loss or unavailability of email can be a huge loss to the business; HA and DR help reduce the complexity of delivering business continuity.

Integration

Exchange should be able to integrate with other applications and systems in the organization, such as Lync, SharePoint, and the Office applications. Organizations will also have many in-house or third-party applications, such as SAP and HR systems, and these need to be supported by the Exchange server.

Virtual or Physical

Exchange is a resource-intensive application, and depending on organizational policy some teams will want to implement it on physical hardware while others will want to go virtual. Over the years, virtualization has matured to deliver good CPU and memory performance. Microsoft supports Exchange 2013 on virtualization technologies such as Hyper-V and VMware and has published guidelines and best practices for implementing Exchange on virtualization. Ultimately, it is a technical decision by the Exchange team which path to take.

Understanding Current Environment

Understanding the current environment plays a major role in designing the solution. It is very important to understand every component of Exchange and the dependent tools that work alongside it. Without understanding the current environment, it would be impossible for anyone to design the new solution.

To start with, you need:

1. Exchange Architecture diagram

2. Exchange designing document

3. Exchange Configuration Information document

4. Exchange Server CPU Utilization and specifications

5. Exchange server Memory utilization and specifications

6. Exchange Mailbox Database configuration and Size

7. Exchange server Storage utilization type and design

8. Network diagram

9. Current High Availability and Disaster recovery model

10. Vendor support documents and support numbers

11. Active directory diagram with server details

12. Blackberry and Mobile device Management (MDM) software and server details

13. SharePoint solution

14. Instant Messaging and Unified Messaging solution

15. Backup Solution

16. Fax solution

17. Archiving solution

18. Journaling

19. Antivirus Software

20. Gateway and Spam filtering solution

21. Email Encryption

22. Business Custom Application

23. Monitoring and reporting solution

24. Custom Outlook plugins

25. Signature Software

26. Server Patching Solution, etc.

There are various native and Exchange built-in tools available to pull the necessary information about the current environment, and they play a vital role in designing.

1. Exchange Profile Analyzer

2. Exchange Environment Report

3. Microsoft Exchange Server User Monitor (ExMON)

4. ExIISLogParser

5. Exchange Best Practice Analyzer

With this information, we get a good idea of all the business and technical requirements, and it also helps build complete knowledge of the existing environment. This makes it possible to provide a solution that is ideal for the requirements and can accommodate business growth. I hope this article helps you consider all the factors before designing a messaging solution for your organization.