
I have a large database in an AWS instance running SQL Server 2008 on Windows Server 2008 R2.

The database is constantly changing and writing information, and its size is about 100 GB.

I wish to migrate from our Amazon services to Microsoft Azure.

But I cannot afford to lose any information, or to be down for more than 20-30 minutes.

I don't mind using Azure SQL or running SQL Server in a VM in the Azure cloud, but I must keep the databases live and updated; there are a few main tables to which information is constantly being added.

What would be the best way to do this?

1 Answer


If you are using an AWS instance (not RDS) and you are going to an Azure instance (not "Azure SQL Database"), you can use log shipping or something similar to get the downtime down to a few seconds: http://msdn.microsoft.com/en-us/library/ms187103.aspx

The steps you need to take:

  1. Take a full backup on AWS
  2. Restore the full backup without recovery on Azure
  3. Take a log backup on AWS
  4. Restore the log backup without recovery on Azure
  5. Repeat 3 and 4 until the time it takes is short enough (you probably want to script this out)
  6. Take the app offline
  7. Take another log backup on AWS
  8. Restore that log backup WITH recovery on Azure
  9. Repoint the app to Azure
  10. Bring the app online again

Steps 3, 4 and 5 are what log shipping would automate, but you could just write a PowerShell script too.
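The manual version of the backup/restore steps is just a handful of T-SQL commands. The database name and backup paths below are placeholders for your environment, and the files have to be copied from the AWS instance to the Azure VM between each backup and its restore (e.g., over a VPN or via blob storage) -- a sketch, not a tested script:

```sql
-- Step 1: full backup on the AWS instance
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak' WITH INIT;

-- Step 2: restore on the Azure VM, leaving the database in the restoring state
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY;

-- Steps 3-5: log backup on AWS, restore on Azure, repeated on a schedule
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_001.trn' WITH INIT;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_001.trn' WITH NORECOVERY;

-- Steps 7-8: after the app is offline, take the final log backup
-- and restore it WITH RECOVERY to bring the Azure copy online
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_final.trn' WITH INIT;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_final.trn' WITH RECOVERY;
```

Each log restore only works if the log backups are applied in order with no gaps, which is why you keep the database in NORECOVERY until the very last restore.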


5 Comments

Well, this is exactly the situation I am in (not using RDS or Azure SQL). But can you explain how exactly I am supposed to do this in the best way?
The idea is to take a full backup and restore it on the new server. Then take a log backup and restore it. Then the next, and so on. Once you are ready to switch, take the last backup, restore it on the new instance, bring the database online and repoint the app(s). Log shipping is a built-in solution that automates this process, but you could do it manually too.
If I go in this order -> back up the database -> load it on the new server -> load the log-shipping scripts -> repoint the apps to the new server: where should I start the log shipping? If I start it before I deploy the database on the new server, will it know later on what it needs to execute and what not?
@SebastianMeine we have a similar problem but how can we do this going to "Azure SQL database"?
@ChrisKooken there is no easy way to do that, as Azure SQL Database does not allow log shipping. An alternative would be replication, but that is not possible either. You could write your own replication engine relying on change data capture (or triggers) and using an external app to transfer data changes to Azure.
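For the change-data-capture approach mentioned in the last comment, the capture side is built into SQL Server; only the transfer app would be custom. A minimal sketch of enabling CDC on the source (database, schema and table names are placeholders):

```sql
-- Enable change data capture at the database level (requires SQL Server Agent)
USE MyDb;
EXEC sys.sp_cdc_enable_db;

-- Track changes on one of the constantly-updated tables
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;

-- An external app would then poll the change table periodically, e.g. via
-- cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all'),
-- and replay the changed rows against the Azure SQL Database.
```

The app has to track the last LSN it processed so each poll picks up exactly where the previous one left off.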
