Azure HDInsight Kafka best practice

This practice uses HDInsight Kafka to save streaming data into Azure DW. Even though Kafka has been added to Azure HDInsight, there are not many resources to refer to unless you already have good knowledge of Apache Kafka.

The practice requires two tools that are not installed by default on Windows: the Java JDK 7 or 8 (which can be downloaded from the Oracle website) and Maven (the Apache Maven project, which can be downloaded from http://maven.apache.org/).

  1. In Windows OS -> Environment Variables, add a JAVA_HOME system variable pointing to the JDK installation folder, for example: c:\Program Files\Java\jdk1.8...
  2. In Environment Variables, add the JAVA_HOME bin folder to the Path variable, for example: %JAVA_HOME%\bin;...
  3. In Environment Variables, add the Maven bin folder to the Path as well, for example: c:\maven\bin;...
  4. Download the project from here: https://github.com/Azure-Samples/hdinsight-kafka-java-get-started. This sample shows how to use a Kafka producer and consumer...
  5. In the Windows command line, build the sample: 
    • cd producer-Consumer
    • mvn clean package
  6. In Azure, create a VNET; this VNET will be used to host the Kafka cluster and other clusters (Storm or Spark in the future)
  7. In Azure, create a Storage Account; this storage account will be used to save the Kafka stream data. (Downloading and installing MS Storage Explorer will be a big help in accessing the Storage Account)
  8. In Azure, create the Kafka HDInsight cluster, specifying the storage account from step 7 and the VNET from step 6
  9. In Azure, open Cloud Shell (use Bash, since the sample is built with Java and Maven)
  10. Copy the compiled jar file to the Kafka cluster. (I couldn't reproduce the command line mentioned in the sample, but WinSCP can be downloaded and is much better than the command line.) Log in with WinSCP using sshuser@kafkaclustername-ssh.azurehdinsight.net and copy the jar file to the sshuser folder on the Kafka cluster;
  11. In Azure Cloud Shell, use SSH to connect to the cluster: ssh sshuser@kafkaclustername-ssh.azurehdinsight.net
    1. sudo apt -y install jq (use lower case). 
    2. Run step 7 from the project webpage (those two statements have been verified)
    3. Run step 8 from the project webpage (those two statements have been verified) to get the Kafka ZooKeeper hosts and Kafka brokers
  12. Create the 'test' topic: 
    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic test --zookeeper $KAFKAZKHOSTS
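
The producer and consumer code itself lives in the sample project from step 4; the following is only a minimal sketch of the producer side, assuming the broker list obtained in step 11 is passed in as the first argument. The class name SimpleTestProducer and the message contents are made up for illustration:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleTestProducer {
        public static void main(String[] args) {
            // args[0] is the broker list, e.g. the value of $KAFKABROKERS from step 11
            Properties props = new Properties();
            props.put("bootstrap.servers", args[0]);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // Send a few messages to the 'test' topic created in step 12
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 10; i++) {
                    producer.send(new ProducerRecord<>("test", Integer.toString(i), "test message " + i));
                }
            }
        }
    }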

Migrate to Azure SQL Data Warehouse - 7, encrypt data in transit

Data security is a top priority when migrating to a cloud environment, especially when data moves over the public internet. Technically, when moving from on-premises into Azure, whether as data flows or files, there are several ways to protect the data in transit depending on which tools we are using. 

  1. Data Connection Encryption. 

    Azure SQL DB and Azure SQL DW generate connection strings (ADO.NET, JDBC, ODBC and PHP) that can be used in client applications. Be aware that "Encrypt = True" is always added (in ADO.NET and JDBC only). This setting should always be kept when using the connection string in client applications; see the JDBC sketch below.

  2. SSIS (SQL Server Integration Services) Data Flow Encryption. 

    I believe most SQL Server professionals will consider these technical tips when using SSIS as the major tool for moving data. When creating the connection to Azure DW, Azure SQL Database or an Azure NoSQL database using OLE DB, please make sure to change 'Use Encryption for Data' to True; this encrypts the connection with SSL.

    If using the ADO.NET provider to connect to Azure DW, there are more settings to be aware of:

    Column Encryption Setting: This feature is currently only supported in SQL Server 2016 (not in Azure DW). If the target is Azure DW, this value should be Disabled. This property is used to set up encryption for particular columns.

    Encrypt: When True, SQL Server uses SSL encryption for all data sent between the client and server if the server has a certificate installed. In Azure SQL DB and Azure SQL DW the connection is always encrypted, so this should be kept as True.
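
    For example, on the JDBC side, a minimal sketch of a client that keeps the generated encryption settings would look roughly like the following. The server name, database name and credentials are placeholders (copy the real connection string from the Azure portal), and the Microsoft JDBC driver for SQL Server is assumed to be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class EncryptedConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder server/database names; keep encrypt=true exactly as generated by the portal
            String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                    + "database=yourdatawarehouse;"
                    + "encrypt=true;trustServerCertificate=false;"
                    + "hostNameInCertificate=*.database.windows.net;loginTimeout=30;";

            try (Connection conn = DriverManager.getConnection(url, "youradmin", "yourpassword")) {
                System.out.println("Connected over an encrypted connection: " + !conn.isClosed());
            }
        }
    }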

  3. Data Factory Data Encryption

    Data Factory is an Azure-based data movement service; it can not only move data from on-premises to Azure, but also move data within the Azure environment.

 

Migrate to Azure SQL Data Warehouse - 2, Load SQL Server on-Premises to Azure

Our first practice will be loading a local SQL Server database (on-premises) into Azure SQL DW. There are several things we need to think about for the migration.

  1. Technical approach.  There are several ways to move data: SQL Server Integration Services, Data Factory, Data Warehouse Migration utility and Azure commands...
  2. Historical Loading and Incremental Loading. Consider the different approaches needed when loading incremental data.
  3. Security configuration. How to move the user/group accounts into Azure DW.
  4. SQL code migration. Even though this is SQL to SQL, there are still some problems that need to be addressed, for example the limitations in Azure DW.

Technical Approach

No matter which approach we use, there are three major steps we have to follow: Move Schema, Move Data and Move Code.

1) Move Schema. Moving the schema means we have to create the table schemas before loading the data. Typically in SQL Server, you can generate the table scripts from SSMS, or by using other 3rd-party tools. Keep in mind that not all SQL Server objects are supported in Azure DW, for example primary keys. In the generated scripts, please append the following clause to each 'Create Table' statement:

.....

WITH
(
DISTRIBUTION = ROUND_ROBIN,
CLUSTERED COLUMNSTORE INDEX
)

This specifies the table's distribution method (discussed later) and the clustered columnstore index.
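
If you prefer to run the generated scripts programmatically rather than from SSMS, a rough JDBC sketch is shown below. The connection details, table and column names are made up for illustration; the real DDL comes from the scripts generated above with the WITH clause appended:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateDwTable {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; use the JDBC string generated by the Azure portal
        String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                + "database=yourdatawarehouse;encrypt=true;";

        // Hypothetical table: the column list would come from the script generated in SSMS,
        // with the WITH clause appended as described above
        String ddl = "CREATE TABLE dbo.FactSalesExample ("
                + " SaleId INT NOT NULL,"
                + " SaleDate DATE NOT NULL,"
                + " Amount DECIMAL(18,2) NOT NULL)"
                + " WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)";

        try (Connection conn = DriverManager.getConnection(url, "youradmin", "yourpassword");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(ddl);
        }
    }
}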

2) Move Data. This is the core step. Moving data from on-premises SQL Server to Azure DW involves two kinds of 'moving': Historical Loading and Incremental Loading. In data warehousing methodology, Historical Loading means a one-time or fresh load; it is used to load the entire on-premises data set into Azure DW, and it is what most articles describe. Incremental Loading means we only load new records into Azure DW, and this process should happen on a scheduled frequency. Incremental Loading and Historical Loading can use the same technique, but not necessarily. I will cover this discussion later.

Move data by using SQL Server Integration Services

This is probably the easiest way to do both Historical Loading and Incremental Loading.

 

 

Migrate to Azure SQL Data Warehouse - 1, Setup Azure SQL Data Warehouse

I am using this series to describe topics related to migrating into Azure SQL Data Warehouse, Microsoft's MPP (Massively Parallel Processing) data platform in Azure. The name is confusing for most people who are familiar with SQL Server on-premises; Azure SQL Data Warehouse is actually an Azure SQL Database that supports MPP by using distributed compute nodes, so it has to be created under the Azure SQL Database service.

I am trying to use multiple different DW systems to evaluate the migration: SQL Server, Oracle, DB2, RDS, etc. I will also use this series to compare the pros and cons of the different DWs from a technical perspective (I hope I can :)). 

Setting up and initializing Azure SQL Data Warehouse is pretty simple, much easier than installing a SQL Server instance on a local server :). Several steps:

  1. Create the SQL DW (specify or create a new SQL Server)
  2. Specify the Performance Tier (DWU unit, 100 DWU initially)
  3. Specify administrator account

The following screen shot shows the server name just created:

A very basic question at this point is: how can we connect to and operate this Azure DW? Fortunately there is a very simple and easy way to do that by using SQL Server Management Studio 2016. The screen shot below shows that we connect to the Azure SQL Server rather than Azure SQL DW, since Azure SQL DW is a DATABASE.

When connecting to Azure SQL, the system asks you to sign in to Azure from the client computer so that the Azure SQL firewall can allow the computer's IP address to connect. The screen shot below shows the result when connected to Azure SQL, and the first Azure SQL DW: bridatadw1

 

 

 

Migrate to Amazon RDS 1 - Create SQL Server Instance

SQL Server has been available in AWS for years. I am trying to use this series of posts to work through a best practice for migrating data from on-premises into an AWS RDS SQL Server instance. (I am also using this series to compare RDS, Redshift, Azure DB and Azure DW.)

Creating an RDS SQL Server instance is pretty easy; the only thing to be aware of is that when creating SQL Server in a VPC, we have to set up the VPC security group to allow connections to the RDS SQL Server:

  1. In Amazon RDS -> Instances, find the Security groups and click them;
  2. Select the default security group;
  3. From Inbound, add a new TCP rule to allow a particular IP to connect to this instance (the client IP address with a /32 subnet mask):

Then we can connect to the RDS SQL instance from SSMS easily.

Next, we will restore our local backup file (.bak) into the RDS SQL instance.

  1. Create the local DB backup file. 
  2. Upload the backup file to an S3 bucket.
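
The restore itself is then typically kicked off with the RDS native backup/restore stored procedure, which assumes an option group with the SQLSERVER_BACKUP_RESTORE option is already attached to the instance. A rough Java sketch, with placeholder endpoint, credentials, database and bucket names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RestoreFromS3 {
    public static void main(String[] args) throws Exception {
        // Placeholder RDS endpoint and credentials
        String url = "jdbc:sqlserver://yourinstance.xxxxxxxx.us-east-1.rds.amazonaws.com:1433;";

        // Kick off the native restore from the .bak file uploaded to S3 in step 2
        String restore = "exec msdb.dbo.rds_restore_database"
                + " @restore_db_name='MyLocalDb',"
                + " @s3_arn_to_restore_from='arn:aws:s3:::your-bucket/MyLocalDb.bak'";

        try (Connection conn = DriverManager.getConnection(url, "admin", "yourpassword");
             Statement stmt = conn.createStatement()) {
            stmt.execute(restore);  // returns a task; progress can be checked with msdb.dbo.rds_task_status
        }
    }
}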