Previously we tried to solve the word count problem with a Scala and Spark approach.
The next step is to deploy our solution to HDInsight using Spark, HDFS, and Scala.
We shall provision a Spark cluster.
Since we are going to use HDInsight, we can utilize HDFS, which is backed by Azure Storage.
Then we choose our instance types.
And we are ready to create the Spark cluster.
Our data shall be uploaded to the HDFS file system.
To do so, we will upload our text files to the Azure storage account, which is integrated with HDFS.
For more information on managing a storage account with the Azure CLI, check the official guide. Any text file will work.
azure storage blob upload mytextfile.txt sparkclusterscala example/data/mytextfile.txt
Since we use HDFS, we shall make some changes to the original script.
val text = sc.textFile("wasb:///example/data/mytextfile.txt")
val counts = text.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.collect()
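The wasb:/// shorthand resolves to the default storage container attached to the cluster. For reference, the fully qualified form spells out the container and the storage account; in the sketch below, mycontainer and myaccount are hypothetical placeholders, not values from this tutorial.

// Shorthand: reads from the cluster's default storage container.
val short = sc.textFile("wasb:///example/data/mytextfile.txt")
// Fully qualified form; "mycontainer" and "myaccount" are hypothetical placeholders.
val full = sc.textFile("wasb://mycontainer@myaccount.blob.core.windows.net/example/data/mytextfile.txt")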
Then we can upload our Scala script to the head node using SSH.
scp WordCountscala.scala demo@{your cluster}-ssh.azurehdinsight.net:/home/demo/WordCountscala.scala
Again, running the script is pretty straightforward.
spark-shell -i WordCountscala.scala
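Alternatively, if a Spark shell is already open, the Scala REPL's :load command runs the same file without restarting the shell:

scala> :load WordCountscala.scala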
And once the task is done, we are presented with the Spark prompt. Plus, we can now save our results to the HDFS file system.
scala> counts.saveAsTextFile("/wordcount_results")
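If we want the output ordered by frequency before saving, a minimal sketch (assuming counts is still bound in the shell) would be:

scala> counts.sortBy(_._2, ascending = false).take(10).foreach(println)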
And do a quick check.
hdfs dfs -ls /wordcount_results/
hdfs dfs -text /wordcount_results/part-00000
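The same check can also be done from the Spark shell itself, reading the saved files back as an RDD (a quick sketch):

scala> sc.textFile("/wordcount_results").take(5).foreach(println)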