{
"metadata": {
"name": "",
"signature": "sha256:41d7aa3100998f60caf63db5380eee43e78d4de66c65af923e4877df81248ae7"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Spark\n",
"\n",
"* Python Shell\n",
"* RDDs\n",
"* Pair RDDs\n",
"* Running Spark on a Cluster\n",
"* Viewing the Spark Application UI\n",
"* Working with Partitions\n",
"* Caching RDDs\n",
"* Checkpointing RDDs\n",
"* Writing and Running a Spark Application\n",
"* Configuring Spark Applications"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Python Shell"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Start the pyspark shell:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"pyspark"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"View the spark context:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sc"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## RDDs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create an RDD from the contents of a directory:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_data = sc.textFile(\"file:/path/*\")"
],
"language": "python",
"metadata": {},
"outputs": []
},
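{
"cell_type": "markdown",
"metadata": {},
"source": [
"An RDD can also be created from an in-memory Python collection (a small sketch):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"nums = sc.parallelize([1, 2, 3, 4, 5])\n",
"nums.count()"
],
"language": "python",
"metadata": {},
"outputs": []
},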
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Count the number of lines in the data:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_data.count()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display all elements of the RDD (collect() returns them to the driver):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_data.collect()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Return the first 10 lines in the data:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_data.take(10)"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create an RDD with lines matching the given filter:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"my_data.filter(lambda line: \".txt\" in line)"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Chain a series of commands:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sc.textFile(\"file:/path/file.txt\") \\\n",
" .filter(lambda line: \".txt\" in line) \\\n",
" .count()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a new RDD mapping each line to an array of words, taking only the first word of each array:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"first_words = my_data.map(lambda line: line.split()[0])"
],
"language": "python",
"metadata": {},
"outputs": []
},
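{
"cell_type": "markdown",
"metadata": {},
"source": [
"By contrast, flatMap() flattens the per-line word lists into a single RDD of words (a sketch assuming my_data from above):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"all_words = my_data.flatMap(lambda line: line.split())"
],
"language": "python",
"metadata": {},
"outputs": []
},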
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Output the first 10 words in first_words:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"for word in first_words.take(10):\n",
" print word"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Save the first words to a text file:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"first_words.saveAsTextFile(\"file:/path/file\")"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pair RDDs\n",
"\n",
"Pair RDDs contain elements that are key-value pairs. Keys and values can be any type."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given a log file with the following space-delimited format: [date_time, user_id, ip_address, action], count the actions per user by mapping each request to (user_id, 1) and summing the counts by key:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"DATE_TIME = 0\n",
"USER_ID = 1\n",
"IP_ADDRESS = 2\n",
"ACTION = 3\n",
"\n",
"log_data = sc.textFile(\"file:/path/*\")\n",
"\n",
"user_actions = log_data \\\n",
" .map(lambda line: line.split()) \\\n",
" .map(lambda words: (words[USER_ID], 1)) \\\n",
" .reduceByKey(lambda count1, count2: count1 + count2)"
],
"language": "python",
"metadata": {},
"outputs": []
},
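{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an alternative sketch, countByKey() returns the per-user counts to the driver as a dictionary (assuming the same log_data):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"user_action_counts = log_data \\\n",
"    .map(lambda line: (line.split()[USER_ID], 1)) \\\n",
"    .countByKey()"
],
"language": "python",
"metadata": {},
"outputs": []
},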
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Show the top 5 users by count, sorted in descending order:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"user_actions.map(lambda pair: (pair[1], pair[0])).sortByKey(False).take(5)"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Group IP addresses by user id:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"user_ips = log_data \\\n",
" .map(lambda line: line.split()) \\\n",
" .map(lambda words: (words[USER_ID], words[IP_ADDRESS])) \\\n",
" .groupByKey()"
],
"language": "python",
"metadata": {},
"outputs": []
},
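{
"cell_type": "markdown",
"metadata": {},
"source": [
"groupByKey() yields an iterable of values per key; a sketch of printing each user's IP addresses (assuming user_ips groups IP addresses by user id):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"for (user_id, ips) in user_ips.take(5):\n",
"    print user_id\n",
"    for ip in ips:\n",
"        print \"\\t\", ip"
],
"language": "python",
"metadata": {},
"outputs": []
},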
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given a user table with the following csv format: [user_id, user_info0, user_info1, ...], map each line to (user_id, [user_info...]):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"user_data = sc.textFile(\"file:/path/*\")\n",
"\n",
"user_profile = user_data \\\n",
" .map(lambda line: line.split(',')) \\\n",
" .map(lambda words: (words[0], words[1:]))"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Inner join the user_actions and user_profile RDDs:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"user_actions_with_profile = user_actions.join(user_profile)"
],
"language": "python",
"metadata": {},
"outputs": []
},
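{
"cell_type": "markdown",
"metadata": {},
"source": [
"A left outer join keeps every user in user_actions even without a matching profile (the right side is None when unmatched); a sketch:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"user_actions_with_optional_profile = user_actions.leftOuterJoin(user_profile)"
],
"language": "python",
"metadata": {},
"outputs": []
},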
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Show the joined table:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"for (user_id, (count, user_info)) in user_actions_with_profile.take(10):\n",
"    print user_id, count, user_info"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running Spark on a Cluster"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Start the standalone cluster's Master and Worker daemons:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!sudo service spark-master start\n",
"!sudo service spark-worker start"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Stop the standalone cluster's Master and Worker daemons:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!sudo service spark-master stop\n",
"!sudo service spark-worker stop"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Restart the standalone cluster's Master and Worker daemons:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!sudo service spark-master restart\n",
"!sudo service spark-worker restart"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"View the Spark standalone cluster UI:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"http://localhost:18080/"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Start the Spark shell and connect to the cluster:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!MASTER=spark://localhost:7077 pyspark"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Confirm you are connected to the correct master:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sc.master"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Viewing the Spark Application UI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the following [reference](http://spark.apache.org/docs/1.2.0/monitoring.html):\n",
"\n",
"Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:\n",
"\n",
"* A list of scheduler stages and tasks\n",
"* A summary of RDD sizes and memory usage\n",
"* Environmental information\n",
"* Information about the running executors\n",
"\n",
"You can access this interface by simply opening http://<driver-node>:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc).\n",
"\n",
"Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set spark.eventLog.enabled to true before starting the application. This configures Spark to log Spark events that encode the information displayed in the UI to persisted storage."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"http://localhost:4040/"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with Partitions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the following [reference](http://blog.cloudera.com/blog/2014/09/how-to-translate-from-mapreduce-to-apache-spark/):\n",
"\n",
"The Spark map() and flatMap() methods only operate on one input at a time, and provide no means to execute code before or after transforming a batch of values. It looks possible to simply put the setup and cleanup code before and after a call to map() in Spark:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"val dbConnection = ...\n",
"lines.map(... dbConnection.createStatement(...) ...)\n",
"dbConnection.close() // Wrong!"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, this fails for several reasons:\n",
"\n",
"* It puts the object dbConnection into the map function\u2019s closure, which requires that it be serializable (for example, by implementing java.io.Serializable). An object like a database connection is generally not serializable.\n",
"* map() is a transformation, rather than an action, and is lazily evaluated. The connection can\u2019t be closed immediately here.\n",
"* Even so, it would only close the connection on the driver, not necessarily freeing resources allocated by serialized copies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In fact, neither map() nor flatMap() is the closest counterpart to a Mapper in Spark \u2014 it\u2019s the important mapPartitions() method. This method does not map just one value to one other value, but rather maps an Iterator of values to an Iterator of other values. It\u2019s like a \u201cbulk map\u201d method. This means that the mapPartitions() function can allocate resources locally at its start, and release them when done mapping many values."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def count_txt(partIter):\n",
"    txt_count = 0\n",
"    for line in partIter:\n",
"        if \".txt\" in line: txt_count += 1\n",
"    yield txt_count\n",
"\n",
"my_data = sc.textFile(\"file:/path/*\")\n",
"\n",
"txt_counts = my_data.mapPartitions(count_txt).collect()\n",
"\n",
"# Show the partitioning\n",
"print \"Data partitions: \", my_data.toDebugString()"
],
"language": "python",
"metadata": {},
"outputs": []
},
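{
"cell_type": "markdown",
"metadata": {},
"source": [
"The database-connection pattern from above can be done correctly per partition; a hypothetical sketch in which db_connect is a placeholder for your own connection helper, assuming my_data is an RDD of lines:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def save_partition(partIter):\n",
"    conn = db_connect()  # hypothetical helper: one connection per partition\n",
"    for line in partIter:\n",
"        conn.save(line)\n",
"    conn.close()\n",
"\n",
"my_data.foreachPartition(save_partition)"
],
"language": "python",
"metadata": {},
"outputs": []
},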
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Caching RDDs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Caching an RDD stores its data in memory. Caching is a suggestion to Spark: if there is not enough memory, cached partitions may be evicted and recomputed later from the lineage.\n",
"\n",
"By default, every RDD action re-executes the entire lineage. Caching avoids this expensive recomputation for datasets that are reused, which is ideal for iterative algorithms and machine learning.\n",
"\n",
"* cache() stores data in memory only\n",
"* persist() stores data at a chosen storage level: MEMORY_ONLY, MEMORY_AND_DISK (spill to disk), or DISK_ONLY\n",
"\n",
"Data persisted to disk is stored on the worker node's local disk, not on HDFS.\n",
"\n",
"Replication is possible with MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. If a cached partition becomes unavailable, Spark recomputes it through the lineage.\n",
"\n",
"Serialized storage is possible with MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This is more space efficient but less time efficient, as it uses Java serialization by default."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from pyspark import StorageLevel\n",
"\n",
"# Cache RDD to memory\n",
"my_data.cache()\n",
"\n",
"# Persist RDD to both memory and disk (spill to disk when memory is full), with replication of 2\n",
"my_data.persist(StorageLevel.MEMORY_AND_DISK_2)\n",
"\n",
"# Unpersist RDD, removing it from memory and disk\n",
"my_data.unpersist()\n",
"\n",
"# Change the persistence level after unpersist\n",
"my_data.persist(StorageLevel.MEMORY_AND_DISK)"
],
"language": "python",
"metadata": {},
"outputs": []
},
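{
"cell_type": "markdown",
"metadata": {},
"source": [
"To inspect an RDD's current storage level (a sketch):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"print my_data.getStorageLevel()"
],
"language": "python",
"metadata": {},
"outputs": []
},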
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Checkpointing RDDs\n",
"\n",
"Caching maintains RDD lineage, providing resilience. If the lineage is very long, it is possible to get a stack overflow.\n",
"\n",
"Checkpointing saves the data to HDFS, which provides fault-tolerant storage across nodes. HDFS is not as fast as local storage for both reading and writing. Checkpointing is good for long lineages and for very large data sets that might not fit on local storage. Checkpointing truncates the lineage."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a checkpoint and perform an action by calling count() to materialize the checkpoint and save it to the checkpoint file:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# Enable checkpointing by setting the checkpoint directory, \n",
"# which will contain all checkpoints for the given data:\n",
"sc.setCheckpointDir(\"checkpoints\")\n",
"\n",
"my_data = sc.parallelize([1,2,3,4,5])\n",
"\n",
"# Long loop that may cause a stack overflow\n",
"for i in range(1000):\n",
" my_data = my_data.map(lambda myInt: myInt + 1)\n",
"\n",
" if i % 10 == 0: \n",
" my_data.checkpoint()\n",
" my_data.count()\n",
"\n",
"my_data.collect()\n",
" \n",
"# Display the lineage\n",
"for rddstring in my_data.toDebugString().split('\\n'): \n",
" print rddstring.strip()"
],
"language": "python",
"metadata": {},
"outputs": []
},
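{
"cell_type": "markdown",
"metadata": {},
"source": [
"To confirm that the checkpoint was materialized (a sketch):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"print my_data.isCheckpointed()"
],
"language": "python",
"metadata": {},
"outputs": []
},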
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Writing and Running a Spark Application"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a Spark application to count the number of text files:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import sys\n",
"from pyspark import SparkContext\n",
"\n",
"def count_text_files():\n",
"    sc = SparkContext()\n",
"    logfile = sys.argv[1]\n",
"    text_files = sc.textFile(logfile) \\\n",
"        .filter(lambda line: '.txt' in line)\n",
"    text_files.cache()\n",
"    print \"Number of text files: \", text_files.count()\n",
"\n",
"if __name__ == \"__main__\":\n",
"    if len(sys.argv) < 2:\n",
"        print >> sys.stderr, \"Usage: App Name <file>\"\n",
"        exit(-1)\n",
"\n",
"    count_text_files()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Submit the script to Spark for processing:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!spark-submit --properties-file dir/myspark.conf script.py data/*"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Spark Applications"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run a Spark app and set the configuration options in the command line:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!spark-submit --master spark://localhost:7077 --name 'App Name' script.py data/*"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Configure spark.conf:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"spark.app.name App Name\n",
"spark.ui.port 4141\n",
"spark.master spark://localhost:7077"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run a Spark app and set the configuration options through spark.conf:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"!spark-submit --properties-file spark.conf script.py data/*"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set the config options programmatically:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sconf = SparkConf() \\\n",
" .setAppName(\"Word Count\") \\\n",
" .set(\"spark.ui.port\",\"4141\")\n",
"sc = SparkContext(conf=sconf)"
],
"language": "python",
"metadata": {},
"outputs": []
},
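{
"cell_type": "markdown",
"metadata": {},
"source": [
"A set value can be read back from the SparkConf (assuming sconf from above):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"print sconf.get(\"spark.ui.port\")"
],
"language": "python",
"metadata": {},
"outputs": []
},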
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set logging levels in the following file (copy it to log4j.properties, in place or in your working directory):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"$SPARK_HOME/conf/log4j.properties.template"
],
"language": "python",
"metadata": {},
"outputs": []
}
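,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A typical edit to quiet shell output, as a sketch (in log4j.properties):"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"log4j.rootCategory=WARN, console"
],
"language": "python",
"metadata": {},
"outputs": []
}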
],
"metadata": {}
}
]
}