RDDs are the building blocks of Spark and part of what makes it so powerful: they can be kept in memory for fast processing. An RDD is broken down into partitions (blocks), each a logical slice of the distributed dataset.
The underlying abstraction for blocks in Spark is a ByteBuffer, which limits the size of the block to 2 GB.
In brief, this error means that a block of the resulting RDD is larger than 2 GB (see SPARK-1476: https://issues.apache.org/jira/browse/SPARK-1476).
One way to work around this issue is to increase the application’s parallelism. We can define the default number of partitions in RDDs returned by join and reduceByKey by adjusting
spark.default.parallelism
This configuration parameter essentially defines how many partitions (blocks) our dataset, in this case the RDD, is divided into.
As you have probably realized by now, we need to set spark.default.parallelism to a higher value when processing large datasets. This way we can make sure the size of each data block does not exceed the 2 GB limit.
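For illustration, here is a minimal sketch of what that can look like in practice. The partition count (2000), application name, and input path are placeholders, not tuned values; the point is simply that more partitions mean smaller blocks.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative only: raise the default parallelism so shuffles produce
// more, smaller partitions. 2000 is a placeholder, not a recommendation.
val conf = new SparkConf()
  .setAppName("avoid-2gb-blocks")
  .set("spark.default.parallelism", "2000") // partition count used by join/reduceByKey when none is inherited

val sc = new SparkContext(conf)

// Hypothetical input path; replace with your own dataset.
val lines = sc.textFile("hdfs:///data/large-input", minPartitions = 2000)

val counts = lines
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1L))
  .reduceByKey(_ + _) // falls back to spark.default.parallelism for its partition count

// An explicit repartition is another way to keep each partition well under 2 GB.
val repartitioned = counts.repartition(2000)
```

The same setting can also be passed at submit time (e.g. `--conf spark.default.parallelism=2000` with spark-submit) instead of being hard-coded in the application.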
Hi Saeed,
Since you are a BI and Hadoop user, have you faced a similar situation?
https://community.hortonworks.com/questions/82017/passing-parameters-from-ssrs-to-hive.html
Hi BalaVikram. I have not done BI for many years, but our users are able to pass parameters when they connect to Hive via a JDBC connection, using the “?” character.
See https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients for how to set up the JDBC connector.
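As a rough sketch of what that looks like in code, assuming a HiveServer2 instance on a placeholder host/port and a hypothetical table and columns (the driver class and URL format are from the HiveServer2 Clients page linked above):

```scala
import java.sql.DriverManager

// Hypothetical connection details; adjust host, port, database, and credentials.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://hiveserver-host:10000/default", "hive_user", "")

// Parameters are supplied through "?" placeholders in a prepared statement.
val stmt = conn.prepareStatement("SELECT * FROM sales WHERE region = ? AND year = ?")
stmt.setString(1, "EMEA") // value passed in by the reporting tool (e.g. SSRS)
stmt.setInt(2, 2017)

val rs = stmt.executeQuery()
while (rs.next()) {
  println(rs.getString(1))
}

rs.close()
stmt.close()
conn.close()
```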