In SQL Server we can declare variables like declare @sparksql='<any query/value/string>'. What is the alternative in Spark SQL, so that we don't need to hard-code any values/queries/strings?
5 Answers
Spark has supported variable substitution at least since version 2.1.x. It's controlled by the configuration option spark.sql.variable.substitute; in 3.0.x it's set to true by default (you can check it by executing SET spark.sql.variable.substitute).
With that option set to true, you can set a variable to a specific value with SET myVar=123 and then reference it using the ${varName} syntax, like select ${myVar}; a minimal sketch follows.
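For instance, a small sketch that could be run in the spark-sql shell or a notebook SQL cell (the table name my_table and column col1 are just placeholders):
-- substitution is on by default in 3.x; enable it explicitly on older versions
SET spark.sql.variable.substitute=true;
-- define the variable, then reference it with ${...}
SET myVar=123;
SELECT * FROM my_table WHERE col1 = ${myVar};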
On Databricks, the parser also recognizes that syntax and creates an input field to populate the value, although it would be easier to use widgets from SQL, as described in the documentation.
P.S. According to the code, besides variables themselves, substitution also supports reading environment variables and Java system properties, like this:
select '${env:PATH}';
select '${system:java.home}';
P.S. This answer is about using variables defined in Spark SQL itself. If you're looking for a way to use variables defined in Python/Scala inside Spark SQL, please refer to this answer.
If you are using a Databricks notebook, one easy way is to declare the variable in Scala or Python and execute the SQL statement from there.
Here's a simple example in Scala:
val x = 1
val df = spark.sql(s"select * from t where col1 = $x")
df.show()
The following simple widget solution works well in Databricks Spark SQL (cluster running Spark 3.0.1 | Scala 2.12). Once you create a widget, Databricks lists it at the top of the notebook and displays its value, which comes in handy when you create several.
CREATE WIDGET TEXT tableName DEFAULT 'db.table'
SELECT * FROM $tableName
The short answer is no, Spark SQL does not support variables currently.
SQL Server uses T-SQL, which is based on the SQL standard extended with procedural programming, local variables and other features.
Spark SQL is a pure SQL dialect, partially compatible with the SQL standard. Since Spark 3.0, Spark SQL has introduced two experimental options to comply with the SQL standard, but no variable support was introduced there.
https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html
