Consider a Spark JDBC DataFrame over an RDBMS table, as shown below:
val df = spark.read
  .format("jdbc")
  .option("url", url)
  .option("dbtable", "schema.table")
  .option("user", user)
  .option("password", password)
  .load()

df.count()
This count action is not recommended, because it loads the table's rows into the Spark layer and computes the count there, instead of pushing a count query down to the JDBC data source. What is an efficient way to get the count in this scenario?
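For context, one common workaround (a sketch, not necessarily the only or best answer) is to push the aggregation into the database itself by passing a subquery as the `dbtable` option, so the RDBMS computes the count and only a single row travels over the JDBC connection. The `url`, `user`, and `password` values are the same placeholders as above:

```scala
// Sketch: let the database compute the count and return one row.
// The subquery must be aliased, as required by most JDBC dialects.
val countDf = spark.read
  .format("jdbc")
  .option("url", url)
  .option("dbtable", "(SELECT COUNT(*) AS cnt FROM schema.table) AS t")
  .option("user", user)
  .option("password", password)
  .load()

// The result column's type depends on the database dialect
// (e.g. some databases surface COUNT(*) as a decimal rather than a long),
// so inspect countDf.schema before picking an accessor.
val count = countDf.collect()(0).get(0)
```

On Spark 2.4+ the same idea can be expressed with the `query` option instead of wrapping a subquery in `dbtable`.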