I'm migrating the following PostgreSQL function to a Spark SQL UDF.
DROP FUNCTION IF EXISTS anyarray_enumerate(anyarray);
CREATE FUNCTION anyarray_enumerate(anyarray)
RETURNS TABLE (index bigint, value anyelement) AS
$$
SELECT
row_number() OVER (),
value
FROM (
SELECT unnest($1) AS value
) AS unnested
$$
LANGUAGE sql IMMUTABLE;
However, the Spark SQL output is not the same as the one I get in PostgreSQL. Any help or ideas?
demo=# select anyarray_enumerate(array[599,322,119,537]);
anyarray_enumerate
--------------------
(1,599)
(2,322)
(3,119)
(4,537)
(4 rows)
My current code is:
import scala.collection.mutable.WrappedArray

def anyarray_enumerate[T](anyarray: WrappedArray[T]) = anyarray.zipWithIndex

// Register the function as a UDF so it can be used in SQL statements.
sqlContext.udf.register("anyarray_enumerate", anyarray_enumerate(_: WrappedArray[Int]))
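For reference, I think the mismatch comes from `zipWithIndex` itself: PostgreSQL's `row_number()` is 1-based and my function emits `(index, value)`, while Scala's `zipWithIndex` is 0-based and emits `(value, index)`. A minimal plain-Scala sketch (no Spark required, names are my own) of the transformation I would expect to match the PostgreSQL output:

```scala
// Sketch: reproduce PostgreSQL's 1-based (index, value) pairs in plain Scala.
object EnumerateSketch {
  // zipWithIndex yields (value, 0-based index); swap and shift to match row_number().
  def anyarrayEnumerate[T](arr: Seq[T]): Seq[(Long, T)] =
    arr.zipWithIndex.map { case (v, i) => ((i + 1).toLong, v) }

  def main(args: Array[String]): Unit = {
    // Same input as the psql session above.
    println(anyarrayEnumerate(Seq(599, 322, 119, 537)))
    // List((1,599), (2,322), (3,119), (4,537))
  }
}
```

Is something like this `map` over the zipped pairs the right way to do it inside the registered UDF, or is there a more idiomatic Spark approach?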
Thank you