I have tried a couple of different ways to read an Oracle table over JDBC.
SQL:
CREATE TABLE oracle_table
USING org.apache.spark.sql.jdbc
OPTIONS (
  dbtable 'persons',
  driver 'oracle.jdbc.driver.OracleDriver',
  user '<user>',
  password '<pass>',
  url 'jdbc:oracle:thin://@<host>:1521/orcl'
)
The statement above returned OK, but

select * from oracle_table

threw: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
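As a side note, the JDBC URL in this attempt is written a little differently from the one in the Python attempt further down. For reference, here is a small sketch of the two URL forms Oracle's thin-driver documentation typically shows, with placeholder host, port and service/SID values (not necessarily the right ones for this database):

# Service-name form: note the @// before the host
url_service_name = "jdbc:oracle:thin:@//<host>:1521/orcl"

# SID form: a colon, not a slash, before the SID
url_sid = "jdbc:oracle:thin:@<host>:1521:orcl"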
Python:
jdbcUrl = "jdbc:oracle:thin:@<host>:1521/orcl"
properties = {
"user": "<user>",
"password": "<password>",
"driver": "oracle.jdbc.driver.OracleDriver"
}
pushdown_query = "( SELECT * FROM persons ) emp_alias"
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=properties)
df.printSchema()
which returned:
|-- PERSON_ID: decimal(38,10) (nullable = true)
|-- FIRST_NAME: string (nullable = true)
|-- LAST_NAME: string (nullable = true)
But the following call

df.show()

threw the same java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
help me!
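One thing that may help explain the printSchema()/show() difference: as far as I understand Spark's JDBC data source, the two calls talk to the database at different points. Below is a rough sketch of the same read (placeholder URL and credentials, same shape as the code above, using the notebook's built-in spark session), annotated with where connections are actually opened:

# Placeholders, mirroring the code above
jdbcUrl = "jdbc:oracle:thin:@<host>:1521/orcl"
properties = {
    "user": "<user>",
    "password": "<password>",
    "driver": "oracle.jdbc.driver.OracleDriver"
}

# spark.read.jdbc() opens a JDBC connection from the driver node to fetch
# only the schema (roughly a "SELECT ... WHERE 1=0"), so this step can
# succeed as long as the driver can reach <host>:1521.
df = spark.read.jdbc(url=jdbcUrl,
                     table="( SELECT * FROM persons ) emp_alias",
                     properties=properties)

df.printSchema()  # no new query; prints the schema fetched above

# Only an action runs the real SELECT, and it runs on the executors,
# which need their own network path to <host>:1521.
df.show()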
Did your CREATE TABLE run straight away? It's probably "lazy loading", meaning that it doesn't run until you actually select from it. So most likely the issue is actually in your CREATE TABLE, and your Databricks actually can't connect to <host>:1521. Are you certain your Databricks install has network connectivity to the Oracle server?

DESCRIBE persons is also giving all the columns and datatypes; if there were a connectivity issue, it shouldn't be able to load the metadata either, right? That's what's confusing me.

The %sh telnet trick is pretty handy, I'll have to remember that one.
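On the %sh telnet point: the same connectivity check can also be done from a Python cell with the standard library, in case telnet isn't available on the cluster. A minimal sketch, using the same placeholder host and port as above (note it only tests the driver node, not the executors):

import socket

host, port = "<host>", 1521  # placeholders: the Oracle listener address used above

try:
    # Equivalent of "telnet <host> 1521": just try to open a TCP connection.
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"Could not reach {host}:{port}: {exc}")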