I have set up a data transfer job that reads data from an Oracle database and loads it into Google BigQuery.
Job flow: ExecuteSQL -> AvroToJson -> PutBigQueryBatch
I have transferred all tables that do not contain a BLOB column. When I run this job against a table with a BLOB column, the ExecuteSQL processor throws the following error:
ExecuteSQL failed to process session due to oracle.sql.BLOB.free()V;
Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError: oracle.sql.BLOB.free()V
This answer suggests using the ExecuteSQL processor to get BLOB data from Oracle, but I am already using ExecuteSQL and it still shows this error. What am I doing wrong in my use of ExecuteSQL?
Configuration:
- Source DB: Oracle 11g Release 2
- Source JDBC driver: the ojdbc5/ojdbc6 jars listed under Database Driver Location(s) below
- Destination: BigQuery
- Active services for the job: DBCPConnectionPool, GCPCredentialsControllerService
DBCPConnectionPool configuration:
- Database Connection URL: jdbc:oracle:thin:@host:port:sid
- Database Driver Class Name: oracle.jdbc.driver.OracleDriver
- Database Driver Location(s): C:\Users\91918\Desktop\OJDBC-Full\ojdbc5.jar,C:\Users\91918\Desktop\OJDBC-Full\ojdbc6.jar,C:\Users\91918\Desktop\OJDBC-Full\ons.jar,C:\Users\91918\Desktop\OJDBC-Full\orai18n.jar,C:\Users\91918\Desktop\OJDBC-Full\simplefan.jar,C:\Users\91918\Desktop\OJDBC-Full\ucp.jar,C:\Users\91918\Desktop\OJDBC-Full\xdb6.jar
- Database User: username
- Password: password
- Validation query: SELECT 1 FROM DUAL
ExecuteSQL configuration:
- Database Connection Pooling Service: DBCPConnectionPool
- SQL select query: select * from schema.table
- Max Wait Time: 0.1 sec
- Normalize Table/Column Names: False
- Use Avro Logical Types: False
- Compression format: none
- Default decimal precision: 10
Oracle schema:
WAFERMAPID CHAR(16)
CDOTYPEID NUMBER(10)
CHANGECOUNT NUMBER(10)
MAPCONVERTID CHAR(16)
ISFROZEN NUMBER(10)
NAME VARCHAR2(120)
WAFERMAP BLOB
My intention with the data:
Read the binary content of the BLOB as a string and put it into BigQuery.
Information regarding the BLOB: each BLOB is a GZ file (for example, file.txt.gz) containing a text file whose content is XML. The end result should be to read the text inside that file and put it into BigQuery.
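To make the intended transformation concrete, here is a minimal sketch (in plain Python, outside NiFi) of what should happen to each BLOB value once it is extracted: the gzipped bytes are decompressed and decoded into the XML text that would be loaded into BigQuery. The sample payload and the `<wafermap>` XML are invented for illustration; they are not the real data.

```python
import gzip

def blob_to_xml_text(blob_bytes: bytes) -> str:
    """Decompress a gzipped BLOB and decode it into the XML text it contains."""
    return gzip.decompress(blob_bytes).decode("utf-8")

# Synthetic stand-in for one WAFERMAP BLOB value: a gzipped XML document.
sample_blob = gzip.compress(b"<wafermap><die x='1' y='2'/></wafermap>")

print(blob_to_xml_text(sample_blob))
# -> <wafermap><die x='1' y='2'/></wafermap>
```

This is the per-row transform I ultimately need somewhere in the pipeline, whether that is done inside NiFi or after loading into BigQuery.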