
I am running the following code to generate files from a big table in parallel, in a small amount of time, even though I have to join all the files afterwards in the operating system.

My environment is Linux 8.10 and my database is Oracle 19.18.

This is my code:

create or replace type dump_parallel_object AS OBJECT
( file_name  VARCHAR2(256)
, no_records NUMBER
, seq_id     NUMBER
);
/

create or replace type dump_parallel_object_ntt AS TABLE OF dump_parallel_object ;
/

create or replace function fn_generate_parallel_file 
(
p_source    IN SYS_REFCURSOR,
p_filename  IN VARCHAR2,
p_directory IN VARCHAR2,
p_limit     IN NUMBER DEFAULT 1000
) return dump_parallel_object_ntt
pipelined
parallel_enable (partition p_source by any) 
as
   type row_ntt is table of varchar2(32767);
   v_rows    row_ntt;
   v_file    UTL_FILE.FILE_TYPE;
   v_buffer  VARCHAR2(32767);
   v_sid     NUMBER;
   v_name    VARCHAR2(512);
   v_lines   PLS_INTEGER := 0;
   c_eol     CONSTANT VARCHAR2(1) := CHR(10);
   c_eollen  CONSTANT PLS_INTEGER := LENGTH(c_eol);
   c_maxline CONSTANT PLS_INTEGER := 32767;

begin

   SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
   v_name := p_filename || '_' || TO_CHAR(v_sid) || '.txt';
   v_file := UTL_FILE.FOPEN(p_directory, v_name, 'w', 32767);

   LOOP
     FETCH p_source BULK COLLECT INTO v_rows LIMIT p_limit;

      FOR i IN 1 .. v_rows.COUNT LOOP

         IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
            v_buffer := v_buffer || c_eol || v_rows(i);
         ELSE
            IF v_buffer IS NOT NULL THEN
               UTL_FILE.PUT_LINE(v_file, v_buffer);
            END IF;
            v_buffer := v_rows(i);
         END IF;

      END LOOP;

      v_lines := v_lines + v_rows.COUNT;

      EXIT WHEN p_source%NOTFOUND;
   END LOOP;
   CLOSE p_source;

   UTL_FILE.PUT_LINE(v_file, v_buffer);
   UTL_FILE.FCLOSE(v_file);

   PIPE ROW (dump_parallel_object(v_name, v_lines, v_sid));
   RETURN;

END fn_generate_parallel_file;
/

When I use it on a table with not too many columns there is no problem, but when I tried this:

 SELECT * FROM TABLE(fn_generate_parallel_file
         (CURSOR
            (SELECT /*+ PARALLEL(s,8) */ID||';'||
 ALFAAGREEMENTIDENTIFIER||';'||
 AGREEMENTNUMBER||';'||
 INVCUSID||';'||
 INVCUSBILLINGID||';'||
 EXPCUSID||';'||
 EXPCUSBILLINGID||';'||
 DEALERID||';'||
 DEALERBILLINGID||';'||
 INVCOMPANYID||';'||
 AGRCOMPANYID||';'||
 AGRCURRENCYID||';'||
 AGRBRANCHID||';'||
 PRICINGTERMID||';'||
 SALESPERSONID||';'||
 BILLADDFRAMEAGREEMENTID||';'||
 AGREEMENTTYPE||';'||
 PRODUCTCODE||';'||
 PRODUCT||';'||
 CUSTOMERREFERENCE||';'||
 DEALERREFERENCE||';'||
 SALESPERSONCODE||';'||
 SALESPERSONNAME||';'||
 SALESPERSONGROUP||';'||
 BRANCH||';'||
 BRANCHCODE||';'||
 COUNTRY||';'||
 REGION||';'||
 REGIONCODE||';'||
 ISFLOATINGRATE||';'||
 HASGUARANTEE||';'||
 AGREEMENTSUSPENSE||';'||
 SALESADMINISTRATORCODE||';'||
 SALESADMINISTRATORNAME||';'||
 ISCCAREGULATED||';'||
 TAXALLOWANCEPERIODYEARS||';'||
 TAXALLOWANCEENDDATE||';'||
 PRODUCTGROUPNAME||';'||
 PRODUCTGROUPCODE||';'||
 TAXVARIATIONCLAUSE||';'||
 LPIRATECODE||';'||
 CURRENTLPIRATE||';'||
 TAXEXEMPTIONREASONCODE||';'||
 PRODUCTDOCUMENTCODE||';'||
 PRODUCTDOCUMENT||';'||
 INVOICECOMBINATIONLEVELCODE||';'||
 INVOICECOMBINATIONLEVEL||';'||
 INTERESTCALCULATIONBASIS||';'||
 INVOICEFEECLASSID||';'||
 ARELINKEDRATESZEROLIMITED||';'||
 ARREARSGRACEPERIOD||';'||
 ETLLOGID as csv
 from ODSAGREEMENT s),
 'ODSAGREEMENT_20250124',
 'DIR_ODSVIEWS'
 )) nt
;

ERROR at line 1:
ORA-12801: error signaled in parallel query server P006
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "FN_GENERATE_PARALLEL_FILE", line 34
ORA-06512: at line 1

So it looks like I have a bug somewhere, but I cannot find it. With a table with a small number of columns I have no problems:

SELECT * FROM TABLE(fn_generate_parallel_file
        (CURSOR
           (SELECT /*+ PARALLEL(s,2) */ID||';'||
ACCOUNTNUMBER||';'||
ACCOUNTNAME||';'||
ACCOUNTTYPE||';'||
EXTERNALQUALIFIER1||';'||
EXTERNALQUALIFIER2||';'||
ETLLOGID as csv
from ODSACCOUNT s),
'ODSACCOUNT_20250124',
'DIR_ODSVIEWS'
)) nt;

PL/SQL procedure successfully completed.

I guess one of my variables is not big enough to hold all the columns, or maybe it is something else.

Can anyone help, please?

2 Comments

  • use clob for the buffer
  • I tried, then I got ORA-06502: PL/SQL: numeric or value error ORA-06512: at "ODSVIEWS.FN_GENERATE_PARALLEL_FILE", line 38
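For reference, switching v_buffer to a CLOB is not enough on its own: UTL_FILE.PUT_LINE takes a VARCHAR2, so the CLOB has to be flushed in VARCHAR2-sized chunks. A minimal, untested sketch of such a flush (v_clob in place of v_buffer and the v_file handle are assumed to exist as in the function above; 8000 characters per chunk keeps even 4-byte characters under the 32767-byte limit):

```sql
-- Sketch only: flushing a CLOB buffer through UTL_FILE in chunks small
-- enough to fit a VARCHAR2. DBMS_LOB.SUBSTR takes (lob, amount, offset)
-- in characters; 8000 characters is at most 32000 bytes even in AL32UTF8.
-- Each chunk may contain embedded CHR(10) separators, which is fine as
-- long as every individual row stays under the 32767-byte line limit.
DECLARE
   v_pos   PLS_INTEGER := 1;
   v_len   PLS_INTEGER;
   c_chunk CONSTANT PLS_INTEGER := 8000;
BEGIN
   v_len := DBMS_LOB.GETLENGTH(v_clob);
   WHILE v_pos <= v_len LOOP
      UTL_FILE.PUT(v_file, DBMS_LOB.SUBSTR(v_clob, c_chunk, v_pos));
      v_pos := v_pos + c_chunk;
   END LOOP;
   UTL_FILE.NEW_LINE(v_file);  -- terminate the last partial line
END;
/
```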

2 Answers


The error is coming from the loop:

  FOR i IN 1 .. v_rows.COUNT LOOP

     IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
        v_buffer := v_buffer || c_eol || v_rows(i);
     ELSE
        IF v_buffer IS NOT NULL THEN
           UTL_FILE.PUT_LINE(v_file, v_buffer);
        END IF;
        v_buffer := v_rows(i);
     END IF;

  END LOOP;

The reason is that you are exceeding the buffer limit: the combined string length of all the columns is greater than the PL/SQL VARCHAR2 limit of 32767 before it is written to the UTL_FILE. (As you said, you have no issues when working with a small table / few columns, only when there are more.)

To prevent this, you can instead write each row to the file as it is fetched, without accumulating rows in a buffer.

REPLACE:

       IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
            v_buffer := v_buffer || c_eol || v_rows(i);
         ELSE
            IF v_buffer IS NOT NULL THEN
               UTL_FILE.PUT_LINE(v_file, v_buffer);
            END IF;
            v_buffer := v_rows(i);
         END IF;

WITH

UTL_FILE.PUT_LINE(v_file, v_rows(i));

Here is the whole updated function:

create or replace function fn_generate_parallel_file 
(
p_source    IN SYS_REFCURSOR,
p_filename  IN VARCHAR2,
p_directory IN VARCHAR2,
p_limit     IN NUMBER DEFAULT 1000
) return dump_parallel_object_ntt
pipelined
parallel_enable (partition p_source by any) 
as
   type row_ntt is table of varchar2(32767);
   v_rows    row_ntt;
   v_file    UTL_FILE.FILE_TYPE;
   v_sid     NUMBER;
   v_name    VARCHAR2(512);
   v_lines   PLS_INTEGER := 0;
   c_eol     CONSTANT VARCHAR2(1) := CHR(10);

begin

   SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
   v_name := p_filename || '_' || TO_CHAR(v_sid) || '.txt';
   v_file := UTL_FILE.FOPEN(p_directory, v_name, 'w', 32767);

   LOOP
     FETCH p_source BULK COLLECT INTO v_rows LIMIT p_limit;
     FOR i IN 1 .. v_rows.COUNT LOOP
       UTL_FILE.PUT_LINE(v_file, v_rows(i));
     END LOOP;
     v_lines := v_lines + v_rows.COUNT;
     EXIT WHEN p_source%NOTFOUND;
   END LOOP;
   CLOSE p_source;

   UTL_FILE.FCLOSE(v_file);

   PIPE ROW (dump_parallel_object(v_name, v_lines, v_sid));
   RETURN;

END fn_generate_parallel_file;
/



1 Comment

That might (and I don't say it would) solve the problem, but it would make the program super slow.

In reply to a comment, you provided the full error message, which was not given in your original question:

ORA-06502: PL/SQL: numeric or value error ORA-06512: at 
"ODSVIEWS.FN_GENERATE_PARALLEL_FILE", line 38 

That tells you that the issue is on this line:

v_buffer := v_buffer || c_eol || v_rows(i)

You are testing before making this assignment:

 IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN

However, you are using LENGTH, which counts characters, instead of LENGTHB, which counts bytes. If there is even a single multibyte character in your data, the character length can be under 32767 while the byte length exceeds 32767. Try switching to LENGTHB.
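Applied to the original loop, the check with byte semantics might look like this (a sketch only; the identifiers are from the function in the question, and the NULL-buffer behaviour of the IF is kept exactly as in the original):

```sql
-- Sketch: the same buffering loop, but measuring bytes with LENGTHB.
-- When v_buffer is NULL, LENGTHB returns NULL, so the condition is NULL
-- and the ELSE branch seeds the buffer, exactly like the original code.
FOR i IN 1 .. v_rows.COUNT LOOP
   IF LENGTHB(v_buffer) + c_eollen + LENGTHB(v_rows(i)) <= c_maxline THEN
      v_buffer := v_buffer || c_eol || v_rows(i);
   ELSE
      IF v_buffer IS NOT NULL THEN
         UTL_FILE.PUT_LINE(v_file, v_buffer);
      END IF;
      v_buffer := v_rows(i);
   END IF;
END LOOP;
```

Note that c_eollen should also be measured in bytes, which is safe here because CHR(10) is a single byte in common character sets.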

3 Comments

That error came after trying to convert v_buffer from VARCHAR2 to CLOB. That is the reason why it was not in the original code.
@RobertoHernandez, oh, on the speed thing: sure, if you pack rows into an artificial 32K buffer you can wring decent performance out of UTL_FILE, but it still cannot match what selects can do. Using multithreaded selects I've pulled 10 GB/sec over a network, limited only by the NIC. If it had been local it would have been many times faster yet. The drawbacks of UTL_FILE are many: you can only write to the local DB server, which many orgs would not allow; on RAC you have the complexity of knowing which server the file will end up on; it's limited to 32K; you are flattening datatypes in SQL; and so forth. If you've managed to get it to work fast enough for your need and all these limitations are not problems for your use case, that's great.
