Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 14 hours 8 min ago

Save compressed BLOB to OS file

Fri, 2020-07-31 08:06
Hello - We're doing the following in our Oracle database: extracting a comma-delimited file from a Siebel database using a SQL query, storing the file in a BLOB, and compressing the BLOB with UTL_COMPRESS.LZ_COMPRESS before sending the data to a REST API via HTTP POST. The REST service errors out because it cannot read the Oracle-compressed data. By the way, it works fine when I zip the file manually using WinZip and post it with Postman. The REST API owner would like to know the UTL_COMPRESS compression specification, and is also asking me to send the Oracle-compressed data as an OS file. I need to know the following: 1) What specification does Oracle use for UTL_COMPRESS? 2) How can I create an OS file from an Oracle-compressed BLOB? Thanks in advance
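For the second question, a compressed BLOB can be written to an OS file with UTL_FILE opened in binary mode. A minimal sketch, assuming a directory object DATA_DIR and a hypothetical table my_export_tab holding the BLOB. Note also that UTL_COMPRESS implements Lempel-Ziv compression whose output is gzip-compatible rather than WinZip/PKZIP format, which would explain why a manually zipped file behaves differently:

```sql
DECLARE
  l_blob   BLOB;
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer RAW(32767);
  l_amount BINARY_INTEGER := 32767;
  l_pos    INTEGER := 1;
  l_len    INTEGER;
BEGIN
  -- my_export_tab / payload are placeholders for wherever the BLOB lives
  SELECT payload INTO l_blob FROM my_export_tab WHERE id = 1;

  l_len  := DBMS_LOB.GETLENGTH(l_blob);
  -- 'wb' opens the file in binary write mode; DATA_DIR is a directory object
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'export.csv.gz', 'wb', 32767);

  WHILE l_pos <= l_len LOOP
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);  -- flush each chunk
    l_pos := l_pos + l_amount;
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
END;
/
```

Because the output is gzip-format, naming the file with a .gz extension lets the receiving side uncompress it with gunzip rather than an unzip tool.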
Categories: DBA Blogs

Generating and Storing a Report in the Database

Fri, 2020-07-31 08:06
Hi Tom's team, thank you for the quick response to my old question <b>https://asktom.oracle.com/pls/apex/asktom.search?tag=storing-and-accessing-reports-in-the-database-on-oracle-apex-of-version-20</b>. I am opening a new question for my present problem. These are the steps I followed: a. I created a table from the file report_archive_tbl.sql on my database (or directly in APEX). b. I created a process named store_blobquery_report that runs when the Print button is pressed; the Print button redirects to the URL I created for a report query. <code>declare
  l_report blob;
begin
  l_report := apex_util.get_print_document (
    p_application_id     => :APP_ID,
    p_report_query_name  => 'multiquery',  -- <b>name of the report query</b>
    p_report_layout_name => 'multiquery',  -- <b>name of the report layout</b>
    p_report_layout_type => 'rtf',         -- <b>type of the report layout</b>
    p_document_format    => 'pdf'          -- <b>output format of the report</b>
  );

  insert into report_archive (
    filename, mimetype, report, created_date, created_by
  ) values (
    'Multi Query Search Results ('||to_char(sysdate,'DDMonYYYY')||')',
    'application/pdf',
    l_report,
    sysdate,
    :USER
  );
end;</code> <b>c. Each time I click Print, one row should be inserted into the table for the report that was run.</b> However, when I press the Print button, the report downloads but the report_archive table has no data. Thank you in advance ^_^.
Categories: DBA Blogs

Auditing of all actions in a session not created by a specific MACHINE

Thu, 2020-07-30 13:46
Hi Tom, thanks for the previous solutions. Please guide me or share some helpful links to achieve the requirement below. 1) We want to audit all actions of any session not created by the application. 2) Will a cursor fetching 1000 records generate 1000 rows of audit data? Note: we have a 12c R2 SE Oracle Database, so no fine-grained auditing. I only need to store information such as action, object name, schema name, host, OS user, IP address, time and date. "All actions" includes everything (alter, create, drop, update, delete, etc.), like "audit all by schema whenever successful". Actual scenario: one application account is used by many developers, and we want to audit only the developer sessions, not the application's. How can we achieve this with minimum impact on performance?
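One possible direction is unified auditing, which (unlike fine-grained auditing) is available in 12.2 Standard Edition and supports a condition evaluated once per session. A sketch, where 'appserver1' and app_user are placeholders for your application server's host name and the shared account:

```sql
-- Audit everything, but only for sessions NOT coming from the app server.
-- HOST is one of the SYS_CONTEXT('USERENV', ...) attributes usable in the
-- WHEN clause; EVALUATE PER SESSION checks it once at logon.
CREATE AUDIT POLICY dev_session_pol
  ACTIONS ALL
  WHEN 'SYS_CONTEXT(''USERENV'',''HOST'') != ''appserver1'''
  EVALUATE PER SESSION;

AUDIT POLICY dev_session_pol BY app_user;

-- Audit records (action, object, OS user, host, timestamp, ...) land in:
SELECT event_timestamp, action_name, object_schema, object_name,
       os_username, userhost
FROM   unified_audit_trail
WHERE  unified_audit_policies = 'DEV_SESSION_POL';
```

On the second question: auditing records statements, not fetched rows, so a single cursor returning 1000 rows produces one audit record for the statement, not 1000.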
Categories: DBA Blogs

Extract XML node value with existsnode filter condition

Thu, 2020-07-30 13:46
Hi Chris, I appreciate your support. I have a requirement to return specific node values only if a particular node exists. For example, for the XML below, my output should be 2 records, because the (Jobsdata/jobList/jobData/job/document) node exists 2 times, and the returned nodes are job/document/fileID and job/directoryid. Output --> 100,D100 and 200,D200. Please find the XML table in the Live SQL link. My output should be as given below: <code>id fileID directoryid
1  100    D100
1  200    D200</code> Filter criteria: I don't want to display the directoryid D300 because there is no <document> tag for it. fileID and directoryid should be displayed only where the <document> tag is available. Thanks, Asit
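An XPath predicate inside XMLTABLE can do this filtering directly. A sketch, assuming a table xml_tab with an XMLTYPE column xml_doc and the element names quoted in the question; the [document] predicate keeps only those job nodes that actually contain a <document> child:

```sql
-- Rows without a <document> child (e.g. the D300 entry) never reach
-- the COLUMNS clause, so they are filtered out before projection.
SELECT x.fileid, x.directoryid
FROM   xml_tab t,
       XMLTABLE('/Jobsdata/jobList/jobData/job[document]'
                PASSING t.xml_doc
                COLUMNS fileid      VARCHAR2(20) PATH 'document/fileID',
                        directoryid VARCHAR2(20) PATH 'directoryid') x;
```

This avoids a separate EXISTSNODE call: the predicate in the row pattern expresses the "only if the node exists" rule in one place.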
Categories: DBA Blogs

Determine The Column Font

Thu, 2020-07-30 13:46
Greetings. Please tell me how to change a column's font type and size in an APEX classic report. In the legacy Report Builder 6 it is easy: just select the column and choose the font and size you want. How can this be accomplished in APEX? Looking forward to hearing from you.
Categories: DBA Blogs

Query performance difference in RAC instances

Thu, 2020-07-30 13:46
Hi, the Oracle DB version I am working on is Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production, a RAC with 2 instances. This is about a query which runs quickly (under 15s) on instance #1 but takes 10-11 minutes when it runs on instance #2. The query uses indexes as expected. The user provided me the query, and I got the sql_ids of its historical runs via DBA_HIST_SQLTEXT; that gave me 5 sql_ids of the same query run between 20-Jul and 28-Jul. <code>select sql_id from dba_hist_sqltext where sql_text like '%CLAIMTYP%claimtype%';</code> <b>Observation 1:</b> Via the following query, I discovered that the slower runs are on instance #2 and the faster ones on #1. <code>select distinct h.instance_number, trunc(h.sample_time) run_date, h.sql_exec_start,
       h.sql_id, h.session_id, h.sql_child_number, h.sql_full_plan_hash_value,
       to_char(trunc(elapsed_time_total/1000000/60/60/24),'9') || ':'
       || to_char(trunc(mod(elapsed_time_total/1000000/60/60, 24)),'09') || ':'
       || to_char(trunc(mod(elapsed_time_total/1000000,3600)/60),'09') || ':'
       || to_char(mod(mod(elapsed_time_total/1000000, 3600),60),'09') as "Time Taken [dd:hh:mi:ss]"
from   dba_hist_active_sess_history h, dba_hist_sqlstat s
where  s.sql_id = h.sql_id
and    h.instance_number = s.instance_number
and    h.sql_id in ('73b4smcjjk38s','bt8sw7vj42sp3',........);</code> <b>Observation 2:</b> In dba_hist_active_sess_history, under the column DELTA_INTERCONNECT_IO_BYTES, I found several non-null values for the instance #2 runs, which per the Oracle documentation means "Number of I/O bytes sent over the I/O interconnect over the last DELTA_TIME microseconds". That column is NULL in all cases where the same query ran on instance #1. Also, there are many rows in ASH for the slower sql_ids, but the faster runs have only a couple of rows (each row representing a 10s sample interval). Do these observations indicate some issue with the server hosting instance #2? Thanks.
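Before blaming the hardware, it is worth checking whether the two instances actually used the same execution plan. A sketch against AWR (the :sql_id bind is a placeholder for one of the captured sql_ids); different plan_hash_values per instance would point at a plan difference rather than a server problem:

```sql
-- Average elapsed time per execution, per instance and per plan.
SELECT instance_number,
       plan_hash_value,
       SUM(executions_delta)                                   AS execs,
       ROUND(SUM(elapsed_time_delta)
             / NULLIF(SUM(executions_delta), 0) / 1e6, 2)      AS avg_elapsed_secs
FROM   dba_hist_sqlstat
WHERE  sql_id = :sql_id
GROUP  BY instance_number, plan_hash_value
ORDER  BY instance_number, plan_hash_value;
```

If the plans match, the interconnect traffic in Observation 2 suggests looking at cache fusion waits (gc-prefixed events) for the instance #2 sessions in ASH.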
Categories: DBA Blogs

Encrypt file using PGP public key

Thu, 2020-07-30 13:46
Hi, I don't know if this is the right place, but I need to complete this task using PL/SQL. I need to write PL/SQL code that creates a file and then encrypts it using a PGP public key. I don't know how to achieve the second part of the task (file encryption): can I use the DBMS_CRYPTO package for file encryption and decryption, or do I need some other route? Our client has already provided the PGP public key, so we don't need to create one; we need to use that key to encrypt/decrypt the file. I hope my requirement is clear enough. Thanks, Zahid
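One point worth knowing up front: DBMS_CRYPTO provides raw ciphers (AES, 3DES, hashes), not the OpenPGP message format, so files it encrypts will not be readable by a PGP tool. A common workaround is to call an external gpg binary from the database via a DBMS_SCHEDULER external job. A sketch, where the gpg path, recipient, and file name are all placeholders:

```sql
-- Assumes gpg is installed on the DB server, the client's public key has
-- been imported into the oracle OS user's keyring, and the file was
-- already written (e.g. with UTL_FILE) to /data/out/report.csv.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name            => 'PGP_ENCRYPT_JOB',
    job_type            => 'EXECUTABLE',
    job_action          => '/usr/bin/gpg',
    number_of_arguments => 5,
    auto_drop           => TRUE);

  DBMS_SCHEDULER.set_job_argument_value('PGP_ENCRYPT_JOB', 1, '--batch');
  DBMS_SCHEDULER.set_job_argument_value('PGP_ENCRYPT_JOB', 2, '--recipient');
  DBMS_SCHEDULER.set_job_argument_value('PGP_ENCRYPT_JOB', 3, 'client@example.com');
  DBMS_SCHEDULER.set_job_argument_value('PGP_ENCRYPT_JOB', 4, '--encrypt');
  DBMS_SCHEDULER.set_job_argument_value('PGP_ENCRYPT_JOB', 5, '/data/out/report.csv');

  DBMS_SCHEDULER.enable('PGP_ENCRYPT_JOB');  -- runs once, produces report.csv.gpg
END;
/
```

Running an external job requires the CREATE EXTERNAL JOB privilege and a working external-job configuration on the server.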
Categories: DBA Blogs

Need to create an index on a column for partial data

Thu, 2020-07-30 13:46
Hi Tom, below is my table <code>create table invoice_fact(
  invoice_id      number(7),
  inv_chk_id_date varchar2(1000),
  accounting_date date,
  inv_amount      number(12,4)
);</code> and below is the data <code>INVOICE_ID ACCOUNTING_DATE INV_AMOUNT INV_CHK_ID_DATE
0000001    12-Mar-16       10000.77   0000001,19-Mar-2016,10000.77
0000002    13-Mar-16       10070.74   0000002,21-Mar-2016,10070.74
0000003    14-Mar-16       10124.7    0000003,16-Mar-2016,10124.7
0000004    15-Mar-16       10136.56   0000004,17-Mar-2016,10136.56
0000005    16-Mar-16       10190.35   0000005,23-Mar-2016,10190.35
0000006    17-Mar-16       10200.94   0000006,20-Mar-2016,10200.94
0000007    18-Mar-16       10255.44   0000007,20-Mar-2016,10255.44
0000008    19-Mar-16       10341.86   0000008,20-Mar-2016,10341.86</code> and my query is <code>select *
from invoice_fact
where to_date(substr(inv_chk_id_date, 9, instr(inv_chk_id_date, ',', 1, 2) - 9), 'DD-MON-RRRR')
      between '01-MAR-2016' and '01-MAR-2017';</code> There is a date embedded in the inv_chk_id_date column; I have to extract that date and apply a filter on it in the where clause. My table has more than 20 million records, so with a filter like the one above the query takes a very long time to execute. Is there any way to speed it up?
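One common approach is to expose the embedded date as a virtual column using the same extraction expression, index it, and filter on real DATE values instead of strings. A sketch (the virtual column and index names are illustrative):

```sql
-- The virtual column repeats the query's extraction expression exactly,
-- so the optimizer can match the predicate to the index on it.
ALTER TABLE invoice_fact ADD (
  inv_chk_date DATE GENERATED ALWAYS AS (
    TO_DATE(SUBSTR(inv_chk_id_date, 9,
                   INSTR(inv_chk_id_date, ',', 1, 2) - 9), 'DD-MON-RRRR')
  ) VIRTUAL
);

CREATE INDEX invoice_fact_chk_dt_ix ON invoice_fact (inv_chk_date);

-- DATE literals avoid the implicit string-to-date conversion in the
-- original BETWEEN '01-MAR-2016' AND '01-MAR-2017' comparison.
SELECT *
FROM   invoice_fact
WHERE  inv_chk_date BETWEEN DATE '2016-03-01' AND DATE '2017-03-01';
```

A plain function-based index on the TO_DATE(SUBSTR(...)) expression works the same way; the virtual column simply makes the query easier to read.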
Categories: DBA Blogs

Data Pump crashed, now can't create new jobs

Thu, 2020-07-30 13:46
Hi Tom, a scheduled task that worked for months suddenly crashed. It is an EXPDP job transferring data to an external SSD (USB 3.0); the disk still runs. The errors listed after a new export attempt (EXPDP SYSTEM/XXXX@....) are like: <code>ORA-31626: job does not exist
ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
......</code> I read some hints that resulted in deleting all Data Pump jobs (which was too much, I think).
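A crashed export often leaves an orphaned master table behind, which blocks reuse of the job name. A sketch for finding and clearing such leftovers (the master table name below matches the job name from the error and is only an example):

```sql
-- List Data Pump jobs and their state; an orphaned job typically shows
-- state 'NOT RUNNING' with attached_sessions = 0.
SELECT owner_name, job_name, state, attached_sessions
FROM   dba_datapump_jobs
ORDER  BY owner_name, job_name;

-- For a confirmed orphan, dropping its master table removes the job:
DROP TABLE system.sys_export_schema_01 PURGE;
```

Only drop master tables for jobs you have verified are dead; a RUNNING job's master table must not be touched.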
Categories: DBA Blogs

Slow Delete with Cascade Delete

Thu, 2020-07-30 13:46
I'm troubleshooting a very slow delete statement in a stored procedure. The delete statement attempts to delete one record from one table. The record to be deleted contains a key field that is pervasive throughout the database; call it Order_Header. Sixteen tables in the database have foreign keys with cascade delete constraints referencing Order_Header (child tables). The database also contains 11 tables with foreign keys with cascade delete constraints referencing the child tables (grandchildren). I had a similar situation with another large cascade delete that was resolved by adding indexes to the foreign keys. The only potential issue that I see in this case is that two of the grandchildren are also children of Order_Header. Stated a little differently, two of the children of Order_Header are also grandchildren of other children of Order_Header. Could this be the cause of the slow delete?
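Since unindexed foreign keys are the usual culprit in slow cascade deletes, it is worth checking all 27 child and grandchild constraints systematically. A sketch (an approximation: it flags FK constraints whose leading column is not the leading column of any index on that table):

```sql
-- Each unindexed FK forces a scan of the child table for every
-- parent row deleted, which compounds through the cascade chain.
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc
  ON   cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'      -- referential (foreign key)
AND    cc.position = 1              -- leading FK column
AND    NOT EXISTS (
         SELECT NULL
         FROM   user_ind_columns ic
         WHERE  ic.table_name      = c.table_name
         AND    ic.column_name     = cc.column_name
         AND    ic.column_position = 1)
ORDER  BY c.table_name;
```

The double parent/grandparent relationship mainly means those two tables are visited through two cascade paths, so a missing index on either path is paid for twice.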
Categories: DBA Blogs

Rebuild partition index through procedure is slow

Mon, 2020-07-27 12:26
Hello Tom, I have come across a very strange problem. I have a partitioned table that my ETL loads data into. Before the data load, the local partitioned indexes for a particular month, say June, are marked UNUSABLE; once the load is done, they are rebuilt by a procedure that simply does the following: <code>ALTER TABLE SCOTT.PART_TABLE MODIFY PARTITION JUN2020 REBUILD UNUSABLE LOCAL INDEXES;</code> This statement rebuilds all 15 individual index partitions for JUN2020 that were marked UNUSABLE, and it runs for about 30 minutes when executed through the procedure. The strange thing is that if I run the same statement directly in my SQL*Plus session, it completes in about 30 seconds: <code>SQL> ALTER TABLE SCOTT.PART_TABLE MODIFY PARTITION JUN2020 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.
Elapsed: 00:00:35.87</code> Can you please throw some light on this?
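A 60x gap between the same statement in two sessions usually comes down to different session settings (e.g. parallel DML/DDL enablement) or different wait profiles, and tracing the slow session shows which. A sketch using DBMS_MONITOR, where :sid and :serial are placeholders for the session running the procedure:

```sql
-- Enable extended SQL trace (waits + binds) on the target session.
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => :sid,
    serial_num => :serial,
    waits      => TRUE,
    binds      => TRUE);
END;
/

-- ...run the rebuild procedure in that session, then turn tracing off:
BEGIN
  DBMS_MONITOR.session_trace_disable(
    session_id => :sid,
    serial_num => :serial);
END;
/
```

Comparing the tkprof output of the 30-minute run against a trace of the fast SQL*Plus run should show whether the procedure's session loses parallelism, waits on locks, or does extra work.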
Categories: DBA Blogs

Cursor code comparison in PL/SQL

Mon, 2020-07-27 12:26
I want to compare the following cursor examples, and I want to know which one is better for processing 10 million records. Also, how can I check the execution plan and the internal processing of each? <code>declare
  type type_1 is table of user_tab_columns%rowtype;
  cursor cursor_1(p_table_name varchar2) is
    select * from user_tab_columns s where s.table_name = p_table_name;
  v_type_1 type_1;
begin
  open cursor_1('TABLE_NAME_1');
  fetch cursor_1 bulk collect into v_type_1;
  close cursor_1;
  for i in v_type_1.first .. v_type_1.last loop
    dbms_output.put_line(' ' || v_type_1(i).column_id || ' ' ||
                         v_type_1(i).column_name || ' ' || v_type_1(i).data_type);
  end loop;
end;
------------------------------------------------------------
declare
  cursor cursor_2(p_table_name varchar2) is
    select * from user_tab_columns s where s.table_name = p_table_name;
begin
  for i in cursor_2('TABLE_NAME_1') loop
    dbms_output.put_line(' ' || i.column_id || ' ' || i.column_name || ' ' || i.data_type);
  end loop;
end;
------------------------------------------------------------
declare
  cursor cursor_3(p_table_name varchar2) is
    select * from user_tab_columns s where s.table_name = p_table_name;
  v_type cursor_3%rowtype;
begin
  open cursor_3('TABLE_NAME_1');
  loop
    fetch cursor_3 into v_type;
    exit when cursor_3%notfound;
    dbms_output.put_line(' ' || v_type.column_id || ' ' ||
                         v_type.column_name || ' ' || v_type.data_type);
  end loop;
  close cursor_3;
end;</code>
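For 10 million rows, the unbounded BULK COLLECT in the first example loads the entire result set into PGA memory at once, while the single-row fetch in the third pays a round trip per row. A sketch of the usual middle ground, a bounded BULK COLLECT (the limit of 100 rows is illustrative):

```sql
-- LIMIT bounds memory per fetch while keeping the efficiency of
-- array fetching; EXIT on v.COUNT = 0 also handles the final,
-- partially filled batch correctly.
DECLARE
  CURSOR c (p_table_name VARCHAR2) IS
    SELECT * FROM user_tab_columns WHERE table_name = p_table_name;
  TYPE tab_t IS TABLE OF c%ROWTYPE;
  v tab_t;
BEGIN
  OPEN c('TABLE_NAME_1');
  LOOP
    FETCH c BULK COLLECT INTO v LIMIT 100;
    EXIT WHEN v.COUNT = 0;
    FOR i IN 1 .. v.COUNT LOOP
      DBMS_OUTPUT.put_line(v(i).column_id || ' ' || v(i).column_name);
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/
```

Note that a plain cursor FOR loop (the second example) is array-fetched automatically by the PL/SQL compiler in batches of 100 at optimization level 2 and above, so in practice it often performs comparably to explicit bulk code.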
Categories: DBA Blogs

Performance literal/bind variables vs JOIN CLAUSE

Mon, 2020-07-27 12:26
Hello! My client's DBA states that Oracle works better when, instead of setting a literal or bind variable in the where clause for both tables, the condition explicitly references the join column. Please forgive my English; I hope the simple example attached is clearer. Is it true that QUERY 2 is better for Oracle? Is there any literature about it? If it is true, to what extent does the performance improvement occur? What evidence could I find, for example, in the trace file? <code>CREATE TABLE TABLE1 (
  PKCOL1 NUMBER NOT NULL,
  COL2   VARCHAR2 (10),
  COL3   VARCHAR2 (10)
);

ALTER TABLE TABLE1 ADD (CONSTRAINT TABLE1_PK PRIMARY KEY (PKCOL1) ENABLE VALIDATE);

CREATE TABLE TABLE2 (
  PKCOL1 NUMBER NOT NULL,
  COL2   VARCHAR2 (10),
  COL3   VARCHAR2 (10)
);

ALTER TABLE TABLE2 ADD (CONSTRAINT TABLE2_PK PRIMARY KEY (PKCOL1) ENABLE VALIDATE);

BEGIN
  FOR I IN 1 .. 100000 LOOP
    INSERT INTO TABLE1 VALUES (i, 'A' || I, 'B' || I);
    INSERT INTO TABLE2 VALUES (i, 'A' || I, 'B' || I);
  END LOOP;
  COMMIT;
END;
/

--QUERY 1
SELECT A.PKCOL1, A.COL2, A.COL3, B.PKCOL1, B.COL2, B.COL3
FROM   TABLE1 A, TABLE2 B
WHERE  A.PKCOL1 = 200
AND    B.PKCOL1 = 200;

--QUERY 2
SELECT A.PKCOL1, A.COL2, A.COL3, B.PKCOL1, B.COL2, B.COL3
FROM   TABLE1 A, TABLE2 B
WHERE  A.PKCOL1 = 200
AND    B.PKCOL1 = A.PKCOL1;</code>
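The claim can be tested directly by comparing the actual execution plans and row-source statistics of the two queries. A sketch (the gather_plan_statistics hint avoids having to change statistics_level session-wide):

```sql
-- Run each variant with the hint, then display its last plan with
-- actual rows, buffer gets, and elapsed time per plan step.
SELECT /*+ gather_plan_statistics */
       A.PKCOL1, A.COL2, A.COL3, B.PKCOL1, B.COL2, B.COL3
FROM   TABLE1 A, TABLE2 B
WHERE  A.PKCOL1 = 200
AND    B.PKCOL1 = A.PKCOL1;

SELECT *
FROM   TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

Note that the optimizer performs transitive predicate generation: from A.PKCOL1 = 200 and B.PKCOL1 = A.PKCOL1 it can derive B.PKCOL1 = 200 itself, so for this simple example the two plans are likely to be identical, which the DBMS_XPLAN output would show.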
Categories: DBA Blogs

DBMS_SYSTEM.ksdwrt affecting the database operations.

Mon, 2020-07-27 12:26
Hi team, recently we faced a performance degradation situation which impacted the overall operation of the application. From the AWR report, a procedure is taking more than 3 or 4 seconds to complete; let's call that procedure 'A'. In the ADDM report (addmrpt), we get a statement similar to the one given below: "Tune the entry point PL/SQL 'A' of type PROCEDURE. Refer to the PL/SQL documentation for additional information. Rationale: 7821 seconds spent in executing PL/SQL SYS.DBMS_SYSTEM.KSDWRT of type PACKAGE." We are focusing on procedure 'A' because the performance degradation happens when this procedure is called on a regular basis (every one or two seconds), and it spends almost all of its time executing SYS.DBMS_SYSTEM.KSDWRT; we have inferred this from the AWR and ADDM reports. Since SYS.DBMS_SYSTEM.KSDWRT is undocumented, we cannot find any information on how it works internally. My questions are as follows: 1) How does SYS.DBMS_SYSTEM.KSDWRT manage to access the alert.log? 2) Does it use an Oracle background process to access this log? 3) Can multiple sessions write to the log concurrently, or does a wait occur there? 4) Does writing to the log impact other DML operations?
Categories: DBA Blogs

Advanced Queueing tables in InMemory

Mon, 2020-07-27 12:26
Can I load AQ tables into In-Memory, and what are the possible downsides of doing that? We are using an ExaCC rack.
Categories: DBA Blogs

Best approach for information logging in PL/SQL

Mon, 2020-07-27 12:26
Hi Tom, I'm wondering what the best approach is (in terms of performance) for logging information about PL/SQL code execution and procedure/function usage. I know that the most common approach is to use PRAGMA AUTONOMOUS_TRANSACTION and insert the data into a table, or to use an audit log, but I have many systems written in PL/SQL as a backend, with APIs exposed as package procedures and functions. In many situations I need to gather information about the parameter values passed to a procedure. P.S. <i>Generally, in my opinion, logging this kind of information in a production environment isn't the best approach, but unfortunately I'm not able to convince the decision makers.</i>
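The autonomous-transaction pattern mentioned above can be sketched as follows; the app_log table and log_msg procedure names are illustrative. The key property is that the log row commits independently, so it survives even if the caller's transaction rolls back:

```sql
CREATE TABLE app_log (
  log_time TIMESTAMP    DEFAULT SYSTIMESTAMP,
  msg      VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg (p_msg IN VARCHAR2)
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction
BEGIN
  INSERT INTO app_log (msg) VALUES (p_msg);
  COMMIT;  -- commits only the log row, never the caller's work
END log_msg;
/
```

A package procedure would then call, e.g., log_msg('my_api: p_id=' || p_id) on entry. The per-call cost is one small insert plus a commit; for very hot code paths, a conditional logging level checked before the call keeps the overhead negligible when logging is off.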
Categories: DBA Blogs

DIFFERENCES BETWEEN PROCESSES, SESSIONS AND CONNECTIONS

Mon, 2020-07-27 12:26
Hi Tom, what's the difference between connections, sessions and processes? I read a note on Metalink about the difference, but I simply don't get it. Could you give a brief explanation? Thank you.
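The distinction is visible in the dynamic performance views: a connection is the physical link a client opens, a session is a logical login state, and a process is the OS-level server process, and they need not map one-to-one (one connection can carry multiple sessions, and in shared-server mode one process serves many sessions). A sketch correlating the views:

```sql
-- Each user session joined to its server process via PADDR = ADDR;
-- SERVER shows DEDICATED vs SHARED, SPID is the OS process id.
SELECT s.sid, s.serial#, s.username, s.server, p.spid
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  s.type = 'USER'
ORDER  BY s.sid;
```

Running this while, say, a SQL*Plus client uses AUTOTRACE (which opens a second session over the same connection) shows two SID rows sharing context, illustrating that sessions and connections are counted separately.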
Categories: DBA Blogs

Oracle Database XE 18c User Data Limitations: how to get user data size to check a limit?

Fri, 2020-07-24 11:06
Hello! As stated at https://docs.oracle.com/en/database/oracle/oracle-database/18/xeinl/oracle-database-xe-user-data-limitations.html: "user data in an Oracle Database XE database cannot exceed 12 gigabytes". Would you explain how to get exactly that user data size by means of SQL? Would you also clarify whether we should add the size of a separate APEX tablespace? Thank you.
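One way to approximate the figure is to total the segment space allocated outside the Oracle-maintained tablespaces, since those hold system rather than user data. A sketch (tablespace names vary by installation, so adjust the exclusion list; an APEX tablespace holding your applications' data would count toward the limit and is deliberately not excluded here):

```sql
-- Space allocated to user segments, in gigabytes.
SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS user_data_gb
FROM   dba_segments
WHERE  tablespace_name NOT IN ('SYSTEM', 'SYSAUX', 'UNDOTBS1', 'TEMP');
```

This measures allocated space, which can exceed the space actually occupied by rows; it is a conservative upper bound for checking approach to the 12 GB limit.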
Categories: DBA Blogs

Queries from a big table are slow during merge

Fri, 2020-07-24 11:06
Our application reads data from a big table (~120 million rows). Indexes are set properly, so read queries are quick (tens of ms at most). The application does not change the data in the table; it just reads, using the same query with a varying where clause. Once a day, an ETL job populates changes (from an external system) into a delta table and then runs a stored procedure which merges the changes from the delta into the main table. Last time there were 16 million records in the delta table. Our problem is that some application queries are slow (1-35 s instead of a few ms) during the merge. It is only ~0.056% of daily queries, but we need to mitigate it anyway. The queries are identical except that the where clause can have more values in the "IN" list for some columns (e.g. <code>PRODUCT_CODE IN (:5, :6, :7)</code> vs. <code>PRODUCT_CODE IN (:5)</code>). What solution should we use to isolate the read queries from the influence of the data update? We see 2 possibilities: 1) use a materialized view, 1:1 with the main table, refreshed only after the ETL+merge is performed on the main table; 2) use a table synonym together with 2 versions of the main table (active and passive). Updates would be done to the passive table, and once done (including index and statistics recalculation) the synonym would be switched to the passive table, making it active. Would either of the mentioned solutions resolve our problem? Thanks, Robert
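Option 2, the synonym swap, can be sketched as below; the table and synonym names are illustrative. The swap itself is a fast DDL operation, though note it invalidates cursors referencing the synonym, so readers briefly re-parse after each switch:

```sql
-- Two physical copies of the table behind one logical name.
CREATE TABLE big_table_a AS SELECT * FROM big_table_source;  -- active copy
CREATE TABLE big_table_b AS SELECT * FROM big_table_source;  -- passive copy

-- The application queries only the synonym.
CREATE OR REPLACE SYNONYM big_table FOR big_table_a;

-- Daily cycle: merge the delta into the passive copy, rebuild its
-- indexes and gather statistics, then switch readers atomically:
CREATE OR REPLACE SYNONYM big_table FOR big_table_b;
```

Both of your options trade extra storage and refresh work for read isolation; the synonym approach avoids the materialized view's refresh machinery, while the materialized view keeps a single maintained object. Either should remove the merge's effect on reads, since readers never touch the table being modified.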
Categories: DBA Blogs

Update Table 2 based on data from Table 1 with Conditions

Fri, 2020-07-24 11:06
I have two tables (target & source) which have the same structure. Target has a subset of the accounts & telephone info from the source table. Source has duplicate account numbers with different contact information in each row. I'm attempting to update the target table's contact information so that a single row per account number has all the contact information; additional contact info goes into extended columns on the target table. Keying on the account number, I would like to compare the source telephone information to see if it exists in the target table; if it doesn't exist, add the information to the extended columns, as in the example below: <code>REF Account No Cell Site Contact
SRC 3000030    1112374016
SRC 3000030    2226883696
SRC 3000030    3334023239
SRC 3000030    4442548632
TGT 3000030    1112374016

Target Table
Account No Cell Site Contact Phone5 Phone6 Phone7 Phone8
3000030    1112374016</code> <b>All non-duplicate contact numbers go into these overflow columns.</b> Through some research & help from StackOverflow, I crafted a view with a UNION ALL, and a merge statement with a pivot: <code>CREATE OR REPLACE VIEW col_combine_v (accountno, cell, site, contact, email) AS
select accountno, cell, site, contact, email1 from target
union all
select accountno, cell, site, contact, email1 from source
order by accountno desc;

MERGE INTO target tgt
USING (select accountno,
              r1_cell, r1_site, r1_contact, r1_email,
              r2_cell, r2_site, r2_contact, r2_email
       from (select s.*,
                    row_number() over (partition by accountno order by accountno) rn
             from col_combine_v s)
       pivot (min(cell) cell, min(site) site, min(contact) contact, min(email) email
              FOR rn IN (1 R1, 2 R2))
      ) src
ON (src.accountno = tgt.accountno)
WHEN MATCHED THEN UPDATE SET
  tgt.phone5 = CASE WHEN src.r1_contact <> src.r2_contact
                     AND src.accountno = tgt.accountno
                    THEN src.r1_contact END,
  tgt.phone6 = CASE WHEN src.r1_contact <> src.r2_contact
                     AND src.r1_contact <> tgt.phone6
                     AND src.accountno = tgt.accountno
                    THEN src.r1_contact END</code> Unfortunately, after I perform the PIVOT, I don't see any of the other contact numbers except for the next-to-last one (4442548632). This is the only number that gets updated on the target table. But I need to extract all the telephone numbers that don't exist in the target table. I'm relatively new to Oracle / SQL and appreciate your help and guidance. 7/14/20: I have updated the LiveSQL link previously provided in an attempt to mimic as much as possible the use case I'm trying to solve. I created a view to return 3 rows of data (1). If I select the data from the view, it returns 3 rows of data (3). Once I create a pivot table from the view, only two rows are returned (2). I need access to all the data in the view so I can update the target table accordingly. Basically, I'm trying to determine whether the source "Contact" information exists in the target table and, if it doesn't, put the source data in the extended columns, i.e. phone4, phone5, etc. Note that the target columns cell, site, contact are duplicates of the source table's equivalent columns. It's my understanding that I can't compare the two tables independently because I have no assurance that the row numbers are the same. Essentially, how does one compare two tables (A & B) so that I can use data in table A to update the data in...
Categories: DBA Blogs
