Feed aggregator

Eleven Table Tennis: Basics

The Oracle Instructor - 44 min 29 sec ago

Assuming you are an IRL player who wants to get as close to the real thing as possible, here’s what I’d recommend:

Make sure you have enough space to play

The green box is your playing space. Ideally it should be a square of 2.50 m × 2.50 m. Make sure to leave some space at the front, so you can reach balls close to the net and even a little across the net. Otherwise you may become a victim of ghost serves. Leave enough room at the sides – some opponents play angled, just like IRL.

If you don’t have enough space for this setup, maybe you shouldn’t play multiplayer mode. You can still have fun playing against the ball machine or against the AI. Actually, I think the game is worth the money even in that case.

Use the Discord channel

The Eleven TT community is on this Discord channel: https://discord.gg/s8EbXWG

I recommend you register there and use the same name as in the game, or a similar one. For example, I’m Uwe on Discord and uwe. in the game (because the name uwe was already taken). This is handy for getting advice from more experienced players; the game developers are there too. They are very responsive and keen to improve Eleven TT even further, based on your feedback.

There is currently a preview version with improved tracking functionality. You can simply ask the developers there to get you this preview version. I did, and I find it better than the regular version, especially for fast forehand strokes.

Set up your paddle

When you have the Sanlaki paddle adapter (as recommended in the previous post), go to the menu and then to Paddle Settings:

Click on Paddle Position and select the Sanlaki Adapter:

As an IRL player, you may start with an Advanced Paddle Surface:

See how that works for you. Bounciness translates to the speed of your blade – an OFF++ blade would be maximum bounciness. Spin is self-explanatory; there is no tackiness attribute, though. Throw Coefficient translates to sponge thickness: the higher the value, the thicker the sponge.

Serving

This takes some time to get used to. You need to press the trigger on the left controller to first “produce” a ball; then you throw it up and press the trigger again to release it. It took me a while to practice that, and I still sometimes fail to release the ball as smoothly as I’d like.

What I like very much: you have a built-in arbiter who makes sure your serve is legal according to the ITTF rules. This applies to matches in multiplayer mode as well as in single player mode – but not in free hit mode! Check out the Serve Practice:

It tells you what went wrong, if anything:

Remove AI Spin Lock

I recommend you practice against the AI opponent in single player mode for a while. It has spin lock on by default, which means it will never produce any side spin – I find that unrealistic. After some practice against the AI in single player mode, you’re ready for matches in multiplayer mode against other human opponents.

Categories: DBA Blogs

Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster

Dietrich Schroff - Fri, 2021-01-22 15:12

Kubernetes cluster with MicroK8s on Raspberry Pi

If you want to join a node and you get the following error:

microk8s join 192.168.178.57:25000/6a3ce1d2f0105245209e7e5e412a7e54

Contacting cluster at 192.168.178.57
Traceback (most recent call last):
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
    join_dqlite(connection_parts)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
    update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
    with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'

This error happens if you have not enabled DNS on your nodes.

So just run "microk8s.enable dns" on every machine:

microk8s.enable dns

Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.178.57
Adding argument --cluster-dns to nodes.
Configuring node 192.168.178.57
Restarting nodes.
Configuring node 192.168.178.57
DNS is enabled

After that, the join works as expected:

root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/ed3f57a3641581964cad43f0ceb2b526
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..  
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    <none>   3m35s   v1.20.1-34+97978f80232b01
zigbee   Ready    <none>   37m     v1.20.1-34+97978f80232b01
 

Google Cloud Services and Tools

Online Apps DBA - Fri, 2021-01-22 04:59

Google Cloud Services is a set of Computing, Networking, Storage, Big Data, Machine Learning, and Management services offered by Google, which run on the same cloud infrastructure that Google uses internally for YouTube, Gmail, and other end-user products. Want to know more about the tools and services offered by Google Cloud? Read the blog post […]

The post Google Cloud Services and Tools appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introduction To Amazon Lex | Conversational AI for Chatbots

Online Apps DBA - Fri, 2021-01-22 04:08

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging […]

The post Introduction To Amazon Lex | Conversational AI for Chatbots appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introduction To Amazon SageMaker Built-in Algorithms

Online Apps DBA - Fri, 2021-01-22 04:01

Amazon SageMaker provides a suite of built-in algorithms to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. Want to know more about the Amazon SageMaker Built-in Algorithms? Read the blog post at https://k21academy.com/awsml12 to learn more. The blog post covers: • What Is Amazon SageMaker and […]

The post Introduction To Amazon SageMaker Built-in Algorithms appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partner Webcast – Hitchhikers Guide to Oracle Cloud (Part 2)

Oracle Cloud Infrastructure is a set of complementary cloud services that enable you to build and run a wide range of applications and services in a highly available hosted environment. Oracle Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcing SLOB 2.5.3

Kevin Closson - Thu, 2021-01-21 15:08

This is just a quick blog post to inform readers that SLOB 2.5.3 is now available at the following webpage: click here.

SLOB 2.5.3 is a bug fix release. One of the fixed bugs has to do with how SLOB sessions get connected to RAC instances. SLOB users can surely connect to the SCAN service, but for more repeatable testing I advise SLOB 2.5.3 and SQL*Net services configured one per RAC node. This manner of connectivity establishes affinity between schemas and RAC nodes. For example, repeatability is improved if sessions performing SLOB Operations against, say, user7’s schema connect to the same RAC node each time as you iterate through your testing.

The following is cut and pasted from SLOB/misc/sql_net/README:

The tnsnames.ora in this directory offers an example of
service names that will allow the user to test RAC with
repeatable results. Connecting SLOB sessions to the round
robin SCAN listener will result in SLOB sessions connecting
to random RAC nodes. This is acceptable but not optimal and
can result in varying run results due to slight variations
in sessions per RAC node from one test to another.
As of SLOB 2.5.3, runit.sh uses the SQLNET_SERVICE_BASE and
SQLNET_SERVICE_MAX slob.conf parameters to sequentially
affinity SLOB threads (Oracle sessions) to numbered service
names. For example:
SQLNET_SERVICE_BASE=rac
SQLNET_SERVICE_MAX=8
With these assigned values, runit.sh will connect the first
SLOB thread to rac1 then rac2 and so forth until rac8 after
which the connection rotor loops back to rac1. This manner
of RAC affinity testing requires either a single SLOB
schema (see SLOB Single Schema Model in the documentation)
or 8 SLOB schemas to align properly with the value assigned
to SQLNET_SERVICE_MAX. The following command will connect
32 SLOB threads (Oracle sessions) to each RAC node in an
8-node RAC configuration given the tnsnames.ora example
file in this directory:
$ sh ./runit.sh -s 8 -t 32
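
To make the README concrete, here is a minimal sketch of what two of those service entries might look like in tnsnames.ora (the hostnames are invented for illustration; the real example file ships in SLOB/misc/sql_net):

rac1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac1))
  )

rac2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node2)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac2))
  )

# ... and so on through rac8, one service per RAC node, matching
# SQLNET_SERVICE_BASE=rac and SQLNET_SERVICE_MAX=8 in slob.conf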

Find sku_no values from the table which does not have any records for ven_type='P'

Tom Kyte - Thu, 2021-01-21 09:26
Hi Tom and Team, I have a very simple table with the following columns: site_no, sku_no, vendor_id, ven_type. A sku_no can have multiple vendor_id values associated to it. A vendor_id can have a ven_type = P (Primary) or S (Secondary). I want to find all those sku_no values from the table that do not have any ven_type='P' records. Kindly help me write a query for this. Please let me know if the explanation is not clear or if you want me to provide any other details. Thank you.
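
One possible direction (a sketch only – the question doesn’t name the table, so sku_vendors below is made up): aggregate per sku_no and keep only the groups containing no 'P' row:

select sku_no
from   sku_vendors
group  by sku_no
having count(case when ven_type = 'P' then 1 end) = 0;

An equivalent NOT EXISTS formulation works too; the HAVING form reads the table once.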
Categories: DBA Blogs

Troubleshooting heavy hash joins

Tom Kyte - Thu, 2021-01-21 09:26
Hello Chris, hello Connor. I know that hash join performance might suffer a lot if the session has to store the hash table on disk, but I have no idea how to check whether that is happening and whether that is the reason a query is slow. Admittedly I do not know much about memory usage in Oracle and how to track it. Could you please point me toward some articles on the topic, or dictionary views, that would help? Unfortunately I have neither DBA access nor access to trace files.
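
One option that doesn’t need DBA access is to run the statement with rowsource statistics and read the memory columns that dbms_xplan then reports. The gather_plan_statistics hint and the ALLSTATS LAST format are standard; the two-table query below is only a placeholder:

select /*+ gather_plan_statistics */ count(*)
from   t1
join   t2 on t2.id = t1.id;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

On the HASH JOIN line of the output, Used-Mem shows the memory actually used; if the hash table spilled to disk a Used-Tmp column appears, and the O/1/M column tells you whether executions were optimal (in memory), one-pass, or multi-pass.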
Categories: DBA Blogs

Spooling data to .csv file via SQL Plus

Tom Kyte - Thu, 2021-01-21 09:26
It is probably apparent from my job title that my role is far from a DBA. I am, however, a frequent user of Oracle SQL Developer. I work in mass appraisal and use SQL Developer to select, update, insert, delete, etc., on a regular basis. I would like to bring some automation to some of the select statements that I run most frequently and spool the results to a .csv file. An example of this is a select statement for identifying recently sold properties that need to be reviewed.

The following command [using the Windows Command Prompt] has been stored as a scheduled task [using Windows Task Scheduler]:

G:\>sqlplus username/password@connection @G:\SALES_VALIDATION\bat_files\weekly_salesval_rev.sql

Weekly_salesval_rev.sql is the script containing the spool commands and select statement. I have included a link so you can view the script. What command [or commands] can I incorporate so that the data will be formatted appropriately [and include column headings] for review in Excel? While there should be approximately 21 columns of data, the results are currently displaying in a scattered fashion within the first 3 columns of the .csv file.

If you need any other detail, please let me know. Also, if you would suggest or recommend other approaches to automating frequently run SQL select statements, let me know and I would be glad to look into those alternatives. Thank you for your time and help! Berkley Rose

Product Versions: SQL*Plus Release 12.2.0.1.0, Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0, Oracle SQL Developer 4.0.3.16, Windows Version 10.0.17763.1577
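
Since the asker is on SQL*Plus 12.2, the SET MARKUP CSV option is available; it produces properly delimited output with column headings, which should fix the “scattered across 3 columns” symptom. A sketch of the top of such a script (the output path is illustrative):

set markup csv on quote on
set feedback off
set trimspool on
spool G:\SALES_VALIDATION\output\weekly_salesval_rev.csv

-- the existing 21-column select statement goes here

spool off
exit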
Categories: DBA Blogs

Datapump in Oracle ADB using SQL Developer Web

Hemant K Chitale - Thu, 2021-01-21 04:20

 If you have a small schema in the Oracle Cloud Autonomous Database, you can actually run DataPump from SQL Developer Web.  DATA_PUMP_DIR maps to a DBFS mount inside the Oracle Database.


Logged in to my Oracle ADB as "ADMIN"

I check if DATA_PUMP_DIR exists and find that it is in DBFS:

Query
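
The screenshot isn’t reproduced here, but the check amounts to a query along these lines (DBA_DIRECTORIES is the standard dictionary view):

select directory_name, directory_path
from   dba_directories
where  directory_name = 'DATA_PUMP_DIR';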


I run a PL/SQL block to export the HEMANT schema using the DBMS_DATAPUMP API:

PLSQL Block


After I drop the two tables in the schema, I run the import using the DBMS_DATAPUMP API and then refresh the list of Tables owned by "HEMANT" :

PL/SQL Block


This method is a quick way of using the Autonomous Database itself when you don't have an external Object Store to hold the DataPump file. So, I'd use this only for very small schemas, as the dump itself is loaded into DBFS.


The PL/SQL code is:



REM  Based on Script posted by Dick Goulet, posted to oracle-l@freelists.org
REM With modifications by me.
REM Hemant K Chitale

REM Export schema "HEMANT"
declare
    h1 NUMBER := 0;
    h2 varchar2(1000);
    ex boolean := TRUE;
    fl number := 0;
    schema_exp varchar2(1000) := 'in(''HEMANT'')';
    f_name varchar2(50) := 'My_DataPump';
    dp_mode varchar2(100) := 'export';
    blksz number := 0;
    SUCCESS_WITH_INFO exception;
begin
    utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
    if (ex = TRUE) then
        utl_file.fremove('DATA_PUMP_DIR', dp_mode||'.log');
    end if;
    h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 2);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => schema_exp);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.wait_for_job(handle => h1, job_state => h2);
exception
    when SUCCESS_WITH_INFO then
        null;
    when others then
        h2 := sqlerrm;
        if (h1 != 0) then
            dbms_datapump.stop_job(h1, 1, 0, 0);
        end if;
        dbms_output.put_line(h2);
end;





REM Import schema "HEMANT"
declare
    h1 NUMBER := 0;
    h2 varchar2(1000);
    ex boolean := TRUE;
    fl number := 0;
    schema_exp varchar2(1000) := 'in(''HEMANT'')';
    f_name varchar2(50) := 'My_DataPump';
    dp_mode varchar2(100) := 'import';
    blksz number := 0;
    SUCCESS_WITH_INFO exception;
begin
    utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
    if (ex = TRUE) then
        utl_file.fremove('DATA_PUMP_DIR', dp_mode||'.log');
    end if;
    h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP');
    dbms_datapump.set_parallel(handle => h1, degree => 2);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP');
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => schema_exp);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.wait_for_job(handle => h1, job_state => h2);
exception
    when SUCCESS_WITH_INFO then
        null;
    when others then
        h2 := sqlerrm;
        if (h1 != 0) then
            dbms_datapump.stop_job(h1, 1, 0, 0);
        end if;
        dbms_output.put_line(h2);
end;



Again, I emphasise that this is only for small dumps.  


Categories: DBA Blogs

Oracle 19c Automatic Indexing: Non-Equality Predicates Part II (Let’s Spend The Night Together)

Richard Foote - Wed, 2021-01-20 21:42
In my previous post in this series, I discussed how Automatic Indexing currently does not consider Non-Equality predicates. Automatic Indexing will index columns based only on Equality predicates. So how does Oracle handle the scenario when an SQL has a mixture of both Equality and Non-Equality predicates? I’ll begin by creating two very similar tables, […]
Categories: DBA Blogs

MicroK8s: Kubernetes on raspberry pi - get nodes= NotReady

Dietrich Schroff - Wed, 2021-01-20 15:44

On my little Kubernetes cluster with MicroK8s I got this problem:

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
zigbee   NotReady   <none>   59d   v1.19.5-34+b1af8fc278d3ef
ubuntu   Ready      <none>   59d   v1.19.6-34+e6d0076d2a0033

The solution was:

kubectl describe node zigbee

and in the output i found:

Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 14m                kubelet     Starting kubelet.
  Warning  SystemOOM                14m                kubelet     System OOM encountered, victim process: influx, pid: 3256628
  Warning  InvalidDiskCapacity      14m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientMemory

Hmmm - so running additional databases and processes outside of Kubernetes is not such a good idea.

But as a quick solution: I ejected the SD card, resized the partition and added swap on my laptop, and put the SD card back into the Raspberry Pi...
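
For what it’s worth, a swap file can usually be added in place without ejecting the card – a sketch using standard Linux commands (the 1G size is arbitrary, and note that kubelet tolerates swap only if it is configured to):

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab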

Need help working with PL/SQL FOR LOOP

Tom Kyte - Wed, 2021-01-20 15:06
We have batch_table with column batch_id having values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Running sample code for demonstration:

<code>
PACKAGE XXX
PROCEDURE YYY IS
    -- variables...
BEGIN
    FOR i IN (SELECT batch_id FROM batch_table) LOOP
        -- Function call
        IF get_running_threads('XXEXTRACT_DATA') <= l_max_allowed_threads THEN
            l_request_id := fnd_request.submit_request (
                application => g_application_name,
                program     => 'XXEXTRACT_DATA',
                description => 'XXEXTRACT_DATA',
                start_time  => SYSDATE,
                sub_request => FALSE,
                argument1   => i.batch_id);
        ELSE
            BEGIN
                -- WAIT LOGIC
                -- Wait till previous concurrent program completes successfully.
                -- Call API FND_CONCURRENT.WAIT_FOR_REQUEST
                DBMS_LOCK.sleep(10);
            END;
        END IF;
    END LOOP;
END YYY;
END XXX;
</code>

My ask is: batch_id 1, 2, 3 are successfully processed by calling the concurrent program XXEXTRACT_DATA. When the FOR loop starts processing batch_id 4, control comes to the ELSE part. In the next iteration, processing starts with batch_id 5, so batch_id 4 is never processed. How can I process batch_id 4? Is this possible to achieve without reopening the cursor more than once? Please respond.
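
One way to look at it: the submit is skipped whenever the ELSE branch runs, so the wait has to happen before the submit rather than instead of it. A sketch (untested, reusing the names from the question) that opens the cursor once and loses no batch_id:

FOR i IN (SELECT batch_id FROM batch_table) LOOP
    -- Block here until a thread slot is free, instead of skipping the submit
    WHILE get_running_threads('XXEXTRACT_DATA') > l_max_allowed_threads LOOP
        DBMS_LOCK.sleep(10);
    END LOOP;
    -- Every batch_id reaches this point, so none is skipped
    l_request_id := fnd_request.submit_request (
        application => g_application_name,
        program     => 'XXEXTRACT_DATA',
        description => 'XXEXTRACT_DATA',
        start_time  => SYSDATE,
        sub_request => FALSE,
        argument1   => i.batch_id);
END LOOP;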
Categories: DBA Blogs

Historical question about the definition of the constraining table in the Oracle documentation

Tom Kyte - Wed, 2021-01-20 15:06
Hello, AskTom! Excuse me for the "idle" question, but none of my colleagues could answer this. Many, many years ago, back in Oracle 7, there was a "constraining error" ORA-04094 "table is constraining, trigger may not modify it". This restriction was relaxed in Oracle 8i. In the Oracle (8i and later) documentation you can read: "constraining error prevented a row trigger from modifying a table when the parent statement implicitly read that table to enforce a foreign key constraint", and it's absolutely clear to me. But the Oracle 7 documentation says: "A constraining table is a table that a triggering statement might need to read either directly, for a SQL statement, or indirectly, for a declarative referential integrity constraint". I never understood the first part of the sentence, "read directly, for a SQL statement": how can a reading of the table "constrain" the table in the triggering statement, if the SQL operator reads the data in the table in the state at the moment in time BEFORE the row trigger can do any changes in the table to be read? My opinion is: it was a mistake in the Oracle documentation, and a correct definition of the constraining table would be "constraining table is a table that a triggering statement might need to read indirectly, for a declarative referential integrity constraint", without the first part.
Categories: DBA Blogs

Table with LONG data type not being freed

Tom Kyte - Wed, 2021-01-20 15:06
Hi Tom, We are dealing with a system which has some legacy tables with a LONG column. One of these tables contains critical data (email bodies and attachments, chat transcripts, etc.) in the LONG column. Unfortunately we cannot remove or even change this table in any way; the system is heavily dependent on it.

2 years ago we introduced the following solution:
- New table with the same columns, but CLOB instead of LONG
- Minutely job, which copies newly inserted records from the original table into the new table, and converts LONG to CLOB
- Daily job which deletes already copied records from the old table

When this solution was introduced, we did a manual cleanup of the original table:
- Stop the system which uses the table
- Create a new table with the exact same definition (LONG column)
- Copy over all records which had to be kept
- Rename the original table with an "_OLD" postfix and the new table to the original table name
- Drop the old table

However, after 2 years, the customer where this database is located wanted to drop old partitions (note: the problematic table is NOT partitioned), and while checking disk space, noticed that the original table is still growing to be very large (this was the original problem also 2 years ago). Our expectation from the above solution was that while Oracle will not free up the disk space after the DELETE because of the LONG column, it will still reuse existing blocks, so that the table will not grow after a certain point. However, this is not what is happening; the table keeps growing, day by day. Currently it has about 13 000 records, but over 19 million blocks. About 18 000 records are added during the day, and each of those is deleted at night.

I tried to reproduce the problem using the below SQL, but I'm not sure it's fully correct; it's producing a different result, one which is what we were expecting: after we delete and insert again, the used blocks count is not increasing.

<code>
-- Helper proc
CREATE OR REPLACE PROCEDURE show_space (
    PI_TABLE_NAME IN VARCHAR2
) AS
    l_unformatted_blocks NUMBER;
    l_unformatted_bytes  NUMBER;
    l_fs1_blocks  NUMBER;
    l_fs1_bytes   NUMBER;
    l_fs2_blocks  NUMBER;
    l_fs2_bytes   NUMBER;
    l_fs3_blocks  NUMBER;
    l_fs3_bytes   NUMBER;
    l_fs4_blocks  NUMBER;
    l_fs4_bytes   NUMBER;
    l_full_blocks NUMBER;
    l_full_bytes  NUMBER;

    PROCEDURE p (
        p_label IN VARCHAR2,
        p_num   IN NUMBER
    ) IS
    BEGIN
        dbms_output.put_line(rpad(p_label, 40, '.') || p_num);
    END;
BEGIN
    dbms_space.space_usage(
        'CCQ', PI_TABLE_NAME, 'TABLE',
        l_unformatted_blocks, l_unformatted_bytes,
        l_fs1_blocks, l_fs1_bytes,
        l_fs2_blocks, l_fs2_bytes,
        l_fs3_blocks, l_fs3_bytes,
        l_fs4_blocks, l_fs4_bytes,
        l_full_blocks, l_full_bytes);
    p('Unformatted Blocks ', l_unformatted_blocks);
    p('FS1 Blocks (0-25)  ', l_fs1_blocks);
    p('FS2 Blocks (25-50) ', l_fs2_blocks);
    p('FS3 Blocks (50-75) ', l_fs3_blocks);
    p('FS4 Blocks (75-100)', l_fs4_blocks);
    p('Full Blocks        ', l_full_blocks);
END;
/

-- Dummy table, exact same definition as original table
CREATE TABLE test_long (
    qwkey     NUMBER(38, 0) NOT NULL,
    tablename VARCHAR2(90 BYTE),
    fieldname VARCHAR2(90 BYTE),
    textkey   NUMBER(38, 0),
    text      LONG
);

ALTER TABLE test_long ADD CONSTRAINT pk_test_long PRIMARY KEY (qwkey) USING INDEX;

-- Original stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;

BEGIN
    show_space('TEST_LONG');
END;

/* Output: Unformatted...
Categories: DBA Blogs

Hint Errors

Jonathan Lewis - Wed, 2021-01-20 05:06

This is a list of possible explanations of errors that you might see in the Hint Report section of an execution plan. It’s just a list of the strings extracted from a chunk of the 19.3 executable around the area where I found something I knew could be reported, so it may have some errors and omissions – but there are plenty of things here that might give you some idea why (in earlier versions of Oracle) you might have seen Oracle “ignoring” a hint (a note on displaying the report itself follows the list):

internally generated hint is being cleared
hint conflicts with another in sibling query block
hint overridden by another in parent query block
conflicting optimizer mode hints
duplicate hint
all join methods are excluded by hints
index specified in the hint doesn't exist
index specified in hint cannot be parallelized
incorrect number of indexes for AND_EQUAL
partition view set up
FULL hint is same as INDEX_FFS for IOT
access path is not supported for IOT
hint on view cannot be pushed into view
hint is discarded during view merging
duplicate tables in multi-table hint
conditions failed for array vector read
same QB_NAME hints for different query blocks
rejected by IGNORE_OPTIM_EMBEDDED_HINTS
specified number must be positive integer
specified number must be positive number
specified number must be >= 0 and <= 1
hint is only valid for serial SQL
hint is only valid for slave SQL
hint is only valid for dyn. samp. query
hint is only valid for update join ix qry
opt_estimate() without value list
opt_estimate() with conflicting values spec
hint overridden by NO_QUERY_TRANSFORMATION
hinted query block name is too long
hinted bitmap tree wasn't fully resolved
bitmap tree specified was invalid
Result cache feature is not enabled
Hint is valid only for select queries
Hint is not valid for this query block
Hint cannot be honored
Pred reorder hint has semantic error
WITH_PLSQL used in a nested query
ORDER_SUBQ with less than two subqueries
conflicting OPT_PARAM hints
conflicting optimizer_feature_enable hints
because of _optimizer_ignore_parallel_hints
conflicting JSON_LENGTH hints
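
For reference, the report these strings feed into can be displayed on demand: from 19c onwards dbms_xplan accepts a hint_report format option (the default formats show only the hints with problems), e.g.:

select * from table(dbms_xplan.display_cursor(null, null, 'typical +hint_report'));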

CBO Example

Jonathan Lewis - Wed, 2021-01-20 04:01

A little case study based on an example just in on the Oracle-L list server. This was supplied with a complete, working, test case that was small enough to understand and explain very quickly.

The user created a table, and used calls to dbms_stats to fake some statistics into place. Here, with a little cosmetic editing, is the code they supplied.

set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

drop table t1 purge;

create table t1 (id number(20), v varchar2(20 char));
create unique index pk_id on t1(id);
alter table t1 add (constraint pk_id primary key (id) using index pk_id enable validate);
exec dbms_stats.gather_table_stats(user, 't1');
 
declare
        srec               dbms_stats.statrec;
        numvals            dbms_stats.numarray;
        charvals           dbms_stats.chararray;
begin
  
        dbms_stats.set_table_stats(
                ownname => user, tabname => 't1', numrows => 45262481, numblks => 1938304, avgrlen => 206
        );

        numvals := dbms_stats.numarray (1, 45262481);
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, numvals);
        dbms_stats.set_column_stats (
                ownname => user, tabname => 't1', colname => 'id', 
                distcnt => 45262481, density => 1/45262481,
                nullcnt => 0, srec => srec, avgclen => 6
        );

        charvals := dbms_stats.chararray ('', '');
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, charvals);
        dbms_stats.set_column_stats(
                ownname => user, tabname => 't1', colname => 'v', 
                distcnt => 0,  density => 0, 
                nullcnt => 45262481, srec => srec, avgclen => 0
        );
        dbms_stats.set_index_stats( 
                ownname => user, indname =>'pk_id', numrows => 45607914, numlblks => 101513,
                numdist => 45607914, avglblk => 1, avgdblk => 1, clstfct => 33678879, indlevel => 2
        );
end;
/
 
variable n1 nvarchar2(32)
variable n2 number

begin
        :n1 := 'D';
        :n2 := 50;
end;
/
 

select 
        /*+ gather_plan_statistics */ 
        * 
from    ( 
        select  a.id col0,a.id col1
        from    t1 a
        where   a.v = :n1 
        and     a.id > 1
        order by 
                a.id 
        ) 
where 
        rownum <= :n2 
;
 
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost peeked_binds '));

From Oracle’s perspective the table has 45M rows, with a unique sequential key starting at 1 in the id column. The query looks like a pagination query, asking for 50 rows, ordered by id. But the in-line view asks for rows where id > 1 (which, initially, means all of them) and applies a filter on the v column.

Of course we know that v is always null, so in theory the predicate a.v = :n1 is always going to return false (or null, but not true) – so the query will never return any data. However, if you read the code carefully you’ll notice that the bind variable n1 has been declared as an nvarchar2(), not a varchar2().

Here’s the execution plan I got on an instance running 19.3 – and it’s very similar to the plan supplied by the OP:

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |  3747 (100)|      0 |00:00:00.01 |
|*  1 |  COUNT STOPKEY                |       |      1 |        |            |      0 |00:00:00.01 |
|   2 |   VIEW                        |       |      1 |     50 |  3747   (1)|      0 |00:00:00.01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |    452K|  3747   (1)|      0 |00:00:00.01 |
|*  4 |     INDEX RANGE SCAN          | PK_ID |      0 |   5000 |    14   (0)|      0 |00:00:00.01 |
----------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   2 - :2 (NUMBER): 50

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=:N2)
   3 - filter(SYS_OP_C2C("A"."V")=:N1)
   4 - access("A"."ID">1)

The question we were asked was this: “Why does the optimizer estimate that it will return 5,000 entries from the index range scan at operation 4?”

The answer is the result of combining two observations.

First: In the Predicate Information you can see that Oracle has applied a character-set conversion to the original predicate “a.v = :n1” to produce filter(SYS_OP_C2C(“A”.”V”)=:N1). The selectivity of “function of something = bind value” is one of those cases where Oracle uses one of its guesses, in this case 1%. Note that the E-rows estimate for operation 3 (table access) is 452K, which is 1% of the 45M rows in the table.

In real life if you had optimizer_dynamic_sampling set at level 3, or had added the hint /*+ dynamic_sampling(3) */ to the query, Oracle would sample some rows to avoid the need for guessing at this point.

Secondly: the optimizer has peeked the bind variable for the rownum predicate, so it is optimizing for 50 rows (basically doing the arithmetic of first_rows(50) optimisation). The optimizer “knows” that the filter predicate at the table will eliminate all but 1% of the rows acquired, and it “knows” that it has to do enough work to find 50 rows in total – so it can calculate that (statistically speaking) it has to walk through 5,000 (= 50 * 100) index entries to visit enough rows in the table to end up with 50 rows.

Next Steps (left as exercise)

Once you’ve got the answer to the question “Why is this number 5,000?”, you might go back and point out that the estimate for the table access was roughly 90 times larger than the estimate for the number of rowids selected from the index and wonder how that could be possible. (Answer: that’s just one of the little defects in the code for first_rows(n).)

You might also wonder what would have happened in this model if the bind variable n1 had been declared as a varchar2() rather than an nvarchar2() – and that might have taken you on to ask yet another question about what the optimizer was playing at.

Once you’ve modelled something that is a little puzzle there’s always scope for pushing the model a little further and learning a little bit more before you file the model away for testing on the next version of Oracle.

Question about sequence with lower nextval than column

Tom Kyte - Tue, 2021-01-19 20:46
Hi, I have a sequence named s1 that starts with 1, increments by 1, and ends with 999. I am inserting these sequence values into a table named e; the e table contains an eno (PK) column: insert into e values(s1.nextval); I inserted 9 rows, so the sequence's current value is 9, and 10 has already been inserted in the backend. When I try insert into e values(s1.nextval); I get a PK violation error. In this case I want to insert 11 in place of 10.
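
The usual pattern (a sketch, reusing the names from the question) is to keep drawing from the sequence until the insert stops colliding, since the sequence knows nothing about rows inserted behind its back:

declare
    l_done boolean := false;
begin
    while not l_done loop
        begin
            insert into e values (s1.nextval);
            l_done := true;
        exception
            when dup_val_on_index then
                null;  -- 10 already exists, loop round and draw 11
        end;
    end loop;
end;
/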
Categories: DBA Blogs

Choosing a view based on result of view

Tom Kyte - Tue, 2021-01-19 20:46
Hi all, I am having a struggle with a view. The outcome of the view can be 1 row of 3 different views with the same fields. Can someone point me in the right direction how to think please? :) Very basic, this is my problem:

<code>
/* MAINVIEW */
SELECT * FROM
    (SELECT * FROM VIEW1)  -- returns n rows
/* IF VIEW1 IS EMPTY THEN SELECT ALL FROM VIEW2 */
    (SELECT * FROM VIEW2)  -- returns n rows
/* IF VIEW2 IS EMPTY THEN SELECT VIEW3 (=HAS ALWAYS DATA) */
    (SELECT * FROM VIEW3)  -- returns 1 row
</code>

I don't need full code, just a hint in the right direction ;) . Thanks in advance.
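
One direction to think in (a sketch of the usual fallback pattern, assuming all three views have identical column lists):

SELECT * FROM view1
UNION ALL
SELECT * FROM view2
WHERE  NOT EXISTS (SELECT 1 FROM view1)
UNION ALL
SELECT * FROM view3
WHERE  NOT EXISTS (SELECT 1 FROM view1)
AND    NOT EXISTS (SELECT 1 FROM view2);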
Categories: DBA Blogs
