Feed aggregator

The use and misuse of %TYPE and %ROWTYPE attributes in PL/SQL APIs

Andrew Clarke - 9 hours 17 min ago
PL/SQL provides two attributes which allow us to declare a data structure with its datatype derived from a database table or a previously declared variable.

We can use the %type attribute to define a constant, a variable, a collection element, a record field or a PL/SQL program parameter. While we can reference a previously declared variable, the most common use case is to tie the declaration to a table column. The following snippet declares a variable with the same datatype and characteristics (length, scale, precision) as the SAL column of the EMP table.


l_salary emp.sal%type;
We can use the %rowtype attribute to declare a record variable which matches the projection of a database table or view, or of a cursor or cursor variable. The following snippet declares a variable with the same projection as the preceding cursor.

cursor get_emp_dets is
select emp.empno
, emp.ename
, emp.sal
, dept.dname
from emp
inner join dept on dept.deptno = emp.deptno;
l_emp_dets get_emp_dets%rowtype;
Using these attributes is considered good practice. PL/SQL development standards will often mandate their use. They deliver these benefits:
  1. self-documenting code: if we see a variable with a definition which references emp.sal%type, we can be reasonably confident this variable will be used to store data from the SAL column of the EMP table.
  2. datatype conformance: if we change the scale or precision of the SAL column of the EMP table, all variables which use the %type attribute will pick up the change automatically. If we add a new column to the EMP table, all variables defined with the %rowtype attribute will be able to handle that column without us needing to change those programs.
That last point comes with an amber warning: the automatic conformance only works when the %rowtype variable is populated by SELECT * FROM queries. If we are using an explicit projection with named columns then we have broken our code and we need to fix it. More generally, this silent propagation of changes to our data structures means we need to pay more attention to impact analysis. Is it right that we can just change a column's datatype or amend a table's projection without changing the code which depends on them? Maybe it's okay, maybe not. By shielding us from the immediate impact of broken code, using these attributes also removes the prompt to revisit our programs: we have to remember to do it ourselves.
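To see that warning in action, here is a minimal sketch against the classic EMP table (assuming its usual eight columns):

declare
  l_emp emp%rowtype;
begin
  -- adapts automatically if a column is added to EMP
  select * into l_emp from emp where empno = 7839;

  -- an explicit projection: this stops compiling once EMP gains a column,
  -- because the record now has more fields than the select list supplies
  select empno, ename, job, mgr, hiredate, sal, comm, deptno
  into   l_emp
  from   emp
  where  empno = 7839;
end;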

Overall I think the benefits listed above outweigh the risks, and we should use these attributes wherever appropriate for the definition of local variables and constants. However, complications arise if we use them to declare PL/SQL program parameters, specifically for procedures in package specs and standalone program units. It's not so bad if we're writing an internal API, but it becomes a proper headache when we are dealing with a public API: one which will be called by programs owned by another user, whose developers are in another team or outside our organisation, or which may not even be written in PL/SQL but in Java, .NET or whatever. So why is the use of these attributes so bad for those people?

  1. obfuscated code: these attributes are only self-documenting when we have a familiarity with the underlying schema, or have easy access to it. This will frequently not be the case for developers in other teams (or outside the organisation) who need to call our API. They may be able to guess at the datatype of SALARY or HIREDATE, but they really shouldn't have to. And, of course, a reference to emp%rowtype is about as unhelpful as it could be. Particularly when we consider ...
  2. loss of encapsulation: one purpose of an API is to shield consumers of our application from the gnarly details of its implementation. However, the use of %type and %rowtype actually exposes those details. Furthermore, a calling program cannot define its own variables using these attributes unless we grant it SELECT on the tables; otherwise the declaration will hurl PLS-00201 (a sketch follows this list). This is particularly problematic for %rowtype, because the caller needs to define a record variable which matches the row structure.
  3. breaking the contract: an interface is an agreement between the provider and the calling program. The API defines input criteria and in return guarantees outcomes. It forms a contract, which allows the consumer to write code against stable definitions. Automatically propagating changes in the underlying data structures to parameter definitions creates unstable dependencies. It is not simply that the use of %type and %rowtype attributes will cause the interface to change automatically; the issue is that there is no mechanism for signalling the change to an API's consumers. Interfaces demand stable dependencies: we must manage any changes to our schema in a way which ideally allows the consumers to continue to use the interface without needing to change their code, but at the very least tells them that the interface has changed.
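Sketched from the consumer's side, assuming a schema with EXECUTE on our API but no SELECT on EMP:

declare
  l_salary emp.sal%type;   -- fails with PLS-00201: identifier 'EMP.SAL' must be declared
begin
  null;
end;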
Defining parameters for public APIs

The simplest solution is to use PL/SQL datatypes in procedural signatures. These seem straightforward. Anybody can look at this function and understand that the input parameter is numeric and the returned value is a string.

function get_dept_manager (p_deptno in number) return varchar2;
So clear, but not safe. How long is the returned string? The calling program needs to know, so it can define an appropriately sized variable to receive it. Likewise, in this call, how long can a message be?

procedure log_message (p_text in varchar2);
Notoriously we cannot specify length, scale or precision for PL/SQL parameters, yet both the calling code and the called code will write the values to concretely defined types. The interface needs to communicate those definitions. Fortunately PL/SQL offers a solution: subtypes. Here we have a subtype which explicitly defines the datatype to be used for passing messages:

subtype st_message_text is varchar2(256);

procedure log_message (p_text in st_message_text);
Now the calling program knows the maximum permitted length of a message and can trim its value accordingly. (Incidentally, the parameter is still not constrained in the called program so we can pass a larger value to the log_message() procedure: the declared length is only enforced when we assign the parameter to something concrete such as a local variable.)
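To illustrate that parenthetical point, a minimal sketch assuming the st_message_text subtype and log_message procedure above are in scope:

declare
  l_text st_message_text;                        -- constrained to varchar2(256)
  l_long varchar2(1000) := rpad('x', 300, 'x');  -- 300 characters
begin
  log_message(l_long);   -- accepted: the subtype's length is not enforced on the IN parameter
  l_text := l_long;      -- raises ORA-06502: the constraint is enforced on assignment
end;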

We can replace %rowtype definitions with explicit RECORD definitions. So a function which retrieves the employee records for a department will look something like this:


subtype st_deptno is number(2,0);

type r_emp is record(
empno number(4,0),
ename varchar2(10),
job varchar2(9),
mgr number(4,0),
hiredate date,
sal number(7,2),
comm number(7,2),
deptno st_deptno
);

type t_emp is table of r_emp;

function get_dept_employees (p_deptno in st_deptno) return t_emp;
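A calling program can then work entirely with the types published in the spec, without any grants on the underlying tables. A sketch, assuming the declarations above live in a granted package we will call hr_api (a name invented for illustration):

declare
  l_emps hr_api.t_emp;
begin
  l_emps := hr_api.get_dept_employees(p_deptno => 20);
  for i in 1 .. l_emps.count loop
    dbms_output.put_line(l_emps(i).ename || ' earns ' || l_emps(i).sal);
  end loop;
end;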
We do this for all our public functions.

subtype st_manager_name is varchar2(30);

function get_dept_manager (p_deptno in st_deptno) return st_manager_name;
Now the API clearly documents the datatypes which calling programs need to pass and which they will receive as output. Crucially, this approach offers stability: the datatype of a parameter cannot be changed invisibly, as any change must be implemented in a new version of the publicly available package specification. Inevitably this imposes a brake on our ability to change the API, but we ought not to be changing public APIs frequently. Any such change should arise from either new knowledge about the requirements or a bug in the data model. Wherever possible we should try to handle bugs internally within the schema. But if we have to alter the signature of a procedure we need to communicate the change to our consumers as far ahead of time as possible. Ideally we should shield them from the need to change their code at all. One way to achieve that is Edition-Based Redefinition. Other ways would be to deploy the change with overloaded procedures, or even to use a different procedure name and deprecate the old one. Occasionally we might have no choice but to apply the change and break the API: sometimes with public interfaces the best we can do is try to annoy the fewest people.

Transitioning from a private to a public interface

There is a difference between internal and public packages. When we have procedures which are intended for internal usage (i.e. only called by other programs in the same schema) we can define their parameters with %type and %rowtype attributes. We have access to and - it is to be hoped! - familiarity with the schema's objects, so the datatype anchoring supports safer coding. But what happens when we have a package which we wrote as an internal package but now need to expose to a wider audience? Should we re-write the spec to use subtypes instead?

No. The correct thing to do is to write a wrapper package which acts as a facade over the internal one, and grant EXECUTE privileges on the wrapper (a minimal sketch follows the list below). The wrapper package will obviously have the requisite subtype definitions in the spec, and procedures declared with those subtypes. The package body will likely consist of nothing more than those procedures, which simply call their equivalents in the internal package. There may be some affordances for translating data structures, such as populating a table %rowtype variable from the public record type, but those will usually be necessary only for the purposes of documentation (this publicly defined subtype maps to this internally defined table column). There is an obvious overhead to writing another package, especially one which is really just a pass-through to the real functionality, but there are clear benefits which justify the overhead:

  • Stability. Not re-writing an existing package is always a good thing. Even if we are mechanically just replacing one set of datatype definitions with a different set which have the same characteristics we are still changing the core system, and that's a chunk of regression testing we've just added to the task.
  • Least privilege. Even if the internal package has been written with a firm eye on the SOLID principles, the chances are it contains more functionality than we need to expose to other consumers. Writing a wrapper package gives us the opportunity to grant access to only the required procedures.
  • Composition. It is also likely that the internal package doesn't have the exact procedure the other team needs. Perhaps there are actually two procedures they need to call, or there's one procedure but it has some confusing internal flags in its signature. Instead of violating the Law of Demeter we can define one simple procedure in the wrapper package spec and handle the internal complexity in the body.
  • Future proofing. Writing a wrapper package gives us an affordance where we can handle subsequent changes in the internal data model or functionality without affecting other consumers. By definition a violation of YAGNI, but as it's not the main reason why we're doing this I'm allowing this as a benefit.
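For concreteness, a minimal sketch of such a wrapper; the names emp_public_api, emp_internal_api and api_consumer are invented for illustration:

create or replace package emp_public_api as
  subtype st_deptno       is number(2,0);
  subtype st_manager_name is varchar2(30);

  function get_dept_manager (p_deptno in st_deptno) return st_manager_name;
end emp_public_api;
/

create or replace package body emp_public_api as
  function get_dept_manager (p_deptno in st_deptno) return st_manager_name
  is
  begin
    -- pass straight through to the internal package, which is free to use %type/%rowtype
    return emp_internal_api.get_dept_manager(p_deptno);
  end get_dept_manager;
end emp_public_api;
/

grant execute on emp_public_api to api_consumer;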
Design is always a trade off

The use of these attributes is an example of the nuances which Coding Standards often lack. In many situations their use is good practice, and we should employ them in those cases. But we also need to know when their use is a bad practice, and why, so we can do something better instead.

Part of the Designing PL/SQL Programs series

Choosing a plumbing repair shop in Shizuoka and the advantages of hiring one

The Feature - 11 hours 55 min ago

Shizuoka has a great many plumbing repair shops, but precisely because we rarely need one in daily life, when the time comes to hire one it can be hard to know what criteria to use. Plumbing trouble does not happen often, but when it does and you feel you cannot fix it yourself, it is certainly best to ask a specialist repair shop to fix it. Even for problems that are generally said to be easy for an amateur to fix, if someone with no knowledge or experience forces a repair, unexpected secondary damage can result, and you may feel uneasy about continuing to use the fixture afterwards. Precisely because Shizuoka has an abundance of plumbing repair shops, you should have a professional do the repair.

The safety of the work, the cost and the time required all differ considerably depending on which shop you choose, so compare them carefully. It is only natural to want the price to be as low as possible, and these days you can easily get an estimate over the internet. However, online estimates are simplified, and the cost actually required often differs, so be careful. To know the true cost, you need to have the shop come out for an on-site survey.

When requesting an on-site survey, check whether call-out fees, estimate fees or cancellation fees apply. By entrusting plumbing repairs in Shizuoka to a professional, you can continue to use your water supply with peace of mind well into the future.

Categories: APPS Blogs

Service Accounts suck - why data futures require end to end authentication.

Steve Jones - Thu, 2020-09-17 10:33
Can we all agree that "service" accounts suck from a security perspective. Those are the accounts that you set up so that one system/service can talk to another one. Often this will be a database connection, so the application uses one account (and thus one connection pool) to access the database. These service accounts are sometimes unique to a service or application, but often it's a standard
Categories: Fusion Middleware

Need to calculate Age as part of select

Tom Kyte - Thu, 2020-09-17 10:06
Hi, We just went live on Oracle a couple of weeks ago. I have a legacy process that includes running a script that was coded for Sybase. I have most of it converted to Oracle, but I'm having trouble with the Age field (it's the last piece I need to get working). I thought about just including the Age piece... then thought to include the entire script for context if nothing else. Thanks in advance for the assist! -Denise

Current legacy code:
<code>
SELECT DISTINCT meme.MEME_MEDCD_NO,
       meme.MEME_BIRTH_DT,
       AGE = CASE WHEN (month(convert(datetime, meme.MEME_BIRTH_DT, 103))*100) + day(convert(datetime, meme.MEME_BIRTH_DT, 103))
                       - ((month(getdate())*100) + day(getdate())) <= 0
                  THEN DATEDIFF(YEAR, convert(datetime, meme.MEME_BIRTH_DT, 103), getdate())
                  ELSE DATEDIFF(YEAR, convert(datetime, meme.MEME_BIRTH_DT, 103), getdate()) - 1
             END,
       sbsb.SBSB_ID,
       mepe.MEPE_EFF_DT,
       mepe.MEPE_TERM_DT,
       mepe.MEPE_ELIG_IND,
       mepe.CSPI_ID,
       sbad.SBAD_COUNTY AS 'Member_County',
       pdpd.LOBD_ID
FROM   dbo.CMC_MEME_MEMBER meme
INNER JOIN dbo.CMC_MEPE_PRCS_ELIG mepe ON mepe.MEME_CK = meme.MEME_CK
INNER JOIN dbo.CMC_SBSB_SUBSC sbsb ON sbsb.SBSB_CK = meme.SBSB_CK
INNER JOIN CMC_PDPD_PRODUCT pdpd ON mepe.PDPD_ID = pdpd.PDPD_ID
INNER JOIN CMC_SBAD_ADDR sbad ON sbsb.SBSB_CK = sbad.SBSB_CK AND sbsb.SBAD_TYPE_MAIL = sbad.SBAD_TYPE
WHERE  mepe.GRGR_CK IN (1,3,8)
AND    mepe.MEPE_ELIG_IND = 'Y'
AND    mepe.MEPE_EFF_DT <= '09/01/2020' AND -- Match file date
       mepe.MEPE_TERM_DT >= '09/01/2020' AND -- Match file date
       meme.MEME_MEDCD_NO IN ( )
</code>
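For what it's worth, one way to express just the age calculation in Oracle (a sketch which assumes MEME_BIRTH_DT is a DATE column; if it is stored as a DD/MM/YYYY string, wrap it in TO_DATE(meme.MEME_BIRTH_DT, 'DD/MM/YYYY') first):
<code>
SELECT meme.MEME_MEDCD_NO,
       meme.MEME_BIRTH_DT,
       -- whole years elapsed between the birth date and today
       FLOOR(MONTHS_BETWEEN(TRUNC(SYSDATE), meme.MEME_BIRTH_DT) / 12) AS age
FROM   CMC_MEME_MEMBER meme;
</code>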
Categories: DBA Blogs

Database Wallet

Tom Kyte - Thu, 2020-09-17 10:06
Hi Team, We have SSL certificates imported on the database server using ORAPKI after creating a wallet. We are using utl_http for external system communication from the database, and using utl_http.set_wallet to access the certificates. Now we are enqueuing the messages to a database queue and writing logic in the middleware to read messages from the queue and send them to the external system. But the problem is that the certificates are on the database server while the communication to the external system is from the middleware. Can we read the SSL certificate from the database server and pass it to the middleware? Is there a way to pass the certificate from the DB to the middleware? Can you please advise. Thank You.
Categories: DBA Blogs

Process in order to estimate how many DBAs are needed to support a project

Tom Kyte - Thu, 2020-09-17 10:06
GM, Do you guys know of a whitepaper or training that describes an approach to estimating how many Oracle DBA hours are needed to perform X, Y, Z? We are bidding on an effort, and I would prefer not to have to reinvent the wheel if something already exists. For instance, if a project has these requirements:
  • DBA shall set up a three node 19c RAC cluster with ASM
  • DBA shall tune the system (it will need to be tuned)
  • DBA shall be able to restore the database within a day with minimal data loss
  • DBA shall set up a disaster recovery site using Data Guard
  • DBA shall set up security to meet NIST-3029
I have started to break down all of the 50+ major tasks to satisfy the above requirements, and threw in rough daily estimates for each step.
Database Security - 10 days:
  • Database instance security hardening setup - 3
  • Database server security hardening implementation - 2
  • Security scanner software setup and troubleshooting - 1
  • Troubleshooting false positive security findings and waivers - 2
Oracle install and DB creation with RAC - 5 days:
  • Clusterware setup - 3 days
  • RAC database creation - 1
  • Licensing - .5
  • Database shutdown and startup setup - 1
Backup and Recovery Setup - 2
etc.
Thanks, John
Categories: DBA Blogs

Choice State in AWS Step Functions

Pakistan's First Oracle Blog - Thu, 2020-09-17 02:47

Rich, asynchronous, serverless applications can be built using AWS Step Functions. The Choice state in AWS Step Functions is the newest feature, and it was long awaited.

In simple words, we define steps and their transitions and call the whole thing a state machine. To define this state machine we use the Amazon States Language (ASL). ASL is a JSON-based structured language that defines state machines: collections of states that can perform work (Task states), determine which state to transition to next (Choice state), and stop execution on error (Fail state).

So if the requirement is to add branching logic like an if-then-else or case statement to our state transitions, then the Choice state comes in handy. The Choice state introduces various new operators into ASL, and the sky is now the limit with the possibilities. Operators for the Choice state include comparison operators like IsNull and IsString, existence operators like IsPresent, glob-style wildcards for matching strings, and comparison of one variable against another.

Choice State enables developers to simplify existing definitions or add dynamic behavior within state machine definitions. This makes it easier to orchestrate multiple AWS services to accomplish tasks. Modelling complex workflows with extended logic is now possible with this new feature.

Now one hopes that AWS will introduce a way to do all this graphically instead of dabbling in ASL.

Categories: DBA Blogs

Table vs Index Fragmentation

Tom Kyte - Wed, 2020-09-16 15:46
Hello, This is more of a fundamental question; sorry, I don't have any test cases. Does table fragmentation also imply index fragmentation for the same table?
Categories: DBA Blogs

PARALLEL HINT and DML ERROR logging

Tom Kyte - Wed, 2020-09-16 15:46
HI,
<code>
CREATE TABLE TEMP_TEST ( ID NUMBER(10) );

ALTER TABLE TEMP_TEST ADD ( CONSTRAINT temp_test_pk UNIQUE (ID) );
</code>
Scenario 1:
<code>
truncate table TEMP_TEST;

ALTER SESSION ENABLE PARALLEL DML;

INSERT INTO /*+ NOAPPEND PARALLEL(5) */ TEMP_TEST
SELECT /*+ PARALLEL */ DISTINCT BUCKET FROM source
LOG ERRORS INTO ERR$_TEMP_TEST ('insert failed') REJECT LIMIT UNLIMITED;
</code>
Scenario 2:
<code>
truncate table TEMP_TEST;

ALTER SESSION ENABLE PARALLEL DML;

INSERT INTO /*+ NOAPPEND PARALLEL(5) */ TEMP_TEST
SELECT DISTINCT BUCKET FROM source
LOG ERRORS INTO ERR$_TEMP_TEST ('insert failed') REJECT LIMIT UNLIMITED;
</code>
Scenario 1 fails with a unique constraint error instead of the error records being inserted into the error table, but in scenario 2 the error records are inserted into ERR$_TEMP_TEST. Why? The only difference between the two is the PARALLEL hint in the select statement.
Categories: DBA Blogs

Getting ORA-01017, invalid username/password, when configuring Oracle Mobile Server to my repository DB

Tom Kyte - Wed, 2020-09-16 15:46
My local DB is 19c. I downloaded the latest version of the mobile server and, while going through the installation, I came to this error. I have checked my sqlnet.ora file; the TNS configuration is good because I am able to connect with Toad and SQL Developer. This is the sqlnet.ora:
<code>
#SQLNET.AUTHENTICATION_SERVICES= (NTS)
SQLNET.AUTHENTICATION_SERVICES = (NONE)
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
#SQLNET.ALLOWED_LOGON_VERSION=12
SQLNET.ALLOWED_LOGON_VERSION=9
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY=C:\Users\TEKYI\Documents\wallet\oracle)))
</code>
Categories: DBA Blogs

Anomaly detection

Tom Kyte - Wed, 2020-09-16 15:46
Hello, We have an application that monitors applications, detects anomalies, does correlation between metrics, and performs Root Cause Analysis based on a few machine learning algorithms. We are planning to onboard Oracle monitoring for this application with a few metrics like those below. Could you please suggest where we could get some baseline monitoring SQLs to plug in to our application, especially the SQLs that are used to generate ASH/AWR reports. We want to start small and expand over a period of time.
  • Redo (MB per second)
  • Transactions per second
  • Latency: log file sync, log file parallel write, single block read (all in avg ms)
  • IO MB per sec
  • Physical reads MB/sec
  • Physical writes MB/sec
  • DB CPU % usage
  • Network MB/sec
  • Logons per sec
  • Logical reads MB/sec
  • File sync (avg/ms)
  • RMAN IO MB/ms
  • Waits
  • Locks
  • Top SQLs
  • Stale statistics on objects
  • Top objects by size, growth, avg growth per day, month
  • Space growth (total vs used), avg per day, month
Thanks, Ravi B
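As one possible starting point for the rate metrics, a sketch against V$SYSMETRIC (metric names can vary by version, so check V$METRICNAME; wait latencies and historical data live in other views such as V$EVENTMETRIC and the DBA_HIST_* tables):
<code>
-- 60-second system metrics; group_id = 2 is the long-duration system metrics group
SELECT metric_name,
       ROUND(value, 2) AS value,
       metric_unit,
       begin_time,
       end_time
FROM   v$sysmetric
WHERE  group_id = 2
AND    metric_name IN ('User Transaction Per Sec',
                       'Redo Generated Per Sec',
                       'Physical Reads Per Sec',
                       'Physical Writes Per Sec',
                       'Logons Per Sec',
                       'Database CPU Time Ratio')
ORDER BY metric_name;
</code>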
Categories: DBA Blogs

Cycle detected in recursive query where there seems to be no cycle

Tom Kyte - Wed, 2020-09-16 15:46
I have a recursive query on an Oracle 11g table with undirected graph data. Each row represents one edge. The recursive query traverses all edges starting from a given input edge. The idea of the query is:
- the input edge is at the 0th level
- for n>0, an edge is on the n-th level if it is incident to a node of some edge on the (n-1)-th level.
Query:
<code>
with edges (id, a, b) as (
  select 1, 'X', 'Y' from dual union
  select 2, 'X', 'Y' from dual
), r (l, id, parent_a, parent_b, child_a, child_b, edge_seen) as (
  select 0, id, null, null, a, b, cast(collect(id) over () as sys.ku$_objnumset)
  from edges
  where id = 1
  union all
  select r.l + 1, e.id, r.child_a, r.child_b, e.a, e.b
       , r.edge_seen multiset union distinct (cast(collect(e.id) over () as sys.ku$_objnumset))
  from r
  join edges e on (r.child_a in (e.a, e.b) or r.child_b in (e.a, e.b))
              and e.id not member of (select r.edge_seen from dual)
)
select * from r;
</code>
The query worked well with other inputs until two parallel edges between the same node pair occurred. In this case, edge 1 is at the 0th level of recursion (the initial row). I expected edge 2 would be added to the result at the 1st level of recursion since the join condition holds. Instead I get "ORA-32044: cycle detected while executing recursive with query". I know this error is reported when the row newly joined to the recursive query result would be the same as some existing row. What I don't understand is why Oracle treats a row with the same node ids but a different edge id as a duplicate. Adding a <code>cycle child_a, child_b set iscycle to 1 default 0</code> clause gives iscycle=1 for the new row; adding <code>cycle id, child_a, child_b set iscycle to 1 default 0</code> gives iscycle=0, both of which are correct. <b>Is it some known Oracle 11g bug and what's the best way to handle it?</b> I cannot fill in the LiveSQL link form since LiveSQL supports only Oracle 19 and the problem is reproducible only in Oracle 11g, which I can't migrate from. The dbfiddle equivalent is https://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=43af3cfae920e31f9a2748c1c31b54ad . Thanks.
Categories: DBA Blogs

ORA-22992, No LOB field selected

Tom Kyte - Wed, 2020-09-16 15:46
I have a SQL statement that can run by itself and get the result back over a db link. But if I want to put the result set into a table, either using "create table ... as ..." or "insert into ..." before the same select statement, I get an ORA-22992 error. What causes this? The SQL statement is like:
<CODE>
Select a.m, a.n, a.o, a.p, b.q, b.r, b.s, c.t, c.u, c.v
From a@remote a
Left join b@remote b on b.m = a.m and b.n = a.n
Left join c@remote c on c.m = a.m and c.m = a.n
Where a.yr = 2019 and a.class = 1
Order by a.m
</CODE>
Tables "a" and "c" don't have LOB fields; table "b" has a field "Ldesc" which is a CLOB, but it is NOT in the select list.
Local version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
Remote version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
Categories: DBA Blogs

Oracle 19c Automatic Indexing: Data Skew Part III (The Good Son)

Richard Foote - Wed, 2020-09-16 04:05
  I’m going to expand just a tad on my previous posts on data skew and run a simple query that returns a few rows based on a column predicate AND another query on the same column that returns many rows. The following table has a CODE column as with previous posts with the data […]
Categories: DBA Blogs

Planning And Recommendations Of Virtual Networks

Online Apps DBA - Wed, 2020-09-16 00:42

In this blog, you will see how to design a Virtual Network; we have discussed the planning and recommendations for virtual networks from an Azure Solution Architect Design AZ-304 perspective. Check out the blog post at https://k21academy.com/az30413. Areas covered in this blog post: • What things you need to remember before taking a subscription for single and […]

The post Planning And Recommendations Of Virtual Networks appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

The Island

Greg Pavlik - Tue, 2020-09-15 23:19

What is guilt? Who is guilty? Is redemption possible? What is sanity? Do persons have a telos, a destiny, both or neither? Ostrov (The Island) asks and answers all these questions and more.

A film that improbably remains one of the best of this century: "reads" like a 19th century Russian novel; the bleakly stunning visual setting is worth the time to watch alone.



[AZ-304] Microsoft Azure Architect Design (beta): Everything You Need To Know

Online Apps DBA - Tue, 2020-09-15 07:49

The AZ-304 exam replaces the older AZ-301 exam, which will retire on September 30, 2020. Check out this blog post at k21academy.com/az30411 to know more about AZ-304 and other questions like: why is this certification important? What domains does it cover? What are the eligibility criteria? How do you prepare for it? And whatnot. This blog […]

The post [AZ-304] Microsoft Azure Architect Design (beta): Everything You Need To Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Design Authentication and Authorization

Online Apps DBA - Tue, 2020-09-15 07:38

Want to get an idea about authentication and authorization and the other security aspects recommended in the cloud? Check out this blog post at K21Academy – k21academy.com/az30412, where you will get to know about designing authentication and authorization, in brief, from an Azure Solution Architect Design AZ-304 perspective. Areas covered in this blog post: • […]

The post Design Authentication and Authorization appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
