Feed aggregator

Verifying White Listing for Oracle Integration Platform

Antony Reynolds - Sun, 2018-09-23 18:33
Verifying Your White List is Working

A lot of customers require all outbound connections from their systems to be validated against a whitelist.  This article explains the different types of whitelist that might be applied and why they are important to Oracle Integration Cloud (OIC).  Whitelisting means that if a system is not specifically enabled, its internet access is blocked.

The Need

If your company requires systems to be whitelisted then you need to consider the following use cases:

  • Agent Requires Access to Integration Cloud
  • On-Premise Systems Initiating Integration Flows
  • On-Premise Systems Raising Events

In all the above cases we need to be able to make a call to Integration Cloud through the firewall, which may require whitelisting.

Types of Whitelisting

Typically there are two components involved in whitelisting: the source system and the target system.  In our case the target system will be Oracle Integration Cloud, and if using OAuth then the Identity Cloud Service (IDCS) as well.  The source system will be either the OIC connectivity agent, or a source system initiating integration flows, possibly via an event mechanism.

Whitelisting Patterns

  Pattern            Source Whitelisted   Target Whitelisted
  Target Only        No                   Yes
  Source & Target    Yes                  Yes
  Source Only        Yes                  No

Only the first two are usually seen; the third is included for completeness, but I have not seen it in the wild.

Information Required

When providing information to the network group to enable the whitelisting you may be asked to provide IP addresses of the systems being used.  You can obtain these by using the nslookup command.

> nslookup myenv-mytenancy.integration.ocp.oraclecloud.com
Server:         123.45.12.34
Address:        123.45.12.34#53

Non-authoritative answer:
myenv-mytenancy.integration.ocp.oraclecloud.com canonical name = 123456789ABCDEF.integration.ocp.oraclecloud.com.
Name:   123456789ABCDEF.integration.ocp.oraclecloud.com
Address: 123.123.123.123

You will certainly need to look up your OIC instance hostname.  You may also need your IDCS instance hostname, which is the URL you see when logging on.
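If you prefer to script the lookups, for example to hand the network group a list of addresses for several hostnames, the same information can be gathered with Python's standard library.  This is an illustrative sketch, not from the original article; the hostname is the placeholder from the nslookup example above, so substitute your own OIC and IDCS URLs:

```python
# Resolve instance hostnames programmatically (an alternative to nslookup).
# The hostname below is a placeholder -- use your own OIC/IDCS hostnames.
import socket

for host in ["myenv-mytenancy.integration.ocp.oraclecloud.com"]:
    try:
        # getaddrinfo returns every address record for the name.  Cloud IPs
        # can change over time, so whitelist by hostname where your firewall
        # supports it, and re-check addresses periodically where it does not.
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
        print(host, "->", ", ".join(addrs))
    except OSError as exc:
        print(host, "-> lookup failed:", exc)
```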

Testing Access

Once the whitelist is enabled we can test it by using the curl command from the machine from which we require whitelist access.

> curl -i -u 'my_user@mycompany.com:MyP@ssw0rd' https://myenv-mytenancy.integration.ocp.oraclecloud.com/icsapis/v2/integrations
HTTP/1.1 200 OK
Date: Sun, 23 Sep 2018 23:19:44 GMT
Content-Type: application/json;charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
X-ORACLE-DMS-ECID: 1234567890abcdef
X-ORACLE-DMS-RID: 0
Set-Cookie: iscs_auth=0123456789abcdef; path=/; HttpOnly
...

The -i flag shows the headers of the response; if there is an error, this flag lets you see the HTTP error code.

The -u flag is used to provide credentials.

In the example above we have listed all the integrations that are in the instance.  If you don't see the list of integrations then something is wrong.  Common problems are:

  • Wrong URL
  • Wrong username/password - pass them using single quotes to prevent interpretation of special characters by the shell.
  • Access denied due to whitelist not enabled - depending on the environment this may show as a timeout or an error from a proxy server.
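To script this check, a small probe can help distinguish a whitelist/firewall block (typically a timeout) from application-level problems (which only surface as HTTP errors once the connection succeeds).  This is an illustrative sketch, not part of the original article; the hostname, port and timeout are placeholders:

```python
# Hypothetical helper to separate whitelist blocks from credential problems.
import socket
import ssl

def check_whitelist(host, port=443, timeout=10):
    """Return a rough diagnosis of connectivity to an HTTPS endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            # If the TLS handshake completes, the firewall is letting us
            # through; remaining failures (401, 404) are URL/credential issues.
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(raw, server_hostname=host):
                return "reachable: remaining errors are URL/credential issues"
    except socket.timeout:
        return "timeout: likely blocked by the whitelist/firewall"
    except ConnectionRefusedError:
        return "refused: wrong port, or a proxy rejecting the connection"
    except OSError as exc:
        return f"error: {exc}"

# Placeholder hostname -- substitute your own OIC instance.
print(check_whitelist("myenv-mytenancy.integration.ocp.oraclecloud.com"))
```

A timeout here usually means the whitelist has not been enabled yet, while a successful handshake points you back at the URL or credentials.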
Summary

As you can see, gathering the information for whitelisting and then testing that it is correctly enabled are straightforward tasks that don't require advanced networking skills.

New Batch Level Of Service Algorithms

Anthony Shorten - Sun, 2018-09-23 18:25

Batch Level Of Service allows implementations to register a service target on individual batch controls and have the Oracle Utilities Application Framework assess the current performance against that target.

In past releases of the Oracle Utilities Application Framework the concept of Batch Level Of Service was introduced. This is an algorithm point that assesses the value of a performance metric against a target and returns the appropriate response for that target. By default, if configured, the return value is shown on the Batch Control maintenance screen or whenever the algorithm is called. For example:

Example Batch Control

It was possible to build an algorithm to set and check the target and return the appropriate level. This can be called manually, be available via the Health Service API or be used in custom Query Portals. For example:

Example Portal

In this release a number of enhancements have been introduced:

  • Possible to specify multiple algorithms. If you want to model multiple targets, it is now possible to link more than one algorithm to a batch control. The appropriate level (the worst case across the algorithms) will be returned.
  • More Out Of The Box algorithms. In past releases a basic generic algorithm was supplied but additional algorithms are now provided to include additional metrics:
    Algorithm       Description
    F1-BAT-LVSVC    Original Generic Algorithm to evaluate Error Count and Time Since Completion
    F1-BAT-ERLOS    Compare Count Of Records in Error to Threshold
    F1-BAT-RTLOS    Compare Total Batch Run Time to Threshold
    F1-BAT-TPLOS    Compare Throughput to Threshold

For more information about these algorithms refer to the algorithm entries themselves and the online documentation.
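To make the "worst case" combination concrete, here is an illustrative sketch in plain Python (this is not OUAF algorithm code; the level names and thresholds are assumptions for the example):

```python
# Sketch of combining multiple Level Of Service algorithms on one batch
# control into a single worst-case result.  Levels and thresholds are assumed.
from enum import IntEnum

class Level(IntEnum):
    NORMAL = 0
    WARNING = 1
    ERROR = 2

def error_count_los(error_count, threshold):
    # Mirrors the idea of F1-BAT-ERLOS: compare error records to a threshold.
    return Level.ERROR if error_count > threshold else Level.NORMAL

def run_time_los(run_minutes, threshold):
    # Mirrors the idea of F1-BAT-RTLOS: compare total run time to a threshold.
    return Level.WARNING if run_minutes > threshold else Level.NORMAL

def combined_los(levels):
    # With multiple algorithms linked, the worst case is returned.
    return max(levels)

levels = [error_count_los(12, 10), run_time_los(30, 45)]
print(combined_los(levels).name)  # the worse of the two levels
```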

Sea of Fertility

Greg Pavlik - Sun, 2018-09-23 18:24
In a discussion of some of my reservations about Murakami's take on 20th century Japanese literature, a friend commented on Mishima's Sea of Fertility tetralogy with some real insights I thought worth preserving and sharing, albeit anonymously (if you're not into Japanese literature, now's a good time to stop reading):

"My perspective is different: it was a perfect echo of the end of “Spring Snow” and a final liberation of the main character from his self-constructed prison of beliefs. Honda’s life across the novels represents the false path: of consciousness the inglorious decay and death of the soul trapped in a repetition of situations that it cannot fathom being forced into waking. He is forced into being an observer of his own life eventually debasing himself into a “peeping Tom” even as he works as a judge. The irony is rich. Honda decays through the four novels since he clings to the memory of his friend (Kiyoaki) and does not understand the constructed nature his experience and desires. He is asleep. He wants Matsugae’s final dream to be the truth (that they will “...meet again under the Falls.”) His desires have been leading him in a circle and the final scene in the garden is his recognition of what the Abbess (Satoko from Spring Snow) was trying to convey to him. When she tells him, “There was no such person as Kiyoaki Matsugae”, it is her attempt to cure him of his delusion (and spiritual illness that has rendered him desperate and weak - chasing the ego illusions of his youth and seeking the reincarnation of his friend everywhere.) Honda lives in the dream of his ego and desire. In the final scene, he wakes up for the first time. I loved the image of the shadows falling on the garden. He is finally dying, stripped of illusion. I found it to be Mishima at his most powerful. I agree about “Sailor”, that is a great novel and much more Japanese in its economy of expression. Now, Haruki Murakami is a world apart from Kawabata and Mishima. I love his use of the unconscious/Id as a place to inform and enthrall: the labyrinth of dreams. Most of his characters are trapped (at least part of the time) in this “place”: eg Kafka on the Shore, Windup Bird Chronicle, Hard-boiled Wonderland and End of the World, etc. Literature has to have room for all of them. 
I like the other Murakami, Ryu Murakami, whose “Audition” and “Famous Hits of the Shōwa Era” are dark, psychotic tales of unrestrained, escalating violence but redeemed by deep probing of unconscious, hidden motives (the inhuman work of the unconscious that guides the characters like the Greek sense of fate (Moira)) and occasional black humor."
 

Oracle Offline Persistence Toolkit - Reacting to Replay Conflict

Andrejus Baranovski - Sat, 2018-09-22 08:01
This is the next post related to the Oracle Offline Persistence Toolkit. Check my previous post on the same subject - Implementing Handle Patch Method in JET Offline Toolkit. Read more about the toolkit in its GitHub repo.

When the application goes online, we call the synchronisation method. If at least one of the requests fails, synchronisation is stopped and the error callback is invoked, where we can handle the failure. In the error callback we check whether the failure is related to a conflict; if so, we open a dialog where the user decides what to do (force the client changes or take the server changes). We also read the latest change indicator value from the response in the error callback, to apply it if the user decides to force the client changes in the next request:
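The flow above can be sketched generically (this is an illustrative Python sketch of the pattern, not the toolkit's JavaScript API; the ETag header and the callback names are assumptions):

```python
# Generic sketch of the replay-conflict pattern: on HTTP 409, read the
# server's latest change indicator and let the user decide how to resolve.
def handle_replay_response(status, headers, on_conflict):
    """Inspect a replayed request's response.

    on_conflict receives the server's latest change indicator and returns
    'client' (force local changes) or 'server' (take backend state).
    """
    if status == 409:
        latest_change_indicator = headers.get("ETag")  # server-side version
        choice = on_conflict(latest_change_indicator)
        if choice == "client":
            # Resend the local change with the fresh indicator attached.
            return ("resend", latest_change_indicator)
        # Discard the local change and refetch the backend state.
        return ("refetch", None)
    return ("done", None)

# Example: the user chooses to force the client changes.
action, etag = handle_replay_response(409, {"ETag": '"v42"'}, lambda _: "client")
print(action, etag)
```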


The dialog is simple - it displays dynamic text for the conflicted value and offers the user a choice of actions:


Let's see how it works.

User A edits the value Lex and saves it to the backend:


User B is offline, edits the same value, and saves it in local storage:


We can check it in the log - the changed value was stored in local storage:


When going online, pending requests logged while offline will be re-executed. Obviously the request above will fail, because the same value was changed by another user. The conflict will be reported:


PATCH operation fails with conflict code 409:


The user will be asked how to proceed: apply the client changes, overriding those in the backend, or on the contrary take the changes from the backend and bring them to the client:


I will explain how to implement these actions in my next post. In the meantime you can study the complete application, available in the GitHub repo.

[BLOG] Commonly Asked Questions Oracle GoldenGate 12c

Online Apps DBA - Sat, 2018-09-22 07:31

Visit: https://k21academy.com/goldengate26 and learn about: ✔ Types of replication in GoldenGate ✔ Difference between the classic & integrated extract process ✔ What the datapump is in GoldenGate & much more… Leave a comment if you have any question related to Oracle GoldenGate.

The post [BLOG] Commonly Asked Questions Oracle GoldenGate 12c appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to run PeopleTools 8.57 in the Cloud

PeopleSoft Technology Blog - Sat, 2018-09-22 01:13

Now that 8.57 is generally available to run in the cloud, I’ll outline the steps to get it up and running so you can take it for a spin. It was announced in an earlier blog that PeopleTools 8.57 will be initially released to run on the Oracle Cloud before it’s available for on-premises installs.  That means you’ll want to use PeopleSoft Cloud Manager 7 to get it up and running.  There’s probably a pretty good chance that this is the first exposure you’ve had to Cloud Manager so it’s worth explaining what you’ll do.

The process is pretty simple.  First, you’ll get an Oracle Cloud Infrastructure (OCI) account.  Then you’ll install Cloud Manager 7 onto that account.  Once installed, Cloud Manager 7 is used to download the 8.57 DPKs into your Cloud Repository.  Then you’ll use Cloud Manager 7 to upgrade a PUM image from 8.56 to 8.57 using the downloaded DPKs.  If you’ve been through a PeopleTools upgrade before, you’ll be amazed how Cloud Manager automates this process.

The steps outlined below assume that you are new to OCI and Cloud Manager.  If you are already familiar with Cloud Manager and have it running in your tenancy, then update your Cloud Manager instance using Interaction Hub Image 07 as described here and go to Step 4.

Step 1: Get your Oracle Cloud trial account.  If you are new to OCI and PeopleSoft Cloud Manager, then please go through all links below to help you get started quickly.

Using the link given above, request a free account; it gives you access to all Oracle Cloud services, up to 3,500 hours or USD 300 in free credits (available in select countries, with limited validity). When your request is processed, you will be provisioned a tenancy in Oracle Cloud Infrastructure. Oracle will send you a welcome email with instructions for signing in to the console for the first time. There is no charge unless you choose to Upgrade to Paid from My Services in the console.

Step 2: Set up your OCI tenancy by creating users, policies and networks.  Refer to OCI documentation here for details or get a quick overview in this blog.  After this step, you will have your tenancy ready to deploy PeopleSoft Cloud Manager. If you are on OCI Classic then you can skip this step.

Step 3: Install and configure PeopleSoft Cloud Manager.  Follow the OCI install guide here. If you are on OCI Classic, follow the install guide here.

As part of this step, you will –

  • Download the Cloud Manager images
  • Upload images to your tenancy
  • Create a custom Microsoft Windows 2012 image
  • Spin up the Cloud Manager instance
  • Run bootstrap to install Cloud Manager
  • Configure Cloud Manager settings
  • Set up file server repository

Step 4: Subscribe to download channels.  The PeopleTools 8.57 download channel is now available under the unsubscribed channel list.  Navigate to Dashboard | Repository | Download Subscriptions and click on the Unsubscribed tab. Subscribe to the new Tools_857_Linux channel using the related Actions menu. Also subscribe to any application download channels, for example HCM_92_Linux. Downloading a new PeopleTools version takes a while; wait for the status to indicate success.

Step 5:  Provision a demo environment using PUM Fulltier topology.  Create a new environment template to deploy an application that you downloaded in Step 4.  Use the provided PUM Fulltier topology in the template.  Using this newly created template, provision a new environment. If you already have a PUM environment deployed through Cloud Manager, then you can skip this step. 

Step 6: After the environment is provisioned, navigate to Environment Details | Upgrade PeopleTools.  On the right pane, you’ll have an option to select the PeopleTools 8.57 version that you have downloaded.  Select the PeopleTools version that you want to evaluate. Click Upgrade to begin the upgrade process. 

You’ll be able to see the job details and the steps running on the same page.  Click on the ‘In progress’ status link to view the upgrade job status. 

After the upgrade is complete, click the status link for detailed upgrade job status.

The PeopleTools upgrade is now complete.  Log in to the upgraded environment's PIA to evaluate the new features in PeopleTools 8.57.  For more info on PeopleTools 8.57, go to the PeopleTools 8.57 Documentation.

Connect Power BI to GCP BigQuery using Simba Drivers

Ittichai Chammavanijakul - Fri, 2018-09-21 21:56

Power BI can connect to GCP BigQuery through its provided connector. However, some reported that they’ve encountered the refresh failure seen below. Even though the error message suggests that the quota for API requests per user per minute may be exceeded, some reported that the error still occurs even when a small dataset is being fetched.

In my case, simply disabling parallel loading of tables (Options and settings > Options > Data Load) made the issue go away. However, some still said it did not help.

An alternative option is to use another supported ODBC or JDBC driver from Simba Technologies Inc., which is partnered with Google.

Setup

  • Download the latest 64-bit ODBC driver from here.
  • Install it on the local desktop where Power BI Desktop is installed. We will have to install the same driver on the Power BI Gateway Server if the published report needs to be refreshed on Power BI Service.

Configuration

  • From Control Panel > Administrative Tools > ODBC Data Source Administrator > System DSN, click Configure on the Google BigQuery entry.
  • Follow the instructions from the screens below.

When connecting on Power BI, Get Data > choose ODBC.

Categories: DBA Blogs

RMAN-03002: ORA-19693: backup piece already included

Michael Dinh - Fri, 2018-09-21 18:36

I have been cursed trying to create a 25TB standby database.

Active duplication using the standby as source failed due to a bug.

Backup-based duplication using the standby as source failed due to a bug again.

Now performing traditional restore.

Both attempts failed with RMAN-20261: ambiguous backup piece handle

RMAN> list backuppiece '/bkup/ovtdkik0_1_1.bkp';
RMAN> change backuppiece '/bkup/ovtdkik0_1_1.bkp' uncatalog;

What’s in the backup?

RMAN> spool log to /tmp/list.log
RMAN> list backup;
RMAN> exit

There are 2 identical backup pieces, and I don’t know how this could have happened.

$ grep ovtdkik0_1_1 /tmp/list.log
    201792  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp
    202262  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp

RMAN> delete backuppiece 202262;

I restarted the restore and it is running again.

PeopleTools 8.57 is Available on the Oracle Cloud

PeopleSoft Technology Blog - Fri, 2018-09-21 15:17

We are pleased to announce that PeopleTools 8.57 is generally available for install and upgrade on the Oracle Cloud.  As we announced earlier, PeopleTools 8.57 will initially be available only on the Oracle Cloud.  We plan to make PeopleTools 8.57 available for on-premises downloads with the 8.57.04 CPU patch in January 2019.  

There are many exciting new features in PeopleTools 8.57, including:

  • The ability for end-users to set up conditions in analytics that if met will notify the user
  • Improvements to the way Related Content and Analytics are displayed
  • Add custom fields to Fluid pages with minimum life-cycle impact
  • More capabilities for end user personalization
  • Improved search that supports multi-facet selections
  • Easier than ever to brand the application with your corporate colors and graphics
  • Fluid page preview in AppDesigner and improved UI properties interface
  • End-to-end command-line support for life-cycle management processes
  • And much more!

You’ll want to get all the details and learn about the new features in 8.57.  A great place to start is the PeopleTools 8.57 Highlights Video posted on the PeopleSoft YouTube channel.  The highlights video gives you an overview of the new features and shows how to use them.

There is plenty more information about the release available today from a number of other places where you can learn more about 8.57.

In addition to releasing PeopleTools 8.57, version 7 of PeopleSoft Cloud Manager is also being released today.  CM 7 is similar in functionality to CM 6, with additional support for PeopleTools 8.57.  If you currently use a version of Cloud Manager, you must upgrade to version 7 in order to install PT 8.57. 

There are a lot of questions about how to get started using PeopleTools 8.57 and Cloud Manager 7.  Documentation and installation instructions are available on the Cloud Manager Home Page.

More information will be published over the next couple of weeks to help you get started with 8.57 on the cloud.  Additional information will include blogs covering the details of the installation, a video that shows the complete process from creating a free trial account to running PT 8.57, and a detailed Spotlight Video that describes configuring OCI and Cloud Manager 7.

PeopleTools 8.57 is a significant milestone for Oracle, making it easier than ever for customers to use, maintain and run PeopleSoft Applications.

OAC 18.3.3: New Features

Rittman Mead Consulting - Fri, 2018-09-21 07:58
 New Features

I believe there is a hidden strategy behind Oracle's product release schedule: every time I'm either on holiday or on a business trip full of appointments, a new version of Oracle Analytics Cloud is published with a huge set of new features!

 New Features

OAC 18.3.3 went live last week and contains a big set of enhancements, some of which were already described at Kscope18 during the Sunday Symposium. New features are appearing in almost all the areas covered by OAC, from Data Preparation to the main Data Flows, new Visualization types, new security and configuration options and BIP and Essbase enhancements. Let's have a look at what's there!

Data Preparation

A recurring theme in Europe since last year is GDPR, the General Data Protection Regulation, which aims at protecting the data and privacy of all European citizens. This is very important in our landscape, since we "play" with data on a daily basis and we should be aware of what data we can use and how.
Luckily for us, OAC now helps address GDPR with the Data Preparation Recommendations step: every time a dataset is added, each column is profiled and a list of recommended transformations is suggested to the user. Please note that Data Preparation Recommendations only suggests changes to the dataset, so it can't be considered a complete solution to GDPR compliance.
The suggestions may include:

  • Complete or partial obfuscation of the data: useful when dealing with security/user sensitive data
  • Data Enrichment based on the column data can include:
    • Demographical information based on names
    • Geographical information based on locations, zip codes

 New Features

Each of the suggestions applied to the dataset is stored in a data preparation script that can easily be reapplied if the data is updated.

 New Features

Data Flows

Data Flows is the "mini-ETL" component within OAC which allows transformations, joins, aggregations, filtering, binning and machine-learning model training, storing the artifacts either locally, in a database or in an Essbase cube.
Data Flows had some limitations, however; the first was that they had to be run manually by the user. With OAC 18.3.3 there is now an option to schedule Data Flows, more or less like we were used to doing when scheduling Agents back in OBIEE.

 New Features

Another limitation was the creation of a single dataset per Data Flow. This has been solved with the introduction of the Branch node, which allows a single Data Flow to produce multiple datasets - very useful when the same set of source data and transformations needs to be used to produce various datasets.

 New Features

Two other new features have been introduced to make Data Flows more reusable: Parametrized Sources and Outputs, and Incremental Processing.
Parametrized Sources and Outputs allows the data-flow source or target to be selected at runtime, making it possible, for example, to create a specific, different dataset for today's load.

 New Features

Incremental Processing, as the name says, is a way to run Data Flows only on the data added since the last run (incremental loads, in ETL terms). In order to have a Data Flow work with incremental loads we need to:

  • Define in the source dataset the key column that indicates new data since the last run (e.g. CUSTOMER_KEY or ORDER_DATE)
  • When including the dataset in a Data Flow, enable execution of the Data Flow with only the new data
  • In the target dataset, define whether the Incremental Processing replaces existing data or appends to it.

Please note that Incremental Load is available only when using database sources.
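The incremental idea can be sketched as follows (illustrative only; OAC performs this internally for database sources, and the ORDER_DATE key column and the watermark handling here are assumptions for the example):

```python
# Minimal sketch of an incremental load: keep a watermark (the highest key
# value seen so far) and process only rows newer than it on each run.
def incremental_rows(rows, key, last_watermark):
    """Return only rows newer than the watermark, plus the new watermark."""
    new_rows = [r for r in rows if r[key] > last_watermark]
    new_watermark = max((r[key] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

source = [
    {"ORDER_ID": 1, "ORDER_DATE": "2018-09-19"},
    {"ORDER_ID": 2, "ORDER_DATE": "2018-09-20"},
    {"ORDER_ID": 3, "ORDER_DATE": "2018-09-21"},
]
# Only data added since the last run (watermark) is processed; the target
# then either appends this delta or replaces its existing data.
delta, watermark = incremental_rows(source, "ORDER_DATE", "2018-09-19")
print(len(delta), watermark)
```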

Another important improvement is Function Shipping when Data Flows are used with Big Data Cloud: if the source datasets come from BDC and the results are stored in BDC, all the transformations like joining, adding calculated columns and filtering are shipped to BDC as well, meaning there is no additional load on OAC for the Data Flow.

Lastly, there is a new Properties Inspector feature in Data Flows, allowing you to check properties like name and description as well as to access and modify the scheduling of the related flow.

 New Features

Data Replication

It is now possible to use OAC to replicate data from a source system like Oracle's Fusion Apps, Talend or Eloqua directly into Big Data Cloud, Database Cloud or Data Warehouse Cloud. This function is extremely useful since it decouples the queries generated by the analytical tools from the source systems.
As expected, the user can select which objects to replicate, the filters to apply, the destination tables and columns, and the load type (Full or Incremental).

Project Creation

New visualization capabilities have been added which include:

  • Grid HeatMap
  • Correlation Matrix
  • Discrete Shapes
  • 100% Stacked Bars and Area Charts

In Map views, multiple map layers can now be added, as well as density- and metric-based heatmaps, all on top of new background maps including Baidu and Google.

 New Features

Tooltips are now supported in all visualizations, allowing the end user to add measure columns which will be shown when hovering over a section of any graph.

 New Features

The Explain feature is now available on metrics, not only on attributes, and has been enhanced: a new anomaly-detection algorithm identifies anomalies in combinations of columns, working in the background in asynchronous mode so anomalies are surfaced as soon as they are found.

A new feature that many developers will appreciate is AutoSave: we are all used to autosave in Google Docs, and the same now applies to OAC - a project is saved automatically on every change. Of course this feature can be turned off if necessary.
Another very interesting addition is Copy Data to Clipboard: with a right click on any graph, an option to copy the underlying data to the clipboard is available. The data can then be pasted natively into Excel.

Did you create a new dataset and want to repoint your existing project to it? With Dataset Replacement it's just a few clicks away: you need only select the new dataset and re-map all the columns used in your current project!

 New Features

Data Management

The datasets/dataflows/projects methodology is typical of what Gartner defined as Mode 2 analytics: analysis done by a business user without any involvement from IT. The step sometimes missing, or hard to perform, in self-service tools is publishing: once a certain dataset is consistent and ready to be shared, it's rather difficult to open it to a larger audience within the same toolset.
New OAC administrative options address this problem: dataset Certification by an administrator allows a certain dataset to be queried via Ask and DayByDay by other users. There is also a dataset Permissions tab allowing the definition of Full Control, Edit or Read Only access at the user or role level. This is the way to bring self-service datasets back to corporate visibility.

 New Features

A Search tab allows fine control over the indexing of a certain dataset used by Ask and DayByDay. There are now options to select when the indexing is executed as well as which columns to index and how (by column name and value, or by column name only).

 New Features

BIP and Essbase

BI Publisher was added to OAC in the previous version; it now includes new features like tighter integration with datasets, which can be used as data sources, as well as email delivery read-receipt notification, compressed output and password protection, features that were already available in the on-premises version.
There is also a new set of features for Essbase, including a new UI, REST APIs and, very importantly security-wise, all external communications (like Smartview) now running over HTTPS.
For a detailed list of new features check this link.

Conclusion

OAC 18.3.3 includes an incredible amount of new features which enable the whole analytics story: from self-service data discovery to corporate dashboarding and pixel-perfect formatting, all within the same tool and shared security settings. Options like parametrized and incremental Data Flows allow content reusability and enhance overall platform performance by reducing the load on source systems.
If you are looking into OAC and want to know more, don't hesitate to contact us.

Categories: BI & Warehousing

Clob data type error out when crosses the varchar2 limit

Tom Kyte - Fri, 2018-09-21 04:26
Clob datatype in PL/SQL program going to exception when it crosses the varchar2 limit and giving the "Error:ORA-06502: PL/SQL: numeric or value error" , Why Clob datatype is behaving like varchar2 datatype. I think clob can hold upto 4 GB of data. Pl...
Categories: DBA Blogs

Migrating Oracle 10g on Solaris Sparc to Linux RHEL 5 VM

Tom Kyte - Fri, 2018-09-21 04:26
Hi, if i will rate my oracle expertise i would give it 3/10. i just started learning oracle, solaris and linux 2months ago and was given this task to migrate. yes our oracle version is quite old and might not be supported anymore. Both platforms ...
Categories: DBA Blogs

"secure" in securefile

Tom Kyte - Fri, 2018-09-21 04:26
Good Afternoon, My question is a simple one. I've wondered why Oracle decided to give the new data type the name "securefile". Is it because we can encrypt it while before with basicfile, we couldn't encrypt the LOB? Also, why not call it "se...
Categories: DBA Blogs

Pre-allocating table columns for fast customer demands

Tom Kyte - Fri, 2018-09-21 04:26
Hello team, I have come across a strange business requirement that has caused an application team I support to submit a design that is pretty bad. The problem is I have difficulty quantifying this, so I'm going you can help me all the reasons why ...
Categories: DBA Blogs

move system datafiles

Tom Kyte - Fri, 2018-09-21 04:26
Hi Tom, When we install oracle and create the database by default (not manually) ...the system datafiles are located at a specific location .. Is is possible to move these (system tablespace datafiles) datafiles from the original location to...
Categories: DBA Blogs

how does SKIPEMPTYTRANS work?

Tom Kyte - Fri, 2018-09-21 04:26
I am wondering how does SKIPEMPTYTRANS work? when does ogg judge a transaction empty or not? if it does the judgement in the middle transction? how does ogg know it's a empty transaction? provided that it did not update mapped tables before the jud...
Categories: DBA Blogs

Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3)

Yann Neuhaus - Fri, 2018-09-21 00:53

There is no in-place upgrade from OID 11.1.1.9 to OID 12c 12.2.1.3. The steps to follow are:

  1. Install the required JDK version
  2. Install the Fusion Middleware Infrastructure 12c (12.2.1.3)
  3. Install the OID 12C (12.2.1.3) in the Fusion Middleware Infrastructure Home
  4. Upgrade the existing OID database schemas
  5. Reconfigure the OID WebLogic Domain
  6. Upgrade the OID WebLogic Domain

1. Install JDK 1.8.131+

I have used the JDK 1.8_161

cd /u00/app/oracle/product/Java
tar xvf ~/software/jdk1.8.0_161

Set JAVA_HOME and add $JAVA_HOME/bin to the PATH

2. Install Fusion Middleware Infrastructure 12.2.1.3  software

I will not go into the details, as this is a simple Fusion Middleware Infrastructure 12.2.1.3 software installation.
This software contains WebLogic 12.2.1.3; there is no need to install a separate WebLogic software.

I used MW_HOME set to /u00/app/oracle/product/oid12c

java -jar ~/software/fmw_12.2.1.3_infrastructure.jar

3. Install OID 12C software

This part is just a software installation; you just need to follow the steps in the installation wizard.

cd ~/software/
./fmw_12.2.1.3.0_oid_linux64.bin

4. Check the existing schemas:

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID ;

The results:

MRC_NAME       COMP_ID              OWNER                VERSION      STATUS    UPGRADED
-------------- -------------------- -------------------- ------------ --------- --------
DEFAULT_PREFIX OID                  ODS                  11.1.1.9.0   VALID     N
IAM            IAU                  IAM_IAU              11.1.1.9.0   VALID     N
IAM            MDS                  IAM_MDS              11.1.1.9.0   VALID     N
IAM            OAM                  IAM_OAM              11.1.2.3.0   VALID     N
IAM            OMSM                 IAM_OMSM             11.1.2.3.0   VALID     N
IAM            OPSS                 IAM_OPSS             11.1.1.9.0   VALID     N
OUD            IAU                  OUD_IAU              11.1.1.9.0   VALID     N
OUD            MDS                  OUD_MDS              11.1.1.9.0   VALID     N
OUD            OPSS                 OUD_OPSS             11.1.1.9.0   VALID     N

9 rows selected.

I have an OID 11.1.1.9 and an IAM 11.1.2.3 installation using the same database as repository.

5. ODS Schema upgrade:

Take care to upgrade only the ODS schema and not the IAM schemas, or Oracle Access Manager will no longer work.
With OID 11.1.1.9, only the ODS schema was installed; the ODS upgrade requires creating new schemas.

cd /u00/app/oracle/product/oid12c/oracle_common/upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-11-13-37AM.log
Reading installer inventory, this will take a few moments...
...completed reading installer inventory.

In the following, I provide the most important screen shots for the “ODS schema upgrade”

[Screenshots: ODS schema upgrade wizard, steps 1 to 8, including the schema validity check]

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID;

MRC_NAME       COMP_ID              OWNER                VERSION      STATUS    UPGRADED
-------------- -------------------- -------------------- ------------ --------- --------
DEFAULT_PREFIX OID                  ODS                  12.2.1.3.0   VALID     Y
IAM            IAU                  IAM_IAU              11.1.1.9.0   VALID     N
IAM            MDS                  IAM_MDS              11.1.1.9.0   VALID     N
IAM            OAM                  IAM_OAM              11.1.2.3.0   VALID     N
IAM            OMSM                 IAM_OMSM             11.1.2.3.0   VALID     N
IAM            OPSS                 IAM_OPSS             11.1.1.9.0   VALID     N
OID12C         IAU                  OID12C_IAU           12.2.1.2.0   VALID     N
OID12C         IAU_APPEND           OID12C_IAU_APPEND    12.2.1.2.0   VALID     N
OID12C         IAU_VIEWER           OID12C_IAU_VIEWER    12.2.1.2.0   VALID     N
OID12C         OPSS                 OID12C_OPSS          12.2.1.0.0   VALID     N
OID12C         STB                  OID12C_STB           12.2.1.3.0   VALID     N
OID12C         WLS                  OID12C_WLS           12.2.1.0.0   VALID     N
OUD            IAU                  OUD_IAU              11.1.1.9.0   VALID     N
OUD            MDS                  OUD_MDS              11.1.1.9.0   VALID     N
OUD            OPSS                 OUD_OPSS             11.1.1.9.0   VALID     N

15 rows selected.

I named the new OID repository schemas OID12C during the ODS upgrade.

6. Reconfigure the domain

cd /u00/app/oracle/product/oid12c/oracle_common/common/bin/
./reconfig.sh -log=/tmp/reconfig.log -log_priority=ALL

See the “Reconfigure Domain” screen shots:
[Screenshots: Reconfigure Domain wizard, steps 1 to 25]

7. Upgrading Domain Component Configurations

cd ../../upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-12-18-12PM.log
Reading installer inventory, this will take a few moments…

The following screen shots show the upgrade of the WebLogic Domain configuration:
[Screenshots: upgrade domain component configuration, steps 1 to 7]

8. Start the domain

For this first start, I use the normal start scripts installed when upgrading the domain, each in a separate PuTTY session, so I can watch the traces.

Putty Session 1:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# Start the Admin Server in the first PuTTY session
./startWebLogic.sh

Putty Session 2:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# In another shell session, start the Node Manager:
./startNodeManager.sh

Putty Session 3:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
./startComponent.sh oid1

Starting system Component oid1 ...

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Reading domain from /u01/app/OID/user_projects/domains/IDMDomain

Please enter Node Manager password:
Connecting to Node Manager ...
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090905> <Disabling the CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to HMACDRBG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090909> <Using the configured custom SSL Hostname Verifier implementation: weblogic.security.utils.SSLWLSHostnameVerifier$NullHostnameVerifier.>
Successfully Connected to Node Manager.
Starting server oid1 ...
Successfully started server oid1 ...
Successfully disconnected from Node Manager.

Exiting WebLogic Scripting Tool.

Done

The ODSM application is now deployed in the WebLogic Administration Server; the WLS_ODS1 WebLogic Server from the previous OID 11G administration domain is no longer used.

http://host01.example.com:7002/odsm

7002 is the Administration Server port for this domain.

 

The article Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3) first appeared on Blog dbi services.

Don’t Drop Your Career Using Drop Database

Michael Dinh - Thu, 2018-09-20 22:12

I first learned about drop database in 2007.

The environment contains the standby database oltpdr.
Duplicating a second standby database, olapdr, on the same host using oltpdr as the source failed during the restore phase.
The task: clean up the data files left over from the failed olapdr duplication.

Check database olapdr.
olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select count(*) from gv$session;

  COUNT(*)
----------
        90

Elapsed: 00:00:00.00
olap1> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Elapsed: 00:00:00.03
olap1> startup force mount restrict exclusive;
ORACLE instance started.

Total System Global Area 2.5770E+10 bytes
Fixed Size                  6870952 bytes
Variable Size            5625976920 bytes
Database Buffers         1.9998E+10 bytes
Redo Buffers              138514432 bytes
Database mounted.

olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select count(*) from gv$session;

  COUNT(*)
----------
        92

Elapsed: 00:00:00.01
olap1> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Elapsed: 00:00:00.04
At this point, I was ready to run drop database and somehow an angel was watching over me and I decided to check v$datafile.
olap1> select name from v$datafile where rownum < 10;

NAME
-----------------------------------------------------------
+DATA/OLTPDR/DATAFILE/system.4069.986394171
+DATA/OLTPDR/DATAFILE/dev_odi_temp.4067.986394187
+DATA/OLTPDR/DATAFILE/sysaux.4458.985845085
+DATA/OLTPDR/DATAFILE/big_dmstaging_data_new_2.4687.986498821
+DATA/OLTPDR/DATAFILE/account_toll_index.3799.985714921
+DATA/OLTPDR/DATAFILE/users.2524.985777377
+DATA/OLTPDR/DATAFILE/dev_ias_temp.4141.985846937
+DATA/OLTPDR/DATAFILE/dev_stb.4143.985846937
+DATA/OLTPDR/DATAFILE/dev_odi_user.4144.985846937

9 rows selected.

Elapsed: 00:00:00.01

olap1> exit
Strange: the data files are the same for source and target.
oltp1> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

Elapsed: 00:00:00.07
oltp1> select name from v$datafile where rownum < 10;

NAME
-----------------------------------------------------------
+DATA/OLTPDR/DATAFILE/system.4069.986394171
+DATA/OLTPDR/DATAFILE/dev_odi_temp.4067.986394187
+DATA/OLTPDR/DATAFILE/sysaux.4458.985845085
+DATA/OLTPDR/DATAFILE/big_dmstaging_data_new_2.4687.986498821
+DATA/OLTPDR/DATAFILE/account_toll_index.3799.985714921
+DATA/OLTPDR/DATAFILE/users.2524.985777377
+DATA/OLTPDR/DATAFILE/dev_ias_temp.4141.985846937
+DATA/OLTPDR/DATAFILE/dev_stb.4143.985846937
+DATA/OLTPDR/DATAFILE/dev_odi_user.4144.985846937

9 rows selected.

Elapsed: 00:00:00.01
oltp1> exit
Check data files from ASM.
ASMCMD> cd DATA
ASMCMD> ls
OLAPDR/
OLTP/
OLTPDR/
SCHDDBDR/
_MGMTDB/

ASMCMD> cd OLAPDR
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ASMCMD> cd DATAFILE
ASMCMD> pwd
+DATA/OLAPDR/DATAFILE
ASMCMD> exit
Shut down olapdr.
olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Elapsed: 00:00:00.03
olap1> shut abort;
ORACLE instance shut down.
olap1> exit
Manually remove data files from ASM.
$ asmcmd lsof -G +DATA|grep -ic OLAPDR
0
$ asmcmd ls +DATA/OLAPDR/DATAFILE|wc -l
1665
$ asmcmd lsof -G +DATA/OLAPDR/DATAFILE|wc -l
0
$ asmcmd
ASMCMD> cd datac1
ASMCMD> cd olapdr
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ASMCMD> cd datafile
ASMCMD> pwd
+DATA/olapdr/datafile
ASMCMD> rm *
You may delete multiple files and/or directories.
Are you sure? (y/n) y

What would have happened if drop database had been executed?
Does anyone know for sure?
Would you have executed drop database?

Differences Between Validate Preview [Summary]

Michael Dinh - Thu, 2018-09-20 19:44

Adding SUMMARY is equivalent to the difference between LIST BACKUP OF DATABASE SUMMARY and LIST BACKUP OF DATABASE.

RMAN> restore database validate preview summary from tag=stby_dup;

Starting restore at 20-SEP-2018 21:19:48
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=hawk1 device type=DISK


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
119     B  0  A DISK        18-SEP-2018 13:56:33 1       1       NO         STBY_DUP
using channel ORA_DISK_1

RMAN> restore database validate preview from tag=stby_dup;

Starting restore at 20-SEP-2018 21:18:44
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=hawk1 device type=DISK


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
119     Incr 0  1.34G      DISK        00:00:15     18-SEP-2018 13:56:33
        BP Key: 121   Status: AVAILABLE  Compressed: NO  Tag: STBY_DUP
        Piece Name: /tmp/HAWK_djtde1c3_1_1.bkp
  List of Datafiles in backup set 119
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/system.306.984318067
  2    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/sysaux.307.984318067
  3    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs1.309.984318093
  4    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/users.310.984318093
  5    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs2.311.984318095
  6    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs3.312.984318095
using channel ORA_DISK_1

The following output is the same for both commands.
RMAN> restore database validate preview summary from tag=stby_dup;
RMAN> restore database validate preview from tag=stby_dup;

List of Archived Log Copies for database with db_unique_name HAWKB
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
849     1    506     A 18-SEP-2018 13:55:08
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_506.551.987170199

852     1    507     A 18-SEP-2018 13:56:39
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_507.552.987199227

856     1    508     A 18-SEP-2018 22:00:26
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_508.554.987220639

860     1    509     A 19-SEP-2018 03:57:18
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_509.556.987258729

862     1    510     A 19-SEP-2018 14:32:07
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_510.557.987285627

864     1    511     A 19-SEP-2018 22:00:27
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_511.558.987287235

868     1    512     A 19-SEP-2018 22:27:15
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_512.560.987325879

872     1    513     A 20-SEP-2018 09:11:18
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_513.562.987364831

847     2    173     A 18-SEP-2018 13:55:08
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_2_seq_173.550.987170199

854     2    174     A 18-SEP-2018 13:56:38
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_174.553.987210305

858     2    175     A 19-SEP-2018 01:05:05
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_175.555.987253211

866     2    176     A 19-SEP-2018 13:00:10
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_176.559.987287239

870     2    177     A 19-SEP-2018 22:27:18
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_2_seq_177.561.987328815

Media recovery start SCN is 6038608
Recovery must be done beyond SCN 6038608 to clear datafile fuzziness

channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /tmp/HAWK_djtde1c3_1_1.bkp
channel ORA_DISK_1: piece handle=/tmp/HAWK_djtde1c3_1_1.bkp tag=STBY_DUP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:08
using channel ORA_DISK_1

channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_506.551.987170199
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_507.552.987199227
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_508.554.987220639
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_509.556.987258729
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_510.557.987285627
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_511.558.987287235
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_512.560.987325879
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_513.562.987364831
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_2_seq_173.550.987170199
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_174.553.987210305
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_175.555.987253211
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_176.559.987287239
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_2_seq_177.561.987328815
Finished restore at 20-SEP-2018 21:20:11

RMAN>

New File Adapter - Native File Storage

Anthony Shorten - Thu, 2018-09-20 17:59

In Oracle Utilities Application Framework V4.3.0.6.0, a new File Adapter has been introduced to parameterize file locations across environments. In previous releases, environment variables or paths were hard-coded to specify the locations of files.

With the introduction of the Oracle Utilities Cloud SaaS Services, file locations are standardized, and to reduce maintenance costs these paths are now parameterized using an Extendable Lookup (F1-FileStorage) that defines the path alias and the physical location. The on-premise version of Oracle Utilities Application Framework V4.3.0.6.0 supports local storage (including network storage) using this facility. The Oracle Utilities Cloud SaaS version supports both local (predefined) storage and the Oracle Object Storage Cloud.

For example:

[Screenshot: example F1-FileStorage lookup]

To use the alias in, for example, a FILE-PATH batch parameter, specify the URL:

file-storage://MYFILES/mydirectory  (if you want to specify a subdirectory under the alias)

or

file-storage://MYFILES
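To illustrate how such a URL maps to a physical location, here is a minimal sketch of the alias resolution. The logic and the /u01/data/files target path are illustrative assumptions, not OUAF's actual implementation; in the product, the mapping comes from the F1-FileStorage lookup entry.

```shell
# Resolve a file-storage:// URL against a hypothetical alias table
# (mirrors what an F1-FileStorage lookup entry would define)
resolve_file_storage() {
  url="$1"
  rest="${url#file-storage://}"      # strip the scheme
  alias_name="${rest%%/*}"           # first path segment is the alias
  subdir=""
  [ "$rest" != "$alias_name" ] && subdir="/${rest#*/}"
  case "$alias_name" in
    MYFILES) echo "/u01/data/files${subdir}" ;;  # physical path from the lookup
    *) echo "unknown alias: $alias_name" >&2; return 1 ;;
  esac
}

resolve_file_storage "file-storage://MYFILES/mydirectory"  # -> /u01/data/files/mydirectory
resolve_file_storage "file-storage://MYFILES"              # -> /u01/data/files
```

Because the batch controls reference only the alias, changing the lookup entry (for example, to an Object Storage location) changes every consumer at once.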

Now, if you migrate to another environment (the lookup is migrated using the Configuration Migration Assistant), this record can be altered. If you are moving to the Cloud, the adapter can be changed to the Oracle Object Storage Cloud. This reduces the need to change each individual place that uses the alias.

It is recommended to take advantage of this capability:

  • Create an alias for each location you read files from or write files to in your Batch Controls. Define it using the Native File Storage adapter. Create as few aliases as possible to reduce maintenance costs.
  • Change all the FILE-PATH parameters in your batch controls to use the relevant file-storage URL.
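For example, such a batch control parameter change might look like this (the paths and alias name below are illustrative):

```
Before:  FILE-PATH = /u01/interfaces/outbound
After:   FILE-PATH = file-storage://MYFILES/outbound
```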

If you decide to migrate to the Oracle Utilities SaaS Cloud, these Extendable Lookup values will be the only thing that changes to realign the implementation to the relevant location on the Cloud instance. For both on-premise and cloud implementations, these definitions can now be migrated using the Configuration Migration Assistant.
