Feed aggregator

Oracle Speeds Clinical Trials with Quorum Integration

Oracle Press Releases - 2 hours 24 min ago
Press Release
Oracle Speeds Clinical Trials with Quorum Integration
Collaboration between Quorum Review IRB and Oracle goBalto Activate eases IRB submissions and approvals for accelerated clinical trials

SCOPE Summit – Orlando, FL.—Feb 19, 2019

Delays in institutional review board (IRB) approvals often complicate clinical research and development and delay the introduction of new therapies to market. Such bottlenecks pose additional obstacles in an industry plagued by rising development costs and increasing complexities. Out-of-the-box integration between Quorum and Oracle Health Sciences goBalto Activate cloud service addresses the inefficiencies associated with submission errors and lengthy IRB submission review cycles.

Research indicates that principal investigators spend nearly half (42 percent) of their time dealing with “administrative burdens.”  This is a category in which IRB-related issues weigh heavily, with one third of that lost time emanating from researcher omissions and errors. IRB review takes up to 2.9 percent of the total time devoted to a study and represents up to 4.7 percent of study costs.

With the integration, country-specific workflows and a management-based approach to site activation in goBalto Activate create seamless communication with Quorum. This eliminates the need to send IRB submission documents via traditional, error-prone, manual forms of communication. The integrated connection enables Quorum to push approval documents directly into Activate workflows, automatically triggering alerts and thereby saving significant time. Study teams benefit from point-and-click submission of packages, seamless data transfers and the confidence that their submission was completed.

“Quorum is proud to collaborate with Oracle Health Sciences on this critical industry need,” said Cami Gearhart, CEO of Quorum. “This collaboration aligns with our customer promise of providing exceptional service through One-Touch Collaboration™, by continuing to be a partner of choice for agile and innovative ethics review services while maintaining the highest quality of human subject protections.”

“Quorum joins other leading organizations that are committed to modernizing and rethinking how clinical trials are initiated,” said Steve Rosenberg, general manager, Oracle Health Sciences. “Automation has become critical for reducing the costs and complexities of clinical trials and this new level of integration eliminates time consuming processes and improves operational efficiencies by addressing an entrenched bottleneck in the initiation process.”

By enabling global anytime, anywhere access to purpose-built study startup technology, Oracle Health Sciences brings measurable change to the inception of a trial. The adoption of this solution will help shorten cycle times, reduce study costs and, most importantly, speed the delivery of new therapies to patients.

Contact Info
Valerie Beaudett
Oracle
+1 650.400.7833
valerie.beaudett@oracle.com
Meghan Roman
Blanc & Otus for Oracle
+1 202.347.7113
meghan.roman@blancandotus.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As the number one vendor in Life Sciences (IDC, 2017), the number one provider of eClinical solutions (Everest Group, 2017) and powered by the number one data management technology in the world (Gartner, 2018), Oracle Health Sciences technology is trusted by 29 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for clinical trial and safety management around the globe.

About Quorum

Quorum Review IRB is the most preferred central IRB. We help clients accelerate research through faster study start-up, reduced fulfillment time, and the largest offering of complimentary study support services. The Quorum difference is One-Touch Collaboration. Your research benefits from outstanding service experiences, increased efficiency, one study contact, one start-up timeline, and one stream of coordinated communications. We are the only IRB to offer harmonized IRB and IBC review, API integrations, and Kinetiq consulting services that move your research forward.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


REGEXP_LIKE - Pattern match with complex logic

Tom Kyte - 3 hours 38 min ago
I want to do the regexpr pattern matching and I couldn't. Please help. I want to evaluate the value that is going to be inserted in DB. I want to perform a check as below. Only four special characters and alphabets are allowed. other special ch...
Categories: DBA Blogs

The relationship between null and 0

Tom Kyte - 3 hours 38 min ago
I asked you one last time and I will ask you one more question. If !=0, we know we should include null, but the result is not null. If !=0, it is different from zero and null, should not it also contain null if it is not 0?
Categories: DBA Blogs

Select from a table where a key value has matching link for all key values of another table

Tom Kyte - 3 hours 38 min ago
I have a set of three tables (T1, T2, T3) that represent dictionary data from external sources. I need to match the data from table 1 with the data in table 3 where <b><i>all</i></b> the rows in table 1 for a given OR_ID are reflected in table 3. T...
Categories: DBA Blogs

Update statement to flag rows

Tom Kyte - 3 hours 38 min ago
Hello, Ask Tom team. I'm using the query below to load rows to a destination database based on some conditions. After this is done I want to flag those rows in order to exclude them in the next SSIS ETL run. <code>select t1.invoice_sender,t1.ei...
Categories: DBA Blogs

latch undo global data

Tom Kyte - 3 hours 38 min ago
Hi team, I see spikes in oem for wait event latch undo global data . This is on insert statement , having concurrency of 50 Inserts in one second Due to heavily loaded db ash report takes high time It would be helpful if you share some s...
Categories: DBA Blogs

bound variables

Tom Kyte - 3 hours 38 min ago
I would like to know more about sql injection. Why is it so hard to tell to the Oracle that a certain string is a parameter and not a part of a Sql command? For example, can a person call himself Delete and his name can not be used in a search? And i...
Categories: DBA Blogs

unable to connect using database link

Tom Kyte - 3 hours 38 min ago
DEAR TOM, I CREATED A DATABASE LINK ON MY LOCAL DATABASE USING THE FOLLOWING COMMANDS. SQL> CREATE DATABASE LINK RP 2 CONNECT TO PRINCE 3 IDENTIFIED BY PRINCE 4 USING 'ORB'; Database link created. SQL> SELECT COUNT(*) FROM DUAL@...
Categories: DBA Blogs

Using awk to remove attributes without values

Michael Dinh - Mon, 2019-02-18 20:41

Attributes without values are displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
CV_DESTLOC=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
HOSTING_MEMBERS=
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NEXT_CHECK_TIME=
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESOURCE_GROUP=
RESTART_ATTEMPTS=5
RESTART_DELAY=0
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
WORKLOAD_CPU=0
WORKLOAD_CPU_CAP=0
WORKLOAD_MEMORY_MAX=0
WORKLOAD_MEMORY_TARGET=0

[oracle@racnode-dc1-1 ~]$

Attributes without values are NOT displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|awk -F'=' '$2'
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESTART_ATTEMPTS=5
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
[oracle@racnode-dc1-1 ~]$

You might ask why I am doing this.

I am reviewing the configuration before implementation and will compare it with the configuration after implementation.

The fewer items I need to look at before implementation, the better.
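Note that awk -F'=' '$2' prints a line only when the field after the first = is neither empty nor zero, which is why zero-valued attributes such as ACTIVE_PLACEMENT=0 also disappear from the filtered output. A minimal sketch of the before/after comparison workflow (the file names are illustrative, not from the original post):

# Capture the non-empty attributes before the implementation.
crsctl stat res -w "TYPE = ora.cvu.type" -p | awk -F'=' '$2' > cvu_before.txt

# ... perform the implementation, then capture the attributes again ...
crsctl stat res -w "TYPE = ora.cvu.type" -p | awk -F'=' '$2' > cvu_after.txt

# Show only the attributes that changed.
diff cvu_before.txt cvu_after.txt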

Performance issue with data densification process

Tom Kyte - Mon, 2019-02-18 11:26
Hi Tom, I am facing an issue while making sparse data dense. Here is the problem statement: We are getting price information for securities from upstream in a file and prices will come only when either there will be new security on boarded or t...
Categories: DBA Blogs

The relationship between null and 0

Tom Kyte - Mon, 2019-02-18 11:26
Coding 1) <code>select comm from emp where comm is not null and comm != 0;</code> Coding 2) <code>select comm from emp where comm != 0;</code> The results of Coding 1 and Coding 2 are the same. I know that the values ??of null and 0 ar...
Categories: DBA Blogs

Table and Index maintenance

Tom Kyte - Mon, 2019-02-18 11:26
Good Afternoon Tom, I'm going to develop a little SQL Tool for maintenance of compress tables and indexes for our DWH Schema, our clients have Oracle EE (11.2 and 12.2), my "big" doubt is use or not use parallel execution because i see that using ...
Categories: DBA Blogs

writing a stand-alone application to continuously monitor a database queue (AQ)

Tom Kyte - Mon, 2019-02-18 11:26
Hi Tom, A question regarding oracle AQ... I wish to write a small stand-alone application that would *constantly* monitor a queue (only one queue) for the arrival of a message and as soon as a mesage arrives, take some action. I figured I could use...
Categories: DBA Blogs

Best way to enforce cross-row constraints?

Tom Kyte - Mon, 2019-02-18 11:26
I use the database to declare (and enforce) as much application logic as I can. What I'd like to do is to enforce application constraints across related rows, if possible. As a contrived example, suppose we have a table of Agreements and a secon...
Categories: DBA Blogs

[Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue

Online Apps DBA - Mon, 2019-02-18 08:00

[Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue   Want to know how to solve an Online Patching (ADOP) issue? Visit: https://k21academy.com/appsdba23 and learn by following these steps: ✔ Run the Prepare Phase ✔ Look at the ADOP Logs ✔ Errors in ADOP ✔ Root Causes, Fix and Revise to troubleshoot and solve the error. Write […]

The post [Troubleshooting] Oracle Apps R12.2 Online Patching ADOP : Prepare Phase Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Understanding grid disks in Exadata

Amardeep Sidhu - Mon, 2019-02-18 07:07

Use of Exadata storage cells seems to be a poorly understood concept. A lot of people are confused about how exactly ASM makes use of the disks from storage cells. Many folks assume there is some sort of RAID configured in the storage layer, whereas there is nothing like that. I will try to explain some of the concepts in this post.

Let’s take an example of an Exadata quarter rack that has 2 db nodes and 3 storage nodes (node means a server here). A few things to note:

  • The space for binaries installation on the db nodes comes from the local disks installed in the db nodes (600 GB * 4, expandable to 8, configured in RAID 5). If you are using OVM, the same disks are used for keeping configuration files, virtual disks for VMs, etc.
  • All of the ASM space comes from storage cells. The minimum configuration is 3 storage cells.

So let’s try to understand what makes a storage cell. There are 12 disks in each storage cell (the latest X7 cells come with 10 TB disks). As mentioned above, there are 3 storage cells in the minimum configuration, so we have a total of 36 disks. There is no RAID configured in the storage layer; all the redundancy is handled at the ASM level. So to create a disk group:

  • First of all, cell disks are created on each storage cell. One physical disk makes one cell disk, so a quarter rack has 36 cell disks.
  • To divide the space into the various disk groups (by default only two disk groups are created: DATA & RECO; you can choose how much space to give to each of them), grid disks are created. A grid disk is a partition on a cell disk, a slice of a disk in other words. A slice from each cell disk must be part of both disk groups. We can’t have something like, say, DATA using 18 disks out of 36 and RECO using the other 18; that is not supported. Let’s say you decide to allocate 5 TB to DATA grid disks and 4 TB to RECO grid disks (out of 10 TB on each disk, approximately 9 TB is what you get as usable). You would then divide each cell disk into 2 parts, 5 TB and 4 TB, and you would have 36 slices of 5 TB each and 36 slices of 4 TB each.
  • The DATA disk group is created using the 36 5 TB slices, where the grid disks from each storage cell constitute one failgroup.
  • Similarly, the RECO disk group is created using the 36 4 TB slices (a command-level sketch follows this list).
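As a rough sketch only (the commands, sizes and prefixes below are illustrative and not from the original post; a real build should follow the documented Exadata procedure), the flow looks roughly like this:

# On each storage cell, carve every cell disk into a DATA slice and a RECO slice
# (using the illustrative 5 TB / 4 TB split discussed above, expressed in GB).
cellcli -e "CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, size=5000G"
cellcli -e "CREATE GRIDDISK ALL HARDDISK PREFIX=RECO, size=4000G"

# From a db node, connected to the ASM instance as SYSASM (assumes the environment
# points at the Grid Infrastructure home), create the disk groups. On Exadata the
# grid disks from each storage cell automatically form one failgroup.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'o/*/DATA*';
CREATE DISKGROUP RECO NORMAL REDUNDANCY DISK 'o/*/RECO*';
EOF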

What we have discussed above is a quarter rack scenario with High Capacity (HC) disks. There can be somewhat different configurations too:

  • Instead of HC disks, you can have the Extreme Flash (EF) configuration, which uses flash cards in place of disks. Everything remains the same except the numbers: instead of 12 HC disks there are 8 flash cards.
  • With X3, I think, Oracle introduced an eighth rack configuration. In an eighth rack, the db nodes come with half the cores (of the quarter rack db nodes) and each storage cell comes with 6 disks. So here you have only 18 disks in total. Everything else works in the same way.

Hope it clarified some of the doubts about grid disks.


Categories: BI & Warehousing

[Video 2 of 5] 3 Ways to Connect to Oracle Cloud

Online Apps DBA - Mon, 2019-02-18 03:17

There are 3 ways to connect to the Oracle Cloud! Leave a comment below and share how many you know. Note: We’ve covered these 3 ways in our 2nd video, part of Networking In Oracle Cloud, here: https://k21academy.com/1z093214 […]

The post [Video 2 of 5] 3 Ways to Connect to Oracle Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Batch Architecture - Designing Your Cluster - Part 1

Anthony Shorten - Sun, 2019-02-17 18:42

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes, the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command-line utility, named bedit.sh, that provides a wizard-style capability to build and maintain your configuration. By default the capability is disabled; it can be enabled by setting Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a
*************************************************
* Environment Configuration demo                *
*************************************************

 50. Advanced Environment Miscellaneous Configuration
...
        Enable Batch Edit Functionality:                    true
...

Once enabled, the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. The capability is easy to use with the following recommendations:

  • Flexible Options. When invoking the command, you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options will vary; use the -h option for a full list.
  • In-Built Help. If you do not know what a parameter, or even the object type, is about, you can use the help <topic> command. For example, when configuring threadpools, help threadpoolworker gives you advice about the approaches you can take. If you want a list of topics, type help with no topic.
  • Simple Commands. The utility has a simple set of commands to interact with the settings. For example, if you want to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
  • Save the Configuration. The save command writes all changes in the session to the relevant file; conversely, if you make a mistake you can exit without saving the session.
  • Informative. It tells you which file you are editing at the start of the session, so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w

Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw

Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster. This is the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement architectures ranging from simple to complex. Using Batch Edit, three cluster types are supported (you must choose one type per environment):

  • Single Server (ss). Use case: the cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing, as it is the simplest to implement.
  • Uni-Cast (wka). Use case: the cluster uses the unicast protocol, with the hosts that are part of the cluster explicitly named. This is recommended for sites that want to lock down a cluster to specific hosts and do not want to use multi-cast protocols. Administrators will have to name the list of hosts, known as Well Known Addresses, that are part of the cluster as part of this configuration.
  • Multi-Cast (mc). Use case: the cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites that want a dynamic configuration where threadpools and submitters are accepted on demand. This requires the least configuration for product clusters, as threadpools can join the cluster dynamically from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration, with the cluster restricted to a single host; networking for the cluster is locked down to that host within the configuration. To use this cluster type, simply run the following command and follow the configuration generated for you from the template.

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly in host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where the site does not want to use the multi-cast protocol or wants to micro-manage their configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values use the command:

set socket.<socketnumber> <parameter> <value>

where:

  • <socketnumber>: the socket entry number to edit
  • <parameter>: either wkaaddress (the host name or IP address of the server) or wkaport (the port number on that host to use)
  • <value>: the value for the parameter

For example: set socket.1 wkaaddress host1
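A minimal sketch of a full session that defines two hosts might look like this (the host names and port values are illustrative, and it assumes no sockets are defined in the template yet):

$ bedit.sh -c -t wka
> add socket
> set socket.1 wkaaddress host1
> set socket.1 wkaport 42020
> add socket
> set socket.2 wkaaddress host2
> set socket.2 wkaport 42020
> save
> exit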

To use this cluster style ensure the following:

  • Use the same port number per host. Try to use the same broadcast port on each host in the cluster. If the ports differ, the port number in the main configuration file has to be changed for the machines in the cluster to define that port.
  • Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).

Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast port and IP Address are broadcast across your network per cluster. It requires very little configuration and the threadpools can dynamically connect to that cluster with little configuration. It uses the multi-cast protocol which network administrators either love or hate. The configuration is similar to the Single Server but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administrator Guide for additional configuration advice.
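As an illustration only, the relevant ENVIRON.INI entries might look like the following (the address and port are example values; use the multi-cast address and port allocated for your network):

COHERENCE_CLUSTER_ADDRESS=231.1.1.1
COHERENCE_CLUSTER_PORT=42020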

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

  • Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster to save configuration effort.
  • Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment to prevent unintentional cross-environment clustering.
  • Set a Cluster Type and Stick with It. It is possible to migrate from one cluster type to another (without changing other objects), but to save time it is better to lock in one type and stick with it for the environment.
  • Avoid Using Prod Mode. There is a mode in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross-cluster issues. The Prod mode is recommended for Production systems only. Note: There are further safeguards built into the Oracle Utilities Application Framework to prevent cross-cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster.

Next Steps

Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).

Microsoft Azure: account upgrade to keep going with free products

Dietrich Schroff - Sun, 2019-02-17 12:39
Today I got an e-mail about my Azure account:


Microsoft provides a little FAQ for this upgrade process:
So there is no option not to upgrade: "you won't be able to access any Azure services" is not really an option...

There are 4 upgrade paths I can choose from:
As I use this account only for my blog, I decided to choose "no technical support", which is perfectly adequate.

After hitting the upgrade button, the notifications bar came up with the following message:
And the subscriptions tab inside the Azure portal shows:



GoldenGate XAG APP VIP Revisited

Michael Dinh - Sat, 2019-02-16 08:17

For unknown reasons, XAG integration for GoldenGate target was eradicated without any trace (I was not able to find any).

When running crsctl at target, no resources were available.

crsctl stat res -t -w 'TYPE = xag.goldengate.type'
crsctl stat res -t|egrep -A2 'dbfs|xag'

Here is an example from source:

$ crsctl stat res -t -w 'TYPE = xag.goldengate.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02           STABLE
--------------------------------------------------------------------------------

$ crsctl stat res -t|egrep -A2 'dbfs|xag'
dbfs_mount
               ONLINE  ONLINE       host_source01            STABLE
               ONLINE  ONLINE       host_source02            STABLE
--
ora.dbfs.db
      1        ONLINE  ONLINE       host_source01            Open,STABLE
      2        ONLINE  ONLINE       host_source02            Open,STABLE
--
xag.gg_ue-vip.vip
      1        ONLINE  ONLINE       host_source01            STABLE
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02            STABLE

Now, I need to set up XAG for the target RAC cluster.

FYI: the XAG bundled agent was not downloaded; instead, the one available from GRID_HOME was used.

$ agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 3.1.0

$ agctl query deployment
The Oracle Grid Infrastructure Agents deployment is bundled

Creating the XAG resource using 2 commands produced different metadata than using 1 command.

The difference in FILESYSTEMS is expected due to the change from DBFS to ACFS.

Currently, the change is being implemented at the target.

Here is an example using 2 commands:

As Root:
appvipcfg create -network=1 -ip=10.30.91.158 -vipname=xag.gg_ue-vip.vip -user=ggsuser -group=oinstall

As Oracle:
agctl add goldengate gg_ue \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target04,target02 \
--vip_name xag.gg_target-vip.vip \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2

Create source_xag_goldengate.txt and target_xag_goldengate.txt using:
crsctl stat res -w "TYPE = xag.goldengate.type" -p
$ diff source_xag_goldengate.txt target_xag_goldengate.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
---
< AUTO_START=restore
> AUTO_START=never
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=
---
< VIP_CREATED=1
> VIP_CREATED=0

Here is an example using 1 command:

As Root:
agctl add goldengate gg_target \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target01,target02 \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--network 1 --ip 10.30.91.158 \
--user ggsuser \
--group dba

$ diff source_xag_goldengate.txt target_xag_goldengate2.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=hard(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs) pullup(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs)
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:ora.acfs_data.acfs_vol.acfs)

In conclusion, I will be creating XAG resources using 1 command from now on, as it provides more metadata information.
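Whichever approach is used, the resulting resource can be checked afterwards with the standard agctl query commands, for example (using the resource name from above):

$ agctl config goldengate gg_target
$ agctl status goldengate gg_target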
