Feed aggregator

Oracle Sign up: more problems

Dietrich Schroff - Fri, 2019-09-13 14:03
I thought I was successful, but:

I received a mail with the following content:

"We have re-authorized a new, specific amount on the credit/debit card used during the sign up process."

and

"To verify the account you have created, please confirm the specific amount re-authorized."


My problem: there is no "re-authorized amount" on my bank account, and I do not know what "re-authorized" means.
Is the amount charged to my credit card (in which case I should see it)?
Or is the process buggy and I was for some reason not charged?
Or is re-authorization something else entirely?



HVR | Real-Time CDC ::Oracle Autonomous DW::

Rittman Mead Consulting - Fri, 2019-09-13 09:49
Introduction

High quality Business Intelligence is key in decision making for any successful organisation and for an increasing number of businesses this means being able to access real-time data.  At Rittman Mead we are seeing a big upturn in interest in technologies that will help our customers to integrate real-time data into their BI systems.  In a world where streaming is revolutionising the way we consume data, log-based Change Data Capture (CDC) comes into the picture as a way of providing access to real-time transactional data for reporting and analysis. Most databases support CDC nowadays; in Oracle for example the redo logs are a potential source of CDC data. Integration tools that support CDC are coming to the fore and one of these is HVR.

HVR is a real-time data integration solution that replicates data to and from many different data management technologies, both on-premise and cloud-based, including Oracle, SAP and Amazon. The HVR GUI makes it easy to connect together multiple databases, replicate entire schemas, initialise the source and target systems and then keep them in sync.

HVR have recently added Oracle’s Autonomous Data Warehouse (ADW) to their list of supported technologies and so in this blog we are stepping through the process of configuring an HVR Channel to replicate data from an Oracle database to an instance of Oracle ADW.


Setup

Before setting up replication you have to install HVR itself. This is simple enough, a fairly manual CLI job with a couple of files to create and save in the correct directories. Firewalls also need to allow all HVR connections. HVR needs a database schema in which to store the repository configuration data and so we created a schema in the source Oracle database. It also needs some additional access rights on the Oracle source database.


Process

1.

The first step is to register the newly created Hub in the HVR GUI. The GUI can be run on any machine that is able to connect to the server on which the HVR hub is installed. We tested two GUI instances, one running on a Windows machine and one on a Mac. Both were easy to install and configure.


The database connection details entered here are for the HVR hub database, where metadata about the hub configuration is stored.

2.

Next we need to define our source and target. In both cases the connection between the HVR and the data uses standard Oracle database connectivity. The source connection is to a database on the same server as the HVR hub and the target connection uses a TNS connection pointing at the remote ADW instance.

Defining the source database involves right clicking on Location Configuration and selecting New Location:


Configuring the target involves the same steps:


You can see from the screenshot that we are using one of the Oracle-supplied tnsnames entries to connect to ADW and also that we are using a separate Oracle client install to connect to ADW. Some actions within HVR use the Oracle Call Interface and require a more recent version of the Oracle client than provided by our 12c database server install.

Next up is creating the “channel”. A channel in HVR groups together the source and target locations and allows the relationship between the two to be defined and maintained. Configuring a new channel involves naming it, defining source and target locations and then identifying the tables in the source that contain the data to be replicated.

3.

The channel name is defined by right clicking on Channel Definitions and selecting New Channel.


We then open the new channel and right click on Location Groups and select New Group to configure the group to contain source locations:


The source location is the location we defined in step 2 above. We then right click on the newly created group and select New Action, Capture  to define the role of the group in the channel:


The Capture action defines that data will be read from the locations in this group.

A second Location Group is needed for the target. This time we defined the target group to have the Integrate action so that data will be written to the locations in this group.

4.

The final step in defining the channel is to identify the tables we want to replicate. This can be done using the Table Explore menu option when you right-click on Tables:

5.

With the channel defined we can start synchronising the data between the two systems. We are starting with an empty database schema in our ADW target so we use the HVR Refresh action to first create the target tables in ADW and to populate them with the current contents of the source tables.  As the Refresh action proceeds we can monitor progress:

6.

Now with the two systems in sync we can start the process of real-time data integration using the HVR Initialise action. This creates two new jobs in  the HVR Scheduler which then need to be started:


One more thing to do of course: test that the channel is working and replication is happening in real time. We applied a series of inserts, updates and deletes to the source system and monitored the log files for the two scheduled jobs to see the activity captured from the redo logs on the source:


and then applied as new transactions on the target:


The HVR Compare action allows us to confirm that the source and target are still in sync.

Conclusion

Clearly the scenario we are testing here is a simple one. HVR can do much more - supporting one-to-many, many-to-many and also bi-directional replication configurations. Nonetheless we were impressed with how easy it was to install and configure HVR and also with the simplicity of executing actions and monitoring the channel through the GUI. We dipped into using the command-line interface when executing some of the longer running jobs and this was straightforward too.

Categories: BI & Warehousing

Oracle Cloud: First login

Dietrich Schroff - Thu, 2019-09-12 13:53
After signing up to Oracle Cloud I tried my first login:

https://cloud.oracle.com/en_US/sign-in


but I only got:
I think the problem is that there is a manual review step on Oracle's side which I have not passed yet:
So let's wait a day or two...

Oracle Cloud: Sign up failed... [3] & solved

Dietrich Schroff - Tue, 2019-09-10 13:44
Finally (see my attempts here and here) I was able to sign up to Oracle Cloud.
What did the trick?

I got help from Oracle support:
So I used my Gmail address and this worked:

and then:

Let's see how this cloud works compared to Azure and AWS.

Oracle Cloud: Sign up failed... [2]

Dietrich Schroff - Fri, 2019-09-06 14:36
After my failed registration for Oracle Cloud, I quickly got an email from Oracle support with the following requirements:
So I tried once again with a Firefox "private" window - but this failed again.
The next idea was to use a completely freshly installed browser, so I tried with a fresh Google Chrome.
But the error still remained:
Let's hope Oracle support has another idea which will get me onto Oracle Cloud.

UPDATE:


There is a tiny link "click here" just above the blue button. I have to use this link with the verification code provided by Oracle support.
But then the error is:
I checked this with a VISA and a MasterCard. Neither of them worked...

UPDATE 2: see here how the problem was solved.

Tableau | Dashboard Design ::Revoke A50 Petition Data::

Rittman Mead Consulting - Mon, 2019-09-02 03:00

Dashboards are most powerful through visual simplicity. They’re designed to automatically keep track of a specific set of metrics and keep human beings updated. Visual overload is like a binary demon in analytics that many developers seem possessed by; but less is more.

For example, many qualified drivers know very little about their dashboard besides speed, revs, temperature and fuel gauge. When an additional dash warning light comes on, even if it is just the tyre pressure icon let alone engine diagnostics light, most people will just take their car to the garage. The most obvious metrics in a car are in regard to its operation; if you didn't know your speed while driving you'd feel pretty blind. The additional and not so obvious metrics (i.e. dash warning lights) are more likely to be picked up by the second type of person who will spend the most time with that car: its mechanic. It would be pointless to overload a regular driver with all the data the car can possibly output in one go; that would just intimidate them. That's not what you want a car to do to the driver and that's certainly not what any organisation would want their operatives to feel like while their “car” is moving.

In light of recent political events, the exact same can metaphorically be applied to the big red Brexit bus. Making sense of it all might be a stretch too far for this article. Still, with appropriate use of Tableau dashboard design it is possible to answer seemingly critical questions on the topic with publicly available data.



There's An Ongoing Question That Needs Answering?
Where did 6 million+ signatures really come from?


Back in the UK, the Brexit fiasco is definitely still ongoing. Just before the recent A50 extensions took place, a petition to revoke article 50 and remain in the EU attracted more than 6 million signatures, becoming the biggest and fastest growing ever in history and sparking right wing criticism over the origin of thousands of signatures, claiming that most came from overseas and discrediting its legitimacy. Government responded by rejecting the petition.

Thankfully the data is publicly available (https://petition.parliament.uk/petitions/241584.json) for us to use as an example of how a dashboard can be designed to settle such a question (potentially in real time too as more signatures come in).

Tableau can handle JSON data quite well and, to nobody’s surprise, we quickly discover that over 95% of signatures are coming from the UK.
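
Out of curiosity, the headline figure is easy to sanity-check outside Tableau as well. Below is a minimal Python sketch (not part of the original workflow) that pulls the same JSON feed and computes the UK share; the data.attributes.signatures_by_country structure and its name/signature_count fields are assumptions based on how the petition API looked at the time.

import json
from urllib.request import urlopen

URL = "https://petition.parliament.uk/petitions/241584.json"

with urlopen(URL) as response:
    payload = json.load(response)

# Assumed structure: data.attributes.signatures_by_country -> [{"name": ..., "signature_count": ...}, ...]
countries = payload["data"]["attributes"]["signatures_by_country"]
total = sum(c["signature_count"] for c in countries)
uk = next(c["signature_count"] for c in countries if c["name"] == "United Kingdom")

print(f"Total signatures (by country): {total:,}")
print(f"UK share: {uk / total:.1%}")

# Top non-UK countries, the same ordering used later for the bar chart
non_uk = [c for c in countries if c["name"] != "United Kingdom"]
for c in sorted(non_uk, key=lambda c: c["signature_count"], reverse=True)[:10]:
    print(f"{c['name']:<25} {c['signature_count']:>8,}")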

Now that we know what we're dealing with, let's focus the map on Britain and provide the additional countries' data in a format that is easier to digest visually. As cool as it is to hover over the world map, there are simpler ways to take this in.

Because in this case we know more than 95% of signatures originate from the UK, the heatmap above is far more useful, showing us the signature count for each constituency at a glance. The hotter the shading, the higher the count.


Scales Might Need Calibration
Bar Chart All The Way


Humans of all levels compute a bar chart well, and it's perfect for what we need to know: how many signatures are coming from abroad altogether, and from which countries, in descending order.

With a margin so tiny, it's trickier to get a visual that makes sense. A pie chart, for example, would hardly display the smaller slice containing all of the non-UK origin signatures. Even with a bar chart we are struggling to see anything outside of the UK in a linear scale; but it is perfect if using logarithmic scales, which are definitely a must in this scenario.

And voila! The logarithmic scale allows the remaining counts to appear alongside the UK, even though France, the next country after the UK with the most signatures, has a count below 50k. This means we can keep an eye on the outliers in more detail quite effortlessly. Not much looks out of place right now considering the number of expats Britain produces to the countries on the list. Now we know, as long as none of the other countries turn red, we have nothing to worry about!


Innovate When Needed

The logarithmic scale in Tableau isn't as useful for these percentages, so hacking the visualised values in order to amplify the data sections of interest is a perfectly valid way of thinking outside the box. In this example, half the graph is dedicated to 90-100% and the other half to 0-90%. The blue chunk is the percentage of signatures coming from the UK, while every other country's colour chunk is still very small. Since the totals from other countries are about the same as each mainland constituency, it's more useful to see them as one chunk. Lastly, adding the heat colour coding keeps the visual integrity.

Interactivity

Now that we have the count, percentage and location breakdown into 3 simple graphs we feel much wiser. So it's time to make them interact with each other.

The constituency heatmap doesn't need to interact with the bar charts. The correlation between the hottest bars and the heatmap is obvious from the get go, but if we were to filter the bars using the map, the percentages would be so tiny you wouldn't see much on the % graph. The same occurs for the Country bar chart, meaning that only the percentage chart can usefully be used as a filter. Selecting the yellow chunk will show the count of signatures for every country within it only.

Another way in which interactivity can be introduced is through adding further visualisations to the tooltip. The petition data contains the MP responsible for each constituency, so we can effectively put a count of signatures to each name. It's nice to be able to see what their parliamentary voting record has been throughout this Brexit deadlock, which was obtained publicly from the House of Commons portal https://commonsvotes.digiminster.com and blended in; as more votes come in, the list will automatically increase.

Keep It Simple

As you can see, 3 is a magic number here. The trio of visuals working together makes a dashing delivery of intel to the brain. With very little effort, we can see how many signatures come from the UK compared to rest of the world, how many thousands are coming from each country, how many from each constituency, who the MP you should be writing to is and how they voted in the indicative votes. Furthermore, this dashboard can keep track of all of that in real time, flagging any incoming surge of signatures from abroad, continuously counting the additional signatures until August 2019 and providing a transparent record of parliamentary votes in a format that is very easy to visually digest.

Categories: BI & Warehousing

Oracle Cloud: Sign up failed...

Dietrich Schroff - Sun, 2019-09-01 08:38
Yesterday I tried to sign up for Oracle Cloud:

 So let's start the registration process:


The mobile number verification is done via SMS and after entering the 7-digit PIN, you are allowed to enter a password:

As payment information, only credit cards are accepted:
  • VISA
  • Mastercard
  • Amex


Even though my credit card was accepted:



"Your credit card has been successfully validated. Please proceed to complete the Sign up."
I got the following error:

"We're unable to process your transaction. Please contact Oracle Customer Service."
The link "Oracle Customer Service" did not work, so i used the Chat Support. But inside the chat was no agent available and only "Send E-Mail" worked. Let's see what kind of response i will be given...

EDIT: Some further attempts...

EDIT 2: see here how the problem was solved.  

Ubuntu Linux: Change from Legacy boot to UEFI boot after installation

Dietrich Schroff - Sat, 2019-08-31 10:01
This weekend I installed Linux on a laptop where Windows 10 was already installed.
Because the laptop did not recognize my Linux boot USB stick, I changed from UEFI to legacy mode and the installation went through without any problem.

At the end GRUB was in place, but the Windows installation was not listed. This is because the Windows installation boots via UEFI and cannot be started from a legacy-mode GRUB.

The problem: if I switched back to UEFI, the Linux installation did not start anymore.

My solution:
  • Change to UEFI and boot with a live linux
  • Install boot-repair into the live linux
    (https://help.ubuntu.com/community/Boot-Repair)
    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install -y boot-repair
  • Then run boot repair
    boot-repair
  • Follow the instructions on the Boot-Repair homepage (see above)
  • Enter the commands from the following popups:


And after removing the live CD I got a GRUB boot menu where Windows was in place and working (and the Ubuntu Linux worked, too ;-)

Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 11 Regions Latest Sydney @ Australia

Online Apps DBA - Sat, 2019-08-31 05:29

New Region Added: Sydney, Australia. In 2019, up to August, Oracle added 7 new Regions in Gen 2 Cloud, that's OCI, with a lot more in the pipeline. This means you now have 11 regions in total, 4 with 3 availability domains and 7 with a single availability domain. If you want to get full picture related […]

The post Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 11 Regions Latest Sydney @ Australia appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Documentum – Encryption/Decryption of WebTop 6.8 passwords ‘REJECTED’ with recent JDK

Yann Neuhaus - Sat, 2019-08-31 03:00

Recently, we had a project to modernize a little bit a pretty old Documentum installation. As part of this project, there was a refresh of the Application Server hosting a WebTop 6.8. In this blog, I will be talking about an issue that we faced with the encryption & decryption of passwords in the refreshed environment. This new environment was using WebLogic 12.1.3 with the latest PSU in conjunction with the JDK 1.8u192. Since WebTop 6.8 P08, JDK 1.8u111 is supported, so a newer version of JDK 8 should mostly work without much trouble.

To properly deploy a WebTop application, you will need to encrypt some passwords like the Preferences or Preset passwords. Doing so in the new environment unfortunately failed:

[weblogic@wls_01 ~]$ work_dir=/tmp/work
[weblogic@wls_01 ~]$ cd ${work_dir}/
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ jar -xf webtop_6.8_P27.war WEB-INF/classes WEB-INF/lib
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ kc="${work_dir}/WEB-INF/classes/com/documentum/web/formext/session/KeystoreCredentials.properties"
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ sed -i "s,use_dfc_config_dir=[^$]*,use_dfc_config_dir=false," ${kc}
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ sed -i "s,keystore.file.location=[^$]*,keystore.file.location=${work_dir}," ${kc}
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ grep -E "^use_dfc_config_dir|^keystore.file.location" ${kc}
use_dfc_config_dir=false
keystore.file.location=/tmp/work
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ enc_classpath="${work_dir}/WEB-INF/classes:${work_dir}/WEB-INF/lib/*"
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ java -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 11:02:23 AM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.cryptoj.o.nc, array length: -1, nRefs: 1, depth: 1, bytes: 72, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.decryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:64)
[weblogic@wls_01 work]$

 

As you can see above, the encryption of the password fails with an error. The issue is that starting with JDK 1.8u171, Oracle introduced some new restrictions. From the Oracle release note (JDK-8189997):

New Features
security-libs/javax.crypto
Enhanced KeyStore Mechanisms
A new security property named jceks.key.serialFilter has been introduced. If this filter is configured, the JCEKS KeyStore uses it during the deserialization of the encrypted Key object stored inside a SecretKeyEntry. If it is not configured or if the filter result is UNDECIDED (for example, none of the patterns match), then the filter configured by jdk.serialFilter is consulted.

If the system property jceks.key.serialFilter is also supplied, it supersedes the security property value defined here.

The filter pattern uses the same format as jdk.serialFilter. The default pattern allows java.lang.Enum, java.security.KeyRep, java.security.KeyRep$Type, and javax.crypto.spec.SecretKeySpec but rejects all the others.

Customers storing a SecretKey that does not serialize to the above types must modify the filter to make the key extractable.

 

On recent versions of Documentum Administrator, for example, there is no issue because it complies, but WebTop 6.8 doesn't, and therefore to be able to encrypt/decrypt the passwords you will have to modify the filter. There are several solutions to our current problem:

  • Downgrade the JDK: this isn't a good solution since it might introduce security vulnerabilities and it will also prevent you from upgrading it in the future, so…
  • Extend the ‘jceks.key.serialFilter‘ definition inside the ‘$JAVA_HOME/jre/lib/security/java.security‘ file: that's a possibility but it means that any process using this Java installation will use the updated filter list. Whether or not that's fine is up to you
  • Override the ‘jceks.key.serialFilter‘ definition using a JVM startup parameter on a per-process basis: better control over which processes are allowed to use updated filters and which ones aren't

 

So the simplest way, and probably the best way, to solve this issue is to simply add a command line parameter specifying that you want to allow some additional classes. By default, ‘java.security‘ provides a list of classes that are allowed and it ends with ‘!*‘, which means that everything else is forbidden.

[weblogic@wls_01 work]$ grep -A2 "^jceks.key.serialFilter" $JAVA_HOME/jre/lib/security/java.security
jceks.key.serialFilter = java.lang.Enum;java.security.KeyRep;\
  java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*

[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ grep "^security.provider" $JAVA_HOME/jre/lib/security/java.security
security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=com.rsa.jsse.JsseProvider
security.provider.3=sun.security.provider.Sun
security.provider.4=sun.security.rsa.SunRsaSign
security.provider.5=sun.security.ec.SunEC
security.provider.6=com.sun.net.ssl.internal.ssl.Provider
security.provider.7=com.sun.crypto.provider.SunJCE
security.provider.8=sun.security.jgss.SunProvider
security.provider.9=com.sun.security.sasl.Provider
security.provider.10=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.11=sun.security.smartcardio.SunPCSC
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Using an empty parameter allows everything (not the best idea)
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Encrypted: [4Fc6kvmUc9cCSQXUqGkp+A==], Decrypted: [MyP4ssw0rd]
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Using the default value from java.security causes the issue
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 12:05:08 PM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.cryptoj.o.nc, array length: -1, nRefs: 1, depth: 1, bytes: 72, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.encryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:63)
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Adding com.rsa.cryptoj.o.nc to the allowed list
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 12:06:14 PM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.jcm.f.di, array length: -1, nRefs: 3, depth: 2, bytes: 141, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.encryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:63)
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Adding com.rsa.jcm.f.* + com.rsa.cryptoj.o.nc to the allowed list
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Encrypted: [4Fc6kvmUc9cCSQXUqGkp+A==], Decrypted: [MyP4ssw0rd]
[weblogic@wls_01 work]$

 

So as you can see above, to encrypt passwords for WebTop 6.8 using JDK 8u171+, you will need to add both ‘com.rsa.cryptoj.o.nc‘ and ‘com.rsa.jcm.f.*‘ to the allowed list. There is a wildcard for the JCM one because several classes from this package are required.

The above was for the encryption of the password. That's fine but obviously, when you deploy WebTop, it will need to decrypt these passwords at some point… So you will also need to set the same JVM parameter for the process of your Application Server (for the Managed Server's process in WebLogic):

-Djceks.key.serialFilter='com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*'

 

You can change the order of the classes in the list; they just need to appear before the ‘!*‘ section because everything after that is ignored.

 

The post Documentum – Encryption/Decryption of WebTop 6.8 passwords ‘REJECTED’ with recent JDK appeared first on Blog dbi services.

Getting started with Hyper-V on Windows 10

The Oracle Instructor - Fri, 2019-08-30 03:27

Microsoft Windows 10 comes with its own virtualization software called Hyper-V. Not for the Windows 10 Home edition, though.

Check if you fulfill the requirements by opening a CMD shell and typing in systeminfo:

The relevant part of the output from systeminfo should look like this:

If you see No there instead, you need to enable virtualization in your BIOS settings.

Next you go to Programs and Features and click on Turn Windows features on or off:

You need Administrator rights for that. Then tick the checkbox for Hyper-V:

That requires a restart at the end:

Afterwards you can use the Hyper-V Manager:

Hyper-V can do similar things to VMware or VirtualBox. It doesn’t play well together with VirtualBox in my experience, though: VirtualBox VMs refused to start with errors like “VT-x is not available” after I installed Hyper-V. I also found it a bit trickier to handle than VirtualBox, but that’s maybe just because I am less familiar with it.

The reason I use it now is that one of our customers who wants to do an Exasol Administration training cannot use VirtualBox – but Hyper-V is okay for them. And now it looks like that’s also an option. My testing so far shows that our educational cluster installation and management labs also work with Hyper-V.

Categories: DBA Blogs

Cloning of RDS Instance to Another Account

Pakistan's First Oracle Blog - Wed, 2019-08-28 21:01
Frequently, we need to refresh our development RDS-based Oracle database from production, which is in another AWS account. So we take a snapshot of production, share it with the other account and then restore it in the target from the snapshot.

I will post the full process in a later post, but for now I'm just sharing an issue we encountered today. While trying to share a snapshot with another account, I got the following error:


Sharing snapshots encrypted with the default service key for RDS is currently not supported.


Now, this snapshot was using the default RDS key and that is not supported. So in order to share it, we need to have a customer managed key, copy this snapshot with this new key, and only then can we share it. You don't have to do anything at the target, as the customer managed key becomes part of that snapshot. You can create customer managed keys in the KMS console and maybe assign them to the IAM user you are using.
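
For anyone scripting this rather than clicking through the console, a hedged boto3 sketch of the copy-and-share step might look like the following; the snapshot identifiers, KMS key ARN, region and account ID are all placeholders, not values from our environment.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Copy the snapshot, re-encrypting it with a customer managed KMS key
#    (sharing is not supported while it is encrypted with the default RDS key).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-oracle-snapshot",
    TargetDBSnapshotIdentifier="prod-oracle-snapshot-shared",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/example-cmk-id",
)

# Wait until the copy is available before trying to share it.
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="prod-oracle-snapshot-shared"
)

# 2. Share the copied snapshot with the target (development) account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="prod-oracle-snapshot-shared",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],
)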


I hope it helps.
Categories: DBA Blogs

Kafka | IoT Ecosystem ::Cluster; Performance Metrics; Sensorboards & OBD-II::

Rittman Mead Consulting - Wed, 2019-08-28 04:30

Infrastructure is the place to start and the keyword here is scalability. Whether it needs to run on premise, on cloud or both, Kafka makes it possible to scale at low complexity cost when more brokers are either required or made redundant. It is also equally easy to deploy nodes and nest them in different networks and geographical locations. As for IoT devices, whether it’s a taxi company, a haulage fleet, a racing team or just a personal car, Kafka can make use of the existing vehicle OBDII port using the same process; whether it’s a recording studio or a server room packed with sensitive electronic equipment and where climate control is critical, sensorboards can be quickly deployed and stream almost immediately into the same Kafka ecosystem. Essentially, pretty much anything that can generate data and touch python will be able to join this ecosystem.


In large data centres it is fundamental to keep a close eye on misbehaving nodes, possibly overheating, constantly failing jobs or causing unexpected issues. Fires can occur too. This is quite a challenge with thousands and thousands of nodes. Kafka, though, allows all of the node stats to stream individually in real time and get picked up by any database or machine, using Kafka Connect or kafka-python for consumption.

To demonstrate this on a smaller scale and test a humble variety of different conditions, a cluster of 7 RaspberryPi 3 B+ nodes, Pleiades, was set up. Then, to make it easier to identify them, each computer was named after the respective stars of the Pleiades constellation.

  • 4 nodes {Alcyone; Atlas; Pleione; Maia} in a stack with cooling fans and heatsinks

  • 1 node in metal case with heatsink {Merope}

  • 1 node in plastic case {Taygeta}

  • 1 node in touchscreen plastic case {Electra}
(Yes. It's a portable Retropie, Kafka broker & perfect for Grafana dashboards too.)

Every single node has been equipped with the same python Kafka-producer script, from which the stream is updated every second in real-time under 1 topic, Pleiades. Measures taken include CPU-Percentage-%, CPU-Temperature, Total-Free-Memory, Available-System-Memory, CPU-Current-Hz.
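
The producer script itself isn't reproduced in this post, but a minimal sketch of what such a per-node producer could look like is below; the broker address, topic name and metric field names are assumptions, and the kafka-python and psutil libraries stand in for whatever the original script used.

import json
import socket
import time

import psutil
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="alcyone:9092",            # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def cpu_temperature_c():
    # Raspberry Pi exposes the SoC temperature in millidegrees C via sysfs.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

while True:
    metrics = {
        "host": socket.gethostname(),
        "ts": time.time(),
        "cpu_percent": psutil.cpu_percent(),
        "cpu_temp_c": cpu_temperature_c(),
        "mem_available": psutil.virtual_memory().available,
        "cpu_current_hz": psutil.cpu_freq().current * 1_000_000,  # cpu_freq() reports MHz
    }
    producer.send("Pleiades", metrics)           # one topic for the whole cluster
    time.sleep(1)                                # stream updates every second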


Kafka then connects to InfluxDB on Pleione, which can be queried using the terminal through a desktop or android SSH client. Nothing to worry about in terms of duplication, load balancing or gaps in the data. Worst case scenario, InfluxDB crashes and the data will still be retrievable by using KSQL to rebuild the gap in the DB, depending on the retention policy set.
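
As a rough idea of the consumption side, a kafka-python consumer writing into InfluxDB 1.x could be as simple as the sketch below; the host names, database name and measurement are assumptions, and Kafka Connect would be the more production-ready alternative.

import json

from influxdb import InfluxDBClient
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "Pleiades",
    bootstrap_servers="alcyone:9092",            # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
influx = InfluxDBClient(host="pleione", port=8086, database="pleiades")

for message in consumer:
    m = message.value
    influx.write_points([{
        "measurement": "node_metrics",
        "tags": {"host": m["host"]},
        "fields": {
            "cpu_percent": float(m["cpu_percent"]),
            "cpu_temp_c": float(m["cpu_temp_c"]),
            "mem_available": int(m["mem_available"]),
        },
    }])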


We can query InfluxDB directly from the command line. The Measure (InfluxDB table) for Pleiades is looking good and holding plenty of data for us to see in Grafana next.


A live feed is then delivered with Grafana dashboards. It's worth noting how mobile friendly these dashboards really are.


At a glance, we know the critical factors such as how much available memory there is and how much processing power is being used, for the whole cluster as well as each individual node, in real time and anywhere in the world (with an internet connection).

It has then been observed that the nodes in the stack remain fairly cool and stable between 37 °C and 43 °C, whereas the nodes in plastic cases sit around 63 °C. Merope is in the metal casing with a heatsink, so it makes sense to see it right in the middle there at 52 °C. Spikes in temperature and CPU usage are directly linked to running processes. These spikes are followed by software crashes. Moving some of the processes from the plastic enclosures over to the stack nodes stopped Grafana from choking; this was a recurring issue when connecting to the dashboards from an external network. Kafka made it possible to track the problem in real time and allowed us to come up with a solution much more quickly and effortlessly, and then immediately also track whether that solution was the correct approach. In the end, the SD cards between Electra and Pleione were quickly swapped, effectively moving Pleione to the fan cooled stack where it was much happier living.

If too many spikes begin to occur, we should expect nodes to soon need maintenance, repair or replacement. KSQL makes it possible to tap into the Kafka Streams and join to DW-stored data to forecast these events with increased precision and notification time. It's machine-learning heaven as a platform. KSQL also makes it possible to join 2 streams together and thus create a brand new stream, so to add external environment metrics and see how they may affect our cluster metrics, a sensor board on a RaspberryPi Zero-W was set up, producing data into our Kafka ecosystem too.


To keep track of the room conditions where the cluster sits, an EnviroPhat sensor board is being used. It measures temperature, pressure, colour and motion. There are many available sensorboards for SBCs like RaspberryPi that can just as easily be added to this Kafka ecosystem. Again, important to emphasize both data streams and dashboards can be accessed from anywhere with an internet connection.
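
The sensorboard producer follows the same pattern as the node-metrics sketch earlier; the short example below is a hedged illustration only, and the envirophat library calls, topic name and broker address are assumptions rather than the script actually used.

import json
import time

from envirophat import light, motion, weather
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="alcyone:9092",            # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    x, y, z = motion.accelerometer()
    reading = {
        "ts": time.time(),
        "temperature": weather.temperature(),    # degrees C
        "pressure": weather.pressure(),          # unit depends on library defaults
        "rgb": light.rgb(),
        "accel": [x, y, z],
    }
    producer.send("EnviroPhat", reading)
    time.sleep(1)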


OBDII data from vehicles can be added to the ecosystem just as well. There are a few ways this can be achieved. The most practical, cable free option is with a Bluetooth ELM327 device. This is a low cost adaptor that can be purchased and installed on pretty much any vehicle after 1995. The adaptor plugs into the OBDII socket in the vehicle, connects via Bluetooth to a Pi-Zero-W, which then connects to a mobile phone’s 4G set up as a wi-fi hotspot. Once the data is flowing as far as needing a Kafka topic, the create command is pretty straight forward.


With the obd-producer python script running, another equivalently difficult command opens up the console consumer for the topic OBD in Alcyone, and we can check if we have streams and if the OBD data is flowing through Kafka. A quick check on my phone reveals we have flow.
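
The obd-producer script isn't listed in the post either; a hedged sketch of what it might look like with the python-obd library is shown below, where the rfcomm device path, the PIDs polled, the topic name and the broker address are all assumptions.

import json
import time

import obd
from kafka import KafkaProducer

connection = obd.OBD("/dev/rfcomm0")             # bluetooth ELM327 bound to rfcomm0
producer = KafkaProducer(
    bootstrap_servers="alcyone:9092",            # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

PIDS = {
    "rpm": obd.commands.RPM,
    "speed_kph": obd.commands.SPEED,
    "coolant_temp_c": obd.commands.COOLANT_TEMP,
    "intake_temp_c": obd.commands.INTAKE_TEMP,
}

while True:
    reading = {"ts": time.time()}
    for name, cmd in PIDS.items():
        response = connection.query(cmd)
        if not response.is_null():
            reading[name] = response.value.magnitude   # python-obd returns Pint quantities
    producer.send("OBD", reading)
    time.sleep(1)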


To make things more interesting, the non-fan nodes in plastic and metal enclosures {Taygeta; Electra; Merope} were moved to a different geographical location and set up under a different network. This makes network outages and power cuts less likely to affect our dashboard services or our ability to access the IoT data. Adding cloud services to mirror this setup at this point would make it virtually bulletproof; zero point of failure is the aim of the game. When the car is on the move, Kafka is updating InfluxDB + Grafana in real time, and the intel can be tracked live as it happens from a laptop, desktop or phone from anywhere in the world.


In a fleet scenario, harsh braking could trigger a warning and have the on-duty tracking team take immediate action; if the accelerometer spikes as well, then that could suggest an accident may have just occurred or payload checks may be necessary. Fuel management systems could pick up on driving patterns and below average MPG performance, even sense when the driver is perhaps not having the best day. This is where the value of Kafka in IoT and the possibilities of using ML algorithms really becomes apparent because it makes all of this possible in real time without a huge overhead of complexity.


After plugging in the OBDII bluetooth adapter to the old e92-335i and driving it for 20 minutes, having it automatically stream data over the internet to the kafka master, Alcyone, and automatically create and update an OBD influxdb measure in Pleione, it can quickly be observed in Grafana that it doesn't enjoy idling that much; the coolant and intake air temperature dropped right down as it started moving at a reasonable speed. This kind of correlation is easier to spot in time series Grafana dashboards whereas it would be far less intuitive with standard vehicle dashboards that provide only current values.


So now that a real bare-metal infrastructure exists - and it’s a self-monitoring, low power consumption cluster, spread across multiple geographical locations, keeping track of enviro-sensor producers from multiple places/rooms, logging all vehicle data and learning to detect problems as far ahead as possible - adding sensor data pickup points to this Kafka ecosystem is as simple as its inherent scalability. As such, with the right Kafka-Fu, pretty much everything is kind of plug-&-play from this point onwards, meaning we can now go onto connecting, centralising and automating as many things in life as possible that can become IoT using Kafka as the core engine under the hood.

Categories: BI & Warehousing

OAC Row Limits and Scale Up or Down

Rittman Mead Consulting - Wed, 2019-08-28 04:26

I created an OAC instance the other day for some analysis in preparation for my OOW talk, and during the analytic journey I reached the row limit with the error Exceeded configured maximum number of allowed input records.


Since a few releases back, each OAC instance has fixed row limits depending on the number of OCPUs assigned; these can be checked in the related documentation, with the current ones shown in the table below.


If you plan on using BI Publisher (included in OAC as of a few versions ago), also check the related limits.


Since in my analytical journey I reached the row limit, I wanted to scale up my instance, but surprise surprise, the Scale Up or Down option wasn't available.


After some research I understood that Scaling Up & Down is available only if you originally chose a number of OCPUs greater than one. This is in line with Oracle's suggestion to use 1 OCPU only for non-production instances, as stated in the instance creation GUI.


When an OAC instance is originally created with 4 OCPUs, the Scale Up/Down option becomes available (you need to start the instance first).


When choosing the scale option, we can decide whether to increase/decrease the number of OCPUs.


Please note that we may have a limited choice in the number of OCPUs we can increase/decrease by, depending on availability and current usage.

Concluding, if you want to be able to Scale Up/Down your OAC instances depending on your analytic/traffic requirements, always start your instance with a number of OCPUs greater than one!

Categories: BI & Warehousing

AW-argh

Jonathan Lewis - Tue, 2019-08-27 09:59

This is another of the blog notes that have been sitting around for several years – in this case since May 2014, based on a script I wrote a year earlier. It makes an important point about “inconsistency” of timing in the way that Oracle records statistics of work done. As a consequence of being first drafted in May 2014 the original examples showed AWR results from 10.2.0.5 and 11.2.0.4 – I’ve just run the same test on 19.3.0.0 to see if anything has changed.

 

[Originally drafted May 2014]: I had to post this as a reminder of how easy it is to forget things – especially when there are small but significant changes between versions. It’s based loosely on a conversation from Oracle-L, but I’m going to work the issue in the opposite order by running some code and showing you the ongoing performance statistics rather than the usual AWR approach of reading the performance stats and trying to guess what happened.

The demonstration needs two sessions to run; it’s based on one session running some CPU-intensive SQL inside an anonymous PL/SQL block with a second session launching AWR snapshots at carefully timed moments. Here’s the code for the working session:

rem
rem     Script:         awr_timing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2013
rem

alter session set "_old_connect_by_enabled"=true;

create table kill_cpu(n, primary key(n))
organization index
as
select  rownum n
from    all_objects
where   rownum <= 26 -- > comment to avoid wordpress format issue
;

execute dbms_stats.gather_table_stats(user,'kill_cpu')

pause Take an AWR snapshot from another session and when it has completed  press return

declare
        m_ct    number;
begin

        select  count(*) X
        into    m_ct
        from    kill_cpu
        connect by
                n > prior n
        start with
                n = 1
        ;

        dbms_lock.sleep(30);

end;
/

You may recognise an old piece of SQL that I’ve often used as a way of stressing a CPU and seeing how fast Oracle can run. The “alter session” at the top of the code is necessary to use the pre-10g method of running a “connect by” query so that the SQL does a huge number of buffer gets (and “buffer is pinned count” visits). On my current laptop the query takes about 45 seconds (all CPU) to complete. I’ve wrapped this query inside a pl/sql block that then sleeps for 30 seconds.

From the second session you need to launch an AWR snapshot 4 times – once in the pause shown above, then (approximately) every 30 seconds thereafter. The second one should execute while the SQL statement is still running, the third one should execute while the sleep(30) is taking place, and the fourth one should execute after the pl/sql block has ended and the SQL*Plus prompt is visible.

Once you’ve got 4 snapshots you can generate 3 AWR reports. The question to ask then is: “what do the reports say about CPU usage?” Here are a few (paraphrased) numbers – starting with 10.2.0.5 – comparing the “Top 5 timed events”, “Time Model”, and “Instance Activity”. There are three sets of figures: the first reported while the SQL was still running, the second reported after the SQL statement had completed and while the dbms_lock.sleep() call was executing, the last reported after the PL/SQL block had completed. There are some little oddities in the numbers due to background “noise” – but the key points are still clearly visible:

While the SQL was executing
Top 5
-----
CPU Time                       26 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     26.9        100.0
DB CPU                                       26.2         97.6

Instance Activity
-----------------
CPU used by this session         0.65 seconds
recursive cpu usage              0.67 seconds

SQL ordered by CPU
------------------
31 seconds reported for both the SQL and PLSQL
During the sleep()
Top 5
-----
CPU Time                        19 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     19.0        100.0
DB CPU                                       18.6         98.1

Instance Activity
-----------------
CPU used by this session         0.66 seconds
recursive cpu usage             44.82 seconds

SQL ordered by CPU
------------------
14 seconds reported for both the SQL and PLSQL
After the PL/SQL block ended
Top 5
-----
CPU Time                         1 second

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.4         99.9
DB CPU                                        1.4         99.7

Instance Activity
-----------------
CPU used by this session        44.68 seconds
recursive cpu usage              0.50 seconds

SQL ordered by CPU
------------------
1 second reported for the PLSQL, but the SQL was not reported

Points to notice:

While the SQL was executing (and had been executing for about 26 seconds) the Time Model mechanism was recording the work done by the SQL, and the Top N information echoed the Time Model CPU figure. At the same time the “CPU used …” Instance Activity Statistics have not recorded any CPU time for the session – and they won’t until the SQL statement completes. Despite this, the “SQL ordered by …” reports double-count in real time, showing the SQL and the PL/SQL cursors as consuming (with rounding errors, presumably) 31 seconds each.

After the SQL execution was over and the session was sleeping the Time Model (hence the Top 5) had recorded a further 19 seconds of work. The instance activity, however, has now accumulated 44 seconds of CPU, but only as “recursive CPU usage” (recursive because our SQL was called from within a PL/SQL block), with no “CPU used by this session”. The “SQL ordered by …” figures have recorded the amount of CPU used by both the SQL and the PL/SQL in the interval (i.e. 14 seconds – which is a little off) against both.

After the PL/SQL block has completed the Time Model and the Top 5 report both say that nothing much happened in the interval, but the Instance Activity suddenly reports 44.68 seconds of CPU used by this session – which (roughly speaking) is truish as the PL/SQL block ended and assigned the accumulated recursive CPU usage to the session CPU figure. Finally, when we get down to the “SQL ordered by CPU” the SQL was not reported – it did no work in the interval – but the PL/SQL block was still reported, though only with a generous 1 second of CPU since all it did in the interval was finish the sleep call and tidy up the stack before ending.

Now the same sets of figures for 11.2.0.4 – there’s a lot of similarity, but one significant difference:

While the SQL was executing

Top 5
-----
CPU Time                        26.6 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     27.0        100.0
DB CPU                                       26.6         98.5

Instance Activity
-----------------
CPU used by this session         1.09 seconds
recursive cpu usage              1.07 seconds

SQL ordered by CPU
------------------
25.6 seconds reported for both the SQL and PLSQL
During the sleep()
Top 5
-----
CPU Time                        15.1 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     15.3         99.8
DB CPU                                       15.1         98.2

Instance Activity
-----------------
CPU used by this session        41.09 seconds
recursive cpu usage             41.03 seconds

SQL ordered by CPU
------------------
14.3 seconds reported for the SQL
13.9 seconds reported for the PLSQL
After the PL/SQL block ended
Top 5
-----
CPU Time                         1.4 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.5         99.6
DB CPU                                        1.4         95.4

Instance Activity
-----------------
CPU used by this session         1.02 seconds
recursive cpu usage              0.95 seconds

SQL ordered by CPU
------------------
0.5 seconds reported for the PLSQL, and no sign of the SQL

Spot the one difference in the pattern – during the sleep() the Instance Activity Statistic “CPU used by this session” is recording the full CPU time for the complete query, whereas the time for the query appeared only in the “recursive cpu” in the 10.2.0.5 report.

I frequently point out that for a proper understanding of the content of an AWR report you need to cross-check different ways in which Oracle reports “the same” information. This is often to warn you about checking underlying figures before jumping to conclusions about “hit ratios”; sometimes it’s to remind you that while the Top 5 might say some average looks okay, the event histogram may say that what you’re looking at is mostly excellent with an occasional disaster thrown in. In this blog note I just want to remind you that if you only ever look at one set of figures about CPU usage there are a few special effects (particularly relating to long running PL/SQL / Java / SQL) where you may have to work out a pattern of behaviour to explain unexpectedly large (or small) figures and contradictory figures. The key to the problem is recognising that different statistics may be updated at different stages in a complex process.

Footnote

I doubt if many people still run 11.2.0.4, so I also re-ran the test on 19.3.0.0 before publishing. The behaviour hasn’t changed since 11.2.0.4 although the query ran a little faster, perhaps due to changes in the mechanisms for this type of “connect by pump”.

11.2.0.4 stats

Name                                            Value
----                                            -----
session logical reads                      33,554,435
consistent gets                            33,554,435
consistent gets from cache                 33,554,435
consistent gets from cache (fastpath)      33,554,431
no work - consistent read gets             33,554,431
index scans kdiixs1                        33,554,433
buffer is not pinned count                 16,777,219


19.3.0.0 stats
Name                                            Value
----                                            -----
session logical reads                      16,843,299
consistent gets                            16,843,299
consistent gets from cache                 16,843,299
consistent gets pin                        16,843,298
consistent gets pin (fastpath)             16,843,298
no work - consistent read gets             16,790,166
index range scans                          33,554,433
buffer is not pinned count                 16,790,169

Some changes are trivial (like the change of name for “index scans kdiixs1”) some are interesting (like some gets not being labelled as “pin” and “pin (fastpath)”), some are baffling (like how you can manage 33M index range scans while doing only 16M buffer gets!)

Oracle Cloud at Customers(C@C): Overview and Concepts for Beginners

Online Apps DBA - Tue, 2019-08-27 05:56

Are you a Beginner in Oracle Cloud at Customers(C@C) and looking for an Overview of Oracle C@C & its Offerings? If YES, then the blog post at https://k21academy.com/oci47 is a perfect fit! The blog post discusses: ➥ What is Oracle C@C? ➥ How is Oracle C@C beneficial to you? ➥ Oracle C@C’s Offerings: Cloud at […]

The post Oracle Cloud at Customers(C@C): Overview and Concepts for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 10 Regions Latest Sao Paulo @ Brazil

Online Apps DBA - Mon, 2019-08-26 08:58

New Region Added: Sao Paulo @ Brazil. In 2019, up to mid-August, Oracle added 6 new Regions in Gen 2 Cloud, that's OCI, with a lot more in the pipeline. This means you now have 10 regions in total, 4 with 3 availability domains and 6 with a single availability domain. If you want to get full […]

The post Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 10 Regions Latest Sao Paulo @ Brazil appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Troubleshooting

Jonathan Lewis - Mon, 2019-08-26 06:19

A recent thread on the Oracle Developer Community starts with the statement that a query is taking a very long time (with the question “how do I make it go faster?” implied rather than asked). It’s 12.1.0.2 (not that that’s particularly relevant to this blog note), and we have been given a number that quantifies “very long time” (again not particularly relevant to this blog note – but worth mentioning because your “slow” might be my “wow! that was fast” and far too many people use qualitative adjectives when the important detail is quantitative). The query had already been running for 15 hours – and here it is:


SELECT 
        OWNER, TABLE_NAME 
FROM
        DBA_LOGSTDBY_NOT_UNIQUE 
WHERE
        (OWNER, TABLE_NAME) NOT IN (
                SELECT 
                        DISTINCT OWNER, TABLE_NAME 
                        FROM     DBA_LOGSTDBY_UNSUPPORTED
        ) 
AND     BAD_COLUMN = 'Y'

There are many obvious suggestions anyone could make for things to do to investigate the problem – start with the execution plan, check whether the object statistics are reasonably representative, run a trace with wait state tracing enabled to see where the time goes; but sometimes there are a couple of very simple observations you can make that point you to simple solutions.

Looking at this query we can recognise that it’s (almost certainly) about a couple of Oracle data dictionary views (which means it’s probably very messy under the covers with a horrendous execution plan) and, as I’ve commented from time to time in the past, Oracle Corp. developers create views for their own purposes so you should take great care when you re-purpose them. This query also has the very convenient feature that it looks like two simpler queries stitched together – so a very simple step in trouble-shooting, before going into any fine detail, is to unstitch the query and run the two parts separately to see how much data they return and how long they take to complete:


SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y'

SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED

It’s quite possible that the worst case scenario for the total run time of the original query could be reduced to the sum of the run times of these two queries. One strategy to achieve this would be a rewrite of the form:

select  * 
from    (
        SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y'
        minus
        SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED
)

Unfortunately the immediately obvious alternative may be illegal thanks to things like duplicates (which disappear in MINUS operations) or NULLs (which can make ALL the data “disappear” in some cases). In this case the original query might be capable of returning duplicates of (owner, table_name) from dba_logstdby_not_unique which would collapse to a single occurrence each in my rewrite – so my version of the query is not logically equivalent (unless the definition of the view enforces uniqueness); on the other hand, tracking back through the original thread to the MoS article where this query comes from, we can see that even if the query could return duplicates we don’t actually need to see them.
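
To see why the two forms can behave differently, here’s a tiny hypothetical demonstration – the scratch tables and data are made up purely to show the NULL behaviour and have nothing to do with the real dictionary views:


rem
rem     Hypothetical scratch tables - purely to illustrate NULL behaviour
rem

create table t_not_unique  (owner varchar2(30), table_name varchar2(30));
create table t_unsupported (owner varchar2(30), table_name varchar2(30));

insert into t_not_unique  values ('SCOTT', 'EMP');
insert into t_unsupported values ('SCOTT', null);
commit;

prompt  NOT IN: the NULL makes the comparison "unknown", so no rows come back

select  owner, table_name
from    t_not_unique
where   (owner, table_name) not in (
                select owner, table_name from t_unsupported
        )
;

prompt  MINUS: the ('SCOTT','EMP') row survives, so the forms are not interchangeable

select owner, table_name from t_not_unique
minus
select owner, table_name from t_unsupported
;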

And this is the point of the blog note – it’s a general principle (that happens to be a very obvious strategy in this case): if a query takes too long, check how it compares with a simplified version of the query that might be a couple of steps short of the final target. If it’s easy to spot the options for simplification, and if the simplified version operates efficiently, then isolate it (using a no_merge hint if necessary), and work forwards from there. Just be careful that your rewrite remains logically equivalent to the original (if it really needs to).
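
As a purely illustrative sketch of the “isolate it” idea (not taken from the original thread), the efficient piece could be wrapped in an inline view carrying a no_merge hint, so that the optimizer can’t merge it back into the surrounding query while the NOT IN semantics of the original stay untouched:


select  nu.owner, nu.table_name
from    (
        select  /*+ no_merge */
                owner, table_name
        from    dba_logstdby_not_unique
        where   bad_column = 'Y'
        ) nu
where   (nu.owner, nu.table_name) not in (
                select  owner, table_name
                from    dba_logstdby_unsupported
        )
;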

In the case of this query, the two parts took 5 seconds and 9 seconds to complete, returning 209 rows and 815 rows respectively. Combining the two queries with a minus really should get the required result in no more than 14 seconds.

Footnote

The “distinct” in the second query is technically redundant as the minus operation applies a sort unique operation to both of the intermediate result sets before comparing them. Similarly, the “distinct” was also redundant when the second query was used for the “in subquery” construction – again there would be an implied uniqueness operation if the optimizer decided to do a simple unnest of the subquery.

DevOps for Oracle DBA

Pakistan's First Oracle Blog - Sun, 2019-08-25 00:21
DevOps is a natural evolution for Oracle database administrators, or sysadmins of any kind. The key to remaining relevant in the industry, both now and in the near future, is to embrace DevOps.

The good news is that if you are an Oracle DBA, you already have a solid foundation. You have worked with an enterprise, world-class database system and are aware of high availability, disaster recovery, performance optimization, and troubleshooting. Having said that, there is still a lot to learn and unlearn to become a DevOps Engineer.


You would need to look outside of Oracle, the Linux shell and the core-competency mantra. You would need to learn a proper programming language such as Python, a software engineering framework like the Agile methodology, and tools such as Git. Above all, you would need to unlearn the idea that you only manage the database. As a DevOps Engineer in today's Cloud era, you would be responsible for end-to-end delivery.


Without Cloud skills, it's impossible to transition from an Oracle DBA to a DevOps role. Regardless of the cloud provider, you must know networking, compute, storage, and infrastructure as code. You already know the database side of things, but you now need to learn a decent amount about other databases, as you would be expected to migrate and manage them in the cloud.


So any public cloud (AWS, Azure, or GCP), plus a programming language like Python, Go, or NodeJS, plus Agile concepts, Infrastructure as Code with Terraform or CloudFormation, and a plethora of other tooling such as code repositories and pipelines, would be required to become a capable DevOps Engineer.


Becoming obsolete by merely staying an Oracle DBA is not an option. So buckle up and start your DevOps journey today.
Categories: DBA Blogs

Alfresco – Share Clustering fail with ‘Ignored XML validation warning’

Yann Neuhaus - Sat, 2019-08-24 10:00

In a recent project on Alfresco, I had to set up a Clustering environment. It all went smoothly, but I did face one issue with the setup of the Clustering on the Alfresco Share layer. That’s something I had never faced before, and you will understand why below.

Initially, to set up the Alfresco Share Clustering, I used the sample file packaged in the distribution zip (e.g. alfresco-content-services-distribution-6.1.0.5.zip):

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                http://www.hazelcast.com/schema/spring
                https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">

   <!--
        Hazelcast distributed messaging configuration - Share web-tier cluster config
        - see http://www.hazelcast.com/docs.jsp
        - and specifically http://docs.hazelcast.org/docs/2.4/manual/html-single/#SpringIntegration
   -->
   <!-- Configure cluster to use either Multicast or direct TCP-IP messaging - multicast is default -->
   <!-- Optionally specify network interfaces - server machines likely to have more than one interface -->
   <!-- The messaging topic - the "name" is also used by the persister config below -->
   <!--
   <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="slingshot-topic"/>
   <hz:hazelcast id="webframework.cluster.slingshot">
      <hz:config>
         <hz:group name="slingshot" password="alfresco"/>
         <hz:network port="5801" port-auto-increment="true">
            <hz:join>
               <hz:multicast enabled="true"
                     multicast-group="224.2.2.5"
                     multicast-port="54327"/>
               <hz:tcp-ip enabled="false">
                  <hz:members></hz:members>
               </hz:tcp-ip>
            </hz:join>
            <hz:interfaces enabled="false">
               <hz:interface>192.168.1.*</hz:interface>
            </hz:interfaces>
         </hz:network>
      </hz:config>
   </hz:hazelcast>

   <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
      <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
      <property name="hazelcastTopicName"><value>slingshot-topic</value></property>
   </bean>
   -->

</beans>

 

I obviously uncommented the whole section and configured it properly for the Share Clustering. The above content is only the default/sample content, nothing more.
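
For context, the configuration I ended up with looked something like the following. This is only a sketch, not the exact file: the member names (share_n1.domain / share_n2.domain), the port (5801) and the topic name (share_hz_test) are the ones you will see in the startup logs further down, and multicast has been disabled in favour of direct TCP-IP messaging:

   <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="share_hz_test"/>
   <hz:hazelcast id="webframework.cluster.slingshot">
      <hz:config>
         <hz:group name="slingshot" password="alfresco"/>
         <hz:network port="5801" port-auto-increment="true">
            <hz:join>
               <hz:multicast enabled="false"
                     multicast-group="224.2.2.5"
                     multicast-port="54327"/>
               <hz:tcp-ip enabled="true">
                  <hz:members>share_n1.domain,share_n2.domain</hz:members>
               </hz:tcp-ip>
            </hz:join>
            <hz:interfaces enabled="false">
               <hz:interface>192.168.1.*</hz:interface>
            </hz:interfaces>
         </hz:network>
      </hz:config>
   </hz:hazelcast>

   <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
      <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
      <property name="hazelcastTopicName"><value>share_hz_test</value></property>
   </bean>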

Once configured, I restarted Alfresco but it failed with the following messages:

24-Aug-2019 14:35:12.974 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Aug-2019 14:35:12.974 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
24-Aug-2019 14:35:12.988 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor [/opt/tomcat/conf/Catalina/localhost/share.xml]
Aug 24, 2019 2:35:15 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Aug 24, 2019 2:35:15 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
Aug 24, 2019 2:35:15 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
2019-08-23 14:35:16,052  WARN  [factory.xml.XmlBeanDefinitionReader] [localhost-startStop-1] Ignored XML validation warning
 org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; schema_reference.4: Failed to read schema document 'https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.warning(ErrorHandlerWrapper.java:100)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:392)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:306)
	at java.xml/com.sun.org.apache.xerces.internal.impl.xs.traversers.XSDHandler.reportSchemaErr(XSDHandler.java:4218)
  ... 69 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
	... 89 more
...
2019-08-23 14:35:16,067  ERROR [web.context.ContextLoader] [localhost-startStop-1] Context initialization failed
 org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [surf-config.xml]
Offending resource: class path resource [web-application-config.xml]; nested exception is org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from URL location [classpath*:alfresco/web-extension/*-context.xml]
Offending resource: class path resource [surf-config.xml]; nested exception is org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
  ... 33 more
Caused by: org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from URL location [classpath*:alfresco/web-extension/*-context.xml]
Offending resource: class path resource [surf-config.xml]; nested exception is org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
	... 42 more
Caused by: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:397)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:335)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303)
	... 44 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:135)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:396)
	... 64 more
...
24-Aug-2019 14:35:16.196 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
24-Aug-2019 14:35:16.198 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [/share] startup failed due to previous errors
Aug 24, 2019 2:35:16 PM org.apache.catalina.core.ApplicationContext log
...

 

As you can see above, the message is pretty clear: there is a problem within the file “/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml” which is causing Share to fail to start properly. The first warning message points you directly to the issue: “Failed to read schema document ‘https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd’”.

After checking the content of the sample file and comparing it with a working one, I found out what was wrong. To solve this specific issue, you can simply replace “https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd” with “http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd”. Please note the two differences in the URL:

  • Switch from “https” to “http”
  • Switch from “hazelcast.com” to “www.hazelcast.com”

 

The issue was actually caused by the fact that this installation was completely offline, with no access to the internet. Because of that, Spring wasn’t able to fetch the XSD file to validate the definitions in the context file. The solution is therefore to switch the URL to http with www.hazelcast.com, so that Spring’s internal schema resolution can map it to the local copy of the XSD and perform the validation without looking for the file online.
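
For reference, after applying the change described above, the schemaLocation section of the file looks like this (only the XSD URL differs from the sample shown at the beginning of this post):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                http://www.hazelcast.com/schema/spring
                http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">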

As mentioned previously, I had never faced this issue before, for two main reasons:

  • I usually don’t use the sample files provided by Alfresco; I always prefer to build my own
  • I mainly install Alfresco on servers which have internet access (outgoing communications allowed)

 

Once the URL is corrected, Alfresco Share is able to start and the Clustering is configured properly:

24-Aug-2019 14:37:22.558 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Aug-2019 14:37:22.558 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
24-Aug-2019 14:37:22.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor [/opt/tomcat/conf/Catalina/localhost/share.xml]
Aug 24, 2019 2:37:24 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Aug 24, 2019 2:37:25 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
Aug 24, 2019 2:37:25 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n1.domain' to address(es): [10.10.10.10]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n2.domain' to address(es): [127.0.0.1, 10.10.10.11]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [share_n1.domain/10.10.10.10, share_n2.domain/10.10.10.11, share_n2.domain/127.0.0.1]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Picked Address[share_n2.domain]:5801, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5801], bind any local is true
Aug 24, 2019 2:37:28 PM com.hazelcast.system
INFO: [share_n2.domain]:5801 [slingshot] Hazelcast Community Edition 2.4 (20121017) starting at Address[share_n2.domain]:5801
Aug 24, 2019 2:37:28 PM com.hazelcast.system
INFO: [share_n2.domain]:5801 [slingshot] Copyright (C) 2008-2012 Hazelcast.com
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n2.domain]:5801 [slingshot] Address[share_n2.domain]:5801 is STARTING
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.TcpIpJoiner
INFO: [share_n2.domain]:5801 [slingshot] Connecting to possible member: Address[share_n1.domain]:5801
Aug 24, 2019 2:37:28 PM com.hazelcast.nio.ConnectionManager
INFO: [share_n2.domain]:5801 [slingshot] 54991 accepted socket connection from share_n1.domain/10.10.10.10:5801
Aug 24, 2019 2:37:29 PM com.hazelcast.impl.Node
INFO: [share_n2.domain]:5801 [slingshot] ** setting master address to Address[share_n1.domain]:5801
Aug 24, 2019 2:37:35 PM com.hazelcast.cluster.ClusterManager
INFO: [share_n2.domain]:5801 [slingshot]

Members [2] {
	Member [share_n1.domain]:5801
	Member [share_n2.domain]:5801 this
}

Aug 24, 2019 2:37:37 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n2.domain]:5801 [slingshot] Address[share_n2.domain]:5801 is STARTED
2019-08-23 14:37:37,664  INFO  [web.site.ClusterTopicService] [localhost-startStop-1] Init complete for Hazelcast cluster - listening on topic: share_hz_test
...

 

The post Alfresco – Share Clustering fail with ‘Ignored XML validation warning’ appeared first on Blog dbi services.
