Friday, April 26, 2024

Oracle Database / Data Warehouse Security, High Availability & Disaster Recovery

The following is a quick article I prepared to present data security, high availability, and disaster recovery solutions all at once for a mission-critical Oracle Data Warehouse environment. Below you will find Oracle Advanced Security, Oracle RAC, GoldenGate, Data Guard and other related Oracle features and solutions, integrated together, put into context, and proposed as a whole for securing the data, making it highly available, and protecting both the data and the services from disasters. The features/solutions given here can be used in other contexts as well (for instance, GoldenGate can be used for data integration), but since we are in the scope of data security, high availability and DR here, I will not get into that.

Oracle Database / Data Warehouse - Advanced Security Option for Data Security:

Securing Your Oracle Data Warehouse: Why Oracle Advanced Security is the Right Choice

Introduction

Data warehouses hold an organization's most valuable asset: its data. This data can include sensitive customer information, financial records, and intellectual property. Ensuring the security of this data is paramount. Oracle Advanced Security (OAS) is the ideal solution for robust data security within your Oracle data warehouse, and it can be tailored to your data warehouse environment, providing the peace of mind that comes with knowing your data is safe.

Robust Data Security Measures

OAS offers a comprehensive suite of functionalities to address all aspects of data security:

  • Encryption: Transparent Data Encryption (TDE) encrypts data at rest within the database, rendering it unreadable even if an attacker gains access to storage media. (A quick sketch follows this list.)
  • Access Controls: OAS enforces granular access controls, allowing you to define precisely who can access specific data and what actions they can perform.
  • Authentication: OAS supports various strong authentication mechanisms like Kerberos and PKI, ensuring only authorized users can access the data warehouse.
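
As a minimal illustration of the TDE bullet above, here is a sketch of creating an encrypted tablespace (it assumes a keystore/wallet is already configured and open; the tablespace and disk group names are illustrative):

     SQL> create tablespace dwh_secure
            datafile '+DATA' size 10g
            encryption using 'AES256'
            default storage (encrypt);

Any table created in such a tablespace is encrypted on disk transparently, with no application change.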

Benefits of Oracle Advanced Security

  • Seamless Integration: OAS integrates seamlessly with your existing Oracle database environment. There's no need to modify applications, minimizing disruption and implementation costs.
  • Performance Efficiency: OAS is designed for minimal performance impact. Data encryption and decryption happen transparently, ensuring smooth operation of your data warehouse.
  • Regulatory Compliance: OAS helps meet compliance requirements for data privacy regulations like PCI DSS and HIPAA.
  • Data Security Lifecycle Coverage: OAS protects data throughout its lifecycle, from encryption at rest to securing backups and exports.

Addressing Specific Data Warehouse Security Concerns

Data warehouses often house highly sensitive data. OAS addresses these concerns:

  • Insider Threats: OAS enforces access controls, preventing unauthorized access attempts, even from privileged users.
  • Data Breaches: TDE renders data useless even if stolen, minimizing the impact of a breach.
  • Accidental Data Exposure: Data redaction, another feature of OAS, allows you to mask sensitive data retrieved by queries, further protecting sensitive information. (A redaction sketch follows this list.)
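
For example, a redaction policy can mask a sensitive column at query time. A minimal sketch using the DBMS_REDACT package (the schema, table and column names are hypothetical):

     SQL> begin
            dbms_redact.add_policy(
              object_schema => 'DWH',
              object_name   => 'CUSTOMERS',
              policy_name   => 'redact_card_no',
              column_name   => 'CARD_NO',
              function_type => dbms_redact.full,   -- redact the full value
              expression    => '1=1');             -- apply to every query
          end;
          /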

In addition to the Oracle Advanced Security components, Audit Vault, Database Firewall and Database Vault can be implemented to provide the Maximum Security Architecture, ensuring the security of the data end to end, in transit and at rest.

Here is the short take / Conclusion:

Oracle Advanced Security provides a robust and efficient solution for securing your Oracle data warehouse. It offers comprehensive data encryption, access controls, and authentication mechanisms, all seamlessly integrated with your existing environment. By choosing OAS, you ensure a high level of data security for your most valuable asset. Implementing Oracle's Maximum Security Architecture (MSA) takes data security to the maximum level. By implementing the MSA, your gains in the context of data security will be:

  • Auditability
  • Regulatory Compliance
  • Operational Efficiency
  • Risk Minimization
  • Correct Authorized Access
  • Role-Based Authorization
  • Minimum Required Privilege Principle
  • Separation of Duties
  • Data Security
  • Strengthened Authentication
  • Risk-Based Authorization Control
  • Compliance with Security Standards
  • Centralized Reporting
  • Centralized Key Management
  • Centralized Security Management
  • Anomaly Detection in Data Access

Data Warehouse Continuity - Oracle Database High Availability Features:

Ensuring Uninterrupted Data Access: Oracle Database High Availability for Your Data Warehouse

Introduction

In today's data-driven world, uninterrupted access to your data warehouse is critical for informed decision-making. Downtime can lead to missed opportunities, lost productivity, and eroded user confidence. This paper explores how Oracle Database's High Availability (HA) features, specifically Oracle Real Application Clusters (RAC), ensure continuous operation of your Oracle data warehouse.

Our Expertise in High Availability Solutions (just a little advertisement :)

My team and I possess extensive experience in implementing and managing high availability solutions for Oracle databases. We understand the critical role of data warehouse uptime and leverage the power of Oracle HA to create a robust and resilient environment. We can implement Oracle Database in a highly available manner by using the Oracle features in this context. Oracle Database features, combined with our implementation experience, can ensure your data warehouse remains accessible 24/7, empowering informed decisions and driving business success.

What is Oracle Database High Availability?

Oracle offers a comprehensive set of features for achieving high availability in your data warehouse. RAC, the cornerstone of this approach, allows you to run your database on multiple servers (nodes) concurrently, in an active-active manner.

This provides several key benefits (a quick node check follows the list):

  • Failover: If a node fails, the remaining nodes automatically take over processing, minimizing downtime and ensuring data remains accessible.
  • Scalability: RAC facilitates horizontal scaling by adding additional nodes, enabling your data warehouse to handle increased data volumes and user demands.
  • Planned Maintenance: You can perform maintenance on individual nodes without impacting overall data warehouse availability.
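
A quick way to see this active-active setup in action is to list the open instances across all nodes. A minimal sketch (run from any node; the GV$ views aggregate data across the cluster):

     SQL> select inst_id, instance_name, host_name, status
            from gv$instance;

On a healthy two-node RAC you would see two rows, both with STATUS = 'OPEN'.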

Addressing Data Warehouse Continuity Concerns

Data warehouses often face unique challenges regarding continuity:

  • Large Data Volumes: RAC's parallel processing capabilities ensure efficient data loading and retrieval, even with massive datasets.
  • Batch Processing Windows: Planned maintenance can be scheduled outside of critical batch processing windows, minimizing disruption.
  • Disaster Recovery: RAC integrates with Oracle Data Guard for disaster recovery, providing a geographically separate replica for business continuity in case of a major outage.

Beyond RAC: Additional High Availability Features

Oracle's HA suite extends beyond RAC, offering additional functionalities:

  • Oracle Data Guard: Creates synchronized standby databases for disaster recovery and read-only workloads.
  • Automatic Storage Management (ASM): Simplifies storage management and facilitates automated failover in case of disk failures.
  • Oracle Restart: Monitors and automatically restarts critical Oracle components in case of failures.

Here is the short take / Summary:

By leveraging Oracle Database High Availability features, we can create a highly available data warehouse environment. Our expertise ensures a seamless implementation, maximizing uptime and minimizing the impact of potential outages. We are certified Oracle experts with extensive experience in implementing and customizing Oracle Database for various industries and business requirements.

Oracle Data Warehouse - Disaster Recovery Implementation

Disaster Recovery for Your Oracle Data Warehouse: Minimizing Downtime, Maximizing Peace of Mind

Introduction

Natural disasters, technical failures, or cyberattacks can cripple your business by disrupting access to your data warehouse. Here I will briefly summarize how Oracle Database's disaster recovery (DR) features, combined with our (my team's and my) proven expertise, equip you to recover your data warehouse swiftly in the face of such events, minimizing downtime and ensuring your data warehouse remains operational even in the face of disaster.

Our Proven Track Record in Disaster Recovery

My team and I possess a proven track record of implementing and managing robust DR solutions for Oracle data warehouses. We understand the criticality of rapid recovery and leverage Oracle's DR features to create a comprehensive disaster preparedness plan.

Oracle Database Disaster Recovery: Built-in Resilience

Oracle offers a comprehensive suite of DR functionalities designed to get your data warehouse back online quickly:

  • Data Guard: This core technology creates a synchronized copy (standby database) of your production data warehouse at a geographically separate location. This ensures data availability even if the primary site becomes unavailable. (A monitoring sketch follows this list.)
  • Active Data Guard: This advanced option allows read-only access to the standby database, offloading some workload from the primary and enabling faster failover in case of disaster.
  • GoldenGate: For complex data warehouse environments, GoldenGate facilitates heterogeneous replication, allowing replication between dissimilar database systems if needed for your DR strategy.
  • Recovery Manager (RMAN): This powerful tool simplifies backups, enabling efficient restoration of your data warehouse to a specific point in time.
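
To make the Data Guard bullet concrete: the transport and apply lag can be watched on the standby with a simple query. A sketch (run on the standby; for a healthy configuration the values should stay near zero):

     SQL> select name, value, time_computed
            from v$dataguard_stats
           where name in ('transport lag', 'apply lag');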

Addressing Data Warehouse DR Challenges

Data warehouses present unique considerations for disaster recovery:

  • Large Datasets: Oracle Data Guard's efficient replication methods minimize bandwidth consumption and ensure timely synchronization of massive data volumes.
  • Data Consistency: Our expertise ensures configuration of Data Guard for optimal consistency based on your specific recovery time objective (RTO) and recovery point objective (RPO) requirements.
  • Testing and Failover: Regular DR testing ensures seamless failover to the standby site and rapid data warehouse recovery.

Beyond Recovery: Building a Comprehensive DR Plan

Disaster recovery goes beyond technology. Oracle consultancy services (such as those my team and I provide) encompass:

  • DR Strategy Development: We collaborate with you to define your RTO, RPO, and DR site selection based on your business needs.
  • DR Plan Documentation: We create a comprehensive DR plan outlining roles, responsibilities, and failover procedures for a smooth recovery process.
  • Ongoing Support: We offer ongoing support to ensure your DR plan remains current and your team is prepared for any eventuality.

Here is the Short Take / Conclusion

By leveraging Oracle Database's disaster recovery features and our extensive experience, we can create a robust DR plan for your data warehouse. This ensures minimal downtime, rapid recovery, and protects your business from the impact of unforeseen disasters.

Wednesday, April 24, 2024

EBS 12.2 -- oracle.apps.fnd.cp.gsm.GSMSvcComponentContainer starts from wrong filesystem / patch filesystem

Question: How much time do you spend/lose?
Answer: Well, how much time do I gain? :)

I mentioned in my previous blog post that I will share some of the interesting cases here. Erman Arslan's Oracle Blog / Forum has 350 subscribers asking questions regularly, and here I'm sharing the interesting ones.

You asked:

We have an Oracle 12.2.6 EBS system.

Run FS is fs1

As soon as the system is started, all the services are running on fs1, except oracle.apps.fnd.cp.gsm.GSMSvcComponentContainer.

GSM starts on fs2. Concurrent Managers are using binaries from fs2.

If we run an empty ADOP patching cycle, after cutover all is OK -> they all start from fs2.

But if we run it one more time, all the services start from fs1 as expected, but the mentioned ones start from fs2!

Running adconfig does not help, and we even checked that in the context file of fs1 none of the parameters reference fs2.

None of the env parameters point to fs2.

Where does it take that fs2 value from?

What is this? Any ideas?

Moreover;

if I do env | grep fs2 -> the patch_base is fs2.

So that means conc is starting from the patch base..

Any hints would be appreciated :)

ps -ef |grep fs2

/testapp/oracle/TEST/fs2/EBSapps/comn/util/jdk64/bin/java -DCLIENT_PROCESSID=24445080 -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts= -Djava.awt.headless=true -Xmx384m -Doracle.apps.fnd.common.Pool.leak.mode=stderr:off -verbose:gc -Ddbcfile=/testapp/oracle/TEST/fs2/inst/apps/TEST_ebstestapps/appl/fnd/12.0.0/secure/TEST.dbc -Dcpid=9758 -Dconc_queue_id=1140 -Dqueue_appl_id=0 -Dlogfile=/testout/oracle/TEST/conc/log/FNDCPGSC9758.txt -DLONG_RUNNING_JVM=true -DEBS_HOSTNAME=ebstestapps -DOVERRIDE_DBC=true -DFND_JDBC_BUFFER_MIN=1 -DFND_JDBC_BUFFER_MAX=2 oracle.apps.fnd.cp.gsm.GSMSvcComponentContainer

So where in EBS is the reference to use this /testapp/oracle/TEST/fs2/EBSapps/comn/util/jdk64/bin/java, and not the java from fs1?


/testapp/oracle/test/fs2/EBSapps/comn/util/jdk64/jre/bin]> ./java -version
java version "1.7.0"

Java(TM) SE Runtime Environment (build pap6470sr9fp10-20150708_01(SR9 FP10))

IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 20150701_255667 (JIT enabled, AOT enabled)

J9VM - R26_Java726_SR9_20150701_0050_B255667

JIT - tr.r11_20150626_95120.01

GC - R26_Java726_SR9_20150701_0050_B255667_CMPRSS

J9CL - 20150701_255667)

JCL - 20150628_01 based on Oracle jdk7u85-b15

/testapp/oracle/test/fs2/EBSapps/comn/util/jdk64/jre/bin]> cd /testapp/oracle/test/fs1/EBSapps/comn/util/jdk64/jre/bin

/testapp/oracle/test/fs1/EBSapps/comn/util/jdk64/jre/bin]> ./java -version

java version "1.7.0"

Java(TM) SE Runtime Environment (build pap6470sr9fp10-20150708_01(SR9 FP10))

IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 20150701_255667 (JIT enabled, AOT enabled)

J9VM - R26_Java726_SR9_20150701_0050_B255667

JIT - tr.r11_20150626_95120.01

GC - R26_Java726_SR9_20150701_0050_B255667_CMPRSS

J9CL - 20150701_255667)

JCL - 20150628_01 based on Oracle jdk7u85-b15

And now I know why:

autoconfig does not populate the fnd_env_context table, so the majority of the parameters there refer to fs2.

So that's why we have this problem.

I tried to update the context file version to a higher version. The same issue..

So, I guess to fix our issue we need to clean fnd_env_context somehow?

How can we recreate the autoconfig files and re-load fnd_env_context?

Any hints ? :)

Thanks!

We said:

INFO: The FND_ENV_CONTEXT table stores the environment variable names and values for each of the concurrent processes.

Action plan:

1) Run the "Purge Concurrent Requests and/or Manager Data" concurrent program with the necessary parameters to clear those records from fnd_env_context.
2) Stop the concurrent processing.
3) Run the Concurrent Manager Recovery Troubleshooting Wizard with the internal manager down, as detailed in the following document:
Concurrent Processing - Concurrent Manager Recovery Troubleshooting Wizard (Note 134007.1)
4) Start the concurrent processing.

Reference:
MAIN REF: Workflow Mailer Won't Start In Cloned Environment (Doc ID 1953629.1) -- this is for 12.1, but it seems to be applicable for 12.2 as well.
Concurrent Processing - Purge Concurrent Request and/or Manager Data Program (FNDCPPUR) (Doc ID 104282.1)

Be careful while running this program. Don't break any business rules. (E.g., if there is a need to keep the concurrent request log and out files for 30 days, then you will need the approval of the business side for this...)

You can backup the following tables before running the conc program (a backup sketch follows the list):

FND_CONCURRENT_REQUESTS
This table contains a complete history of all concurrent requests. It should ideally be kept at around 4-5k records and should be regularly purged as part of normal 'housekeeping' by running the standard FNDCPPUR program. This is a part of basic system administration, and it is recommended that, for the average instance, it be run every 30 days or so. Allowing this table (and others) to grow in size can affect performance.

FND_RUN_REQUESTS
When a user submits a report set, this table stores information about the reports in the report set and the parameter values for each report.

FND_CONC_REQUEST_ARGUMENTS
This table records arguments passed by the concurrent manager to each program it starts running.

FND_DUAL
This table records when requests do not update database tables.

FND_CONCURRENT_PROCESSES
This table records information about Oracle Applications and operating system processes.

FND_CONC_STAT_LIST
This table collects runtime performance statistics for concurrent requests.

FND_CONC_STAT_SUMMARY
This table contains the concurrent program performance statistics generated by the purge.

FND_ENV_CONTEXT
This table stores the environment variable names and values for each concurrent process.
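
A minimal backup sketch before the purge (plain CTAS copies; the _BAK names are just an illustration):

     SQL> create table applsys.fnd_env_context_bak as
            select * from applsys.fnd_env_context;

     SQL> create table applsys.fnd_concurrent_requests_bak as
            select * from applsys.fnd_concurrent_requests;

          -- ...and similarly for the other FND tables listed above.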

You said:

Last time I managed to fix it - so the conc mgrs started on the correct file system - by updating s_contextserial to a value higher than the one in the database and re-running autoconfig.
select serial_number from FND_OAM_CONTEXT_FILES where status in ('S','F') and name not in
('METADATA','TEMPLATE') and ctx_type='A';

This time I am having this issue again after cutover.
This time, updating the context serial to a value higher than the one in the db is not helping.
autoconfig completes without any errors..
But the serial_number is 65, even though I set it to 70 in the context file.
It's not changing after autoconfig.
Any hints? I'm thinking of cleaning up fnd_oam_context_files?

We said:

I think there are some things that affect some environment variables, left there in the FND tables.. So, a wrong value in the FND tables.
Yes.. FND_ENV_CONTEXT may be the cause.

After taking your backup, you can try truncating FND_OAM_CONTEXT_FILES and running autoconfig afterwards (first on the db tier, then on the apps tier).. But what I recommend is the following;

INFO: The FND_ENV_CONTEXT table stores the environment variable names and values for each of the concurrent processes.

Action plan:
1) Run the "Purge Concurrent Requests and/or Manager Data" concurrent program with the necessary parameters to clear those records from fnd_env_context.
2) Stop the concurrent processing
3) Run the Concurrent Manager Recovery Troubleshooting Wizard with the internal manager down as detailed in the following document:
Concurrent Processing - Concurrent Manager Recovery Troubleshooting Wizard (Note 134007.1)
4) Start the concurrent processing.

Reference:
MAIN REF: Workflow Mailer Won't Start In Cloned Environment (Doc ID 1953629.1) -- this is for 12.1, but it seems to be applicable for 12.2 as well.
Concurrent Processing - Purge Concurrent Request and/or Manager Data Program (FNDCPPUR) (Doc ID 104282.1)

Solution / You Said:

Update and FYI:
We fixed the issue using the following action plan;

shutdown apps tier
truncate the applsys.fnd_env_context
run autoconfig on db+appstier
restart
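
For reference, here is roughly what that plan looks like as commands (a sketch only; the script names are the standard EBS ones, but paths, passwords and context names depend on your environment):

     $ adstpall.sh apps/<apps_pwd>        # stop the apps tier services (run fs)

     $ sqlplus applsys/<pwd>
       SQL> truncate table applsys.fnd_env_context;

     # autoconfig on the db tier first, then on the apps tier (run fs):
     $ $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
     $ adautocfg.sh

     $ adstrtal.sh apps/<apps_pwd>        # restart the apps tier services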

Actually, it comes up now and then; the same issue appears sometimes after a lot of patching activities - note that -> not every time..
So, it seems like some bug in EBS confuses this GSM thing, and it just starts on the wrong filesystem.
Take care!

EBS 12.2 -- due to VIP values in the apps_jdbc_connect_descriptor - failing fs_clone

Educability.. We all need it, and here in this blog and in my forum, I think we are taking steps in this regard. Educability is defined by 3 basic things.

1) Learning from experience (PAC - Probably Approximately Correct learning)
2) Chaining together what we've learned in different contexts. 
3) Taking from someone else, their experience.

It is actually the combination of those 3.. We humans have this unique capability. So why not use it in every way that we can? :)

So you encounter some problems, you ask them, I try to shed some light on them and offer some solutions for them, you try the suggested actions, or visit the guided documentation, you send the feedback and update me.. In this way, we all learn, we all educate ourselves.

Anyways, enough great words of wisdom for the intro here :) I can almost hear you say: give us the short take. So I'll come to the point.

Erman Arslan's Oracle Blog / Forum has 350 subscribers asking questions regularly, and here I'm sharing the interesting ones.

You asked:

After cloning an EBS 12.2 instance, we have seen that we had SCAN-related values in the apps_jdbc_connect_descriptor. But!, in the cloned environment we had VIP values in the apps_jdbc_connect_descriptor. Apparently fs_clone is using the VIP values and failing in the EBS_SYSTEM validation. We can connect from the apps tier using sqlplus EBS_SYSTEM/<pwd> without any issue.

Any thoughts on what is going on?

We have tried to set the <DB>.dbc and adgendbc.sh, as well as the context file, to the desired values, but! as soon as we run autoconfig, the related values in all the related places in the context file get reset to the VIP-based values.

So where is autoconfig picking these up from?

We said:

Interesting, actually it should be the other way around. (See the following blog post, where we had a different behavior -> https://ermanarslan.blogspot.com/2021/01/ebs-122-configuring-application.html -- reference)

You may check the relevant template file and see if there is some causal connection there. (Your AD & autoconfig version may also be the cause.) Check the template and try to predict where autoconfig takes that value from.. (db parameters maybe.. Local and Remote listener parameters, I mean..) -- informative

*s_jdbc_connect_descriptor_generation is also important.

If we set it or leave it as is (I mean, if it is set to TRUE), then the apps tier autoconfig will overwrite the jdbc url with the SCAN-based & load balanced one. (But you are saying the exact opposite.. still, check that value.) -- solution offer 1

Autoconfig may also take that value from the old context files stored in the FND_OAM_CONTEXT_FILES table.. Check that as well.. (See the inspection sketch below.)
In the worst case, you may clean up the network topology and re-populate it by running autoconfig (db + apps tier). -- solution offer 2
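
For that check, a quick look at the stored context files can help. A sketch (same columns as in the query shared earlier; last_update_date is the standard WHO column):

     SQL> select name, serial_number, status, last_update_date
            from fnd_oam_context_files
           where name not in ('METADATA', 'TEMPLATE')
             and ctx_type = 'A'
           order by last_update_date;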

Solution / You said:

After setting s_jdbc_connect_descriptor_generation to TRUE, it is working as desired.
Thanks for the reply.

Tuesday, April 23, 2024

RDBMS -- a Case Study: Suggestions for Performance Improvements, quick actions, utilizing the features for performance and the approach

The following is based on a real-life performance problem, which was encountered on a very critical Oracle Database 19C production system.

While the headings given below summarize how these performance problems should be approached from scratch, the items under the headings provide case-specific details: determining the current status of the environment, the motivations that set the direction of the analysis, quick actions, and suggestions for the future (Oracle-specific).. Of course, the full details are included in the analysis report. With this approach, we solve problems while ensuring customer satisfaction..

General Findings:

High parse

Lots of I/O -- due to several reporting activities

Concurrency waits

Slow SQLs due to visiting very big tables.

High (almost at limits) OS load and CPU usage

Several active sessions and a huge number of queries (especially OLTP-type)

Remote database queries and related network waits

Findings:

High number of sessions

High CPU usage

High Hard Parse

High Parse (Soft)

High number of SQL Executions

Low number but long CPU Bound SQL Execution

SQLs with high buffer gets.. (there are both high-execution and low-execution ones)

Low Parse CPU to Parse Elapsed ratio

Medium-High DBlink wait

Medium-High Indexed Reading Wait

Fast I/O (ALL Flash I/O Subsystem)

High I/O count

Comments:

SQL version count should be reduced. -> for reducing mutex contention (see the query sketch after this list)

Hard parse should be reduced. (Use bind variables to avoid hard parses; dynamic SQL should be avoided; PL/SQL should be used -- it shares cursors automatically.)

Soft Parse should be reduced. (For Soft Parse, techniques such as simplifying queries, using indexes effectively, and reducing unnecessary joins can significantly improve performance.)

SGA should be increased. (780GB specified by the Advisor. We have already recommended this.)

SQLs that perform high GET should be tuned so that they work with less data.
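
As a starting point for the version count and high-get comments above, a sketch for spotting the offenders (the thresholds are arbitrary; adjust them to your workload):

     SQL> select sql_id, version_count, executions, buffer_gets
            from v$sqlarea
           where version_count > 50 or buffer_gets > 10000000
           order by version_count desc;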

Motivation for further analysis:

Random "cursor: pin S wait on X" waits

High CPU usage and load

Random ORA-00600 errors

Low RU level

Possible Bugs

Lack of best practice development

Actions Taken:

RU upgrade

LOAD analysis

Platform Migration

Performance Analysis & Workshops

Error Diagnostics & Action Plans

Bug Fix- manual actions

Hang analysis & Support

Resource Management implementation tests

Recommendations for Improvements:

• Active Data Guard – Transferring the load from the source system to the secondary system with SQL/Report offloading (general benefit, I/O + Memory, less work to be done..)

• Continuous read/write data transfer to a secondary environment and SQL offloading with a tool such as GoldenGate (alternative to the above item)

• Abandoning the use of DB links. (Not very related to resources, but still noticeable and sometimes keeps queries waiting.)

• Partitioning – Sessions process with less data by partitioning operations based on queries. (This will reduce concurrency waits and reduce CPU usage.)

• In-Memory – Faster querying (will reduce concurrency waits and reduce CPU usage.)

• ILM (Information Lifecycle Management) implementation -- Data Archiving – Reducing and archiving data in the source system. (Less data, faster queries, less I/O and CPU usage due to concurrency)

• Oracle RAC – (Horizontal growth and node-based load distribution + many advantages.. -- Application side to be RAC Aware.)

• Reducing the number of parses – Mainly the work of the software developer. Reducing the number of parses in the current system, minimizing hard parses, and reaching the point of "Parse Once, Execute Many". (More gain from CPU cycles)

• Testing of 23C / multi-tenant is mandatory – in terms of performance.. (it has many innovations and bug fixes. It may be beneficial indirectly.)

• What can be done to reduce the concurrency in memory.. (e.g., if there are cache buffers chains waits while reading the same part of an index, changing the index type.. a reverse key index maybe.. Scenario-based actions) - low gain and a challenging job.

• Hardware upgrade (last resort) – Compatible with licenses, but capacity on-demand. It will grow in place when necessary. Supported by virtualized structures that accept Oracle's capacity on-demand. Access to disks will still be physical.

• Cloud use can be considered (in all relevant contexts). It can be much more flexible and less costly.

• A fast cache in front of Oracle Database. (Fast reading and less work on DB. Maybe Redis. It needs to be supported with research.)

• Oracle Database Result Cache.. (It may seem contradictory depending on the situation, but it can be checked.)
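
On the Result Cache item: as a quick illustration, a repeated reporting query can be served from memory with a hint. A sketch (the table and columns are hypothetical):

     SQL> select /*+ RESULT_CACHE */ region, sum(amount)
            from sales_fact
           group by region;

The first execution populates the cache; subsequent identical executions are answered from it until the dependent data changes.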

EBS -- About Oracle Linux 8 upgrades

This is a hot topic for EBS customers, and I wanted to clarify something about this upgrade process..

That is, the in-place OS upgrade from Linux 7 to Linux 8 is not certified for Oracle EBS.

So, if we want to switch to Oracle Linux 8, Oracle Linux 8 must be installed on a new machine, and then all the requirements in the MOS Doc ID 1330701.1 document must be met. Afterwards, the EBS system needs to be moved to the new machine/OS using the cloning method.

This was already documented for Oracle Linux 7 upgrades (as shared below). But I also wanted to clarify this for the Oracle Linux 8 upgrade(s).. Basically the same things apply.

Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.2) for Linux x86-64 (Doc ID 1330701.1)

RDBMS -- 19C's Premier Support & 23C consideration

For 19C customers, the latest RU is currently 19.22. However, 19C's premier support will end on April 30. Then this will happen -> "Premier Support (PS) ends April 30, 2024; two years of waived Extended Support (ES) fees will be in effect from May 1, 2024 until April 30, 2026."

Error correction continues until 2026. In any case, we should be on 19C and be ready for the future 23C upgrades. (Considering that 23C is the latest long term support release and will be available for on-premises starting 1H CY2024.)

Additional information for 19C:

Premier Support (PS) ends: April 30, 2024

Waived Extended Support (ES): May 1, 2024 - April 30, 2026 (No fees required during this period)

Paid Extended Support (ES): May 1, 2026 - April 30, 2027 (Fees apply)

Patching:

You will have access to error correction and patching until April 30, 2027, but there's a distinction:

With paid Extended Support: You will have access to all available patches.

Without paid Extended Support (after April 30, 2026): You will only have access to patches released before May 1, 2026. This includes Monthly Recommended Patches (MRPs) if you are using a compatible platform.

19c Release Update Revisions (RURs):

These are no longer relevant for Oracle Database 19c versions 19.17.0 and above. You should focus on Monthly Recommended Patches (MRPs) instead, but be aware of their platform limitations.
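
To see which RU (or MRP) you are actually on, the patch inventory registered in the database can be queried. A sketch (dba_registry_sqlpatch is the standard view; the exact column set can vary slightly between versions):

     SQL> select patch_id, patch_type, action, status, description
            from dba_registry_sqlpatch
           order by action_time;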


Release Schedule of Current Database Releases (Doc ID 742060.1)

Monday, April 22, 2024

Erman Arslan's Oracle Forum / FEB 10 - APR 22, 2024 - "Q & A Series"

Empower yourself with knowledge! Erman Arslan's Oracle Blog offers a vibrant forum where you can tap into a wealth of experience. Get expert guidance and connect with a supportive community. Click the banner proclaiming "Erman Arslan's Oracle Forum is available now!" and dive into the conversation with your burning questions -- or just use the direct link: http://erman-arslan-s-oracle-forum.124.s1.nabble.com. A testament to its vibrancy: over 2,000 questions have been posed, sparking nearly 10,000 insightful comments. Explore the latest discussions and see what valuable knowledge awaits! Supporting the Oracle users around the world. Let's check what we have in the last few weeks..




Requests stuck in Pending Standby by VinodN

Databases to use Oracle forms and reports by kvmishra

E-business APEX HTTPS by big

Auto Generating -bash process by kvmishra

Auto Invoice Master / Auto Invoice import performance issues by raiq1

Package installation by kvmishra

Can you clone from the patch filesystem in EBS 12.2? by Srini

Oracle Bi by Mohammed Hamed

Moodle on Oracle Database by kvmishra

stty: standard input: Inappropriate ioctl for device by kvmishra

DBCS - Compute Instance Creation by satish

ERP cloud - Paas - oaf by satish

Question on VPN -OCI by satish

Oracle Base Database Service - OCI by satish

Provisiong db systems - paas oci by satish

concurrent program cannot create output by big

Confusion between application server 10.1.2 and managed servers by VinodN

Regarding audit in EBS 12.2 by VinodN

Oracle Base Database Service - Bastion - OCI by satish

xtts filename by Roshan

migration of oracle 12c from windows to linux by ayobi.shafiq

What should an EBS DBA Learn to be future ready by VinodN

EBS Upgrade 12.2.12 character set issue by raiq1

Encrypt Table Data by kvmishra

Dev instance slow by VinodN

xtts by Roshan

"Privileged" Users in EBS by VinodN

date:time of execution by big

convert log file parameter for clonning by big

Migration oracle database to OCI by satish

perl adcfgclone.pl appsTier fails by big

Oracle fusion - SAAS by satish

Migration to OCI - Oracle Database Backup Cloud Service - Object Storage by satish

Need advice on Configuration - R12.2 by satish

Bursting Emails R12.2 by satish

Wednesday, March 13, 2024

Interesting Issue on Oracle Forms 12C due to Digicert Code Signing Certificate-related Revocation Checks

Forms 12C customers should know this. The jar files / the standard code delivered by Oracle are signed with Digicert certificates, and these code signing certificates are used to ensure security on the client side. I won't go into the details of how code signing works; I suppose most of my followers already know this.

Here, I want to share something interesting.. This is based on a true story, on an issue escalated to me.

The environment was a clustered Forms 12C (12.2.1.3). It was highly critical and very crowded. (3000 active users)

The issue appeared suddenly, and clients started to complain about the slowness of the client-side forms. Actually, the Forms Java client startup was taking time, and the Java console gave us the clues.

We had read timeouts recorded in the Java console output for both the OCSP and CRL3 URLs of Digicert..

Following is an example of OCSP-related output taken from the Java console:

network: Connecting socket://ocsp.digicert.com:80 with proxy=DIRECT
security: Revocation Status Unknown
com.sun.deploy.security.RevocationChecker$StatusUnknownException: java.net.SocketTimeoutException: Read timed out

Clients had internet connection and no issues were found on the firewall, but we still had read timeouts, so we started analyzing.

We had the following 3 workarounds:

1) Use the FSAL mode of running Forms.

2) Modify the "Certificate revocation check" setting to "Publisher's certificate only." Follow Start -> Control Panel -> Java to access the Java Control Panel, then click on the Advanced tab. Under "Perform signed code certificate revocation checks on", select the "Publisher's certificate only" (or the "Do not check") radio button. -- note that.

3) Disable OCSP and CRL based revocation checks on the Java client using the Java Control Panel.

But! our customer didn't accept these.. (due to the difficulty of delivering these configurations to 3000 clients)
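
As a side note, if you ever do need to deliver workaround 2 or 3 to a large client base, the Java Control Panel settings can in principle be distributed through a system-level deployment.properties file (a sketch based on the standard Java deployment properties; the Windows path is an assumption, and it still means touching every client):

     # e.g., %WINDIR%\Sun\Java\Deployment\deployment.properties
     # check revocation against the publisher's certificate only (workaround 2):
     deployment.security.revocation.check=PUBLISHER_ONLY
     # or skip the checks entirely (workaround 3; weigh the security trade-off):
     # deployment.security.revocation.check=NO_CHECK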

Then, we analyzed deeper and found that the issue was caused by Digicert itself.

Digicert was having trouble serving these OCSP and CRL3 services, and due to that, Forms clients were facing these issues. (openssl can be used to test the revocation checks manually.. see the sketch below)
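
A manual OCSP check sketch with openssl (issuer.pem and cert.pem stand for the CA and leaf certificates extracted from the signed jar; the file names are illustrative):

     $ openssl ocsp -issuer issuer.pem -cert cert.pem \
           -url http://ocsp.digicert.com -resp_text -noverify

A timely "good" response means the OCSP service is healthy; timeouts reproduce exactly what the Forms clients were seeing.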

We also found the following web pages on the Digicert website. These let us see the current status of Digicert services and follow the incidents and planned maintenance tasks (noted for future use):

https://status.digicert.com

https://status.digicert.com/history

Anyways, the problem disappeared when Digicert's CRL3 and OCSP services started working again.

But this showed us Oracle's dependency on Digicert (Oracle Forms in this case), and we had no possible solution for getting rid of this dependency.

That dependency cannot be removed, as the Forms jar files are signed using the Digicert certificates. Again, a workaround comes into the picture.

We thought we could re-sign the jar files with --let's say-- a Verisign certificate and get rid of the dependency on Digicert, but it is not supported.. The default Forms jar files should not be signed with any custom certificates.

Anyways.. It was an interesting issue, but we learned something.. You can't easily find client-server architecture in enterprise environments nowadays, but if you have one, this might help you one day.

Again, these tiny little details give you a lot of information.

Of course, information is contextual and it is never absolute! But this might save your day one day :) -- especially if you are in the same context --> using Oracle Forms..

By the way, clients on Oracle Forms 12.2.1.3 should consider the 12.2.1.4 upgrade.

Actually, it was expected that customers would move away from 12.2.1.3.0 in 2020, one year after 12.2.1.4.0 was released.

Upgrading to 12.2.1.4.0 is not so hard, and you can even do it with zero downtime. You can also use the same domain files. Also keep in mind that 12.2.1.4.0 will be out of support in September this year, so I encourage you to upgrade to 12.2.1.19.0 and receive the full support you deserve for the same money you are paying for an obsolete version.

Tuesday, March 12, 2024

Oracle SuperCluster M8 - End Of Life (EOL) & End Of Support (EOS)

I want to remind you of something important regarding the Engineered Systems. That is, as you may already know, the Oracle SuperCluster M8 reached its End of Life (EOL) on June 30th, 2019. End of Life (EOL) signifies that Oracle no longer sells the hardware for the SuperCluster M8.

In addition to that, the End of Support (EOS) status is also important.. EOS indicates that Oracle no longer provides guaranteed support for the system, including hardware replacement, bug fixes, and security patches. Well, the last ship date of the SuperCluster M8 was June 2019. So, premier support will end in June 2024.


After June 2024, you may go with extended support. But! It won't be like premier support and may have cons (delays, lack of premier support-like focus on the Oracle side, and stuff like that..).

For example, when you need to replace a faulty disk in an environment with extended support, you may wait longer than in an environment with premier support. (The disk type you need may not be present in your region, and getting it from another region may take time.)

Delays, guarantees... You may have disadvantages in these.

Ref: List of Oracle Supported Hardware with Last Ship Dates Announced (Doc ID 1450710.1)

Note that, "after five years from the last ship date, replacement parts may not be available and/or the response times for sending replacement parts may be delayed."

Ref: https://www.oracle.com/us/hardware-systems-support-policies-069182.pdf

Anyways.. While you can technically continue using the Supercluster M8 after June 2024, the effective end of support falls on that date. Here are some potential risks of using an Oracle Supercluster M8 after EOS:

  • Increased security vulnerabilities: Without security patches, your system becomes more susceptible to attacks.
  • Limited hardware support: If a hardware component fails, you might have difficulty finding replacements or repairs.
  • Compatibility issues: Newer software versions may not be compatible with the EOS system, hindering future upgrades.
  • No guaranteed bug fixes: You'll be on your own for troubleshooting any software bugs encountered.

Considering these risks, it's recommended to migrate from the Oracle SuperCluster M8 to a supported system (like EXADATA, PCA, EXADATA + PCA, ODA + EXADATA, or Oracle/Sun traditional servers + ZFS.. and KVM virtualization.. The choice depends on the context actually.. These are some on-prem options...) before June 2024.

Thursday, February 22, 2024

RDBMS 19C -- Tablespace and File I/O Stats missing in AWR reports / real fix -> dbms_workload_repository.modify_table_settings , flush_level

You may be hitting this bug even if your Oracle database version is 19C (even if it is 19.14..)

You have a workaround (setting statistics_level to ALL), but it is not recommended.. (due to the fact that you may end up collecting too much data for the AWR snapshots). Note that your AWR snapshot-related data collection tasks may also take too long to complete if you set statistics_level to ALL.. especially if you have a large buffer cache...

At first glance, this problem seems to be addressed by the patch 22048821 TABLESPACE IO STATISTICS MISSING FROM AWR REPORT (Patch).. But it is actually not so :)

Also, this seems to be already fixed (the fix is already delivered) in versions like "19.8.0.0.200714 (Jul 2020) Database Release Update (DB RU)", but practically it is not so :)

Note that this problem is also seen in 19.14, but when I checked my 19.20 DB, the issue was not there.. It seems the enhancement request that was opened for 19.14 received a good response...

Related ER -> Bug 35122334 : REQUEST TO INCLUDE TYPICAL AS A DEFAULT VALUE TO BYPASS MANUAL EXECUTION OF DBMS_WORKLOAD_REPOSITORY.MODIFY_TABLE_SETTINGS AFTER APPLYING FIX 25416731

Related MOS Doc. -> Missing Tbspace I/O Stats AWR Report need to change the default dbms_workload_repository.modify_table_settings for WRH$_FILESTATXS WRH$_DATAFILE Tempfile WRH$_TEMPFILE and WRH$_TEMPSTATXS (Doc ID 2954137.1)

Probably, after 19.14, the issue is fixed. That is, after Release 19.14, Oracle included the fix for bypassing the need for manual execution of DBMS_WORKLOAD_REPOSITORY.MODIFY_TABLE_SETTINGS.

Anyways, for Oracle 19.x (<=19.14), the fix for that bug and the fix for the related Enhancement Request 27871293 don't really fix this problem.

To fix this and get the tablespace and file I/O stats data back in the AWR reports on 19.x versions, we should enable flushing for the WRH$_FILESTATXS, WRH$_DATAFILE, WRH$_TEMPFILE and WRH$_TEMPSTATXS tables, taking the statistics level of the Tempfile Group into consideration as well.

Well.. We have to run the following in order to do that;

     $ sqlplus / as sysdba

        exec dbms_workload_repository.modify_table_settings(table_name  => 'WRH$_FILESTATXS', flush_level => 'TYPICAL');

        exec dbms_workload_repository.modify_table_settings(table_name  => 'WRH$_DATAFILE', flush_level => 'TYPICAL');

        exec dbms_workload_repository.modify_table_settings(table_name  => 'Tempfile Group', flush_level => 'TYPICAL');

        exec dbms_workload_repository.modify_table_settings(table_name  => 'WRH$_TEMPFILE', flush_level => 'TYPICAL');

        exec dbms_workload_repository.modify_table_settings(table_name  => 'WRH$_TEMPSTATXS', flush_level => 'TYPICAL');

After these actions, we will have those missing stats back.. but! as you may imagine -> we will have those stats in the new snapshots and in the new AWR reports...
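
To verify, you can take a manual snapshot and check that rows are being flushed again. A minimal sketch (dba_hist_filestatxs backs the File I/O section of the report):

     SQL> exec dbms_workload_repository.create_snapshot;

     SQL> select count(*)
            from dba_hist_filestatxs
           where snap_id = (select max(snap_id) from dba_hist_snapshot);

A non-zero count for the new snapshot means the next AWR report covering it will have the Tablespace and File I/O stats sections populated.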

This is a very important thing and should be taken care of.. You may not be aware of this... But without having these stats in place, believe me, one day your AWR reports will leave you alone in dealing with a real performance problem.

Having them is important for being able to check the historical I/O performance of the Oracle Database, and that's why I shared this with you.