Author Archive

Script to generate a CSV for Compute Optimizer data from a JSON file

Below is a script that generates a CSV file from JSON output. I wrote it to collect AWS Compute Optimizer data so that each EC2 instance gets one line of data in the CSV file. The CSV file is later uploaded to Google Sheets for further analysis.

The Python script below is invoked from within the shell script that follows it.

Python Script

import sys
import json
import pandas as pd
## Env is set for proper console display
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
## Env Setting - Ends
jsonfile = str(sys.argv[1])
csvfile = str(sys.argv[2])
with open(jsonfile) as file:
    data = json.load(file)

df = pd.DataFrame(data['instanceRecommendations'])
for i, item in enumerate(df['utilizationMetrics']):
    # Add a new column per utilization metric and populate its value
    for k in range(len(df['utilizationMetrics'][i])):
        df.at[i, 'utilizationMetrics_name_{}'.format(k)] = dict(df['utilizationMetrics'][i][k])['name']
        df.at[i, 'utilizationMetrics_statistic_{}'.format(k)] = dict(df['utilizationMetrics'][i][k])['statistic']
        df.at[i, 'utilizationMetrics_value_{}'.format(k)] = dict(df['utilizationMetrics'][i][k])['value']

    # Flatten each recommendation option along with its projected utilization metrics
    for m in range(len(df['recommendationOptions'][i])):
        df.at[i, 'recommendationOptions_instanceType_{}'.format(m)] = dict(df['recommendationOptions'][i][m])['instanceType']
        df.at[i, 'recommendationOptions_performanceRisk_{}'.format(m)] = dict(df['recommendationOptions'][i][m])['performanceRisk']
        df.at[i, 'recommendationOptions_rank_{}'.format(m)] = dict(df['recommendationOptions'][i][m])['rank']
        for j in range(len(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'])):
            df.at[i, 'reco_projectedUtilizationMetrics_{}_name_{}'.format(m, j)] = dict(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'][j])['name']
            df.at[i, 'reco_projectedUtilizationMetrics_{}_statistic_{}'.format(m, j)] = dict(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'][j])['statistic']
            df.at[i, 'reco_projectedUtilizationMetrics_{}_value_{}'.format(m, j)] = dict(dict(df['recommendationOptions'][i][m])['projectedUtilizationMetrics'][j])['value']

# Drop the original nested columns now that they are flattened
df = df.drop(['utilizationMetrics', 'recommendationOptions'], axis=1)
df.to_csv(csvfile, header=True, index=False)
print("CSV file generated at - {}".format(csvfile))
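To see what the flattening loop does, here is a minimal self-contained sketch of the same technique on sample data (shaped like the Compute Optimizer output; the instance names and values are illustrative only):

```python
import pandas as pd

# Sample data shaped like the Compute Optimizer JSON output
data = {'instanceRecommendations': [{
    'instanceName': 'testserver01',
    'currentInstanceType': 'c4.xlarge',
    'utilizationMetrics': [{'name': 'CPU', 'statistic': 'MAXIMUM', 'value': 6.36}],
    'recommendationOptions': [
        {'instanceType': 't3.large', 'performanceRisk': 3, 'rank': 1},
        {'instanceType': 'm5.large', 'performanceRisk': 1, 'rank': 2},
    ],
}]}

df = pd.DataFrame(data['instanceRecommendations'])
for i in range(len(df)):
    # Each nested list entry becomes its own set of wide columns on row i
    for k, metric in enumerate(df['utilizationMetrics'][i]):
        df.at[i, 'utilizationMetrics_name_{}'.format(k)] = metric['name']
        df.at[i, 'utilizationMetrics_value_{}'.format(k)] = metric['value']
    for m, opt in enumerate(df['recommendationOptions'][i]):
        df.at[i, 'recommendationOptions_instanceType_{}'.format(m)] = opt['instanceType']
        df.at[i, 'recommendationOptions_rank_{}'.format(m)] = opt['rank']

# Drop the nested columns; what remains is one wide row per instance
df = df.drop(['utilizationMetrics', 'recommendationOptions'], axis=1)
print(df.at[0, 'recommendationOptions_instanceType_1'])  # m5.large
```

The `df.at[row, new_column] = value` pattern enlarges the DataFrame column by column, which is what lets every EC2 instance end up as a single CSV row.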

Shell Script (which generates the JSON file and passes it to the Python script to produce the CSV file)

if [[ $# -lt 1 ]]; then
  echo "Usage: ${0} <AccountID> [<Region>]"
  exit 1
fi
AccountID=${1}
NOW=$(date +"%m%d%Y%H%M")
jsonfile=<path_to_output>/co_data_${NOW}.json  ## adjust output paths as needed
csvfile=<path_to_output>/co_data_${NOW}.csv
## Reset Env variables
reset_env () {
        unset AWS_SESSION_TOKEN
        unset AWS_DEFAULT_REGION
        unset AWS_ACCESS_KEY_ID
} #end of reset_env
## Set Env function
assume_role () {
source </path_to_source_env_file/filename> ${AccountID}
} # Function assume_role ends
assume_role ${AccountID}
## 2nd argument is the region, if different from the CLI server's default
if [[ ! -z "$2" ]]; then
  export AWS_DEFAULT_REGION=${2}
fi
## Generate json file
aws compute-optimizer get-ec2-instance-recommendations | jq -r . >${jsonfile}
## Pass the json file to python script along with the CSV File for the output
python ${script_top}/ ${jsonfile} ${csvfile}
echo "CSV File generated... - ${csvfile}"
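For reference, the same fetch could also be done without the CLI through boto3's `compute-optimizer` client (`boto3.client('compute-optimizer').get_ec2_instance_recommendations()`). Below is a small sketch that summarizes the findings from such a response; the response is stubbed with sample data here so it runs without AWS credentials:

```python
import json
from collections import Counter

def summarize_findings(response):
    """Tally Compute Optimizer findings (e.g. OVER_PROVISIONED) across instances."""
    return Counter(rec['finding'] for rec in response.get('instanceRecommendations', []))

# Stubbed response, shaped like the JSON file format shown below;
# in a real run this dict would come from the AWS CLI or boto3 call.
response = json.loads('''{
  "instanceRecommendations": [
    {"instanceName": "testserver01", "finding": "OVER_PROVISIONED"},
    {"instanceName": "TestServer02", "finding": "OPTIMIZED"}
  ],
  "errors": []
}''')
print(dict(summarize_findings(response)))  # {'OVER_PROVISIONED': 1, 'OPTIMIZED': 1}
```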

JSON file format

{
    "instanceRecommendations": [
        {
            "instanceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-04a67rqw6c029b82f",
            "accountId": "123404238928",
            "instanceName": "testserver01",
            "currentInstanceType": "c4.xlarge",
            "finding": "OVER_PROVISIONED",
            "utilizationMetrics": [
                { "name": "CPU", "statistic": "MAXIMUM", "value": 6.3559322033898304 }
            ],
            "lookBackPeriodInDays": 14,
            "recommendationOptions": [
                {
                    "instanceType": "t3.large",
                    "projectedUtilizationMetrics": [
                        { "name": "CPU", "statistic": "MAXIMUM", "value": 12.711864406779661 }
                    ],
                    "performanceRisk": 3,
                    "rank": 1
                },
                {
                    "instanceType": "m5.large",
                    "projectedUtilizationMetrics": [
                        { "name": "CPU", "statistic": "MAXIMUM", "value": 12.711864406779661 }
                    ],
                    "performanceRisk": 1,
                    "rank": 2
                },
                {
                    "instanceType": "m4.large",
                    "projectedUtilizationMetrics": [
                        { "name": "CPU", "statistic": "MAXIMUM", "value": 15.645371577574968 }
                    ],
                    "performanceRisk": 1,
                    "rank": 3
                }
            ],
            "recommendationSources": [
                {
                    "recommendationSourceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-04a67rqw6c029b82f",
                    "recommendationSourceType": "Ec2Instance"
                }
            ],
            "lastRefreshTimestamp": 1583986171.637
        },
        {
            "instanceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-0af6a6b96e2690002",
            "accountId": "123404238928",
            "instanceName": "TestServer02",
            "currentInstanceType": "t2.micro",
            "finding": "OPTIMIZED",
            "utilizationMetrics": [
                { "name": "CPU", "statistic": "MAXIMUM", "value": 96.27118644067791 }
            ],
            "lookBackPeriodInDays": 14,
            "recommendationOptions": [
                {
                    "instanceType": "t3.micro",
                    "projectedUtilizationMetrics": [
                        { "name": "CPU", "statistic": "MAXIMUM", "value": 39.1101694915254 }
                    ],
                    "performanceRisk": 1,
                    "rank": 1
                },
                {
                    "instanceType": "t2.micro",
                    "projectedUtilizationMetrics": [
                        { "name": "CPU", "statistic": "MAXIMUM", "value": 96.27118644067791 }
                    ],
                    "performanceRisk": 1,
                    "rank": 2
                }
            ],
            "recommendationSources": [
                {
                    "recommendationSourceArn": "arn:aws:ec2:eu-east-1:123404238928:instance/i-0af6a6b96e2690002",
                    "recommendationSourceType": "Ec2Instance"
                }
            ],
            "lastRefreshTimestamp": 1583986172.297
        }
    ],
    "errors": []
}

CSV file Output

Enjoy reading !!!
Anand M

Categories: AWS/Boto3/Python

Error – gpg: cancelled by user/gpg: Key generation canceled.

While generating a GPG key, the prompt screen would automatically disappear and control would immediately return with the error below:

gpg: cancelled by user
gpg: Key generation canceled.

-bash-4.2$ gpg --gen-key
gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: svc_WellsFargo
Email address:
You selected this USER-ID:
    "svc_WellsFargo <>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

gpg: cancelled by user
gpg: Key generation canceled.

Solution Applied: This bugged me for quite a while, and I finally found the solution online (putting it here for everyone's sake).
As the root user, run the below command:

$ chmod o+rw $(tty)
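The command works because gpg, running inside the su'd session, cannot open the terminal device still owned by the original login user; `o+rw` grants it access. A minimal Python sketch of the same permission change (demonstrated on a temporary file, not a real tty):

```python
import os
import stat
import tempfile

def grant_other_rw(path):
    """Add read+write bits for 'other' users, equivalent to `chmod o+rw <path>`."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, mode | stat.S_IROTH | stat.S_IWOTH)

# Demonstration on a temp file standing in for $(tty)
with tempfile.NamedTemporaryFile() as f:
    os.chmod(f.name, 0o600)        # typical tty-like owner-only mode
    grant_other_rw(f.name)
    final = stat.S_IMODE(os.stat(f.name).st_mode)
    print(oct(final))              # 0o606
```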

Happy reading !!!
Anand M

Categories: Unix/Linux

EBS- SSO Integration with Oracle Identity Cloud Service (IDCS)

Recently I got an opportunity to do a POC implementing SSO for Oracle EBS (12.2.5) using Oracle IDCS. It is fairly simple and far less intrusive as far as changes within EBS are concerned.
One primary component of this solution is the EBS Asserter, which needs to be deployed and configured in the DMZ (the security policy does not allow any core EBS component to be exposed in the DMZ).

This is a fully integrated solution with the in-house Active Directory, and no critical data (user passwords) is exposed in the cloud. The POC was completely successful. Below is the data flow between the various components of EBS and Oracle IDCS.

Happy reading !!!
Anand M

Categories: Oracle Apps, SSO

Collect Cloudwatch metrics (including custom one) and upload to S3 bucket

Recently I wrote a script to pull CloudWatch metrics (including custom ones, such as memory utilization) using the CLI. The objective is to publish the data to S3 and then, using Athena/QuickSight, build a dashboard giving a consolidated view of CPU and memory utilization for all servers across all AWS accounts.

This dashboard helps in making the right decision on resizing instances, thereby optimizing overall cost.
The script is scheduled (using crontab) to run every hour. It has two parts:
1. The main Python script
2. A shell wrapper that internally calls the Python script

How the script is called:

/path/ <Destination_AWS_Account ID> <S3_Bucket_AWS_Account_ID> [<AWS_Region>]

Wrapper script –

if [[ $# -lt 2 ]]; then
  echo "Usage: ${0} <AccountID> <S3_Bucket_AccountID> [<Region>]"
  exit 1
fi
AccountID=${1}
s3_AccountID=${2}
NOW=$(date +"%m%d%Y%H%M")
csvfile=<path_to_output>/cw_data_${NOW}.csv  ## adjust output path as needed
## Reset Env variables
reset_env () {
        unset AWS_SESSION_TOKEN
        unset AWS_DEFAULT_REGION
        unset AWS_ACCESS_KEY_ID
} #end of reset_env
## Set Env function
assume_role () {
source </path_to_source_env_file/filename> ${AccountID}
} # Function assume_role ends
assume_role ${AccountID}
## 3rd argument is the region, if different from the CLI server's default
if [[ ! -z "$3" ]]; then
  export AWS_DEFAULT_REGION=${3}
fi
## Generate CSV file
python <path_of_the_script>/ ${AccountID} ${csvfile}
## Upload generated CSV file to S3
assume_role ${s3_AccountID}
echo ${csvfile}
echo "Uploading data file to S3...."
aws s3 cp ${csvfile} <Bucket_Name>

Main python Script –

# To correct indentation in the code - autopep8
import sys
import boto3
import logging
import pandas as pd
from datetime import datetime
from datetime import timedelta

AccountID = str(sys.argv[1])
csvfile = str(sys.argv[2])
logger = logging.getLogger()
# define the connection
client = boto3.client('ec2')
ec2 = boto3.resource('ec2')
cw = boto3.client('cloudwatch')

# Function to get instance Name
def get_instance_name(fid):
    ec2instance = ec2.Instance(fid)
    instancename = ''
    for tags in ec2instance.tags:
        if tags["Key"] == 'Name':
            instancename = tags["Value"]
    return instancename

# Function to get instance image ID (mandatory for custom memory datapoints)
def get_instance_imageID(fid):
    rsp = client.describe_instances(InstanceIds=[fid])
    for resv in rsp['Reservations']:
        v_ImageID = resv['Instances'][0]['ImageId']
    return v_ImageID

# Function to get instance type (mandatory for custom memory datapoints)
def get_instance_Instype(fid):
    rsp = client.describe_instances(InstanceIds=[fid])
    for resv in rsp['Reservations']:
        v_InstanceType = resv['Instances'][0]['InstanceType']
    return v_InstanceType

# all running EC2 instances.
filters = [{
    'Name': 'instance-state-name',
    'Values': ['running']
}]

# filter the instances
instances = ec2.instances.filter(Filters=filters)

# locate all running instances
RunningInstances = [instance.id for instance in instances]
# print(RunningInstances)
dnow = datetime.utcnow()
cwdatapointnewlist = []

for instance in instances:
    ec2_name = get_instance_name(instance.id)
    imageid = get_instance_imageID(instance.id)
    instancetype = get_instance_Instype(instance.id)
    # CPU utilization from the standard EC2 namespace; the one-hour window and
    # period match the hourly cron schedule
    cw_response = cw.get_metric_statistics(
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': instance.id
            }
        ],
        StartTime=dnow - timedelta(hours=1),
        EndTime=dnow,
        Period=3600,
        Statistics=['Average', 'Minimum', 'Maximum']
    )

    # Custom memory metric published by the CloudWatch agent; the namespace and
    # metric name ('CWAgent'/'mem_used_percent') depend on your agent config
    cw_response_mem = cw.get_metric_statistics(
        Namespace='CWAgent',
        MetricName='mem_used_percent',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': instance.id
            },
            {
                'Name': 'ImageId',
                'Value': imageid
            },
            {
                'Name': 'InstanceType',
                'Value': instancetype
            }
        ],
        StartTime=dnow - timedelta(hours=1),
        EndTime=dnow,
        Period=3600,
        Statistics=['Average', 'Minimum', 'Maximum']
    )

    cwdatapoints = cw_response['Datapoints']
    label_CPU = cw_response['Label']
    for item in cwdatapoints:
        item.update({"Label": label_CPU})

    cwdatapoints_mem = cw_response_mem['Datapoints']
    label_mem = cw_response_mem['Label']
    for item in cwdatapoints_mem:
        item.update({"Label": label_mem})

    # Add memory datapoints to CPUUtilization datapoints
    cwdatapoints.extend(cwdatapoints_mem)

    for cwdatapoint in cwdatapoints:
        timestampStr = cwdatapoint['Timestamp'].strftime(
            "%d-%b-%Y %H:%M:%S.%f")
        cwdatapoint['Timestamp'] = timestampStr
        cwdatapoint.update({'Instance Name': ec2_name})
        cwdatapoint.update({'Instance ID': instance.id})
        cwdatapointnewlist.append(cwdatapoint)

df = pd.DataFrame(cwdatapointnewlist)
df.to_csv(csvfile, header=False, index=False)
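The tail end of the script (timestamp formatting plus the DataFrame dump) can be exercised without AWS access. A minimal sketch with hand-made datapoints shaped like what `get_metric_statistics` returns (the values and instance names are illustrative only):

```python
import io
from datetime import datetime
import pandas as pd

# Hand-made datapoints shaped like CloudWatch's get_metric_statistics output
cwdatapointnewlist = []
for dp in [
    {'Timestamp': datetime(2020, 3, 12, 10, 0), 'Average': 6.3, 'Label': 'CPUUtilization'},
    {'Timestamp': datetime(2020, 3, 12, 11, 0), 'Average': 41.7, 'Label': 'mem_used_percent'},
]:
    # Same transformation as the script: stringify the timestamp and tag the instance
    dp['Timestamp'] = dp['Timestamp'].strftime("%d-%b-%Y %H:%M:%S.%f")
    dp.update({'Instance Name': 'testserver01', 'Instance ID': 'i-04a67rqw6c029b82f'})
    cwdatapointnewlist.append(dp)

df = pd.DataFrame(cwdatapointnewlist)
buf = io.StringIO()
df.to_csv(buf, header=False, index=False)
print(buf.getvalue().splitlines()[0])
# 12-Mar-2020 10:00:00.000000,6.3,CPUUtilization,testserver01,i-04a67rqw6c029b82f
```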

Sample Flat file (CSV format) is as shown below.

Categories: AWS/Boto3/Python

Unable to start WLST: “Problem invoking WLST – Traceback (most recent call last)”

The above error means that the correct CLASSPATH has not been set to start WLST, because the environment was not properly set. The same error can occur for other utilities such as weblogic.Deployer, utils.dbping or even weblogic.Server. I hit it while trying to decrypt the WebLogic admin password.

Exact Error was as below

$ java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Problem invoking WLST - Traceback (most recent call last):
  File "/tmp/", line 3, in <module>
    import os
ImportError: No module named os


The root cause is always the same: the correct environment is not set, and hence the required classes are not loaded.
To fix this, set your environment explicitly:

1. Make sure the right environment is set by running the environment script located under $MW_HOME/wlserver/server/bin.

The script needs to be sourced in order to execute in the context of the running shell and set the shell's actual environment. It has to be executed as either ". ./<script_name>" (notice the extra dot) or "source ./<script_name>", and then the CLASSPATH can be confirmed in the current shell with the "env" command.

Simply executing the script as "./<script_name>" will only display the output on the screen without changing the current shell's environment.

Once the environment is set, I was able to run the java weblogic.WLST successfully

-bash-4.1$ . ./


Your environment has been set.
-bash-4.1$ java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands


Hope this helps.
-Anand M

Categories: WebLogic

How to install Oracle Apex 5.1 and deploy to Apache Tomcat Application Server on linux (RHEL 6.7)

The requirement was to have Apex launched from Tomcat rather than using the standalone Apex installation.

Following components are needed for this setup

1.Installation of Oracle Apex 5.1

Download it from
Mine is 5.1.4 English Language ONLY

2.Installation and configuration of Apache Tomcat 9.0.6

Download it from

2.1 Installation of Java JDK 9.0.1

Download it from

3.Installation and configuration of Oracle REST Data Services (ORDS 17.4)

Download it from

Oracle EE 11gR2 is already installed and configured. Mine was installed on an AIX 6.3 box.

1.By default, Oracle Apex comes bundled with Oracle EE and is located in $ORACLE_HOME/apex.
Since I needed to install 5.1, I renamed the existing "apex" folder and kept the downloaded zip file in a temp location on the server (/tmp/).

2.Unzipped the file to $ORACLE_HOME
$ unzip /tmp/ - this creates the new apex folder

3.Create Tablespace in database for storing Apex MetaData
Login to Database as sysdba and issue below command to create a tablespace called “APEX”

CREATE TABLESPACE APEX DATAFILE '/d03/oracle/oradata/apex_01.dbf' SIZE 500M AUTOEXTEND ON; -- SIZE/AUTOEXTEND are assumed here; size it per your environment

4. Now we need to install Apex 5.1. Change to $ORACLE_HOME/apex directory and login to SQL DB as sysdba
SQL> !pwd

SQL>@apexins.sql APEX APEX TEMP /i/
Here APEX is the tablespace created in Step 3 and TEMP is the temporary tablespace.

Thank you for installing Oracle Application Express

Oracle Application Express is installed in the APEX_050100 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin (Oracle REST Data Services)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex (Oracle REST Data Services)

timing for: Phase 3 (Switch)
Elapsed: 00:01:54.23
timing for: Complete Installation
Elapsed: 00:13:58.41

PL/SQL procedure successfully completed.

5. Connect again to the Database using “sysdba” and move to the same $ORACLE_HOME/apex location
and run apex_rest_config.sql

SQL> !pwd

SQL>@apex_rest_config.sql -> this will ask for passwords for APEX_LISTENER_USER & APEX_REST_PUBLIC_USER

Enter a password for the APEX_LISTENER user [] apex4demo
Enter a password for the APEX_REST_PUBLIC_USER user [] apex4demo

I kept the same password “apex4demo” for the entire installation.

After the successful run of the above script,
Synonym created.

Session altered.

PL/SQL procedure successfully completed

6.Now we need to disable the XML DB HTTP port, as the plan is to launch Apex using Apache Tomcat and not standalone.
SQL> EXEC dbms_xdb.sethttpport(0);

PL/SQL procedure successfully completed.

7.Login to DB again as sysdba to unlock the user account.
SQL> alter user apex_public_user identified by apex4demo account unlock;

User altered.

SQL> alter user APEX_REST_PUBLIC_USER identified by apex4demo account unlock;

User altered.

8.Installation of Tomcat & Java JDK
This installation was done on another Linux server (RHEL 6.7) from where Apex will be launched.
Confirmed - both servers (the Apex-hosting DB server and the Tomcat server) are able to talk to each other.

8.1 create a directory – /d01/apex
unzipped the Apache Tomcat zip file & the Java JDK
$ cd /d01/apex
$ tar xzf /tmp/apache-tomcat-9.0.6.tar.gz
$ tar xzf /tmp/jdk-9.0.1_linux-x64_bin.tar.gz

Modify the tomcat config file (/d01/apex/apache-tomcat-9.0.6/conf/server.xml) so as to use different port (default is 8080)
This is needed in my case as there is already another application running on tomcat on the default port.
Server port –> Changed to 8105 (Default is 8005)
Connector port –> Changed to 8181 (default is 8080)
Connector port (Define an AJP 1.3 Connector on port 8009) –> Changed to 8109 (Default is 8009)
Save the config file

Start the tomcat service
The startup/shutdown scripts (startup.sh & shutdown.sh) are located in /d01/apex/apache-tomcat-9.0.6/bin

Before services are stopped/started, make sure to set following

export JAVA_HOME=/d01/apex/jdk-9.0.1
export CATALINA_HOME=/d01/apex/apache-tomcat-9.0.6
Check the catalina log ($CATALINA_HOME/logs/catalina.out) for any issue.

If Tomcat started successfully, launch the browser and put in the URL:
http://<server_name>:<port_number> (please remember it is 8181 in my case; the default is 8080)

9. Install Oracle Rest Data service (ords)

9.1 Create a directory /d01/ords
9.2 Unzip the downloaded ORDS zip file

$ cd /d01/ords
$ unzip /tmp/
Once unzipped, the "/d01/ords/" directory will have a 'war' file called "ords.war".
By default, the Apex launch URL will look like http://<server_name>:8181/ords, but if you would like some meaningful name in the URL, you MUST rename this 'war' file accordingly.

For my case, I wanted my URL to look like http://<server_name>:8181/demo, so I
renamed the 'war' file to "demo.war".

9.3 Rename the parameter file (within /d01/ords/params) and update the settings as per the actual environment.

Default name –

Since the war file is renamed, it is mandatory to change the parameter file name as well –

Contents of parameter file
db.hostname=<Server name where the Database is installed>
db.port=<DB Port #>
db.servicename=<SID or Service Name of the database>
db.sid=<SID or Service Name of the database>
schema.tablespace.default=<Tablespace created in Step 1>
schema.tablespace.temp=<Existing TEMP Tablespace in the DB>
standalone.http.port=<Port number where the tomcat Apache is running>

9.4 Create a folder called “config” inside “/d01/ords”

9.4.i> Set the configuration directory to “config” for demo.war
$ export JAVA_HOME=/d01/apex/jdk-9.0.1
$ $JAVA_HOME/bin/java -jar demo.war configdir /d01/ords/config

Above command resulted in below error
java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
at java.base/java.lang.Class.getDeclaredFields0(Native Method)
at java.base/java.lang.Class.privateGetDeclaredFields(

This is due to the fact that JDK 9 has deprecated the java.xml.bind module and removed it from the default classpath. To overcome the error, the workaround is to
use --add-modules to add the module to the classpath.

Hence the correct command is
$ $JAVA_HOME/bin/java --add-modules java.xml.bind -jar demo.war configdir /d01/ords/config
Mar 26, 2018 4:03:26 PM
INFO: Set config.dir to /d01/ords/config in: /d01/ords/demo.war

9.5.Now execute the Oracle Rest Data service config script

$ cd /d01/ords
$ $JAVA_HOME/bin/java --add-modules java.xml.bind -jar demo.war

Below information are asked

Enter the name of the database server [localhost]: <Press Enter>;
Enter the database listen port [1521]: <Press enter if your port is 1521, if others then put the value and press enter>
Enter 1 to specify the database service name, or 2 to specify the database SID [1]: <Press Enter>
Enter the database service name: DEMO (I put DEMO as my service name; put your own DB service name)
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]: <Press Enter>
Enter the database password for ORDS_PUBLIC_USER:apex4demo
Enter the database password for sys: sys123
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]: <Press Enter>
Enter the PL/SQL Gateway database user name [APEX_PUBLIC_USER]: <Press Enter>
Enter the database password for APEX_PUBLIC_USER:apex4demo
Enter 1 to specify passwords for Application Express RESTful Services database users (APEX_LISTENER, APEX_REST_PUBLIC_USER) or 2 to skip this step [1]: <Press Enter>
Enter the database password for APEX_LISTENER: apex4demo
Enter the database password for APEX_REST_PUBLIC_USER: apex4demo
Enter 1 if you wish to start in standalone mode or 2 to exit [1]: 2

Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.

Enter the database password for SYS AS SYSDBA:
Confirm password:

Retrieving information.
Mar 26, 2018 5:45:08 PM
INFO: Updated configurations: defaults, apex, apex_pu, apex_al, apex_rt
Installing Oracle REST Data Services version
... Log file written to /d01/ords/logs/ords_install_core_2018-03-26_174508_00856.log
... Verified database prerequisites
... Created Oracle REST Data Services schema
... Created Oracle REST Data Services proxy user
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /d01/ords/logs/ords_install_datamodel_2018-03-26_174523_00352.log
... Log file written to /d01/ords/logs/ords_install_apex_2018-03-26_174524_00312.log
Completed installation for Oracle REST Data Services version Elapsed time: 00:00:16.538

10. Deploying oracle apex to Tomcat Apache
Copy the demo.war file from the directory /d01/ords and paste into the directory /d01/apex/apache-tomcat-9.0.6/webapps
11. Create a folder “i” inside /d01/apex/apache-tomcat-9.0.6/webapps
11.1 Copy the entire contents from $ORACLE_HOME/apex/images to this folder /d01/apex/apache-tomcat-9.0.6/webapps/i
login to the DB server where the apex is installed
$ cd $ORACLE_HOME/apex/images
$ scp -r * <user_name>@<Tomcat_Server>:/d01/apex/apache-tomcat-9.0.6/webapps/i

12. Final step is to restart the Apache Tomcat service.

But before doing that, because 'JDK 9 has deprecated the java.xml.bind module and removed it from the default classpath', I had to modify the startup script to include this module in the class path.

Modify the startup script under $CATALINA_HOME/bin/

# Add the JAVA 9 specific start-up parameters required by Tomcat
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-modules java.xml.bind" ### Added to overcome the deployment error

Now restart the tomcat

13. Launch the Apex URL - http://<server_name>:8181/demo

Hope this helps. Happy reading and learning!

-Anand M

Categories: Oracle Apex

“Could Not lock the record” while trying to cancel the running concurrent request

March 5, 2018

Recently I came across a typical scenario where I needed to cancel a long-running concurrent request, and while doing so from the front end, I kept getting the error "Could not lock the record".

Later on I thought to mark it as terminated by running below “update” statement from the database.

UPDATE apps.fnd_concurrent_requests
SET phase_code = 'C', status_code = 'X'
WHERE request_id = 126192043
and status_code ='R' 
and  phase_code = 'R';


But this "update" was also taking exceptionally long. Then I figured out a better and quicker way of doing this.
Step 1: Find out the "FNDLIBR" process associated with the long-running concurrent request by running the below query

Set Pages 1000
Set head on
Column Manager   Format A12
Column Request   Format 999999999
Column Program   Format A30
Column User_Name Format A15
Column Started   Format A15
Column FNDLIBR  Format A9
prompt Managers that is running a request and FNDLIBR PROCESS;
select substr(Concurrent_Queue_Name, 1, 12) Manager,
       Request_Id Request,
       substr(Concurrent_Program_Name, 1, 30) Program,
       substr(Fu.User_Name, 1, 15) User_Name,
       Fpro.Os_Process_Id FNDLIBR,
       To_Char(Actual_Start_Date, 'DD-MON-YY HH24:MI') Started
  from apps.Fnd_Concurrent_Queues    Fcq,
       apps.Fnd_Concurrent_Requests  Fcr,
       apps.Fnd_Concurrent_Programs  Fcp,
       apps.Fnd_User                 Fu,
       apps.Fnd_Concurrent_Processes Fpro
 where Phase_Code = 'R' And Status_Code <> 'W' And
       Fcr.Controlling_Manager = Concurrent_Process_Id and
       (Fcq.Concurrent_Queue_Id = Fpro.Concurrent_Queue_Id and
       Fcq.Application_Id = Fpro.Queue_Application_Id) and
       (Fcr.Concurrent_Program_Id = Fcp.Concurrent_Program_Id and
       Fcr.Program_Application_Id = Fcp.Application_Id) and
       Fcr.Requested_By = User_Id and
       Fcr.request_id = &request_id;

Step 2: Now look for the FNDLIBR process ID obtained above on the "Concurrent Manager Node"

ps -ef|grep 9240602|grep -v grep
  applmgr 19859 18919  0 Mar04 ?        00:00:02 FNDLIBR 

Step 3: Query the database to get the session details for the offending process obtained in Step 2

select ses.sid,
       ses.serial# serial#
  from gv$session ses, gv$process proc
 where ses.paddr = proc.addr and ses.process in ('&process_ID');

Step 4: Now clear the database session by running the below statement in the database (using the SID and Serial# obtained in Step 3)

SQL> alter system kill session '<SID>,<Serial#>' immediate;

Step 5: Finally, go ahead and cancel the long-running request either from the front end or from the database (using the update statement mentioned in the beginning).

Hope this helps. Happy learning and keep reading.

-Anand M

Categories: Oracle Apps

Troubleshooting EBS Workflow Notification mailer Issues

February 26, 2018

Oracle E-Business Suite’s Workflow Notification Mailer sends an email notification in a multi-step process.

After a workflow notification is sent, it immediately appears in the recipient’s EBS Worklist UI. For each workflow notification, a business event is raised to send the same notification as email.

For a workflow notification to be emailed, the following statements should be true:
The notification’s STATUS is OPEN or CANCELED
The notification’s MAIL_STATUS is MAIL or INVALID
The recipient role has a valid email address
The recipient role’s notification preference must be MAILTEXT, MAILATTH, MAILHTML or MAILHTM2
The Workflow Deferred Agent Listener is running
The Workflow Notification Mailer is running

Most of the information above can be obtained by running the diagnostic script $FND_TOP/sql/wfmlrdbg.sql. It takes the notification id as input.

After the business event is raised, it is processed through two queues (WF_DEFERRED and WF_NOTIFICATION_OUT) before it is actually delivered as an email to the recipient’s inbox.
The Workflow Notification Mailer dequeues the send event messages from the WF_NOTIFICATION_OUT queue and dispatches them through the designated SMTP server.
To determine where an email notification is being processed at a given time, run $FND_TOP/sql/wfmlrdbg.sql for the notification id.

Query to find WF Mailer is up and running

SQL>SELECT component_name, component_status
FROM fnd_svc_components
WHERE component_type = 'WF_MAILER';


Query to find the name and location of WF related log files

SQL> select fl.meaning,
       decode(concurrent_queue_name,
              'WFMLRSVC', 'mailer container',
              'WFALSNRSVC', 'listener container') container,
       fcp.logfile_name
  from apps.fnd_concurrent_queues    fcq,
       apps.fnd_concurrent_processes fcp,
       apps.fnd_lookups              fl
 where fcq.concurrent_queue_id = fcp.concurrent_queue_id and
       fcp.process_status_code = 'A' and
       fl.lookup_type = 'CP_PROCESS_STATUS_CODE' and
       fl.lookup_code = fcp.process_status_code and
       concurrent_queue_name in ('WFMLRSVC', 'WFALSNRSVC')
 order by fcp.logfile_name;


Workflow Mailer Log file – FNDCPGSC*.txt

Query to check Failed WF Notifications

SQL>select notification_id, recipient_role, mail_status, begin_date
  from wf_notifications
 where mail_status = 'FAILED';

Query to find the ‘Pending’ WF Notifications waiting to be processed

SQL>SELECT COUNT(*), message_name
  FROM wf_notifications
 WHERE STATUS = 'OPEN' AND mail_status = 'MAIL'
 GROUP BY message_name;

SQL>SELECT notification_id, message_name, begin_date
  FROM wf_notifications
 WHERE STATUS = 'OPEN' AND mail_status = 'SENT'
 ORDER BY begin_date DESC;

Query to check if WF Notifications are sent
select mail_status, status from wf_notifications where notification_id = '&Notification_ID';

--If mail_status is MAIL, it means the email delivery is pending for the workflow mailer to send the notification
--If mail_status is SENT, it means the mailer has sent the email
--If mail_status is Null & status is OPEN, it means there is no need to send an email, as the notification preference of the user is "Don't send email"
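A small helper (hypothetical, just restating the three rules above) makes the interpretation explicit when scripting around these queries:

```python
def mailer_state(status, mail_status):
    """Interpret wf_notifications STATUS/MAIL_STATUS per the rules above."""
    if mail_status == 'MAIL':
        return 'email delivery pending'
    if mail_status == 'SENT':
        return 'email sent by mailer'
    if mail_status is None and status == 'OPEN':
        return "no email needed (user preference is Don't send email)"
    return 'unknown combination'

print(mailer_state('OPEN', 'MAIL'))  # email delivery pending
```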

Query to verify whether the message is processed in WF_DEFERRED queue

SQL>select *
  from applsys.aq$wf_deferred a
 where a.user_data.getEventKey() = '&Notification_ID';

–Once message is successfully processed, message will be enqueued to WF_NOTIFICATION_OUT queue and if
–errored out, it will be in WF_ERROR queue

select wf.user_data.event_name Event_Name, wf.user_data.event_key Event_Key,
       wf.user_data.error_stack Error_Stack, wf.user_data.error_message Error_Msg
  from wf_error wf
 where wf.user_data.event_key = '&Notification_ID';

Query to check which WF notification are sent and which are errored out

SQL>Select from_user, to_user, notification_id, status, mail_status, begin_date
  from wf_notifications
 where status = 'OPEN';

Select from_user, message_name, begin_date
  from wf_notifications
 where status = 'OPEN';

Query to check different types of WF notification that are stuck

SQL>select message_type, count(1)
  from wf_notifications
 where status = 'OPEN' and mail_status = 'MAIL'
 group by message_type

E.g. output of the query:

MESSAGE_TYPE   COUNT(1)
------------ ----------
POAPPRV              21  -- 21 PO Approval mails not sent
WFERROR             145  -- 145 mails have errors
APCCARD            5411


--For the user to receive WF notification mails, the email preference MUST be MAILHTML
Query to check User’s mail preference setup

SQL>SELECT email_address,
       nvl(WF_PREF.get_pref(name, 'MAILTYPE'), notification_preference)
  FROM wf_roles
 WHERE name = '&recipient_role' --recipient_role --- is the User name in Oracle


To debug a WF Notification
SQL> @$FND_TOP/sql/wfmlrdbg.sql
It will prompt for Notification ID

Query to find WF related parameters from backend

SQL>select fscpv.parameter_value
    from fnd_svc_comp_params_tl fscpt
    ,fnd_svc_comp_param_vals fscpv
    where fscpt.display_name = 'Framework URL timeout' --'Test Address'
    and fscpt.parameter_id = fscpv.parameter_id


Query to check the date/time when the last email was sent by WF Mailer

SQL>select to_char(max(begin_date),'DD-MON-YY HH24:MI:SS')
from apps.wf_notifications  
where mail_status = 'SENT'

Query to find the WF Test notification status

SQL>select *
  from apps.wf_notifications
--where notification_id = '&notification_Id' --- Pass Notification Id if any
--where message_type = 'REQAPPRV' AND -- This is type of message, possible value are POAPPRV, REQAPPRV,WFTESTS
--user_key = '42056' -- This is PO# or PR # (can be obtained from user)  
--ITEM_KEY = '908848-170147' --- this can be derived from PO_REQUISITION_HEADERS_ALL if message_type is REQAPPRV
 where recipient_role = '<Application_User_Name>' --- Useful to provide if the message_type is WFTESTS
 and message_type = 'WFTESTS' and trunc(begin_date) = trunc(sysdate) --- Trying to look troubleshoot WF Notifications for current date only

SELECT segment1, wf_item_type, wf_item_key, last_update_date
  FROM po_requisition_headers_all
 WHERE segment1 = '42052' -- PO or PR #


Query to see workflow configuration

SQL>select p.parameter_id, p.parameter_name, v.parameter_value value
  from apps.fnd_svc_comp_param_vals_v v,
       apps.fnd_svc_comp_params_b     p,
       apps.fnd_svc_components        c
 where c.component_type = 'WF_MAILER' and v.component_id = c.component_id and
       v.parameter_id = p.parameter_id and
       p.parameter_name in (<Parameter_Name_List>)
 order by p.parameter_name


Some messages, like alerts, don't get a record in the wf_notifications table,
so you have to watch the WF_NOTIFICATION_OUT queue

SQL>select corr_id, retry_count, msg_state, count(*)
  from aq$wf_notification_out
 where corr_id = 'APPS:ALR:'
 group by corr_id, msg_state, retry_count
 order by count(*) desc
select q_name,
       to_char(deq_time, 'YYYY-MON-DD HH12:MI:SSSSS AM') dqtime
  from wf_notification_out
 where --msgid = '65BED43EA74678B1E053652850812B40'
 corrid = 'APPS:ALR:'
 ORDER BY dqtime desc
select notification_id, msg_state, msg_id, role, corrid, enq_time, deq_time
from  (select msg_id, o.enq_time, o.deq_time, msg_state
              ,(select str_value
                from   table (o.user_data.header.properties)
                where  name = 'NOTIFICATION_ID') notification_id
              ,(select str_value
                from   table (o.user_data.header.properties)
                where  name = 'ROLE') role
              ,(select str_value
                from   table (o.user_data.header.properties)
                where  name = 'Q_CORRELATION_ID') corrid
       from aq$wf_notification_out o)
where notification_id = '&notification_id'
and rownum = 1


Query to run from backend to update WF mailer attributes

SQL>select fscpv.parameter_value
    from fnd_svc_comp_params_tl fscpt
    ,fnd_svc_comp_param_vals fscpv
    where fscpt.display_name = 'Test Address'
    and fscpt.parameter_id = fscpv.parameter_id;

How to set Workflow Mailer Override Address from Backend ? (Doc ID 1533596.1)

Hope this helps. Happy learning.

-Anand M

Categories: Oracle Apps

ORA-955 name is already used by an existing object

February 23, 2018

Recently, while working on an upgrade activity, I faced an interesting scenario. I was supposed to create a sequence (XX_VTX_INV_TAX_LINES_S) in the Oracle database, but the create statement failed with an error:

CREATE SEQUENCE XX_VTX.XX_VTX_INV_TAX_LINES_S MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1;

Error at line 1:
ORA-955: name is already used by an existing object
Elapsed: 00:00:00.73

I queried DBA_OBJECTS but didn't find the object there. The "adop -cleanup" phase had completed. The database recyclebin is OFF, but I still went ahead and purged the recyclebin. It still did not let me create the sequence.
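For reference, the checks described above can be sketched in plain SQL*Plus (using the names from this post):

```sql
-- look for the object that is supposedly "already used"
select owner, object_name, object_type
  from dba_objects
 where object_name = 'XX_VTX_INV_TAX_LINES_S';

-- recyclebin was off, but purging it anyway ruled it out
purge recyclebin;
purge dba_recyclebin;   -- needs SYSDBA or PURGE DBA_RECYCLEBIN privilege
```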

Later on, while examining the "adop -cleanup" script, I came across the package 'ad_zd_sys.drop_covered_object'. It takes a few parameters: the object owner, the object name, the object type, and the edition name.
In order to get all these details, I ran a query

SQL> select * from dba_objects_ae where object_name like '%XX_VTX_INV_TAX_LINES_S%' and object_type <> 'NON-EXISTENT'

and this fetched me a record with all the values needed to execute the package ad_zd_sys.drop_covered_object.
I logged into the database as SYS and executed the package:

SQL> exec sys.ad_zd_sys.drop_covered_object('XX_VTX', 'XX_VTX_INV_TAX_LINES_S', 'SEQUENCE', 'V_20170715_2200');

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.90

After this I again ran the select statement "select * from dba_objects_ae where object_name like '%XX_VTX_INV_TAX_LINES_S%' and object_type <> 'NON-EXISTENT'" and it did not return any records.

I went ahead and fired the "CREATE SEQUENCE ..." statement, and this time the sequence got created without any error.

This error wasted a lot of time and effort during the actual upgrade task, but thankfully it made me learn another new thing.

Hope this helps. Happy learning and keep reading.

-Anand M



java.lang.ClassCastException: Cannot cast weblogic.wsee.jaxws.framework.policy.WSDLParserExtension

June 20, 2017

I was working on cloning a web application running on WebLogic 12.1.1. The objective was to create a new DEV instance for an already working UAT instance.

I installed WebLogic 12.1.1 on Oracle Linux 5.11, and the installation completed successfully, including the domain configuration.

Later, when I deployed the application (WAR file), I got an error in the WebLogic log even though the deployment showed 'success'. Below is the error from the log.

Caused By: exception during WSDL parsing: file:/u01/fmw/oracle/product/12/user_projects/domains/mydomain/servers/ManagedServer_1/tmp/_WL_user/vertex-remote-services/hkmfe6/war/WEB-INF/wsdl/VertexIncSystemUserPreferenceService_6_0.wsdl
        Truncated. see log file for complete stacktrace
Caused By: Provider weblogic.wsee.jaxws.framework.policy.WSDLParserExtension is specified in jar:file:/u01/fmw/oracle/product/12/oracle_common/modules/!/META-INF/services/ could not be instantiated: java.lang.ClassCastException: Cannot cast weblogic.wsee.jaxws.framework.policy.WSDLParserExtension to
        Truncated. see log file for complete stacktrace
Caused By: java.lang.ClassCastException: Cannot cast weblogic.wsee.jaxws.framework.policy.WSDLParserExtension to

To troubleshoot, I first checked the container-descriptor settings in the application's 'weblogic.xml'. This file is located in the WEB-INF directory of the WAR file, and prefer-web-inf-classes was already set to TRUE.
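For reference, the relevant weblogic.xml fragment uses the standard WebLogic descriptor elements (reproduced from memory here, so verify against your own file):

```xml
<!-- weblogic.xml (WEB-INF): make the server prefer classes bundled in the WAR -->
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <container-descriptor>
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>
</weblogic-web-app>
```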


After further investigation and going through some forums, it was clear that the error is due to a conflict between application classes and system classes (WebLogic's own class libraries).
I looked into the list of all WebLogic (system) classes. The file is $MW_HOME/wlserver_12.1/server/lib/weblogic.jar.
I extracted weblogic.jar into a temp location (jar -xvf weblogic.jar); the related system class files sit under META-INF/services, and from these I prepared a list of all the class files.

I then extracted the application WAR file and traversed to the location of its jar files (in my case, /temp/vertex-remote-services/WEB-INF/lib) and
created a small shell script. The idea was to take each class file present in weblogic.jar (the system class files) and check whether it is also present in the application WAR.

#!/usr/bin/ksh
_jarfile="jarlist.txt"   ### contains the list of system class files obtained above
while read _list; do
    for i in *.jar; do
        jar -tvf "$i" | grep -si "${_list}" && echo "$i"
    done
done < "$_jarfile"
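The same overlap check can also be done with comm on two sorted lists. A minimal sketch, with made-up entries and file names ("system.txt" / "webapp.txt" stand in for the lists extracted from weblogic.jar and from the WAR's jars):

```shell
# Sketch only: demo data standing in for the real extracted lists
tmpd=$(mktemp -d)
printf '%s\n' 'a.spi' 'b.spi' | sort > "$tmpd/system.txt"
printf '%s\n' 'b.spi' 'c.jar' | sort > "$tmpd/webapp.txt"
# comm -12 prints only the lines common to both (sorted) files
comm -12 "$tmpd/system.txt" "$tmpd/webapp.txt"   # prints: b.spi
```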

For whatever files were found in both places, the corresponding class file is removed from weblogic.jar. This avoids the conflict and hence the error.
Go to the weblogic.jar location ($MW_HOME/wlserver_12.1/server/lib/weblogic.jar) and run the below command:

$ zip -d weblogic.jar META-INF/services/
deleting: META-INF/services/

This way I removed all the conflicting class files from the system class libraries. I stopped and started the managed server and deployed the application again. The deployment completed successfully, and all the "ClassCastException" errors were gone.

P.S. - I was getting similar errors even with WebLogic 12.1.3 and did the same there as well to get rid of all the errors.

-Happy learning

Categories: WebLogic