Archive for the ‘Oracle Apps’ Category

EBS- SSO Integration with Oracle Identity Cloud Service (IDCS)

Recently I got an opportunity to do a POC for implementing SSO with Oracle EBS (12.2.5) using the Oracle IDCS approach. It is fairly simple and far less intrusive as far as changes within EBS are concerned.
One primary component of this solution is the EBS Asserter, which needs to be deployed and configured in the DMZ (security policy does not allow any core EBS component to be exposed in the DMZ).

This is a fully integrated solution with the in-house Active Directory, and it does not expose any critical data (user passwords) in the Cloud. The POC was completely successful. Below is the data flow between the various components of EBS and Oracle IDCS.

Happy reading !!!
Anand M

Categories: Oracle Apps, SSO

“Could Not lock the record” while trying to cancel the running concurrent request

March 5, 2018 2 comments

Recently I came across a typical scenario where I needed to cancel a long running concurrent request, and while doing so from the front end, I kept getting the error “Could not lock the record”.

Later on, I thought to mark it as terminated by running the below “update” statement from the database.

UPDATE apps.fnd_concurrent_requests
SET phase_code = 'C', status_code = 'X'
WHERE request_id = 126192043
and status_code ='R' 
and  phase_code = 'R';


But this “update” too was taking an exceptionally long time. Then I figured out a better and quicker way of doing this.
Step 1: Find out the “FNDLIBR” process associated with the long running concurrent request by running the below query

Set Pages 1000
Set head on
Column Manager   Format A12
Column Request   Format 999999999
Column Program   Format A30
Column User_Name Format A15
Column Started   Format A15
Column FNDLIBR  Format A9
prompt Managers that are running a request, with the FNDLIBR OS process id;
select substr(Concurrent_Queue_Name, 1, 12) Manager,
       Request_Id Request,
       substr(Concurrent_Program_Name, 1, 30) Program,
       substr(Fu.User_Name, 1, 15) User_Name,
       To_Char(Actual_Start_Date, 'DD-MON-YY HH24:MI') Started,
       Fpro.Os_Process_Id FNDLIBR
  from apps.Fnd_Concurrent_Queues    Fcq,
       apps.Fnd_Concurrent_Requests  Fcr,
       apps.Fnd_Concurrent_Programs  Fcp,
       apps.Fnd_User                 Fu,
       apps.Fnd_Concurrent_Processes Fpro
 where Phase_Code = 'R' And Status_Code <> 'W' And
       Fcr.Controlling_Manager = Concurrent_Process_Id and
       (Fcq.Concurrent_Queue_Id = Fpro.Concurrent_Queue_Id and
       Fcq.Application_Id = Fpro.Queue_Application_Id) and
       (Fcr.Concurrent_Program_Id = Fcp.Concurrent_Program_Id and
       Fcr.Program_Application_Id = Fcp.Application_Id) and
       Fcr.Requested_By = User_Id and
       Fcr.request_id = &request_id;

Step 2: Now look for the FNDLIBR process ID obtained above on the “Concurrent Manager Node”

ps -ef|grep 9240602|grep -v grep
  applmgr 19859 18919  0 Mar04 ?        00:00:02 FNDLIBR 

Step 3: Query the database to get the session details for the offending process obtained in Step 2

select ses.sid,
       ses.serial# serial#,
       proc.spid os_process_id
  from gv$session ses, gv$process proc
 where ses.paddr = proc.addr and ses.process in ('&process_ID');

Step 4: Now clear the database session by running the below statement in the database (using the SID and Serial# obtained in Step 3)

SQL> alter system kill session '<SID>,<Serial#>' immediate;

Step 5: Finally, go ahead and cancel the long running request either from the front end or from the database (using the update statement mentioned in the beginning).
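
As a side note, Steps 1 to 3 can often be collapsed into a single lookup, because fnd_concurrent_requests stores the OS process id of the FNDLIBR process and v$session.process records the client OS pid. This is only a sketch based on that assumption (not verified on every release or platform):

select s.inst_id, s.sid, s.serial#
  from gv$session s,
       apps.fnd_concurrent_requests r
 where s.process = r.os_process_id  -- assumes v$session.process holds the FNDLIBR OS pid
   and r.request_id = &request_id;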

Hope this helps. Happy learning and keep reading.

-Anand M

Categories: Oracle Apps

Troubleshooting EBS Workflow Notification mailer Issues

February 26, 2018 3 comments

Oracle E-Business Suite’s Workflow Notification Mailer sends an email notification in a multi-step process.

After a workflow notification is sent, it immediately appears in the recipient’s EBS Worklist UI. For each workflow notification, a business event is raised to send the same notification as an email.

For a workflow notification to be e-mailed, the following must be true:
The notification’s STATUS is OPEN or CANCELED
The notification’s MAIL_STATUS is MAIL or INVALID
The recipient role has a valid email address
The recipient role’s notification preference must be MAILTEXT, MAILATTH, MAILHTML or MAILHTM2
The Workflow Deferred Agent Listener is running
The Workflow Notification Mailer is running

Most of the information above can be obtained by running the diagnostic script $FND_TOP/sql/wfmlrdbg.sql, which takes the notification id as input.

After the business event is raised, the message is processed through two queues (WF_DEFERRED and WF_NOTIFICATION_OUT) before it is actually delivered as an email to the recipient’s inbox.
The Workflow Notification Mailer dequeues the message from the WF_NOTIFICATION_OUT queue and dispatches it through the designated SMTP server.
To determine where an email notification is being processed at a given time, run $FND_TOP/sql/wfmlrdbg.sql for the notification id.
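
For reference, a typical way to invoke the diagnostic script (connected as the APPS user; spooling the output is just a convenience, not a requirement):

SQL> spool wfmlrdbg_out.txt
SQL> @$FND_TOP/sql/wfmlrdbg.sql   -- prompts for the notification id
SQL> spool off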

Query to find whether the WF Mailer is up and running

SQL>SELECT component_name, component_status
FROM fnd_svc_components
WHERE component_type = 'WF_MAILER';


Query to find the names and locations of WF-related log files

SQL> select fl.meaning status,
       fcq.concurrent_queue_name container, -- WFMLRSVC = mailer container, WFALSNRSVC = listener container
       fcp.logfile_name
  from apps.fnd_concurrent_queues    fcq,
       apps.fnd_concurrent_processes fcp,
       apps.fnd_lookups              fl
 where fcq.concurrent_queue_id = fcp.concurrent_queue_id and
       fcp.process_status_code = 'A' and
       fl.lookup_type = 'CP_PROCESS_STATUS_CODE' and
       fl.lookup_code = fcp.process_status_code and
       concurrent_queue_name in ('WFMLRSVC', 'WFALSNRSVC')
 order by fcp.logfile_name;


Workflow Mailer Log file – FNDCPGSC*.txt

Query to check failed WF notifications

SQL>select notification_id, recipient_role, message_type, mail_status, begin_date
  from wf_notifications
 where mail_status in ('ERROR', 'FAILED');

Query to find the ‘Pending’ WF Notifications waiting to be processed

SQL>SELECT COUNT(*), message_name
  FROM wf_notifications
 WHERE status = 'OPEN' AND mail_status = 'MAIL'
 GROUP BY message_name;

Query to check if WF Notifications are sent

SQL>select notification_id, message_name, begin_date
  from wf_notifications
 where status = 'OPEN' and mail_status = 'SENT'
 order by begin_date desc;

select mail_status, status from wf_notifications where notification_id = '&Notification_ID';

--If mail_status is MAIL, email delivery is pending; the workflow mailer has yet to send the notification
--If mail_status is SENT, it means the mailer has sent the email
--If mail_status is NULL and status is OPEN, it means no email needs to be sent, as the user's notification preference is "Don't send email"
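
To get a quick overview of how notifications are distributed across these states, a simple aggregate helps (the 7-day window below is just an example, adjust as needed):

select mail_status, status, count(*)
  from apps.wf_notifications
 where begin_date > sysdate - 7
 group by mail_status, status
 order by count(*) desc;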

Query to verify whether the message is processed in WF_DEFERRED queue

SQL>select *
  from aq$wf_deferred a
 where a.user_data.getEventKey() = '&Notification_ID';

--Once the message is successfully processed, it will be enqueued to the WF_NOTIFICATION_OUT queue; if it
--errors out, it will be in the WF_ERROR queue

select wf.user_data.event_name Event_Name, wf.user_data.event_key Event_Key,
       wf.user_data.error_stack Error_Stack, wf.user_data.error_message Error_Msg
  from wf_error wf where wf.user_data.event_key = '&event_key';

Query to check which WF notifications are sent and which are errored out

SQL>select from_user, to_user, notification_id, status, mail_status,
       message_name, begin_date
  from wf_notifications
 where status = 'OPEN'
 order by begin_date desc;

Query to check the different types of WF notifications that are stuck

SQL>select message_type, count(1)
  from wf_notifications
 where status = 'OPEN' and mail_status = 'MAIL'
 group by message_type;

E.g. output of the query:

MESSAGE_TYPE   COUNT(1)
------------ ----------
POAPPRV              21   -- 21 PO Approval mails not sent
WFERROR             145   -- 145 mails have errored
APCCARD            5411


--For the user to receive WF notification mails, the notification preference must be one of the MAIL% values (MAILTEXT, MAILATTH, MAILHTML or MAILHTM2)
Query to check a user's mail preference setup

SQL>SELECT email_address,
       nvl(WF_PREF.get_pref(name, 'MAILTYPE'), notification_preference)
  FROM wf_roles
 WHERE name = '&recipient_role' --recipient_role --- is the User name in Oracle


To debug a WF notification:
SQL> @$FND_TOP/sql/wfmlrdbg.sql
It will prompt for the Notification ID.

Query to find WF related parameters from backend

SQL>select fscpv.parameter_value
    from fnd_svc_comp_params_tl fscpt
    ,fnd_svc_comp_param_vals fscpv
    where fscpt.display_name = 'Framework URL timeout' --'Test Address'
    and fscpt.parameter_id = fscpv.parameter_id


Query to check the date/time when the last email was sent by WF Mailer

SQL>select to_char(max(begin_date),'DD-MON-YY HH24:MI:SS')
from apps.wf_notifications  
where mail_status = 'SENT'

Query to find the WF Test notification status

SQL>select *
  from apps.wf_notifications
--where notification_id = '&notification_Id'  --- pass the Notification Id, if known
--where message_type = 'REQAPPRV'             --- message type; possible values are POAPPRV, REQAPPRV, WFTESTS
--  and user_key = '42056'                    --- this is the PO # or PR # (can be obtained from the user)
--  and item_key = '908848-170147'            --- can be derived from PO_REQUISITION_HEADERS_ALL if message_type is REQAPPRV
 where recipient_role = '<Application_User_Name>' --- useful to provide if the message_type is WFTESTS
   and message_type = 'WFTESTS' and trunc(begin_date) = trunc(sysdate); --- looking at WF notifications for the current date only

SELECT segment1, wf_item_type, wf_item_key, last_update_date
  FROM po_requisition_headers_all
 WHERE segment1 = '42052'; -- PO or PR #


Query to see workflow configuration

SQL>select p.parameter_id, p.parameter_name, v.parameter_value value
  from apps.fnd_svc_comp_param_vals_v v,
       apps.fnd_svc_comp_params_b     p,
       apps.fnd_svc_components        c
 where c.component_type = 'WF_MAILER' and v.component_id = c.component_id and
       v.parameter_id = p.parameter_id
 order by p.parameter_name;


Some messages, like alerts, don't get a record in the wf_notifications table,
so you have to watch the WF_NOTIFICATION_OUT queue.

SQL>select corr_id, retry_count, msg_state, count(*)
  from aq$wf_notification_out
 where corr_id = 'APPS:ALR:'
 group by corr_id, msg_state, retry_count
 order by count(*) desc;
select q_name, corrid,
       to_char(deq_time, 'YYYY-MON-DD HH12:MI:SSSSS AM') dqtime
  from wf_notification_out
 where --msgid = '65BED43EA74678B1E053652850812B40'
       corrid = 'APPS:ALR:'
 order by dqtime desc;
select notification_id, msg_state, msg_id, role, corrid, enq_time, deq_time
from  (select msg_id, o.enq_time, o.deq_time, msg_state
              ,(select str_value
                  from   table (o.user_data.header.properties)
                 where  name = 'NOTIFICATION_ID') notification_id
              ,(select str_value
                  from   table (o.user_data.header.properties)
                 where  name = 'ROLE') role
              ,(select str_value
                  from   table (o.user_data.header.properties)
                 where  name = 'Q_CORRELATION_ID') corrid
         from aq$wf_notification_out o)
where notification_id = '&notification_id'
and rownum = 1;


Query to check the WF mailer 'Test Address' (override address) from the backend

SQL>select fscpv.parameter_value
    from fnd_svc_comp_params_tl fscpt
    ,fnd_svc_comp_param_vals fscpv
    where fscpt.display_name = 'Test Address'
    and fscpt.parameter_id = fscpv.parameter_id;

How to set Workflow Mailer Override Address from Backend ? (Doc ID 1533596.1)
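
As per the Doc ID above, the override ('Test Address') can also be set from the backend with the seeded script $FND_TOP/sql/afsvcpup.sql, which updates service component parameters interactively. A rough outline (the exact prompts vary by version, so treat this as a sketch):

SQL> @$FND_TOP/sql/afsvcpup.sql
-- 1. Enter the Component Id of the Workflow Notification Mailer
-- 2. Enter the Parameter Id corresponding to TEST_ADDRESS
-- 3. Enter the new override address value
-- Then bounce the mailer so that the change takes effect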

Hope this helps. Happy learning.

-Anand M

Categories: Oracle Apps

ORA-955 name is already used by an existing object

February 23, 2018 Leave a comment

Recently, while working on an upgrade activity, I faced an interesting scenario. I was supposed to create a sequence in the Oracle database:

CREATE SEQUENCE xx_vtx.xx_vtx_inv_tax_lines_s
MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1;

Error at line 1:
ORA-00955: name is already used by an existing object
Elapsed: 00:00:00.73

I queried DBA_OBJECTS but did not find the object there. The “adop cleanup” phase was all completed. The database recyclebin is OFF, but I still went ahead and purged the recyclebin. It still did not let me create the sequence.

Later on, while examining the “adop cleanup” processing, I came across the procedure ‘ad_zd_sys.drop_covered_object’. It takes a few parameters: the object owner, the object name, the object type, and the covering edition name.

In order to get all these details, I ran a query

SQL> select * from dba_objects_ae where object_name like '%XX_VTX_INV_TAX_LINES_S%' and object_type <> 'NON-EXISTENT';

and this fetched me a record with all the values needed to execute the procedure ad_zd_sys.drop_covered_object.
I logged into the database as SYS and executed it:

SQL> exec sys.ad_zd_sys.drop_covered_object('XX_VTX', 'XX_VTX_INV_TAX_LINES_S', 'SEQUENCE', 'V_20170715_2200');

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.90

After this, I again ran the select statement “select * from dba_objects_ae where object_name like '%XX_VTX_INV_TAX_LINES_S%' and object_type <> 'NON-EXISTENT'” and it did not return any records.

I went ahead and fired the “CREATE SEQUENCE …” statement, and this time the sequence got created without any error.

This error wasted a lot of time and effort in the actual upgrade task, but thankfully it made me learn another new thing.

Hope this helps. Happy learning and keep reading.

-Anand M



TNSPing & SQLPlus just hang without errors

November 22, 2016 Leave a comment

Usually, when you connect to Oracle, you get errors that give you some feedback on what is happening.

Today, I hit an issue where trying to connect via SQL*Plus, or even running a tnsping command, would just hang, with no error to start the troubleshooting from. The issue was definitely some sort of connectivity problem, but I was not able to pin it down.

In our case, we use a “name server” in addition to tnsnames.ora; this is configured in our sqlnet.ora file.
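
For illustration only, such a configuration looks roughly like this (the host name and port below are placeholders, not our real settings):

NAMES.DIRECTORY_PATH = (ONAMES, TNSNAMES)
NAMES.PREFERRED_SERVERS =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = names-server-host)(PORT = 1575)))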

I needed to trace my “tnsping” command to see where it was getting hung.

To troubleshoot the issue with tnsping hanging, all you need to do is add these settings to sqlnet.ora to trace tnsping:

TNSPING.TRACE_DIRECTORY = /d01/abc/product/8.0.6/network/admin
TNSPING.TRACE_LEVEL = support

Mine being a Linux box, hence the path; you may need to modify it according to your OS and directory structure.

I ran “tnsping” against the same Oracle SID again, and a trace file “tnsping.trc” was generated in the path defined by the “TNSPING.TRACE_DIRECTORY” parameter above.

A careful review of the trace file revealed that the connection was having an issue with the “name server” defined in my sqlnet.ora file.
I asked the Oracle DBA to confirm whether the “name server” was started, and she confirmed it was not. Once she started the “name server”, the tnsping command completed successfully and I was able to connect via SQL*Plus.

Hope this helps you in some way.

Happy learning!!!

-Anand M

Decrypt weblogic admin password

November 22, 2016 Leave a comment

Please follow the steps below to decrypt the WebLogic admin password.

Step 1: Create a WLST script file (named here; any name works) and update it with the below contents

from weblogic.security.internal import SerializedSystemIni
from weblogic.security.internal.encryption import ClearOrEncryptedService
import os
import sys

def decrypt(domainHomeName, encryptedPwd):
    domainHomeAbsolutePath = os.path.abspath(domainHomeName)
    encryptionService = SerializedSystemIni.getEncryptionService(domainHomeAbsolutePath)
    ces = ClearOrEncryptedService(encryptionService)
    clear = ces.decrypt(encryptedPwd)
    print "RESULT:" + clear

try:
    if len(sys.argv) == 3:
        decrypt(sys.argv[1], sys.argv[2])
    else:
        print " Usage: java weblogic.WLST DOMAIN_HOME ENCRYPTED_PASSWORD"
        print " Example:"
        print " java weblogic.WLST D:/Oracle/Middleware/user_projects/domains/base_domain {AES}819R5h3JUS9fAcPmF58p9Wb3syTJxFl0t8NInD/ykkE="
except:
    print "Unexpected error: ", sys.exc_info()[0]

Step 2: Set the domain environment variables

cd $FMW_HOME/user_projects/domains/<domain_name>/bin
. ./

Once it is properly set, do echo $DOMAIN_HOME and you will find it properly displayed.

Step 3: Get the encrypted password value from the file

$ grep password $DOMAIN_HOME/servers/AdminServer/security/ | sed -e "s/^password=\(.*\)/\1/"


Step 4: Decrypt the encrypted password obtained in Step 3 (run the command from the location where is kept)

java weblogic.WLST $DOMAIN_HOME {AES}udb6nZLDw24HiRRrZkojuoiLNiu/MfAIZpcU=

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands


Hope this helps. Happy reading!

-Anand M

Collection of useful scripts for the Oracle Apps DBA

November 9, 2016 Leave a comment

Below are some of the most useful scripts for any Oracle Apps DBA. They come in handy for day-to-day monitoring and troubleshooting activities.

Hope this helps.

Script to monitor Tablespace Growth

  • This probes the dba_hist_tbspc_space_usage view and gets data as old as the retention time of the AWR snapshots
  • Starting with Oracle 10g, Oracle records tablespace usage (allocated, used etc.) in AWR, which can be retrieved by querying the data dictionary view dba_hist_tbspc_space_usage
  • This script is based on AWR
  • If your AWR retention period is 7 days, this script can only tell the growth history of the last 7 days and predict based on the last 7 days' growth
with a as (
select name, ts#, block_size
from v$tablespace, dba_tablespaces
where name = tablespace_name
),
c as (
select, min(snap_id) Begin_snap_ID, max(snap_id) End_Snap_ID, min(trunc(to_date(rtime,'MM/DD/YYYY HH24:MI:SS'))) begin_time,
max(trunc(to_date(rtime,'MM/DD/YYYY HH24:MI:SS'))) End_time
from dba_hist_tbspc_space_usage, a
where tablespace_id = a.ts#
group by
),
d as (
select, round((dh.tablespace_size * a.block_size)/1024/1024,2) begin_allocated_space,
round((dh.tablespace_usedsize * a.block_size)/1024/1024,2) begin_Used_space
from dba_hist_tbspc_space_usage dh, c, a
where dh.snap_id = c.Begin_snap_ID
and a.ts# = dh.tablespace_id
and =
),
e as (
select, round((tablespace_size * a.block_size)/1024/1024,2) End_allocated_space,
round((tablespace_usedsize * a.block_size)/1024/1024,2) End_Used_space
from dba_hist_tbspc_space_usage, c, a
where snap_id = c.End_Snap_ID
and a.ts# = dba_hist_tbspc_space_usage.tablespace_id
and =
)
select, to_char(c.begin_time,'DD-MON-YYYY') Begin_time, d.begin_allocated_space "Begin_allocated_space(MB)",
d.begin_Used_space "Begin_Used_space(MB)",
to_char(c.End_time,'DD-MON-YYYY') End_Time, e.End_allocated_space "End_allocated_space(MB)", e.End_Used_space "End_Used_space(MB)",
(e.End_Used_space - d.begin_Used_space) "Total Growth(MB)", (c.End_time - c.begin_time) "No.of days",
round(((e.End_Used_space - d.begin_Used_space)/(c.End_time - c.begin_time))*30,2) "Growth(MB)_in_next30_days",
round(((e.End_Used_space - d.begin_Used_space)/(c.End_time - c.begin_time))*60,2) "Growth(MB)_in_next60_days",
round(((e.End_Used_space - d.begin_Used_space)/(c.End_time - c.begin_time))*90,2) "Growth(MB)_in_next90_days"
from e, d, c
where =
and =
and (e.End_Used_space - d.begin_Used_space) > 0
order by 1;

Script to monitor Tablespace Usage

select, a.gbytes "Allocated", a.max_space_gb "Max Space(GB)",
       a.used_GB, a.Free_GB, a.pct_used_1 "%age Used"
from (
SELECT NVL(b.tablespace_name, NVL(a.tablespace_name, 'UNKNOWN')) name,
Gbytes_alloc Gbytes,
c.MAX_SPACE_GB,
Gbytes_alloc - NVL(Gbytes_free, 0) used_GB,
NVL(Gbytes_free, 0) free_GB,
ROUND(((Gbytes_alloc - NVL(Gbytes_free, 0)) / Gbytes_alloc) * 100, 2) pct_used,
ROUND(((Gbytes_alloc - NVL(Gbytes_free, 0)) / MAX_SPACE_GB) * 100, 2) pct_used_1,
NVL(largest_GB, 0) "largest(GB)"
FROM (SELECT tablespace_name,
             ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) Gbytes_free,
             ROUND(MAX(bytes) / 1024 / 1024 / 1024, 2) largest_GB
      FROM sys.dba_free_space
      GROUP BY tablespace_name) a,
     (SELECT tablespace_name,
             ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) Gbytes_alloc
      FROM sys.dba_data_files
      GROUP BY tablespace_name) b,
     (select b.tablespace_name,
             sum(greatest(b.bytes / (1024 * 1024 * 1024),
                          b.maxbytes / (1024 * 1024 * 1024))) MAX_SPACE_GB
      from dba_data_files b
      group by b.tablespace_name) c
WHERE a.tablespace_name(+) = b.tablespace_name
and b.tablespace_name = c.tablespace_name
) a
ORDER BY 6 desc;

Script to monitor major database wait events

set echo off
set pages 1000
set lines 120
col inst format 9999
col sid format 9999
col event format a29 trunc
col program format a20 trunc
col module format a20
col username format A11
col secs format 99999
select w.inst_id inst, w.sid, w.event,s.module,s.username,w.p1,w.p2,w.p3,w.seconds_in_wait Secs
from gv$session_wait w, gv$session s
where w.inst_id = s.inst_id and
w.sid=s.sid and w.state='WAITING' and
w.event not in ('pmon timer',
'smon timer',
'rdbms ipc message',
'pipe get',
'SQL*Net message from client',
'SQL*Net message to client',
'SQL*Net break/reset to client',
'SQL*Net more data from client',
'wakeup time manager',
'slave wait',
'SQL*Net more data to client') and (w.event not like '%slave wait'
and w.event not like 'EMON slave idle wait%'
and w.event not like 'Streams AQ: waiting for%'
and w.event not like 'Space Manager: slave idle wai%'
and w.event not like 'Streams AQ: emn coordinator%'
and w.event not like 'VKRM%'
and w.event not like 'Streams AQ%')
group by w.inst_id, w.sid,w.event,s.module,s.username,w.p1,w.p2,w.p3,w.seconds_in_wait
order by 1,3;

Query (dbcheck.sql) to check how the database is performing at a given point in time

set echo off
set feedback off
set verify off
set lines 500
set pages 1000
column event format a30
column module format a35
column sql_id format a15
SELECT event, module, sql_id, COUNT(*)
FROM v$session
WHERE event NOT IN
('SQL*Net message from client',
'Streams AQ: waiting for time management or cleanup tasks',
'Streams AQ: qmn slave idle wait',
'Streams AQ: qmn coordinator idle wait',
'Streams AQ: emn coordinator idle wait', 'DIAG idle wait',
'SQL*Net message to client', 'pmon timer', 'smon timer',
'VKTM Logical Idle Wait', 'JOX Jit Process Sleep',
'PL/SQL lock timer', 'Streams AQ: waiting for messages in the queue',
'EMON slave idle wait', 'rdbms ipc message', 'PX Deq: Execution Msg',
'Streams AQ: waiting FOR messages IN the queue', 'rdbms ipc MESSAGE',
'Space Manager: slave idle wait', 'pipe get', 'PL/SQL LOCK timer',
'SQL*Net more data to client', 'SQL*Net break/reset to client')
GROUP BY event, module, sql_id;
set feedback on
set echo on
set verify on

Query to know the specific user session detail

  • Input – Session ID

select ses.sid, ses.serial#, proc.spid, ses.username, ses.action,
substr(ses.program, 1, instr(ses.program, ' ') - 1) PROGRAM,
to_char(ses.logon_time, 'DD-MON-RR HH24:MI:SS') CONNECT_TIME
from v$session ses, v$process proc
where ses.paddr = proc.addr and ses.sid = &sid;

Script to look for all the concurrent jobs currently running in the database.

select q.concurrent_queue_name qname,
a.request_id "Req Id",
decode(a.parent_request_id, -1, NULL, a.parent_request_id) "Parent",
a.concurrent_program_id "Prg Id",
(nvl(a.actual_completion_date, sysdate) - a.actual_start_date) * 1440 "Time",
c.concurrent_program_name || ' - ' ||
a.program "Program"
from APPS.fnd_conc_req_summary_v a,
APPLSYS.fnd_concurrent_processes b,
applsys.fnd_concurrent_queues q,
APPLSYS.fnd_concurrent_programs_tl c2,
APPLSYS.fnd_concurrent_programs c,
APPLSYS.fnd_user f
where a.controlling_manager = b.concurrent_process_id and
a.concurrent_program_id = c.concurrent_program_id and
a.program_application_id = c.application_id and
c2.concurrent_program_id = c.concurrent_program_id and
c2.application_id = c.application_id and
a.phase_code in ('I', 'P', 'R', 'T') and a.requested_by = f.user_id and
b.queue_application_id = q.application_id and
b.concurrent_queue_id = q.concurrent_queue_id and c2.language = 'US' and
a.hold_flag = 'N'
order by 1, 3;

Query to look for all the concurrent jobs currently running in the database in a specific manager

  • Input – Queue name

set echo off
set heading on
set lines 1000
set pagesize 1000

col spid form a6 head SPID
col program form A60 trunc
col time form 99999.99 head Elapsed
col "Req Id" form 9999999999
col "Parent" form a9
col "Prg Id" form 9999999
col qname head "Manager" format a20 trunc
col user_name form A12 head User trunc
set recsep off

select q.concurrent_queue_name qname,
a.request_id "Req Id",
decode(a.parent_request_id, -1, NULL, a.parent_request_id) "Parent",
a.concurrent_program_id "Prg Id",
(nvl(a.actual_completion_date, sysdate) - a.actual_start_date) * 1440 "Time",
c.concurrent_program_name || ' - ' || a.program "Program"
from APPS.fnd_conc_req_summary_v a,
APPLSYS.fnd_concurrent_processes b,
applsys.fnd_concurrent_queues q,
APPLSYS.fnd_concurrent_programs_tl c2,
APPLSYS.fnd_concurrent_programs c,
APPLSYS.fnd_user f
where a.controlling_manager = b.concurrent_process_id and
a.concurrent_program_id = c.concurrent_program_id and
a.program_application_id = c.application_id and
c2.concurrent_program_id = c.concurrent_program_id and
c2.application_id = c.application_id and
a.phase_code in ('I', 'P', 'R', 'T') and a.requested_by = f.user_id and
b.queue_application_id = q.application_id and
b.concurrent_queue_id = q.concurrent_queue_id and c2.language = 'US' and
a.hold_flag = 'N' and q.concurrent_queue_name = '&queue_name'
order by 1, 3;
set echo on


Query to find the concurrent program ID for any concurrent program(findprog.sql)

  • Input – Concurrent Program (Wild character will also do)

set echo off
set line 132
set feed off
set define on
set serveroutput on
set timing off
set pagesize 1000
set heading off
undefine Concurrent_Program_Name1
accept Concurrent_Program_Name1 prompt 'Concurrent_Program_Name: '
exec dbms_output.put_line('*************************************');
exec dbms_output.put_line('Displaying program details...');
exec dbms_output.put_line('*************************************');
select '--------------------------------------------------------'||chr(10) from dual;
select 'Prog Name: ' || fcpt.user_concurrent_program_name || chr(10) ||
chr(9) || 'Conc Prog Id: ' || fcpt.concurrent_program_id || chr(10) ||
chr(9) || 'Short Name: ' || fcp.concurrent_program_name || chr(10) ||
chr(9) || 'Application: ' || fat.application_name || chr(10) ||
'--------------------------------------------------------' Details
from apps.fnd_concurrent_programs_tl fcpt,
apps.fnd_concurrent_programs fcp,
apps.fnd_application_tl fat
where upper(fcpt.user_concurrent_program_name) like
upper('&Concurrent_Program_Name1') and
fcpt.concurrent_program_id = fcp.concurrent_program_id and
fcpt.application_id = fcp.application_id and
fcpt.application_id = fat.application_id
order by 1;

set timing on
set heading on

Query to find the history run statistics of any specific program

  • Input – Concurrent Program ID

set echo off
clear column
set lines 500
set feedback off
set verify off
accept program_id prompt 'Enter Conc Prog ID :'

column request_id format 999999999
column username format a15
column name format a40
column argument_text format a30
column actual_start_date format a10

select r.request_id Request,
f.user_name UserName,
to_char(r.actual_start_date, 'DD-MON-YYYY HH24:MI:SS') Run_date,
round((r.actual_completion_date - r.actual_start_date) * 1440, 4) Elapsed,
r.argument_text "Program Parameters"
from apps.fnd_concurrent_requests r, apps.fnd_user f
where r.concurrent_program_id = &&program_id and
r.requested_by = f.user_id
order by r.actual_completion_date desc;

select r.concurrent_program_id Id,
p.user_concurrent_program_name Name,
trunc(r.actual_start_date) Start_date,
round(avg((r.actual_completion_date - r.actual_start_date) * 1440),
2) "Avg Elapsed Time (min)"
from apps.fnd_concurrent_requests r, apps.fnd_concurrent_programs_tl p
where r.concurrent_program_id = p.concurrent_program_id and
r.program_application_id = p.application_id and
r.concurrent_program_id = &&program_id and p.language = 'US'
group by r.concurrent_program_id, p.user_concurrent_program_name, trunc(r.actual_start_date)
order by 3;

select concurrent_program_id,
avg(round((actual_completion_date - actual_start_date) * 1440, 2)) as "Avg_Time",
max(round((actual_completion_date - actual_start_date) * 1440, 2)) as "Max_Time",
min(round((actual_completion_date - actual_start_date) * 1440, 2)) as "Min_Time"
from (select fr.concurrent_program_id,
fc.user_concurrent_program_name,
fr.actual_start_date,
fr.actual_completion_date
from apps.fnd_concurrent_requests fr,
apps.fnd_concurrent_programs_tl fc,
apps.fnd_user fu
where fr.concurrent_program_id = fc.concurrent_program_id and
fu.user_id = fr.requested_by and
fr.concurrent_program_id = &&program_id and
fc.language = 'US' and fr.status_code = 'C' and
fr.phase_code = 'C')
group by concurrent_program_id, user_concurrent_program_name;

prompt ++++++++++++++++++++++++++++++++++ END

Query to find statistics for jobs submitted by a particular user.
It is useful to run during month end to monitor the job statistics for a particular user.
It can be customized for (input)

  • User
  • Request Date or Completion Date.
  • Specific concurrent Program.

select r.request_id Request,
f.user_name UserName,
decode(r.phase_code, 'C', 'Completed',
                     'I', 'Inactive',
                     'P', 'Pending',
                     'R', 'Running',
                     r.phase_code) PHASE,
decode(r.status_code, 'C', 'Normal',
                      'E', 'Error',
                      'G', 'Warning',
                      'H', 'On Hold',
                      'M', 'No Manager',
                      'X', 'Terminated',
                      r.status_code) STATUS,
to_char(r.actual_start_date, 'DD-MON-YYYY HH24:MI:SS') Run_date,
to_char(r.actual_completion_date, 'DD-MON-YYYY HH24:MI:SS') Completion_date,
round((nvl(r.actual_completion_date, sysdate) - r.actual_start_date) * 1440,
2) Elapsed,
r.argument_text "Program Parameters"
from apps.fnd_concurrent_requests r,
apps.fnd_user f,
apps.fnd_concurrent_programs_tl fcpt
where r.requested_by = f.user_id and
r.concurrent_program_id = fcpt.concurrent_program_id and
r.program_application_id = fcpt.application_id and
r.request_date >= to_date('08-NOV-2016 10:00:00', 'DD-MON-YYYY HH24:MI:SS') and
f.user_name = '&User_Name' and
fcpt.language = 'US' and
upper(fcpt.USER_CONCURRENT_PROGRAM_NAME) like upper('%&Prog_name%')
AND r.phase_code = 'C' -- Can be commented
AND r.status_code = 'C' -- Can be commented as per need
order by r.actual_completion_date desc;

Query to find the profile options and the values defined for them.
The query can be customized to check for any specific profile option name.

  • Input – Profile Option name(Wild character will also do)

select distinct
t.user_profile_option_name "Profile Option Name",
decode(v.level_id, 10001,'Site Level',
10002,'Application Level --> ' ||application_name ,
10003,'Responsibility level-->'||responsibility_name,
10004,'User Level-->' ||u.user_name,
'XXX') "Profile Option Level",
profile_option_value "Value"
from apps.fnd_profile_options o,
apps.fnd_profile_option_values v,
apps.fnd_profile_options_tl t,
apps.fnd_responsibility_tl r,
apps.fnd_application_tl a,apps.fnd_user u
where o.profile_option_id = v.profile_option_id
and o.application_id = v.application_id
and start_date_active <= SYSDATE and nvl(end_date_active,SYSDATE) >= SYSDATE
and o.profile_option_name = t.profile_option_name
and a.application_id(+) = decode(level_id,10002,level_value,null)
and r.responsibility_id(+)= decode(level_id,10003,level_value,null)
and u.user_id(+) = decode(level_id,10004,level_value,null)
and upper(t.user_profile_option_name) like upper('%&Profile_name%')
and t.language = 'US'
order by 2,
decode(v.level_id, 10001,'Site Level',
10002,'Application Level --> ' ||application_name ,
10003,'Responsibility level-->'||responsibility_name,
10004,'User Level-->' ||u.user_name,
'XXX');

Query to find how much the datafiles within a specific tablespace can be resized. This query comes in very handy when you want to check how much each datafile within a tablespace can be shrunk.

  • Input – Tablespace ID (can be found from v$tablespace) and Tablespace Name

set linesize 1000 pagesize 0 feedback off trimspool on
with
hwm as (
-- get highest block id from each datafile (x$ktfbue avoids all the joins in dba_extents)
-- 388 is the ts# for APPS_TS_TX_DATA (look it up in v$tablespace)
select /*+ materialize */ ktfbuesegtsn ts#, ktfbuefno relative_fno,
       max(ktfbuebno+ktfbueblks-1) hwm_blocks
from sys.x$ktfbue where ktfbuesegtsn = 388 group by ktfbuefno, ktfbuesegtsn
),
hwmts as (
-- join ts# with tablespace_name
select name tablespace_name, relative_fno, hwm_blocks
from hwm join v$tablespace using(ts#) where name = 'APPS_TS_TX_DATA'
),
hwmdf as (
-- join with datafiles, put 5M minimum for datafiles with no extents
select file_name, nvl(hwm_blocks*(bytes/blocks), 5*1024*1024) hwm_bytes,
       bytes, autoextensible, maxbytes
from hwmts right join dba_data_files using(tablespace_name, relative_fno)
where tablespace_name = 'APPS_TS_TX_DATA'
)
select
case when autoextensible = 'YES' and maxbytes >= bytes
then -- we generate resize statements only if autoextensible can grow back to current size
'/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),'999999')
||'M from '||to_char(ceil(bytes/1024/1024),'999999')||'M */ '
||'alter database datafile '''||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M;'
else -- generate only a comment when autoextensible is off
'/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),'999999')
||'M from '||to_char(ceil(bytes/1024/1024),'999999')
||'M after setting autoextensible maxsize higher than current size for file '
|| file_name||' */'
end SQL
from hwmdf
where bytes-hwm_bytes > 1024*1024 -- resize only if at least 1MB can be reclaimed
order by bytes-hwm_bytes desc;
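The resize clause in the query above is just megabyte-ceiling rounding applied to the high-water mark. The same arithmetic can be sketched in shell as a quick cross-check (the byte figures below are hypothetical):

```shell
MB=$((1024*1024))

# ceil(x / 1MB) via integer arithmetic: (x + MB - 1) / MB,
# matching the query's ceil(.../1024/1024) expressions.
ceil_mb() { echo $(( ($1 + MB - 1) / MB )); }

# Hypothetical 10 GB datafile whose highest allocated extent ends at 3276 MB:
bytes=$((10*1024*MB))
hwm=$((3276*MB))

ceil_mb "$hwm"               # smallest size (MB) the file can be resized to
ceil_mb $(( bytes - hwm ))   # space (MB) reclaimed by the resize
```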

Happy Reading!

-Anand M


Oracle Apps R12.2.2 Log file location and Environment Variables

November 2, 2016 1 comment

I have compiled the names and locations of all the log files in Oracle EBS 12.2.2. As a DBA, I find this quite handy when I need to do some troubleshooting within a definite timeline.

Oracle Apps R12.2.2 Log file location

Also below are some frequently used and very useful environment variables that often come in handy.

$ echo $FILE_EDITION
<shows which file edition you are using, run or patch>

$ echo $RUN_BASE
<shows an absolute path to run file system>

$ echo $PATCH_BASE
<shows an absolute path to patch file system>

$ echo $NE_BASE
<shows an absolute path to non-edition file system>

$ echo $APPL_TOP_NE
<non-editioned appl_top path. Equivalent to $NE_BASE/EBSapps/appl>

$ echo $LOG_HOME
<Application Instance Specific Log Directory>

$ echo $ADOP_LOG_HOME
<Online patching specific log directory. Equivalent to $NE_BASE/EBSapps/log/adop>

$ echo $IAS_ORACLE_HOME
<FMW Web Tier home directory>

$ echo $FMW_HOME
<FMW home>

$ echo $ORACLE_HOME
<10.1.2 ORACLE_HOME>

$ echo $CONTEXT_FILE
<Source for information populating template files (autoconfig)>

$ echo $EBS_DOMAIN_HOME
<WLS Deployment of Oracle E-Business Suite 12.2 Domain (instance specific)>

$ echo $ADMIN_SCRIPTS_HOME
<Shell scripts to control processes associated with the Applications instance>

<Oracle E-Business Suite 12.2 FMW Deployment directory>

$ echo $RW
<10.1.2 reports directory>

$ echo $HOSTNAME
<hostname without domain name>

<to get the EBS version>

And the most important part: when setting up the environment, don't hard-code the RUN file system's .env file in the OS user's .profile, because online patching switches the file systems between RUN and PATCH roles and that can create real confusion.
Instead, source the EBSapps.env file (created under the BASE directory) with 'RUN' as the argument. It automatically determines which file system (fs1 or fs2) is currently the RUN file system and sets up the correct environment.

For example, in our case the base directory is ‘/<TWO_TASK>/applmgr’:

. /<TWO_TASK>/applmgr/EBSapps.env RUN

E-Business Suite Environment Information
RUN File System : /<TWO_TASK>/applmgr/fs2/EBSapps/appl
PATCH File System : /<TWO_TASK>/applmgr/fs1/EBSapps/appl
Non-Editioned File System : /<TWO_TASK>/applmgr/fs_ne
DB Host: Service/SID: <TWO_TASK>
Sourcing the RUN File System …

Hope this helps. Happy learning!!!

-Anand M

Space reclaim using complete database export import

November 2, 2016 Leave a comment

We had a non-prod database in which more than 50% of the space was free. The objective was to reclaim that space at the OS level and release it back to storage.
This is an Oracle E-Biz environment.
E-Biz version – 12.2.2
Oracle DB version –
OS – 64-bit Oracle Linux

I tried to resize the datafiles as much as possible but could not reclaim enough space, so I decided to do a full database re-org using export and import.

This document demonstrates the step-by-step procedure, with screenshots, for a full database reorg using export/import:

Full Database reorg using export-import
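For reference, the export side of such a reorg is typically driven by Data Pump. A minimal parameter-file sketch is shown below; the directory object, file names, and parallelism are assumptions to adapt, and an EBS database export/import must follow the EBS-specific My Oracle Support notes rather than this bare skeleton:

```
# full_exp.par -- hypothetical Data Pump parameter file for a full export
full=Y
directory=DATA_PUMP_DIR
dumpfile=full_%U.dmp
logfile=full_exp.log
parallel=4
```

It would be invoked as `expdp system parfile=full_exp.par`, with a matching `impdp ... full=Y` run on the import side.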

Post export/import, I was able to reclaim around 4 TB of space (a 65% reduction).

Please see the result below.


Hope this helps. Happy learning!!!

-Anand M

“Output Post Processor” Concurrent Manager not able to start

January 21, 2016 Leave a comment

The development team informed me of an issue where a concurrent job (one that needs post-processing) errored out. On reviewing the request log file, I noticed an issue with the ‘Output Post Processor’. I checked the OPP in the ‘Administer Concurrent Managers’ screen and found

Actual=4 and Target=0 processes

I tried restarting it, but it kept showing the same status. Later on I queried for the “FNDOPP” process on the application tier:
$ ps -ef|grep -i FNDOPP|grep -v grep

and this returned no processes.
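As a side note, the `grep -v grep` filter can be avoided with the common bracket trick: putting one character of the pattern inside a character class keeps grep's own command line from matching it. A small sketch (the process listing used in practice would come from a live EBS host):

```shell
# List OPP JVM processes. The pattern '[F]NDOPP' still matches "FNDOPP" in
# real process command lines, but not the literal "[F]NDOPP" shown in grep's
# own ps entry, so "grep -v grep" is unnecessary.
# ('|| true' keeps the pipeline from failing when no OPP is running.)
ps -ef | grep -i '[F]NDOPP' || true

# Count them (compare with the Actual value in Administer Concurrent Managers):
ps -ef | grep -ic '[F]NDOPP' || true
```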

I then looked into the manager log file and found the error below:

Jan 19, 2016 8:02:44 AM oracle.ias.cache.CacheInternal logLifecycleEvent
INFO: JOC is initialized from oracle.apps.jtf.cache.IASCacheProvider.init, ver=, distribute=true, vid=996, coordinator=0, discover list=[[] segID=1]
Unable to initialize state monitor.
oracle.apps.fnd.cp.gsm.GenCartCommException: ORA-01403: no data found
ORA-06512: at "APPS.FND_CP_GSM_IPC", line 539
ORA-06512: at line 1

	at oracle.apps.fnd.cp.gsm.GenCartComm.initService(Unknown Source)
	at oracle.apps.fnd.cp.gsm.GenCartComm.<init>(Unknown Source)
	at oracle.apps.fnd.cp.gsf.GSMStateMonitor.init(Unknown Source)
	at oracle.apps.fnd.cp.gsf.GSMStateMonitor.<init>(Unknown Source)
	at oracle.apps.fnd.cp.gsf.GSMServiceController.init(
	at oracle.apps.fnd.cp.gsf.GSMServiceController.<init>(
	at oracle.apps.fnd.cp.gsf.GSMServiceController.main(

Solution that resolved the issue

I found that the “Service Manager” was down, so I restarted the “Service Manager” and then restarted the “Output Post Processor”.
The status then showed Actual=4 and Target=4 processes.

I asked the development team to submit the job again. This time job completed successfully.

Categories: Oracle Apps