Thinking Out Loud

October 30, 2016

GoldenGate 12.2 EXCLUDEUSER

Filed under: GoldenGate,oracle — mdinh @ 9:02 pm

Here’s a good reason to always use the latest version.

GoldenGate 12c Integrated EXTRACT: How To EXCLUDEUSER (Doc ID 2008420.1)

APPLIES TO:

Oracle GoldenGate – Version 12.1.2.0.2 to 12.1.2.1.9 [Release 12.1]
Information in this document applies to any platform.

GOAL

We are trying to exclude the REPLICAT user in OGG 12.1.2, capturing from Oracle 12.1.0.2 in a bi-directional configuration, using EXTRACT TRANLOGOPTIONS EXCLUDEUSER to exclude a user in one of the PDBs:

TRANLOGOPTIONS EXCLUDEUSER ZEPP01.ZE_GG

EXTRACT is returning the message:

2015-03-25 15:44:27 ERROR OGG-00303 Could not find USER_ID corresponding to USERNAME ‘ZEPP01.ZE_GG’.

SOLUTION

When using Integrated EXTRACT, EXCLUDEUSER is currently not supported in OGG V12.1.2. We have an enhancement request, tracked via Bug 21891811 – Support EXCLUDEUSER in Integrated Extract (IE) for DDL records, which has been implemented in OGG v12.2.0.1.

For excluding users in a 12c multitenant DB, OGG v12.1.2  has a new EXTRACT param TRANLOGOPTIONS EXCLUDETAG <nn>. This is typically used to exclude the REPLICAT user in bi-directional configurations.

Changes made by Integrated REPLICAT are tagged in redo as 00 by default, so adding the EXTRACT param TRANLOGOPTIONS EXCLUDETAG 00 would exclude those operations.

The tag can also be explicitly set in REPLICAT using:

DBOPTIONS SETTAG 0935

Then in EXTRACT param:

TRANLOGOPTIONS EXCLUDETAG 0935

With OGG v12.2.0.1+, you can go back to the earlier option: TRANLOGOPTIONS EXCLUDEUSER ZEPP01.ZE_GG.
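Putting the pieces together, here is a minimal bi-directional parameter sketch. The process names ESRC/RTGT, the user, the trail, and the TABLE/MAP statements are all illustrative; only the SETTAG/EXCLUDETAG pairing with tag 0935 comes from the note.

```
-- EXTRACT parameter file (hypothetical name ESRC)
EXTRACT esrc
USERID ogg_admin, PASSWORD ******
-- skip redo records tagged 0935, i.e. changes applied by the local REPLICAT
TRANLOGOPTIONS EXCLUDETAG 0935
EXTTRAIL ./dirdat/es
TABLE app.*;

-- REPLICAT parameter file (hypothetical name RTGT) on the same database
REPLICAT rtgt
USERID ogg_admin, PASSWORD ******
-- tag this REPLICAT's changes in redo so the co-located EXTRACT can exclude them
DBOPTIONS SETTAG 0935
MAP app.*, TARGET app.*;
```

This is what prevents the ping-pong effect in bi-directional replication: each side's EXTRACT ignores redo generated by its own REPLICAT.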

Reference:

(2016, October 18). Retrieved October 30, 2016, from https://support.oracle.com/epmos/faces/DocContentDisplay?id=2008420.1

October 29, 2016

Working With Interval Datatype

Filed under: 10g,oracle — mdinh @ 3:41 pm

Here is the scope:

The job takes 42m to complete and the monitoring threshold is 30m.

If 42m is the typical time to complete, then we should increase the threshold to avoid noise.

If 42m is not the typical time to complete, then we should investigate to see if this is an anomaly.

From dba_scheduler_job_run_details, the RUN_DURATION data type is INTERVAL DAY(3) TO SECOND(0).

Querying dba_scheduler_job_run_details provides the following result.

This is a lot of work for identifying jobs running longer than 30m, and it is error-prone: a long run could easily be missed in the output, which I did not highlight.

This test case was for 10.2.0.4, but I imagine it would work for all higher versions.

-- dba_scheduler_jobs
Enter value for 1: PURGE_JOB

OWNER			        JOB_NAME 		      STATE	      REPEAT_INTERVAL						   NEXT_RUN_DATE
------------------------------ ------------------------------ --------------- ------------------------------------------------------------ --------------------------------------------------
DEMODB01		       PURGE_JOB	      SCHEDULED       Freq=Hourly						   28-OCT-16 02.00.00.000000 PM -05:00

-- dba_scheduler_job_run_details

LOG_DATE			    JOB_NAME			   STATUS     ACTUAL_START_DATE 				 RUN_DURATION
----------------------------------- ------------------------------ ---------- -------------------------------------------------- ---------------
28-OCT-16 01.18.34.766342 PM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 01.00.00.101332 PM -05:00		 +000 00:18:35
28-OCT-16 12.18.50.630375 PM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 12.00.00.128330 PM -05:00		 +000 00:18:50
28-OCT-16 11.19.39.302422 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 11.00.00.096290 AM -05:00		 +000 00:19:39
28-OCT-16 10.19.06.631086 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 10.00.00.098943 AM -05:00		 +000 00:19:07
28-OCT-16 09.21.55.042321 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 09.00.00.142871 AM -05:00		 +000 00:21:55
28-OCT-16 08.22.05.168894 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 08.00.00.114841 AM -05:00		 +000 00:22:05
28-OCT-16 07.20.58.795685 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 07.00.00.615018 AM -05:00		 +000 00:20:58
28-OCT-16 06.19.32.286364 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 06.00.04.355748 AM -05:00		 +000 00:19:28
28-OCT-16 05.22.12.990840 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 05.00.00.108447 AM -05:00		 +000 00:22:13
28-OCT-16 04.22.46.316064 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 04.00.00.822142 AM -05:00		 +000 00:22:45
28-OCT-16 03.23.49.518493 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 03.00.00.767310 AM -05:00		 +000 00:23:49
28-OCT-16 02.33.20.747873 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 02.00.00.689275 AM -05:00		 +000 00:33:20
28-OCT-16 01.42.40.477522 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 01.00.00.687059 AM -05:00		 +000 00:42:40
28-OCT-16 12.24.03.200350 AM -05:00 PURGE_JOB		   SUCCEEDED  28-OCT-16 12.00.00.687291 AM -05:00		 +000 00:24:03
27-OCT-16 11.21.45.310127 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 11.00.00.174274 PM -05:00		 +000 00:21:45
27-OCT-16 10.22.04.226755 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 10.00.00.100101 PM -05:00		 +000 00:22:04
27-OCT-16 09.20.46.491481 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 09.00.00.095188 PM -05:00		 +000 00:20:46
27-OCT-16 08.20.11.936472 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 08.00.01.162137 PM -05:00		 +000 00:20:11
27-OCT-16 07.16.30.836698 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 07.00.00.122310 PM -05:00		 +000 00:16:31
27-OCT-16 06.17.19.315011 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 06.00.00.683161 PM -05:00		 +000 00:17:19
27-OCT-16 05.17.40.010428 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 05.00.00.093667 PM -05:00		 +000 00:17:40
27-OCT-16 04.18.08.064115 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 04.00.00.227937 PM -05:00		 +000 00:18:08
27-OCT-16 03.19.58.561961 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 03.00.00.136952 PM -05:00		 +000 00:19:58
27-OCT-16 02.17.30.492761 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 02.00.00.149822 PM -05:00		 +000 00:17:30
27-OCT-16 01.17.44.490772 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 01.00.00.084995 PM -05:00		 +000 00:17:44
27-OCT-16 12.19.29.486819 PM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 12.00.00.665087 PM -05:00		 +000 00:19:29
27-OCT-16 11.19.43.126793 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 11.00.00.095392 AM -05:00		 +000 00:19:43
27-OCT-16 10.20.28.096927 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 10.00.00.098227 AM -05:00		 +000 00:20:28
27-OCT-16 09.20.59.774236 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 09.00.00.087305 AM -05:00		 +000 00:21:00
27-OCT-16 08.20.11.194525 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 08.00.00.095362 AM -05:00		 +000 00:20:11
27-OCT-16 07.21.19.440710 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 07.00.00.093116 AM -05:00		 +000 00:21:19
27-OCT-16 06.21.21.777725 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 06.00.00.729176 AM -05:00		 +000 00:21:21
27-OCT-16 05.22.00.235810 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 05.00.00.093358 AM -05:00		 +000 00:22:00
27-OCT-16 04.22.41.035983 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 04.00.00.109282 AM -05:00		 +000 00:22:41
27-OCT-16 03.23.03.747790 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 03.00.00.740992 AM -05:00		 +000 00:23:03
27-OCT-16 02.28.28.902795 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 02.00.00.098180 AM -05:00		 +000 00:28:29
27-OCT-16 01.38.20.999365 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 01.00.00.618087 AM -05:00		 +000 00:38:20
27-OCT-16 12.22.48.596935 AM -05:00 PURGE_JOB		   SUCCEEDED  27-OCT-16 12.00.00.134944 AM -05:00		 +000 00:22:48
26-OCT-16 11.22.39.417841 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 11.00.00.601951 PM -05:00		 +000 00:22:39
26-OCT-16 10.21.24.088692 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 10.00.00.700209 PM -05:00		 +000 00:21:23
26-OCT-16 09.21.48.083949 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 09.00.00.108737 PM -05:00		 +000 00:21:48
26-OCT-16 08.20.36.600056 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 08.00.00.599238 PM -05:00		 +000 00:20:36
26-OCT-16 07.21.26.973000 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 07.00.00.170795 PM -05:00		 +000 00:21:27
26-OCT-16 06.21.13.513245 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 06.00.00.661959 PM -05:00		 +000 00:21:13
26-OCT-16 05.20.38.611174 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 05.00.00.095111 PM -05:00		 +000 00:20:39
26-OCT-16 04.19.43.026097 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 04.00.00.647115 PM -05:00		 +000 00:19:42
26-OCT-16 03.22.25.930388 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 03.00.00.099210 PM -05:00		 +000 00:22:26
26-OCT-16 02.22.28.929046 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 02.00.00.084756 PM -05:00		 +000 00:22:29
26-OCT-16 01.22.35.673399 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 01.00.00.092354 PM -05:00		 +000 00:22:36
26-OCT-16 12.22.51.518382 PM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 12.00.00.098155 PM -05:00		 +000 00:22:51
26-OCT-16 11.25.12.885694 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 11.00.00.097514 AM -05:00		 +000 00:25:13
26-OCT-16 10.25.30.697623 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 10.00.00.109984 AM -05:00		 +000 00:25:31
26-OCT-16 09.25.48.331970 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 09.00.00.650305 AM -05:00		 +000 00:25:48
26-OCT-16 08.23.00.786181 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 08.00.00.092182 AM -05:00		 +000 00:23:01
26-OCT-16 07.23.09.292134 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 07.00.00.706906 AM -05:00		 +000 00:23:09
26-OCT-16 06.23.26.838110 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 06.00.00.737269 AM -05:00		 +000 00:23:26
26-OCT-16 05.23.46.628555 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 05.00.00.779244 AM -05:00		 +000 00:23:46
26-OCT-16 04.26.10.248261 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 04.00.00.118811 AM -05:00		 +000 00:26:10
26-OCT-16 03.26.45.574405 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 03.00.00.868199 AM -05:00		 +000 00:26:45
26-OCT-16 02.29.59.220470 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 02.00.00.716143 AM -05:00		 +000 00:29:59
26-OCT-16 01.41.19.190372 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 01.00.00.671400 AM -05:00		 +000 00:41:19
26-OCT-16 12.40.14.668943 AM -05:00 PURGE_JOB		   SUCCEEDED  26-OCT-16 12.00.00.686242 AM -05:00		 +000 00:40:14
25-OCT-16 11.25.01.222486 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 11.00.00.094405 PM -05:00		 +000 00:25:01
25-OCT-16 10.25.16.188272 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 10.00.00.093149 PM -05:00		 +000 00:25:16
25-OCT-16 09.22.50.957227 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 09.00.00.685572 PM -05:00		 +000 00:22:50
25-OCT-16 08.23.24.366195 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 08.00.00.681146 PM -05:00		 +000 00:23:24
25-OCT-16 07.21.34.877025 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 07.00.00.679188 PM -05:00		 +000 00:21:34
25-OCT-16 06.22.01.687270 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 06.00.00.705931 PM -05:00		 +000 00:22:01
25-OCT-16 05.22.59.985410 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 05.00.00.058401 PM -05:00		 +000 00:23:00
25-OCT-16 04.23.47.510414 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 04.00.00.117113 PM -05:00		 +000 00:23:47
25-OCT-16 03.23.04.877802 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 03.00.00.099222 PM -05:00		 +000 00:23:05
25-OCT-16 02.26.03.301910 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 02.00.00.116152 PM -05:00		 +000 00:26:03
25-OCT-16 01.24.27.430723 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 01.00.00.117626 PM -05:00		 +000 00:24:27
25-OCT-16 12.24.36.971637 PM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 12.00.00.664118 PM -05:00		 +000 00:24:36
25-OCT-16 11.23.58.307402 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 11.00.00.096260 AM -05:00		 +000 00:23:58
25-OCT-16 10.24.31.716126 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 10.00.00.092256 AM -05:00		 +000 00:24:32
25-OCT-16 09.24.10.974503 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 09.00.00.100348 AM -05:00		 +000 00:24:11
25-OCT-16 08.22.53.833215 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 08.00.00.082274 AM -05:00		 +000 00:22:54
25-OCT-16 07.24.30.938431 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 07.00.00.692159 AM -05:00		 +000 00:24:30
25-OCT-16 06.23.52.431531 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 06.00.00.093246 AM -05:00		 +000 00:23:52
25-OCT-16 05.24.41.532133 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 05.00.00.729381 AM -05:00		 +000 00:24:41
25-OCT-16 04.23.25.958305 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 04.00.00.208211 AM -05:00		 +000 00:23:26
25-OCT-16 03.25.01.192060 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 03.00.00.106130 AM -05:00		 +000 00:25:01
25-OCT-16 02.28.53.376223 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 02.00.00.662053 AM -05:00		 +000 00:28:53
25-OCT-16 01.40.10.906657 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 01.00.00.690348 AM -05:00		 +000 00:40:10
25-OCT-16 12.26.06.567009 AM -05:00 PURGE_JOB		   SUCCEEDED  25-OCT-16 12.00.00.101229 AM -05:00		 +000 00:26:06
24-OCT-16 11.23.44.968956 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 11.00.00.112954 PM -05:00		 +000 00:23:45
24-OCT-16 10.25.27.790486 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 10.00.02.818356 PM -05:00		 +000 00:25:25
24-OCT-16 09.21.43.329295 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 09.00.00.143735 PM -05:00		 +000 00:21:43
24-OCT-16 08.23.53.456424 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 08.00.00.699357 PM -05:00		 +000 00:23:53
24-OCT-16 07.22.46.201840 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 07.00.00.108008 PM -05:00		 +000 00:22:46
24-OCT-16 06.24.31.090402 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 06.00.00.712135 PM -05:00		 +000 00:24:30
24-OCT-16 05.22.41.748800 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 05.00.00.092252 PM -05:00		 +000 00:22:42
24-OCT-16 04.24.25.687414 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 04.00.00.093241 PM -05:00		 +000 00:24:26
24-OCT-16 03.24.51.693561 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 03.00.00.734264 PM -05:00		 +000 00:24:51
24-OCT-16 02.23.25.886242 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 02.00.00.689991 PM -05:00		 +000 00:23:25
24-OCT-16 01.26.34.238241 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 01.00.00.097179 PM -05:00		 +000 00:26:34
24-OCT-16 12.26.56.946893 PM -05:00 PURGE_JOB		   SUCCEEDED  24-OCT-16 12.00.00.097951 PM -05:00		 +000 00:26:57

98 rows selected.

SQL >

Here’s a better way of doing this, using the EXTRACT function.

The RUN_DURATION data type is INTERVAL DAY(3) TO SECOND(0).

Multiplying the interval by 24*60 (24 hours per day, 60 minutes per hour) scales it so that EXTRACT(DAY ...) returns whole minutes.

SQL > r
  1  SELECT log_date, job_name, extract(day from 24*60*RUN_DURATION) mins
  2  FROM dba_scheduler_job_run_details
  3  WHERE extract(day from 24*60*RUN_DURATION)>30
  4  AND job_name='PURGE_JOB'
  5  and log_date>sysdate-90
  6  ORDER BY 1 DESC
  7*

LOG_DATE			    JOB_NAME				 MINS
----------------------------------- ------------------------------ ----------
28-OCT-16 02.33.20.747873 AM -05:00 PURGE_JOB			   33
28-OCT-16 01.42.40.477522 AM -05:00 PURGE_JOB			   42
27-OCT-16 01.38.20.999365 AM -05:00 PURGE_JOB			   38
26-OCT-16 01.41.19.190372 AM -05:00 PURGE_JOB			   41
26-OCT-16 12.40.14.668943 AM -05:00 PURGE_JOB			   40
25-OCT-16 01.40.10.906657 AM -05:00 PURGE_JOB			   40
24-OCT-16 03.31.51.372573 AM -05:00 PURGE_JOB			   31
24-OCT-16 02.31.55.268260 AM -05:00 PURGE_JOB			   31
24-OCT-16 01.45.33.525611 AM -05:00 PURGE_JOB			   45
22-OCT-16 05.39.23.980162 AM -05:00 PURGE_JOB			   39
22-OCT-16 04.31.25.202116 AM -05:00 PURGE_JOB			   31
21-OCT-16 01.38.29.977490 AM -05:00 PURGE_JOB			   38
20-OCT-16 01.40.58.167010 AM -05:00 PURGE_JOB			   40
19-OCT-16 01.32.25.295604 AM -05:00 PURGE_JOB			   32
18-OCT-16 01.32.53.733495 AM -05:00 PURGE_JOB			   32
17-OCT-16 01.31.49.296251 AM -05:00 PURGE_JOB			   31
14-OCT-16 03.31.43.274818 AM -05:00 PURGE_JOB			   31
14-OCT-16 02.36.02.084047 AM -05:00 PURGE_JOB			   36
14-OCT-16 01.40.30.270864 AM -05:00 PURGE_JOB			   40
13-OCT-16 01.40.32.792297 AM -05:00 PURGE_JOB			   40
12-OCT-16 03.31.59.942443 AM -05:00 PURGE_JOB			   31
12-OCT-16 02.35.10.422916 AM -05:00 PURGE_JOB			   35
12-OCT-16 01.44.37.543112 AM -05:00 PURGE_JOB			   44
11-OCT-16 01.37.06.018139 AM -05:00 PURGE_JOB			   37
10-OCT-16 03.31.07.278843 AM -05:00 PURGE_JOB			   31
10-OCT-16 02.33.08.974312 AM -05:00 PURGE_JOB			   33
10-OCT-16 01.44.18.952183 AM -05:00 PURGE_JOB			   44
08-OCT-16 05.34.36.615095 AM -05:00 PURGE_JOB			   34
03-OCT-16 02.34.54.308012 AM -05:00 PURGE_JOB			   34
03-OCT-16 01.38.21.388805 AM -05:00 PURGE_JOB			   38
02-OCT-16 04.33.57.111031 PM -05:00 PURGE_JOB			   33
02-OCT-16 03.31.53.305048 AM -05:00 PURGE_JOB			   31
02-OCT-16 01.38.53.896792 AM -05:00 PURGE_JOB			   38
30-SEP-16 04.31.26.036246 AM -05:00 PURGE_JOB			   31
30-SEP-16 01.42.47.725811 AM -05:00 PURGE_JOB			   42
29-SEP-16 01.35.00.663317 AM -05:00 PURGE_JOB			   35

36 rows selected.

SQL > 
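As a sanity check on the conversion, here is a small Python sketch (my own addition, not from the post) that turns the RUN_DURATION strings shown above into whole minutes, mirroring what extract(day from 24*60*RUN_DURATION) computes: scaling the interval by 24*60 turns minutes into "days", which EXTRACT(DAY ...) then truncates to an integer.

```python
def interval_to_minutes(run_duration: str) -> int:
    """Convert an INTERVAL DAY(3) TO SECOND(0) literal such as
    '+000 00:42:40' into whole minutes, the way
    extract(day from 24*60*RUN_DURATION) does."""
    days_part, time_part = run_duration.lstrip('+').split()
    hours, minutes, seconds = (int(p) for p in time_part.split(':'))
    total_seconds = int(days_part) * 86400 + hours * 3600 + minutes * 60 + seconds
    # Integer division truncates, matching EXTRACT(DAY ...) on the scaled interval
    return total_seconds // 60

print(interval_to_minutes('+000 00:42:40'))  # 42
print(interval_to_minutes('+000 00:18:35'))  # 18
```

For the sample rows above, this reproduces the MINS column — e.g. the 01:00 AM run on 28-OCT-16 comes out as 42 minutes.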

October 26, 2016

Quarterly CPU – To Patch or Not

Filed under: oracle — mdinh @ 1:13 pm

If the organization is mandated by regulation, e.g. PCI, then it’s a no-brainer.

What if there is no regulation and the system is stable?

At this point, it depends on risk tolerance.

Review Oracle Critical Patch Update Advisory – October 2016 and search for Oracle Database Server Risk Matrix.

This example is for Oracle Database Server; please search accordingly for the components you require.

Remote Exploit without Auth.? NO for all except Application Express

How much time is spent assessing risk and is it more efficient to spend the time patching?

It all boils down to time and tolerance.

Isolate Your Code

Filed under: oracle — mdinh @ 12:19 am

I fail to understand why an anonymous PL/SQL block is used with dbms_scheduler.

Here is an example:

hawk:(SYSTEM@hawk):PRIMARY> @x.sql
hawk:(SYSTEM@hawk):PRIMARY> set echo on
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2  DBMS_SCHEDULER.CREATE_PROGRAM(
  3  program_name=>'TESTING',
  4  program_action=>'DECLARE
  5  x NUMBER := 100;
  6  BEGIN
  7     FOR i IN 1..10 LOOP
  8        IF MOD(i,2) = 0 THEN
  9           INSERT INTO temp VALUES (i);
 10        ELSE
 11           INSERT INTO temp VALUES (i);
 12        END IF;
 13        x := x + 100;
 14     END LOOP;
 15     COMMIT;
 16  END;',
 17  program_type=>'PLSQL_BLOCK',
 18  number_of_arguments=>0
 19  );
 20  END;
 21  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> -- exec DBMS_SCHEDULER.DROP_PROGRAM('TESTING');

Nothing wrong, right? What happens when we strip out the anonymous PL/SQL block and run it on its own?

hawk:(SYSTEM@hawk):PRIMARY> @y.sql
hawk:(SYSTEM@hawk):PRIMARY> DECLARE
  2     x NUMBER := 100;
  3  BEGIN
  4     FOR i IN 1..10 LOOP
  5        IF MOD(i,2) = 0 THEN
  6           INSERT INTO temp VALUES (i);
  7        ELSE
  8           INSERT INTO temp VALUES (i);
  9        END IF;
 10        x := x + 100;
 11     END LOOP;
 12     COMMIT;
 13  END;
 14  /
         INSERT INTO temp VALUES (i);
                     *
ERROR at line 6:
ORA-06550: line 6, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 6, column 10:
PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 22:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 8, column 10:
PL/SQL: SQL Statement ignored


hawk:(SYSTEM@hawk):PRIMARY> desc temp;
ERROR:
ORA-04043: object temp does not exist


hawk:(SYSTEM@hawk):PRIMARY>

Why not create a stored procedure or package?

hawk:(SYSTEM@hawk):PRIMARY> @z.sql
hawk:(SYSTEM@hawk):PRIMARY> create or replace procedure SP_TESTING
  2  AS
  3  x NUMBER := 100;
  4  BEGIN
  5     FOR i IN 1..10 LOOP
  6        IF MOD(i,2) = 0 THEN
  7           INSERT INTO temp VALUES (i);
  8        ELSE
  9           INSERT INTO temp VALUES (i);
 10        END IF;
 11        x := x + 100;
 12     END LOOP;
 13     COMMIT;
 14  END;
 15  /

Warning: Procedure created with compilation errors.

hawk:(SYSTEM@hawk):PRIMARY> show error
Errors for PROCEDURE SP_TESTING:

LINE/COL ERROR
-------- -----------------------------------------------------------------
7/10     PL/SQL: SQL Statement ignored
7/22     PL/SQL: ORA-00942: table or view does not exist
9/10     PL/SQL: SQL Statement ignored
9/22     PL/SQL: ORA-00942: table or view does not exist

hawk:(SYSTEM@hawk):PRIMARY> create table temp(id int);

Table created.

hawk:(SYSTEM@hawk):PRIMARY> alter procedure SP_TESTING compile;

Procedure altered.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> @a.sql
hawk:(SYSTEM@hawk):PRIMARY> BEGIN
  2  DBMS_SCHEDULER.CREATE_PROGRAM(
  3  program_name=>'TESTING2',
  4  program_action=>'BEGIN SP_TESTING; END;',
  5  program_type=>'PLSQL_BLOCK',
  6  number_of_arguments=>0
  7  );
  8  END;
  9  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> show error
No errors.
hawk:(SYSTEM@hawk):PRIMARY> BEGIN SP_TESTING; END;
  2  /

PL/SQL procedure successfully completed.

hawk:(SYSTEM@hawk):PRIMARY> select * from temp;

        ID
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

hawk:(SYSTEM@hawk):PRIMARY>

October 22, 2016

Oracle Health Check

Filed under: 11g,oracle,RAC — mdinh @ 12:44 pm

Currently, I am working on a health check for ODA and find there are too many tools with disparate information.

I am sure there are more than the ones listed below, but I stopped searching.

ODA Oracle Database Appliance orachk Healthcheck (Doc ID 2126926.1)
Multiplexing Redolog and Control File on ODA (Doc ID 2086289.1)

ORAchk – Health Checks for the Oracle Stack (Doc ID 1268927.2)
How to Perform a Health Check on the Database (Doc ID 122669.1)
Health Monitor (Doc ID 466920.1)

Oracle Configuration Manager Quick Start Guide (Doc ID 728988.5)
Pre-12+ OCM Collectors to Be Decommissioned Summer of 2015 (Doc ID 1986521.1)

cluvfy comp healthcheck

One example found: ORAchk will report if fewer than 3 SCANs are configured, while cluvfy comp healthcheck (11.2) does not.

Interesting side track: < 3 not escaped is ❤

Complete cluvfy comp healthcheck results, plus how to create the database user CVUSYS (WARNING: ~1600 lines).

Some failures from cluvfy comp healthcheck.

******************************************************************************************
Database recommendation checks for "emu"
******************************************************************************************

Verification Check        :  DB Log Mode
Verification Description  :  Checks the database log archiving mode
Verification Result       :  NOT MET
Verification Summary      :  Check for DB Log Mode failed
Additional Details        :  If the database is in log archiving mode, then it is
                             always desirable and advisable to upgrade the database in
                             noarchivelog mode as that will reduce the time taken to
                             upgrade the database. After the upgrade, the database can
                             be reverted to the archivelog mode.
References (URLs/Notes)   :  https://support.oracle.com/CSP/main/article?cmd=show&type=N
                             OT&id=429825.1

Database(Instance)  Status    Expected Value                Actual Value
------------------------------------------------------------------------------------------

emu                 FAILED    db_log_mode = NOARCHIVELOG    db_log_mode = ARCHIVELOG

__________________________________________________________________________________________

Database(Instance)  Error details
------------------------------------------------------------------------------------------

emu                 Error - NOARCHIVELOG mode is recommended when upgrading
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
__________________________________________________________________________________________

Verification Check        :  Users Granted CONNECT Role
Verification Description  :  Checks for the presence of any users with CONNECT role
Verification Result       :  NOT MET
Verification Summary      :  Check for Users Granted CONNECT Role failed

Database(Instance)  Status    Expected Value                Actual Value
------------------------------------------------------------------------------------------

emu                 FAILED    connect_role_grantees = 0     connect_role_grantees = 5

__________________________________________________________________________________________

Database(Instance)  Error details
------------------------------------------------------------------------------------------

emu                 Error - CONNECT role granted users found
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
__________________________________________________________________________________________

Does Oracle itself need a health check?

October 11, 2016

OTN Appreciation Day : Create Database Using SQL | Thinking Out Loud Blog

Filed under: 11g,12c,oracle — mdinh @ 12:30 pm

Do you ever wonder how to get all the parameters for the CREATE DATABASE statement?

I will be sharing some of the reverse engineering done to create a duplicate copy of the database.

Some of you may be thinking, “Why not just duplicate the database, or backup and restore?”

For the project I was working on, this was not feasible since Extended Data Types (a 12c new feature) were enabled, and there is no going back.

Restoring database from backup would result in too much data loss.

This leaves only one option: create a new database with max_string_size=standard and perform a full export/import.

From backup controlfile to trace:

SYS@DB1> alter database backup controlfile to trace as '/tmp/cf_@.sql' reuse resetlogs;
Database altered.

SYS@DB1>

From /tmp/cf_DB1.sql:

$ ll /tmp/cf_DB1.sql
-rw-r--r--. 1 oracle oinstall 2955 Oct 11 04:45 /tmp/cf_DB1.sql

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "DB1" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 4
    MAXDATAFILES 400
    MAXINSTANCES 1
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/oradata/DB1A/onlinelog/o1_mf_1_czl4h9sg_.log'  SIZE 200M BLOCKSIZE 512,
  GROUP 2 '/oradata/DB1A/onlinelog/o1_mf_2_czl4h9yr_.log'  SIZE 200M BLOCKSIZE 512,
  GROUP 3 '/oradata/DB1A/onlinelog/o1_mf_3_czl4hbdb_.log'  SIZE 200M BLOCKSIZE 512

From DBA_REGISTRY:

DB1:(SYS@DB1):PRIMARY> select name,comp_id,comp_name,version,schema,status
  2  from v$database,dba_registry
  3  order by 2
  4  ;

NAME       COMP_ID      COMP_NAME                                VERSION    SCHEMA     STATUS
---------- ------------ ---------------------------------------- ---------- ---------- --------
DB1        CATALOG      Oracle Database Catalog Views            12.1.0.2.0 SYS        VALID
DB1        CATJAVA      Oracle Database Java Packages            12.1.0.2.0 SYS        VALID
DB1        CATPROC      Oracle Database Packages and Types       12.1.0.2.0 SYS        VALID
DB1        JAVAVM       JServer JAVA Virtual Machine             12.1.0.2.0 SYS        VALID
DB1        XDB          Oracle XML Database                      12.1.0.2.0 XDB        VALID
DB1        XML          Oracle XDK                               12.1.0.2.0 SYS        VALID

6 rows selected.

DB1:(SYS@DB1):PRIMARY>

From DATABASE_PROPERTIES:

DB1:(SYS@DB1):PRIMARY> select property_name,property_value from DATABASE_PROPERTIES;

PROPERTY_NAME                            PROPERTY_VALUE
---------------------------------------- ----------------------------------------
DICT.BASE                                2
DEFAULT_TEMP_TABLESPACE                  TEMP
DEFAULT_PERMANENT_TABLESPACE             USERS
DEFAULT_EDITION                          ORA$BASE
Flashback Timestamp TimeZone             GMT
TDE_MASTER_KEY_ID
EXPORT_VIEWS_VERSION                     8
DEFAULT_TBS_TYPE                         SMALLFILE
GLOBAL_DB_NAME                           DB1
NLS_RDBMS_VERSION                        12.1.0.2.0
NLS_NCHAR_CHARACTERSET                   AL16UTF16
NLS_NCHAR_CONV_EXCP                      FALSE
NLS_LENGTH_SEMANTICS                     BYTE
NLS_COMP                                 BINARY
NLS_DUAL_CURRENCY                        $
NLS_TIMESTAMP_TZ_FORMAT                  DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_TZ_FORMAT                       HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_FORMAT                     DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_FORMAT                          HH.MI.SSXFF AM
NLS_SORT                                 BINARY
NLS_DATE_LANGUAGE                        AMERICAN
NLS_DATE_FORMAT                          DD-MON-RR
NLS_CALENDAR                             GREGORIAN
NLS_CHARACTERSET                         AL32UTF8
NLS_NUMERIC_CHARACTERS                   .,
NLS_ISO_CURRENCY                         AMERICA
NLS_CURRENCY                             $
NLS_TERRITORY                            AMERICA
NLS_LANGUAGE                             AMERICAN
DST_SECONDARY_TT_VERSION                 0
DST_PRIMARY_TT_VERSION                   18
DST_UPGRADE_STATE                        NONE
MAX_STRING_SIZE                          STANDARD
DBTIMEZONE                               US/Mountain
WORKLOAD_CAPTURE_MODE
WORKLOAD_REPLAY_MODE
NO_USERID_VERIFIER_SALT                  88C7FDB8D44CA60E05624A08A177722C

37 rows selected.

DB1:(SYS@DB1):PRIMARY>

From V$DATABASE:

DB1:(SYS@DB1):PRIMARY> r
  1  select log_mode,flashback_on,force_logging,created
  2  from V$DATABASE
  3*

LOG_MODE     FLASHBACK_ON       FORCE_LOGGING                           CREATED
------------ ------------------ --------------------------------------- -------------------
ARCHIVELOG   NO                 NO                                      2016-10-08 08:34:02

DB1:(SYS@DB1):PRIMARY>

From V$BLOCK_CHANGE_TRACKING:

DB1:(SYS@DB1):PRIMARY> r
  1  select status, filename
  2  from V$BLOCK_CHANGE_TRACKING
  3*

STATUS     FILENAME
---------- --------------------
DISABLED

DB1:(SYS@DB1):PRIMARY>

From DBA_SCHEDULER_GLOBAL_ATTRIBUTE:

DB1:(SYS@DB1):PRIMARY> r
  1  select *
  2  from DBA_SCHEDULER_GLOBAL_ATTRIBUTE
  3  order by 1
  4*

ATTRIBUTE_NAME                           VALUE
---------------------------------------- ----------------------------------------
CURRENT_OPEN_WINDOW
DEFAULT_TIMEZONE                         US/Mountain
EMAIL_SENDER
EMAIL_SERVER
EMAIL_SERVER_CREDENTIAL
EMAIL_SERVER_ENCRYPTION                  NONE
EVENT_EXPIRY_TIME
FILE_WATCHER_COUNT                       0
LAST_OBSERVED_EVENT
LOG_HISTORY                              30
MAX_JOB_SLAVE_PROCESSES

11 rows selected.

DB1:(SYS@DB1):PRIMARY>

The finished SQL: crdb.sql

spool crdbp.log
set echo on timing on time on
host echo $ORACLE_SID
host sysresv
create spfile from pfile;
startup force nomount;
CREATE DATABASE
MAXINSTANCES 1
MAXLOGFILES 32
MAXLOGMEMBERS 4
MAXLOGHISTORY 292
MAXDATAFILES 400
ARCHIVELOG
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
SET TIME_ZONE='US/Mountain'
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
EXTENT MANAGEMENT LOCAL
DATAFILE SIZE 513M AUTOEXTEND ON NEXT 128M MAXSIZE 8193M
SYSAUX DATAFILE SIZE 257M AUTOEXTEND ON NEXT 128M MAXSIZE 8193M
LOGFILE GROUP 1 SIZE 200M,GROUP 2 SIZE 200M,GROUP 3 SIZE 200M
DEFAULT TEMPORARY TABLESPACE temp TEMPFILE SIZE 256M AUTOEXTEND ON NEXT 128M MAXSIZE 8192M
DEFAULT TABLESPACE users DATAFILE SIZE 129M AUTOEXTEND ON NEXT 128M MAXSIZE 8193M
UNDO TABLESPACE undotbs1 DATAFILE SIZE 256M AUTOEXTEND ON NEXT 128M MAXSIZE 8192M;
spool off
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/javavm/install/initjvm.sql
@?/xdk/admin/initxml.sql
@?/xdk/admin/xmlja.sql
@?/rdbms/admin/catjava.sql
@?/rdbms/admin/catxdbj.sql
@?/rdbms/admin/utlrp.sql
exec dbms_scheduler.set_scheduler_attribute(attribute=>'default_timezone',value=>'US/Mountain');

-- alter system set nls_length_semantics=CHAR scope=both sid='*';
-- alter database flashback on;
-- alter database FORCE LOGGING;
-- alter database enable block change tracking;

connect system/oracle
@?/sqlplus/admin/pupbld.sql
exit
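Not part of the original script, but a small sketch of how the spool log could be sanity-checked before moving on; the log name crdbp.log comes from the spool command above, and the ORA-/SP2- prefixes are the usual SQL*Plus error patterns:

```shell
#!/bin/sh
# Scan a SQL*Plus spool log for ORA- / SP2- errors; return non-zero if any found.
# Log name (crdbp.log) matches the spool command in the script above.
check_log() {
    if grep -E 'ORA-[0-9]+|SP2-[0-9]+' "$1"; then
        echo "errors found in $1"
        return 1
    fi
    echo "no errors in $1"
}
```

Usage would be something like `check_log crdbp.log || exit 1` after the create-database run.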

October 9, 2016

cluvfy is your friend

Filed under: RAC — mdinh @ 11:54 pm

Just a reminder to self to use cluvfy.

olsnodes -i -n -s -t
grep 'master node' $CRS_HOME/log/`hostname -s`/cssd/ocssd.*|tail -1

cluvfy stage -pre help
cluvfy stage -post  help

++++++++++


[grid@rac01:+ASM1:/home/grid]
$ olsnodes -i -n -s -t
rac01   1       rac01-vip       Active  Unpinned
rac02   2       rac02-vip       Active  Unpinned

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ env|grep HOME
CRS_HOME=/u01/app/11.2.0.4/grid
HOME=/home/grid
XAG_HOME=/u01/app/grid/xag
ORACLE_HOME=/u01/app/11.2.0.4/grid

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ grep 'master node' $CRS_HOME/log/`hostname -s`/cssd/ocssd.*|tail -1
/u01/app/11.2.0.4/grid/log/rac01/cssd/ocssd.log:2016-10-09 10:48:55.837: 
[    CSSD][28161792]clssgmCMReconfig: reconfiguration successful, 
incarnation 371471500 with 2 nodes, local node number 1, master node number 1
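The master node number can be pulled out of that log line directly; a small sketch (the message format follows the ocssd.log line above):

```shell
#!/bin/sh
# Extract the master node number from the last ocssd reconfiguration message.
# $1 = path to ocssd.log
get_master_node() {
    grep 'master node' "$1" | tail -1 |
        sed -n 's/.*master node number \([0-9][0-9]*\).*/\1/p'
}
```

For example, `get_master_node $CRS_HOME/log/$(hostname -s)/cssd/ocssd.log` would print just the node number.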

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -pre help

ERROR:
Unexpected symbol "help". See usage for detail.

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

SYNTAX (for Stages):
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -pre
                   crsinst -file <config_file> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   crsinst -upgrade [-n <node_list>] [-rolling] -src_crshome <src_crs_home> -dest_crshome <dest_crs_home>
                           -dest_version <dest_version> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   crsinst -n <node_list> [-r {10gR1|10gR2|11gR1|11gR2}]
                           [-c <ocr_location_list>] [-q <voting_disk_list>]
                           [-osdba <osdba_group>] [-orainv <orainventory_group>]
                           [-asm [-asmgrp <asmadmin_group>] [-asmdev <asm_device_list>]] [-crshome <crs_home>]
                           [-fixup [-fixupdir <fixup_dir>]] [-networks <network_list>]
                           [-verbose]
cluvfy stage -pre acfscfg -n <node_list> [-asmdev <asm_device_list>] [-verbose]
cluvfy stage -pre
                   dbinst -n <node_list> [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba <osdba_group>] [-d <oracle_home>]
                          [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   dbinst -upgrade -src_dbhome <src_db_home> [-dbname <db_name_list>] -dest_dbhome <dest_db_home> -dest_version <dest_version>
                          [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre hacfg [-osdba <osdba_group>] [-orainv <orainventory_group>] [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre nodeadd -n <node_list> [-vip <vip_list>] [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -post hwos -n <node_list> [-s <storageID_list>] [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -post crsinst -n <node_list> [-verbose]
cluvfy stage -post acfscfg -n <node_list> [-verbose]
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post nodeadd -n <node_list> [-verbose]
cluvfy stage -post nodedel -n <node_list> [-verbose]

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -post  help

ERROR:
Unexpected symbol "help". See usage for detail.

USAGE:
(same usage output as the previous command, trimmed here)

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -pre crsinst -n rac01,rac02 -fixup

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac02:/u01/app/11.2.0.4/grid,rac02:/tmp"
Free disk space check passed for "rac01:/u01/app/11.2.0.4/grid,rac01:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check failed
Check failed on nodes:
        rac02,rac01
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Starting check for Reverse path filter setting ...

Check for Reverse path filter setting passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ umask
0002
[grid@rac01:+ASM1:/home/grid]
$ ssh rac02 "umask"
0022
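The failed "default user file creation mask" check above comes down to exactly this umask mismatch between the nodes. A sketch of the comparison; in practice each observed value would be captured with ssh "$node" umask, which is assumed here, and 0022 is the value Oracle installs expect:

```shell
#!/bin/sh
# Compare observed umask values (one per node) against an expected value.
# In practice, capture each value with: ssh "$node" umask
check_umask() {
    expected=$1; shift
    rc=0
    for observed in "$@"; do
        if [ "$observed" != "$expected" ]; then
            echo "MISMATCH: got $observed, expected $expected"
            rc=1
        fi
    done
    return $rc
}

# Demo with the values seen on rac01 and rac02 above:
check_umask 0022 0022 0002 || echo "umask check failed"
```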

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -post hwos -n rac01,rac02

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed

Checking shared storage accessibility...

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sde                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdd                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdg                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdh                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdi                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdf                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdb                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdc                              rac02 rac01

  ACFS                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /acfsmount                            rac02 rac01


Shared storage check was successful on nodes "rac02,rac01"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Post-check for hardware and operating system setup was successful.
[grid@rac01:+ASM1:/home/grid]
$

October 7, 2016

Troubleshooting Goldengate Memory Usage

Filed under: GoldenGate — mdinh @ 4:35 pm

I was paged because a GoldenGate process had reached an RSS of 4,507,248 KB.

Here are a few ways to check memory usage.

Initially, I used ps aux --sort -rss|head -5, and the GoldenGate process was at the top.

After the extract was restarted, I used ps ax -o rss,user,command | grep [e]xtract to determine GoldenGate's memory usage.

Before: 4,763,432 KB
After:     31,528 KB
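The paging check can be scripted along the same lines; in this sketch the pattern [e]xtract and the 4,000,000 KB threshold are illustrative assumptions, not the monitoring tool's actual configuration:

```shell
#!/bin/sh
# Sum RSS (in KB) of processes matching a pattern and compare against a threshold.
rss_kb_for() {
    ps ax -o rss=,command= | awk -v pat="$1" '$0 ~ pat { sum += $1 } END { print sum + 0 }'
}

THRESHOLD_KB=4000000   # assumed alert threshold
usage=$(rss_kb_for '[e]xtract')
if [ "$usage" -gt "$THRESHOLD_KB" ]; then
    echo "ALERT: extract RSS ${usage} KB exceeds ${THRESHOLD_KB} KB"
else
    echo "OK: extract RSS ${usage} KB"
fi
```

The [e]xtract bracket trick keeps the grep-style pattern from matching its own command line.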

++++++++++++++++++++++++++++++

$ ps aux --sort -rss|head -5
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
ggsuser  59102 13.2  0.8 4763432 4461212 ?     Ss   Jun19 20905:40 $GG_HOME/extract PARAMFILE $GG_HOME/e_test.prm REPORTFILE $GG_HOME/dirrpt/E_TEST.rpt PROCESSID E_TEST USESUBDIRS

++++++++++++++++++++++++++++++

$ ps ax -o rss,user,command | grep [e]xtract
31528 ggsuser  $GG_HOME/extract PARAMFILE $GG_HOME/dirprm/e_test.prm REPORTFILE $GG_HOME/dirrpt/E_TEST.rpt PROCESSID E_TEST USESUBDIRS

 

Attempting to stop the extract during long-running transactions resulted in a warning advising the use of SEND EXTRACT E_TEST, FORCESTOP.

I don’t really like using FORCESTOP or KILL, since it forces a recovery on restart, and there is no knowing how long that will take.

 

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 10 on Sep 17 2009 23:52:18

Copyright (C) 1995, 2009, Oracle and/or its affiliates.  All rights reserved.



GGSCI (localhost) 1> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                          
EXTRACT     RUNNING     E_TEST      00:00:00      00:00:04   

GGSCI (localhost) 2> lag e*

Sending GETLAG request to EXTRACT E_TEST ...
Last record lag: 0 seconds.
At EOF, no more records to process.


GGSCI (localhost) 3> send e* status

Sending STATUS request to EXTRACT E_TEST ...


  EXTRACT E_TEST (PID 59102)
  Current status: Recovery complete: At EOF
  Sequence #: 142474
  RBA: 1291502592
  Timestamp: 2016-10-06 15:47:38.000000

  Output trail #1
  Current write position:
  Sequence #: 27484
  RBA: 20092406
  Timestamp: 2016-10-06 15:47:29.717308
  Extract Trail: ./dirdat/aa



GGSCI (localhost) 4> stop e*

Sending STOP request to EXTRACT E_TEST ...

There are open, long-running transactions. 
Before you stop Extract, make the archives containing data for those transactions available for when Extract restarts. 
To force Extract to stop, use the SEND EXTRACT E_TEST, FORCESTOP command.

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 142472, RBA 542064656.


GGSCI (localhost) 5> send extract e*, showtrans duration 10 MIN

Sending showtrans request to EXTRACT E_TEST ...


Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 142472, RBA 542064656

------------------------------------------------------------
XID:                  57.29.20520692
Items:                122554  
Extract:              E_TEST   
Redo Thread:          1     
Start Time:           2016-10-06:15:32:26 
SCN:                  143.1016911815 (615197235143)    
Redo Seq:             142472
Redo RBA:             542064656          
Status:               Running            


------------------------------------------------------------
XID:                  64.35.9949688
Items:                131751  
Extract:              E_TEST   
Redo Thread:          1     
Start Time:           2016-10-06:15:33:29 
SCN:                  143.1017267252 (615197590580)    
Redo Seq:             142472
Redo RBA:             1450478608         
Status:               Running            


GGSCI (localhost) 6> send extract e*, showtrans duration 20 MIN

Sending showtrans request to EXTRACT E_TEST ...
No transactions found

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 142472, RBA 542064656.


GGSCI (localhost) 7> send extract e*, showtrans duration 15 MIN

Sending showtrans request to EXTRACT E_TEST ...


Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 142472, RBA 542064656

------------------------------------------------------------
XID:                  57.29.20520692
Items:                123453  
Extract:              E_TEST   
Redo Thread:          1     
Start Time:           2016-10-06:15:32:26 
SCN:                  143.1016911815 (615197235143)    
Redo Seq:             142472
Redo RBA:             542064656          
Status:               Running            


------------------------------------------------------------
XID:                  64.35.9949688
Items:                133438  
Extract:              E_TEST   
Redo Thread:          1     
Start Time:           2016-10-06:15:33:29 
SCN:                  143.1017267252 (615197590580)    
Redo Seq:             142472
Redo RBA:             1450478608         
Status:               Running            


GGSCI (localhost) 8> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                          
EXTRACT     RUNNING     E_TEST      00:00:00      00:00:06   

GGSCI (localhost) 9>
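When several open transactions show up, the SHOWTRANS output above can be condensed to one line per transaction; a sketch using the field labels from that output (pipe the GGSCI output through it):

```shell
#!/bin/sh
# Summarize GGSCI SHOWTRANS output: one line per transaction (XID, items, start time).
parse_showtrans() {
    awk '
        /^XID:/        { xid = $2 }
        /^Items:/      { items = $2 }
        /^Start Time:/ { print xid, items, $3 }
    '
}
```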

 

I waited a little bit and tried to stop the extract again, and this time it succeeded.

 

GGSCI (localhost) 9> send extract e*, showtrans duration 15 MIN

Sending showtrans request to EXTRACT E_TEST ...
No transactions found

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 142472, RBA 542064656.

-- In hindsight, I should have run "send extract e*, showtrans duration 1 MIN" before stopping the extract!
GGSCI (localhost) 10> stop e*

Sending STOP request to EXTRACT E_TEST ...
Request processed.


GGSCI (localhost) 11> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     STOPPED     E_TEST      00:00:00      00:00:03    

GGSCI (localhost) 12> exit

++++++++++++++++++++++++++++++

With the extract restarted, the top RSS memory consumer is now the Cloud Control agent.

$ ps aux --sort -rss|head -5
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
oracle   89345  0.5  0.0 1055812 441812 ?      Sl   Jun10 854:16 /u01/app/oracle/product/12.1/cc_agent/core/12.1.0.3.0/jdk/bin/java 

++++++++++++++++++++++++++++++

GGSCI (localhost) 1> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     STOPPED     E_TEST      00:00:00      00:00:28    

GGSCI (localhost) 2> start e*

Sending START request to MANAGER ...
EXTRACT E_TEST starting


GGSCI (localhost) 3> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     E_TEST      00:00:35      00:00:01    

GGSCI (localhost) 4> lag e*

Sending GETLAG request to EXTRACT E_TEST ...
No records yet processed.


GGSCI (localhost) 5> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     E_TEST      00:00:45      00:00:01    

GGSCI (localhost) 6> send e* status

Sending STATUS request to EXTRACT E_TEST ...

-- Note Sequence 142475 is much better than 142472. Also, how large is REDO?

  EXTRACT E_TEST (PID 82213)
  Current status: In recovery[1]: Reading from data source
  Sequence #: 142475
  RBA: 594610192
  Timestamp: 2016-10-06 15:53:00.000000

  Output trail #1
  Current write position:
  Sequence #: 27485
  RBA: 1033
  Timestamp: 2016-10-06 15:55:22.043495
  Extract Trail: ./dirdat/aa

GGSCI (localhost) 7> exit

++++++++++++++++++++++++++++++

$ ps ax -o rss,user,command | grep [e]xtract
31220 ggsuser  $GG_HOME/extract PARAMFILE $GG_HOME/dirprm/e_test.prm REPORTFILE $GG_HOME/dirrpt/E_TEST.rpt PROCESSID E_TEST USESUBDIRS

++++++++++++++++++++++++++++++

GGSCI (localhost) 1> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     E_TEST      00:00:00      00:00:03    

GGSCI (localhost) 2> send e* status

Sending STATUS request to EXTRACT E_TEST ...


  EXTRACT E_TEST (PID 82213)
  Current status: Recovery complete: At EOF
  Sequence #: 142475
  RBA: 3804984832
  Timestamp: 2016-10-06 16:00:44.000000

  Output trail #1
  Current write position:
  Sequence #: 27485
  RBA: 163836918
  Timestamp: 2016-10-06 16:00:42.310056
  Extract Trail: ./dirdat/aa

GGSCI (localhost) 3> lag e*

Sending GETLAG request to EXTRACT E_TEST ...
Last record lag: 0 seconds.
At EOF, no more records to process.


GGSCI (localhost) 4> exit

++++++++++++++++++++++++++++++

$ ps ax -o rss,user,command | grep [e]xtract
31528 ggsuser  $GG_HOME/extract PARAMFILE $GG_HOME/dirprm/e_test.prm REPORTFILE $GG_HOME/dirrpt/E_TEST.rpt PROCESSID E_TEST USESUBDIRS

Set those Environment Variables

Filed under: oracle — mdinh @ 2:31 pm

I fail to understand why people do not set environment variables or even update /etc/oratab with, for example, AGENT_HOME.

It just makes things so much easier.

Allow me to demonstrate. I have just awakened Frankenstein and don’t remember how he was configured.

As you can see, the last time I used the system was supposedly back in Dec 2015.

I recall configuring XAG for the environment and was not sure where it was installed.

Having the environment variables already set makes it so much easier to find out.

Make it easier on yourself and others, set those environment variables.
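One concrete payoff of a current /etc/oratab: ORACLE_HOME can be derived per SID instead of remembered. A sketch of the lookup; the sid:home:startup_flag format is the standard oratab layout, and the SID and home paths below are just the ones from this post:

```shell
#!/bin/sh
# Look up the Oracle home for a given SID in an oratab-style file (sid:home:flag).
# $1 = SID, $2 = path to the oratab file
get_oracle_home() {
    awk -F: -v sid="$1" '!/^#/ && $1 == sid { print $2 }' "$2"
}
```

Usage would be along the lines of `export ORACLE_HOME=$(get_oracle_home DB1 /etc/oratab)`.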

[grid@rac01:+ASM1:/u01/app/grid/xag]
$ env|grep HOME
CRS_HOME=/u01/app/11.2.0.4/grid
HOME=/home/grid
XAG_HOME=/u01/app/grid/xag
ORACLE_HOME=/u01/app/11.2.0.4/grid

++++++++++++++++++++++++++++++

[gguser@rac01:/home/gguser]
$ env |grep HOME
GG_HOME=/acfsmount/ggs112/
HOME=/home/gguser
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1
[gguser@rac01:/home/gguser]
$ ps -ef|grep gg

++++++++++++++++++++++++++++++

[grid@rac01:+ASM1:/home/grid]
$ date
Fri Oct  7 07:18:23 PDT 2016
[grid@rac01:+ASM1:/home/grid]
$ last shutdown

wtmp begins Sat Nov 29 22:14:25 2014
[grid@rac01:+ASM1:/home/grid]
$ last -x reboot
reboot   system boot  2.6.39-400.17.1. Fri Oct  7 06:49 - 07:18  (00:29)
reboot   system boot  2.6.39-400.17.1. Fri Dec 18 06:11 - 15:34  (09:23)
reboot   system boot  2.6.39-400.17.1. Tue Jan 13 19:39 - 19:45  (00:05)
reboot   system boot  2.6.39-400.17.1. Thu Dec 18 20:15 - 20:15  (00:00)
reboot   system boot  2.6.39-400.17.1. Thu Dec 18 19:22 - 20:15  (00:52)
reboot   system boot  2.6.39-400.17.1. Wed Dec 17 05:20 - 05:36  (00:16)
reboot   system boot  2.6.39-400.17.1. Tue Dec 16 19:18 - 19:18  (00:00)
reboot   system boot  2.6.39-400.17.1. Tue Dec 16 19:11 - 19:18  (00:06)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 12:57 - 20:18  (07:20)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 12:36 - 12:36  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 12:12 - 12:36  (00:24)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 12:11 - 12:11  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 12:03 - 12:11  (00:07)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 12:00 - 12:00  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 11:58 - 11:58  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 10:03 - 11:58  (01:55)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 09:38 - 09:55  (00:16)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 07:23 - 07:23  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 07:11 - 07:22  (00:11)
reboot   system boot  2.6.39-400.17.1. Sun Dec 14 07:10 - 07:10  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec 13 05:55 - 07:10 (1+01:15)
reboot   system boot  2.6.39-400.17.1. Sun Dec  7 21:38 - 21:38  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Dec  7 21:21 - 21:38  (00:17)
reboot   system boot  2.6.39-400.17.1. Sun Dec  7 21:09 - 21:16  (00:07)
reboot   system boot  2.6.39-400.17.1. Sun Dec  7 07:45 - 21:08  (13:22)
reboot   system boot  2.6.39-400.17.1. Sun Dec  7 05:16 - 05:16  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 22:12 - 05:16  (07:04)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 20:56 - 20:56  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 20:49 - 20:56  (00:07)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 20:48 - 20:48  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 20:09 - 20:48  (00:39)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 15:34 - 20:08  (04:34)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 15:04 - 15:29  (00:25)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 15:01 - 15:01  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 14:02 - 15:01  (00:59)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 13:59 - 13:59  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 13:53 - 13:59  (00:06)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 12:39 - 12:39  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Dec  6 08:46 - 12:39  (03:52)
reboot   system boot  2.6.39-400.17.1. Thu Dec  4 21:43 - 21:43  (00:00)
reboot   system boot  2.6.39-400.17.1. Thu Dec  4 17:35 - 21:43  (04:08)
reboot   system boot  2.6.39-400.17.1. Thu Dec  4 13:43 - 17:34  (03:51)
reboot   system boot  2.6.39-400.17.1. Wed Dec  3 20:07 - 21:53  (01:45)
reboot   system boot  2.6.39-400.17.1. Wed Dec  3 20:06 - 20:06  (00:00)
reboot   system boot  2.6.39-400.17.1. Wed Dec  3 19:07 - 20:06  (00:58)
reboot   system boot  2.6.39-400.17.1. Wed Dec  3 17:19 - 18:53  (01:34)
reboot   system boot  2.6.39-400.17.1. Mon Dec  1 20:56 - 21:27  (00:30)
reboot   system boot  2.6.39-400.17.1. Mon Dec  1 18:25 - 20:52  (02:27)
reboot   system boot  2.6.39-400.17.1. Sun Nov 30 12:14 - 20:28  (08:14)
reboot   system boot  2.6.39-400.17.1. Sun Nov 30 10:40 - 10:40  (00:00)
reboot   system boot  2.6.39-400.17.1. Sun Nov 30 08:41 - 10:40  (01:59)
reboot   system boot  2.6.39-400.17.1. Sun Nov 30 08:01 - 08:38  (00:36)
reboot   system boot  2.6.39-400.17.1. Sun Nov 30 07:02 - 07:42  (00:39)
reboot   system boot  2.6.39-400.17.1. Sat Nov 29 22:51 - 23:34  (00:43)
reboot   system boot  2.6.39-400.17.1. Sat Nov 29 22:17 - 22:17  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Nov 29 22:17 - 22:17  (00:00)
reboot   system boot  2.6.39-400.17.1. Sat Nov 29 22:14 - 22:16  (00:02)

wtmp begins Sat Nov 29 22:14:25 2014
[grid@rac01:+ASM1:/home/grid]
$
