I have upgraded my 12.1.0.1 Cloud Control to 12.1.0.2 and even if I thought it would be a straightforward process, it turned out to be quite painful…
First you need to download the 5-6 GB (!!) of Cloud Control 12.1.0.2, split into three files:
After a long download, SFTP and unzip process, I issued runInstaller and answered a few basic questions, but got stuck on:
Error:The OMS instance home provided is an NFS-mounted location which is not recommended. You can provide a local file system or choose to continue with the same location. However if you choose to continue with this location, then after the installation, ensure that you maintain the http lock file to a local file system and modify the location name in the httpd.conf file. For instructions, refer to the Basic Installation Guide
And you cannot move to the next screen. As NFS mount points are quite common nowadays, I was a bit puzzled by the error message…
To solve it we need patch 14145094. But even the download of this patch is problematic: the one for my platform (Linux x86-64, 12.1.0.2) is 965 bytes in size and contains a useless XML file. I had to download the one for the Generic platform (800.1 KB in size) and patch my installation directory (so stop and rerun the installer).
When relaunching everything, the error message is now a simple warning:
Warning:The OMS instance home provided is an NFS-mounted location which is not recommended. You can provide a local file system or choose to continue with the same location. However if you choose to continue with this location, then after the installation, ensure that you maintain the http lock file to a local file system and modify the location name in the httpd.conf file. For instructions, refer to the Basic Installation Guide
The httpd.conf file will have to be modified later on…
And you can move to the next panel. The rest of the upgrade process went well…
During my 1-system upgrade I also had to specify a new Middleware home directory and was not able to reuse the existing one. So I moved from Middleware12 to Middleware12.1.0.2; the problem is that the agent stays in the previous Middleware directory and you cannot change it!
I tried to remove all targets from the agent to reinstall a new one: it is simply not possible… Quite annoying when you want a clean installation. I anyway deleted all directories from the old Middleware12, except the agent directory of course.
If you really want to make a clean installation, I finally found a tip. First you need to delete the agent, because installing a second management agent on your OMS is not really supported (at least Cloud Control drops a warning when you do it). Start by stopping the agent and then delete it with the commands below (deleting through the web interface simply does not work):
[oragc@server1 ~]$ emcli login -username=sysman
Enter password :
Login successful
[oragc@server1 ~]$ emcli delete_target -name="server1.domain.com:3872" -type="oracle_emd"
Error: This agent is currently monitoring other targets. Please delete the monitored targets before deleting the agent or use delete_monitored_targets option to delete all the monitored targets
[oragc@server1 ~]$ emcli delete_target -name="server1.domain.com:3872" -type="oracle_emd" -delete_monitored_targets
Target "server1.domain.com:3872:oracle_emd" deleted successfully
Then I did a bit of cleaning in my Oracle inventory (including empty Oracle Homes, to delete them):
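The exact commands I used are not shown here; as a sketch, a stale home can be removed from the central inventory with the installer's detachHome operation (the paths and home name below are assumptions from my layout, adapt them to yours):

```shell
# Hypothetical example: detach an old, now-empty Oracle Home from the
# central inventory. Paths and ORACLE_HOME_NAME are assumptions.
cd /u01/Middleware12/oms/oui/bin
./runInstaller -silent -detachHome \
    ORACLE_HOME=/u01/Middleware12/oms \
    ORACLE_HOME_NAME=oms12c1
```

Repeat for each empty home, then check inventory.xml no longer references it.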
Then deploy a new management agent on your OMS with the web interface, into the new Oracle home. A few targets will come with your agent; the only ones you will need to add manually are your Oracle Fusion Middleware component (adding it will automatically discover 11 new targets!), the repository database (emrep) and its listener.
I then obviously had to download all the 12.1.0.2 agent binaries for my platforms and the new plug-ins. But when trying to deploy the new plug-ins on the management server I got an error clearly asking to apply patch 14340329, a patch which contains a bit more than 10 one-off patches…
To change the lock file directory to a non-NFS one, the httpd.conf file to modify was in my case in $ORACLE_BASE/gc_inst/WebTierIH1/config/OHS/ohs1. If you do a search in your Cloud Control base directory you will also find an httpd.conf file in the $ORACLE_BASE/Oracle_WT/ohs/conf directory, but apparently it is not the correct one to modify. You have to stop everything, keep a copy of the original file, and in the mpm_prefork_module and mpm_worker_module module declarations change the line below to locate the lock file in a local directory:
This LockFile parameter specifies a directory location, not a filename, and you must create it prior to restarting your Oracle HTTP Server (OHS) component.
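As an illustration, the relevant httpd.conf sections look roughly like this after the change (the local path below is an example of mine, not a value from any Oracle documentation; adapt it to your server):

```apache
# Sketch of the modified sections; /u01/app/oracle/ohs_locks is an
# assumed local (non-NFS) directory that must exist before OHS restart.
<IfModule mpm_prefork_module>
    LockFile /u01/app/oracle/ohs_locks
</IfModule>

<IfModule mpm_worker_module>
    LockFile /u01/app/oracle/ohs_locks
</IfModule>
```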
One thing I have also noticed when you upgrade an agent with the GUI: the $AGENT_HOME/core/12.1.0.1.0 directory remains and uses a bit of space (around 780 MB); I don't know why the cleanup is not done automatically…
I also faced a boring issue when upgrading a few agents, with the same output as MOS note 1440045.1:
Executing command: /u01/oemagent12c/core/12.1.0.2.0/oui/bin/runInstaller -ignoreSysPrereqs -clone -forceClone -silent -waitForCompletion -nowait ORACLE_HOME=/u01/oemagent12c/core/12.1.0.2.0 AGENT_BASE_DIR=/u01/oemagent12c ORACLE_HOSTNAME=TNG1A013.corp.ads AGENT_BASE_DIR=/u01/oemagent12c OMS_HOST=tng9a060.corp.ads EM_UPLOAD_PORT=4900 AGENT_INSTANCE_HOME=/u01/oemagent12c/agent_inst b_doDiscovery=false b_startAgent=false b_forceInstCheck=true -noconfig ORACLE_HOME_NAME=agent12g1 -force AGENT_PORT=-1
Clone Action Logs Location:/u01/oraInventory/logs/cloneActions<timestamp>.log
ERROR: Agent Clone Failed
The cloneActions<timestamp>.log file is also not there. I tried to execute the command manually and it worked fine. After reading many documents I came to the conclusion that the issue was space in the /tmp directory. I increased it and all went fine…
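Before launching the agent upgrade it can help to verify the free space in /tmp up front; a minimal sketch (the 1 GB threshold is my own assumption, not an Oracle-documented requirement, so size it to your agent image):

```shell
#!/bin/sh
# Pre-check free space in /tmp before an agent clone/upgrade.
# REQUIRED_KB (~1 GB) is an assumed threshold, not an official figure.
REQUIRED_KB=1048576
# POSIX df: column 4 of the second line is available space in KB.
AVAIL_KB=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
    echo "WARNING: only ${AVAIL_KB} KB free in /tmp, agent clone may fail"
else
    echo "OK: ${AVAIL_KB} KB free in /tmp"
fi
```

Run it on each target host before pushing the upgrade from the console.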
If you really don't want to change its size (or simply can't), you can still change %emd_emstagedir%/EMStage in Additional Inputs before upgrading the agent and provide an alternate location with more free space.
On the Cloud Control summary page you may see Job Purge in abort state, while looking in the DBA_SCHEDULER_JOBS view you see the job correctly executed. This is a fake alert that can be solved with the forum reference below.
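To cross-check, you can query the repository database directly; a simple example run as a DBA account (the LIKE filter on the job name is my assumption to catch the purge job whatever its exact name):

```sql
-- Cross-check in the repository database: the scheduler view shows the
-- real outcome even when the console reports Job Purge as aborted.
SELECT job_name, state, last_start_date, last_run_duration
FROM   dba_scheduler_jobs
WHERE  job_name LIKE '%PURGE%';
```

A STATE of SCHEDULED with a recent LAST_START_DATE confirms the console alert is spurious.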
I also had to fight with increased redo log generation on my repository database; it is also a bug (see references). I applied the patch and it seems to be a bit better…
- How To Install 12c PS1 on NFS Mount point [ID 1499148.1]
- 12c: Agent Base Directory Location After 12.1.0.1 to 12.1.0.2 OMS upgrade [ID 1496371.1]
- EM 12c: Deploying the Enterprise Manager Cloud Control 12c Agent with the Cloning Method Fails with Message: ERROR: Agent Clone Failed. [ID 1440045.1]
- EM12cR2 – “Job Purge” repository scheduler job falsely(?) reported down
- INCREASED REDO LOGGING ON REPOSITORY DB AFTER UPGRADING TO OEM 12.1.0.2 [ID 1502370.1]