
RHEL7.3如何安装oracle12c单点grid和rdbms

This post shares how to install Oracle 12c single-instance grid and rdbms on RHEL 7.3. Many people are not yet familiar with the process, so the full steps are written up below for reference; hopefully you will get something useful out of it.


##Install the desktop environment

yum -y groupinstall "Server with GUI"

#Start the desktop

startx

#Set the default boot target

[root@localhost ~]# systemctl get-default

multi-user.target

[root@localhost ~]# cat /etc/inittab

# inittab is no longer used when using systemd.

#

# ADDING CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.

#

# Ctrl-Alt-Delete is handled by /usr/lib/systemd/system/ctrl-alt-del.target

#

# systemd uses 'targets' instead of runlevels. By default, there are two main targets:

#

# multi-user.target: analogous to runlevel 3    # multi-user (text) mode

# graphical.target: analogous to runlevel 5     # graphical mode

#

# To view current default target, run:

# systemctl get-default

#

# To set a default target, run:

# systemctl set-default TARGET.target

#

[root@localhost ~]# systemctl set-default graphical.target   

Removed symlink /etc/systemd/system/default.target.

Created symlink from /etc/systemd/system/default.target to /usr/lib/systemd/system/graphical.target.

[root@localhost ~]# systemctl get-default

graphical.target

##Create users and groups

groupadd -g 500 oinstall

groupadd -g 501 dba

groupadd -g 502 oper

groupadd -g 600 asmadmin

groupadd -g 601 asmdba

groupadd -g 602 asmoper

useradd -u 1000 -g oinstall -G dba,oper,asmdba oracle

useradd -u 1001 -g oinstall -G dba,asmdba,asmadmin,asmoper grid
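
A quick sanity check (optional, but cheap) confirms that the two install owners picked up the intended primary and supplementary groups; login passwords for both accounts would normally be set at this point as well:

#verify primary group (oinstall) and supplementary groups
id grid
id oracle

#set passwords so you can log in / su as these users later
passwd grid
passwd oracle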

##packages for linux 7

rpm -qa | grep binutils-2.23.52.0.1-12.el7.x86_64

rpm -qa | grep compat-libcap1-1.10-3.el7.x86_64

rpm -qa | grep compat-libstdc++-33-3.2.3-71.el7.i686

rpm -qa | grep compat-libstdc++-33-3.2.3-71.el7.x86_64

rpm -qa | grep gcc-4.8.2-3.el7.x86_64

rpm -qa | grep gcc-c++-4.8.2-3.el7.x86_64

rpm -qa | grep glibc-2.17-36.el7.i686

rpm -qa | grep glibc-2.17-36.el7.x86_64

rpm -qa | grep glibc-devel-2.17-36.el7.i686

rpm -qa | grep glibc-devel-2.17-36.el7.x86_64

rpm -qa | grep ksh

rpm -qa | grep libaio-0.3.109-9.el7.i686

rpm -qa | grep libaio-0.3.109-9.el7.x86_64

rpm -qa | grep libaio-devel-0.3.109-9.el7.i686

rpm -qa | grep libaio-devel-0.3.109-9.el7.x86_64

rpm -qa | grep libgcc-4.8.2-3.el7.i686

rpm -qa | grep libgcc-4.8.2-3.el7.x86_64

rpm -qa | grep libstdc++-4.8.2-3.el7.i686

rpm -qa | grep libstdc++-4.8.2-3.el7.x86_64

rpm -qa | grep libstdc++-devel-4.8.2-3.el7.i686

rpm -qa | grep libstdc++-devel-4.8.2-3.el7.x86_64

rpm -qa | grep libXi-1.7.2-1.el7.i686

rpm -qa | grep libXi-1.7.2-1.el7.x86_64

rpm -qa | grep libXtst-1.2.2-1.el7.i686

rpm -qa | grep libXtst-1.2.2-1.el7.x86_64

rpm -qa | grep make-3.82-19.el7.x86_64

rpm -qa | grep sysstat-10.1.5-1.el7.x86_64

yum -y install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33*.i686 gcc gcc-c++ glibc glibc*.i686 glibc-devel glibc-devel*.i686 ksh libaio libaio-*.i686 libaio-devel libaio-devel*.i686 libgcc libgcc*.i686 libstdc++ libstdc++*.i686 libstdc++-devel libstdc++-devel*.i686 libXi libXi*.i686 libXtst libXtst*.i686 make sysstat unixODBC unixODBC-devel unixODBC*.i686

yum -y localinstall compat-libstdc++-33-3.2.3-72.el7.*   #downloaded separately (not in the standard repos)
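
After the yum run, a small loop over the required package names (taken from the list above) makes it easy to spot anything still missing; this is just a convenience check, not an Oracle-supplied tool:

for p in binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libXi libXtst make sysstat unixODBC unixODBC-devel; do
  rpm -q $p >/dev/null 2>&1 || echo "MISSING: $p"
done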

##vi /etc/sysctl.conf

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 536870912

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

/sbin/sysctl -p

##Oracle's explanation of the shmmax setting from the official documentation (Doc ID 567506.1); I usually just go with the current values shown by /sbin/sysctl -a | grep shm

#Oracle Global Customer Support officially recommends a "maximum" for SHMMAX of "1/2 of physical RAM".

#The maximum size of a shared memory segment is limited by the size of the available user address space. On 64-bit systems, this is a theoretical 2^64 bytes. So the "theoretical limit" for SHMMAX is the amount of physical RAM that you have.  However, to actually attempt to use such a value could potentially lead to a situation where no system memory is available for anything else.  Therefore a more realistic "physical limit" for SHMMAX would probably be "physical RAM - 2Gb".

#In an Oracle RDBMS application, this "physical limit" still leaves inadequate system memory for other necessary functions. Therefore, the common "Oracle maximum" for SHMMAX that you will often see is "1/2 of physical RAM". Many Oracle customers chose a higher fraction, at their discretion.

#Occasionally, Customers may erroneously think that setting the SHMMAX as recommended in this NOTE limits the total SGA.   That is not true.  Setting the SHMMAX as recommended only causes a few more "shared memory segments" to be used for whatever total SGA that you subsequently configure in Oracle. For additional detail, please see
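
As a rough illustration of the "1/2 of physical RAM" guideline quoted above (a sketch only; pick the value that matches your memory size and planned SGA):

#half of physical RAM, in bytes
HALF_RAM=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) * 1024 / 2 ))
echo "half of physical RAM: $HALF_RAM bytes"

#compare with the current kernel settings
/sbin/sysctl -a 2>/dev/null | grep shm

#to raise it, set kernel.shmmax in /etc/sysctl.conf and re-run /sbin/sysctl -p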

##

vi /etc/security/limits.conf

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft stack 10240

oracle hard stack 10240

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

grid soft stack 10240

grid hard stack 10240
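
To confirm the limits are actually picked up by a fresh login shell (pam_limits handles this for login sessions on RHEL 7), a quick check such as the following can be run as root:

#open files (-n), max user processes (-u) and stack size (-s) for each install owner
su - oracle -c "ulimit -n -u -s"
su - grid -c "ulimit -n -u -s"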

##Create the installation directories

mkdir -p /u01/app/grid

mkdir -p /u01/app/gridhome

mkdir -p /u01/app/oracle

chown grid:oinstall /u01/app/grid

chown grid:oinstall /u01/app/gridhome

chown -R oracle:oinstall /u01/app/oracle

##Update the hosts file (change the NIC name enp0s8 to match your system)

ip add | grep enp0s8 | grep inet | awk '{print $2}' | awk -F"/" '{printf $1" "}{cmd="hostname";system(cmd)}' >> /etc/hosts

cat /etc/hosts
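
If the awk pipeline above is hard to read, an equivalent one-liner (assuming a single NIC and that the first address reported by hostname -I is the one you want) is:

#append "IP hostname" to /etc/hosts
echo "$(hostname -I | awk '{print $1}') $(hostname)" >> /etc/hosts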

##Disable the firewall and SELinux

[root@localhost ~]# systemctl stop firewalld

[root@localhost ~]# systemctl disable firewalld  

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

[root@rhel7ora11 ~]# getenforce

Enforcing

[root@rhel7ora11 ~]# setenforce 0

[root@rhel7ora11 ~]# getenforce

Permissive

[root@rhel7ora11 ~]# vi /etc/selinux/config

SELINUX=disabled

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

##Bind the raw devices with udev; here only the ownership and permissions are set

[root@rhel7-ora12c-ip156 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb1

1ATA_VBOX_HARDDISK_VBc9bd6bdf-a347ac26

[root@rhel7-ora12c-ip156 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc1

1ATA_VBOX_HARDDISK_VB8aeda10c-71ca412a

[root@rhel7-ora12c-ip156 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent",

RESULT=="1ATA_VBOX_HARDDISK_VBc9bd6bdf-a347ac26",  OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent",

RESULT=="1ATA_VBOX_HARDDISK_VB8aeda10c-71ca412a",  OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@rhel7-ora12c-ip156 ~]# systemctl restart systemd-udev-trigger.service

[root@rhel7-ora12c-ip156 ~]# ls -Ll /dev/sd?1
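
As an alternative to restarting systemd-udev-trigger, the new rules can be applied with the standard udevadm commands and then verified with the same ls -Ll check; either way /dev/sd?1 should end up owned by grid:asmadmin with mode 0660:

udevadm control --reload-rules
udevadm trigger --type=devices --action=change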

##Set up the environment variables for the grid and oracle users

su - grid

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/gridhome

export ORACLE_SID=+ASM

PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

su - oracle

ORACLE_BASE=/u01/app/oracle

ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1

ORACLE_SID=orcl

export ORACLE_BASE ORACLE_HOME ORACLE_SID

export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/bin:/sbin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib
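
The exports above only affect the current shell. To make them permanent they can be appended to each user's ~/.bash_profile; shown here for grid only (the oracle block is analogous), and this is just one common way to do it:

cat >> /home/grid/.bash_profile <<'EOF'
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/gridhome
export ORACLE_SID=+ASM
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
EOF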

##Install cvuqdisk

[root@rhel7-ora12c-ip156 gridhome]# rpm -ivh ./cv/rpm/cvuqdisk-1.0.10-1.rpm  
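
Per the Grid Infrastructure installation guide, the cvuqdisk rpm reads the CVUQDISK_GRP environment variable to decide which group should own cvuqdisk; it defaults to oinstall, so exporting it first is only strictly necessary when your inventory group is something else:

export CVUQDISK_GRP=oinstall
rpm -ivh ./cv/rpm/cvuqdisk-1.0.10-1.rpm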

##Create the ohas service unit file; after root.sh is launched, keep monitoring /etc/init.d/init.ohasd and start the ohas service as soon as that file exists

[root@rhel7-ora12c-ip156 ~]# vi /usr/lib/systemd/system/ohas.service

[Unit]

Description=Oracle High Availability Services

After=syslog.target

[Service]

ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1

Type=simple

Restart=always

[Install]

WantedBy=multi-user.target

[root@rhel7-ora12c-ip156 ~]# chmod 777 /usr/lib/systemd/system/ohas.service
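
After creating or changing a unit file, systemd needs to re-read its configuration; enabling the unit additionally makes it start at boot (optional here, since it is started by hand below):

systemctl daemon-reload
systemctl enable ohas.service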

##After launching the root script, keep checking whether the file below exists; as soon as it does, start the ohas service manually right away (recommended; a simple watch loop is sketched below)

[root@rhel7-ora12c-ip156 ~]# ls /etc/init.d/init.ohasd

/etc/init.d/init.ohasd
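
The check-and-start step can also be scripted; a minimal watch loop, run as root in a second terminal while root.sh is executing, would look like this:

#wait for init.ohasd to appear, then start the ohas unit once
while [ ! -f /etc/init.d/init.ohasd ]; do
  sleep 2
done
systemctl start ohas.service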

[root@rhel7-ora12c-ip156 ~]# systemctl start ohas.service

[root@rhel7-ora12c-ip156 ~]# systemctl status ohas.service

● ohas.service - Oracle High Availability Services

   Loaded: loaded (/usr/lib/systemd/system/ohas.service; disabled; vendor preset: disabled)

   Active: active (running) since Fri 2017-09-15 03:40:12 EDT; 5s ago

Main PID: 11434 (init.ohasd)

   CGroup: /system.slice/ohas.service

           └─11434 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple

Sep 15 03:40:12 rhel7-ora12c-ip156 systemd[1]: Started Oracle High Availability Services.

Sep 15 03:40:12 rhel7-ora12c-ip156 systemd[1]: Starting Oracle High Availability Services...

Sep 15 03:40:12 rhel7-ora12c-ip156 su[11461]: (to grid) root on none

[root@rhel7-ora12c-ip156 ~]# /u01/app/grid/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/grid/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/grid/oraInventory to oinstall.

The execution of the script is complete.

[root@rhel7-ora12c-ip156 ~]# /u01/app/gridhome/root.sh                     

Performing root user operation.

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/gridhome

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/gridhome/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rhel7-ora12c-ip156/crsconfig/roothas_2017-09-15_03-40-36AM.log

2017/09/15 03:40:36 CLSRSC-363: User ignored prerequisites during installation

2017/09/15 03:40:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel7-ora12c-ip156'

CRS-2673: Attempting to stop 'ora.evmd' on 'rhel7-ora12c-ip156'

CRS-2677: Stop of 'ora.evmd' on 'rhel7-ora12c-ip156' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel7-ora12c-ip156' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

rhel7-ora12c-ip156     2017/09/15 03:41:35     /u01/app/gridhome/cdata/rhel7-ora12c-ip156/backup_20170915_034135.olr     0  

##Error encountered:

2017/09/16 03:35:51 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2017/09/16 03:36:34 CLSRSC-400: A system reboot is required to continue installing.

The command '/u01/app/gridhome/perl/bin/perl -I/u01/app/gridhome/perl/lib -I/u01/app/gridhome/crs/install /u01/app/gridhome/crs/install/roothas.pl ' execution failed

Resolution:

[grid@dbs0biiprc01 ~]$ acfsdriverstate -orahome $ORACLE_HOME supported

ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-514.el7.x86_64'

ACFS-9201: Not Supported

Re-run the root script (the same /u01/app/gridhome/root.sh as above) once the check has been done.

[root@rhel7-ora12c-ip156 ~]# /u01/app/gridhome/bin/crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details       

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATA1.dg

               ONLINE  ONLINE       rhel7-ora12c-ip156       STABLE

ora.LISTENER.lsnr

               ONLINE  ONLINE       rhel7-ora12c-ip156       STABLE

ora.asm

               ONLINE  ONLINE       rhel7-ora12c-ip156       Started,STABLE

ora.ons

               OFFLINE OFFLINE      rhel7-ora12c-ip156       STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.cssd

      1        ONLINE  ONLINE       rhel7-ora12c-ip156       STABLE

ora.diskmon

      1        OFFLINE OFFLINE                               STABLE

ora.evmd

      1        ONLINE  ONLINE       rhel7-ora12c-ip156       STABLE

--------------------------------------------------------------------------------

That is all of "How to Install Oracle 12c Single-Instance Grid and RDBMS on RHEL 7.3". Thanks for reading, and I hope the walkthrough is helpful.

