OpenStack Deployment

The components deployed in this lab are shown in the figure below.

Lab topology diagram:

Controller node: base environment setup

1. Disable firewalld and SELinux on the controller node

systemctl stop firewalld.service
systemctl disable firewalld.service
vi /etc/selinux/config    # change the value of SELINUX to "disabled"
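
If you prefer to apply the change non-interactively, a minimal sketch with the same effect (setenforce 0 only disables enforcement for the current session; the config change takes effect permanently after a reboot):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0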

2. Set the hostname

hostnamectl set-hostname controller
echo '192.168.80.150 controller' >> /etc/hosts

3. Configure the yum repositories (aa.repo)

cat > /etc/yum.repos.d/aa.repo <<EOF
[base]
name=base
baseurl=https://repo.huaweicloud.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0

[extras]
name=extras
baseurl=https://repo.huaweicloud.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0

[updates]
name=updates
baseurl=https://repo.huaweicloud.com/centos/7/updates/x86_64/
enabled=1
gpgcheck=0

[queens]
name=queens
baseurl=https://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-queens/
enabled=1
gpgcheck=0

[virt]
name=virt
baseurl=https://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
EOF

yum repolist
yum install python-openstackclient -y

4. Install the time synchronization service (chrony)

yum install -y chrony vim 
vi /etc/chrony.conf
Comment out the default "server" lines by prefixing them with "#", so the stock NTP servers are not used.

Append at the end of the file:
allow 192.168.80.0/24	# 192.168.80.0/24 is the subnet allowed to synchronize time; adjust to your environment
local stratum 10 		# serve time from the local clock
Start the service and enable it at boot:
systemctl start chronyd
systemctl enable chronyd
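
If you prefer to comment out the stock server entries non-interactively, this sketch has the same effect as the manual edit (run it before starting or restarting chronyd; the exact default server lines depend on your image):
sed -i 's/^server /#server /' /etc/chrony.conf
grep -E '^(#server|allow|local)' /etc/chrony.conf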

5. Install the database (MariaDB)

yum install -y mariadb mariadb-server python2-PyMySQL 

Create a new OpenStack database configuration file:
vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.80.150               # IP address of this host
default-storage-engine = innodb             # default storage engine
innodb_file_per_table = on
max_connections = 4096                      # maximum number of connections
collation-server = utf8_general_ci          # collation
character-set-server = utf8                 # character set

Save and exit, then start the database and enable it at boot
systemctl start mariadb
systemctl enable mariadb

Secure the database installation (the heredoc below answers the prompts: blank current password, set the root password to 000000, then yes to the remaining questions)
sudo mysql_secure_installation <<EOF

y
000000
000000
y
y
y
y
EOF
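
A quick check that the root password was set correctly:
mysql -uroot -p"000000" -e "SHOW DATABASES;"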

6. Install the message queue service (RabbitMQ)

yum install -y rabbitmq-server 

Start the service and enable it at boot
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create a RabbitMQ user named "openstack" with password "000000" (this must match the transport_url entries used later in the Nova and Neutron configuration files)
rabbitmqctl add_user openstack 000000

Grant the openstack user full permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Verify that RabbitMQ is running and listening on port 5672
netstat -lantu |grep 5672 
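
You can also confirm that the user exists and has the expected permissions:
rabbitmqctl list_users
rabbitmqctl list_permissions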

7. Install the caching service (memcached)

yum install -y memcached python-memcached 

Configure memcached:
vi /etc/sysconfig/memcached    # append "controller" to the OPTIONS value; "controller" here is the controller node's hostname
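
After the edit, the OPTIONS line should look roughly like this (a sketch based on the default CentOS 7 file):
OPTIONS="-l 127.0.0.1,::1,controller"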


Start the service and enable it at boot:
systemctl start memcached
systemctl enable memcached

8. Install etcd

yum install -y etcd 

Configure etcd:
vi /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.80.150:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.80.150:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.80.150:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.80.150:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.80.150:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the service and enable it at boot:
systemctl enable etcd
systemctl start etcd
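
A quick liveness check against the client port (a sketch):
curl http://192.168.80.150:2379/version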

9. Configure name resolution

Since this is a small LAN environment with only a few names to resolve, the hosts file is sufficient for resolution (keep the localhost entries when rewriting the file):
cat > /etc/hosts << EOF
127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
192.168.80.150 controller
192.168.80.151 compute
192.168.80.152 cinder
EOF
 
Verification: ping controller, compute, and cinder; each name should resolve to its IP address.
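
For example:
for h in controller compute cinder; do ping -c 1 $h; done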

Keystone (identity service) deployment

1. Create the Keystone database

Create the keystone database in MariaDB on the controller node:
mysql -uroot -p"000000" -e "CREATE DATABASE keystone;"

Grant the keystone database user local and remote access, with password "000000":
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';"

2. Install the Keystone packages

Install the Keystone packages on the controller node:
yum install -y openstack-keystone httpd mod_wsgi 

Back up the original configuration file:
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/keystone/keystone.conf.bak | grep -v ^# | uniq > /etc/keystone/keystone.conf

Edit the configuration file:
vim /etc/keystone/keystone.conf 
Add the following parameters under the [database] and [token] sections:
[database]
connection=mysql+pymysql://keystone:000000@controller/keystone 
[token]
provider = fernet
Save and exit.

Populate the Keystone database:
su -s /bin/sh -c "keystone-manage db_sync " keystone
 
No output means the database was populated successfully.

Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
 
Bootstrap the identity service and configure Keystone's authentication information (this sets the admin password later used to log in to OpenStack):
keystone-manage bootstrap --bootstrap-password 000000 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Parameter notes:
--bootstrap-password: the Keystone admin password
--bootstrap-admin-url: the admin authentication URL
--bootstrap-internal-url: the internal authentication URL
--bootstrap-public-url: the public authentication URL
--bootstrap-region-id: the region name

3. Configure the Apache service:

In the Apache configuration file, set ServerName to this host's hostname: around line 96, add "ServerName controller".
vi /etc/httpd/conf/httpd.conf

Create a symlink for wsgi-keystone.conf in the Apache configuration directory:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start the httpd service and enable it at boot:
systemctl enable httpd.service
systemctl start httpd.service

4. Verification

(1) Create the environment script
vi /root/admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3 
export OS_IDENTITY_API_VERSION=3

Parameter notes:
export OS_USERNAME=admin: the Keystone admin account
export OS_PASSWORD=000000: the admin password set by keystone-manage bootstrap
export OS_PROJECT_NAME=admin: the OpenStack project to operate on
export OS_USER_DOMAIN_NAME=Default: the domain the user belongs to
export OS_PROJECT_DOMAIN_NAME=Default: the domain the project belongs to
export OS_AUTH_URL=http://controller:5000/v3: the authentication endpoint
export OS_IDENTITY_API_VERSION=3: the identity API version

Source the script: . /root/admin-openrc 
Check the current environment: env | grep OS 


(2) Verify:
openstack token issue

Glance (image service) deployment

1. Create the Glance database

Create the glance database in MariaDB on the controller node:
mysql -uroot -p"000000" -e "CREATE DATABASE glance;"

Grant the glance database user local and remote access, with password "glance_db" (this must match the connection strings in the Glance configuration files):
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON glance.* TO 'keystone'@'%' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON glance.*  TO 'keystone'@'localhost' IDENTIFIED BY '000000';"

Flush the privilege tables:
mysql -uroot -p"000000" -e "Flush privileges;"

2. Create Glance-related identities

(1) On the controller node, create a user named glance
Source the environment variables first: . /root/admin-openrc
Create the glance user in the default domain (password: 000000):
openstack user create glance --domain default --password 000000
 
(2) Grant the glance user the admin role in the service project
openstack role add --project service --user glance admin    (no output means success)

(3) Create a service entity of type image
openstack service create --name glance image

(4) Create the three API endpoints for the image service in the RegionOne region
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
 
To delete an endpoint, first look up its ID:
openstack endpoint list
openstack endpoint delete [endpoint-id]

3. Install and configure the Glance packages

yum install openstack-glance -y

(1) Configure glance-api:
Back up the original configuration file: cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/glance/glance-api.conf.bak  |  grep -v ^#  |  uniq > /etc/glance/glance-api.conf

Edit the configuration file:
vim /etc/glance/glance-api.conf 
Add the following parameters under the corresponding sections:
[database]										# database settings
connection = mysql+pymysql://glance:glance_db@controller/glance 
[keystone_authtoken]								# Keystone authentication settings
auth_uri = http://controller:5000						# auth URI
auth_url = http://controller:5000						# auth URL
memcached_servers = controller:11211				# memcached endpoint
auth_type = password								# authentication method
project_domain_name = Default					# project domain
user_domain_name = Default						# user domain
project_name = service							# project
username = glance								# service user name
password = 000000							# service user password
[paste_deploy]									# deployment flavor
flavor = keystone
[glance_store]									# Glance storage settings
stores = file,http									# storage backends
default_store = file								# default store type
filesystem_store_datadir = /var/lib/glance/images/		# default storage path

(2) Configure glance-registry:
Back up the original configuration file:
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/glance/glance-registry.conf.bak | grep -v ^# | uniq > /etc/glance/glance-registry.conf

Edit the configuration file and add the following parameters under the corresponding sections:
vim /etc/glance/glance-registry.conf 
[database]
connection = mysql+pymysql://glance:glance_db@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 000000
[paste_deploy]
flavor = keystone

(3) Populate the Glance database:
su -s /bin/sh -c "glance-manage db_sync" glance
 
(4) Start the services and enable them at boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

(5) Verification
Source the admin environment script and download a test image.

. /root/admin-openrc

Download a test image:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Create an OpenStack image:
openstack image create "cirros"  --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public											

List images:
openstack image list
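
To confirm the upload finished (the status field should show active):
openstack image show cirros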

Nova (compute service) deployment

Nova component layout diagram:

1. Install the Nova packages on the controller node


(1) Create the Nova databases
mysql -uroot -p"000000" -e "CREATE DATABASE nova_api;"
mysql -uroot -p"000000" -e "CREATE DATABASE nova;"
mysql -uroot -p"000000" -e "CREATE DATABASE nova_cell0;"

Create the nova database user and grant access, with password "000000" (matching the connection strings used later in nova.conf):
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';"

(2) Create Nova-related identities
Create the nova user on the controller node.

Source the environment variables first: . /root/admin-openrc
Create the nova user in the default domain (password: 000000):
openstack user create --domain default nova --password 000000

Grant the nova user the admin role in the service project
openstack role add --project service --user nova admin    (no output means success)

Create a service entity named nova of type compute
openstack service create --name nova compute 

Create the three API endpoints for the compute service in the RegionOne region
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
 
Create the placement user in the default domain (password: 000000):
openstack user create --domain default placement --password 000000 
 
Grant the placement user the admin role in the service project
openstack role add --project service --user placement admin    (no output means success)

Create a service entity named placement of type placement
openstack service create --name placement placement
 
Create the three API endpoints for the placement service in the RegionOne region
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
 
(3) Install and configure the Nova packages
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api 

Configure nova.conf:
Back up the original configuration file:
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/nova/nova.conf.bak | grep -v ^# | uniq > /etc/nova/nova.conf

Edit the configuration file:
vim /etc/nova/nova.conf
Add the following parameters under the corresponding sections:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.80.150					        # management network IP of this host
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver	        # the host firewalld service must be disabled

[api_database]							# API database settings
connection=mysql+pymysql://nova:000000@controller/nova_api

[database]							# database settings
connection = mysql+pymysql://nova:000000@controller/nova

[api]								# API authentication settings
auth_strategy = keystone

[keystone_authtoken]						# Keystone authentication settings
auth_url = http://controller:5000/v3				# auth URL
memcached_servers = controller:11211				# memcached endpoint
auth_type = password						# authentication method
project_domain_name = default					# project domain
user_domain_name = default					# user domain
project_name = service						# project
username = nova							# service user name
password = 000000						# service user password

[vnc]								# VNC settings
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]							# Glance settings
api_servers = http://controller:9292 
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]							# Placement settings
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000


(4) Due to a packaging bug, the Placement API configuration file also needs the following block (add it around line 13):
vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
  
Restart the httpd service: systemctl restart httpd

(5) Populate the databases:
su -s /bin/sh -c "nova-manage api_db sync" nova    (any informational output can be ignored)
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (any informational output can be ignored)

(6) Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (it is normal for this to print a UUID)

(7) Sync the Nova database
su -s /bin/sh -c "nova-manage db sync" nova    (warnings can be ignored)

(8) Verify that cell0 and cell1 are registered correctly
nova-manage cell_v2 list_cells
 
(9) Start the services and enable them at boot
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

2. Install the Nova packages on the compute node

Lab topology diagram:

(1) Configure name resolution and the yum repositories on the compute node
Disable firewalld and SELinux:
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Copy /etc/hosts from the controller node so that name resolution is consistent:
scp root@192.168.80.150:/etc/hosts /etc/hosts

Configure the yum repositories:
cat > /etc/yum.repos.d/aa.repo <<EOF
[base]
name=base
baseurl=https://repo.huaweicloud.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0

[extras]
name=extras
baseurl=https://repo.huaweicloud.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0

[updates]
name=updates
baseurl=https://repo.huaweicloud.com/centos/7/updates/x86_64/
enabled=1
gpgcheck=0

[queens]
name=queens
baseurl=https://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-queens/
enabled=1
gpgcheck=0

[virt]
name=virt
baseurl=https://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
EOF
yum repolist

(2) Install chrony and synchronize time with the controller node
yum install -y chrony

Configure time synchronization:
vi /etc/chrony.conf
Comment out the default "server" lines (lines 3-6) with "#" so the stock NTP servers are not used, then add at the end of the file:
server controller iburst           # use the controller node as the NTP server


Restart the service and enable it at boot:
systemctl restart chronyd
systemctl enable chronyd

Verify time synchronization:
chronyc sources -v          # a '*' in front of the controller entry means time is synchronized (this may take a while)




(3) Install nova-compute and edit its configuration file
yum install openstack-nova-compute -y

Back up the original configuration file:
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/nova/nova.conf.bak | grep -v ^# | uniq > /etc/nova/nova.conf

Edit the configuration file and add the following parameters under the corresponding sections:
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.80.151                           # management network IP of the compute node
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default

auth_url = http://controller:5000/v3

username = placement
password = 000000

Check whether the compute node supports hardware virtualization acceleration (a return value of 1 or more means it does):
egrep -c '(vmx|svm)' /proc/cpuinfo

Because this lab uses a virtual machine as the compute node, the command returns 0, so the following must be added to the [libvirt] section of /etc/nova/nova.conf:
vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu

(With VMware Workstation, the VM's Intel VT-x / nested virtualization option also needs to be enabled if you want hardware acceleration.)


Start the services and enable them at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Verification

View/add the compute node from the controller
On the controller node, list the current compute nodes:
. /root/admin-openrc                        # OpenStack admin environment variables

Map the compute node into the cell (run the following only after the compute node has been configured):
nova-manage cell_v2 discover_hosts --verbose
openstack compute service list --service nova-compute

Whenever a new compute node is added, run the following command to discover it:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Alternatively, set an automatic discovery interval in nova.conf on the controller node:
vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300              # interval in seconds

Neutron (networking service) deployment

Neutron component layout diagram:

1. Install the Neutron packages on the controller node
(1) Create the Neutron database:
mysql -uroot -p"000000" -e "CREATE DATABASE neutron;"

Grant the neutron database user local and remote access and set its password:
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';"

(2) Create Neutron-related identities
On the controller node, create the OpenStack platform user.
Source the environment variables first: . /root/admin-openrc
Create the neutron user in the default domain (password: 000000):
openstack user create --domain default neutron --password 000000 
 
Grant the neutron user the admin role in the service project:
openstack role add --project service --user neutron admin

Create a service entity named neutron of type network
openstack service create --name neutron network
 
Create the three API endpoints for the network service in the RegionOne region
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
 

(3) Configure networking
Choose one of the following two options:
1. Networking Option 1: Provider networks
2. Networking Option 2: Self-service networks
Option 1 deploys the simplest possible architecture and only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses; only an administrator or other privileged user can manage provider networks.
Option 2 augments Option 1 with layer-3 services that support attaching instances to self-service networks. A demo or other unprivileged user can manage self-service networks, including routers that connect self-service and provider networks. Floating IP addresses also provide connectivity from external networks such as the Internet to instances on self-service networks. Self-service networks typically use overlay (tunnel) protocols such as VXLAN. Option 2 also still supports attaching instances to provider networks.
This lab uses Networking Option 2: Self-service networks.

Install the packages (controller node)
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables 

Configure the Neutron configuration files:
Back up the original configuration file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/neutron/neutron.conf.bak | grep -v ^# | uniq > /etc/neutron/neutron.conf

Edit the configuration file:
vim /etc/neutron/neutron.conf
Add the following parameters under the corresponding sections:
[DEFAULT]
core_plugin = ml2							# enable the Modular Layer 2 plug-in
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]								# database settings
connection = mysql+pymysql://neutron:000000@controller/neutron 

[keystone_authtoken]							# Keystone authentication settings
auth_uri = http://controller:5000					# auth URI
auth_url = http://controller:35357					# auth URL
memcached_servers = controller:11211				        # memcached endpoint
auth_type = password							# authentication method
project_domain_name = default						# project domain
user_domain_name = default						# user domain
project_name = service							# project
username = neutron						        # service user name
password = 000000						        # service user password

[nova]									# Nova-related settings
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the ML2 plug-in configuration file:
/etc/neutron/plugins/ml2/ml2_conf.ini

Back up the original configuration file:
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/neutron/plugins/ml2/ml2_conf.ini.bak | grep -v ^# | uniq > /etc/neutron/plugins/ml2/ml2_conf.ini

Edit the configuration file:
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security 

[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

Configure the Linux bridge agent configuration file:
/etc/neutron/plugins/ml2/linuxbridge_agent.ini

Back up the original configuration file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

Strip the commented ("#") lines from the original configuration file:
cd /etc/neutron/plugins/ml2
cat linuxbridge_agent.ini.bak | grep -v ^# | uniq > linuxbridge_agent.ini

Edit the configuration file:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
# "ens33" here is the name of the controller node's external/management network interface

[vxlan]
enable_vxlan = true
local_ip = 192.168.88.150			# tunnel (overlay) network IP of the controller node
l2_population = true 

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Save and exit.

Note: the Linux bridge operates at the data link layer. If bridge-nf is not enabled in the kernel, traffic is forwarded directly by the bridge and the iptables FORWARD rules are bypassed, so the security-group rules would not apply.
CentOS does not enable the bridge-nf (transparent bridge filtering) feature by default. To enable it:
vim /usr/lib/sysctl.d/00-system.conf	# change the net.bridge.* values from 0 to 1
or edit /etc/sysctl.conf and add:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
Save and exit.

Load the bridge netfilter module:
modprobe br_netfilter
Apply the settings: /sbin/sysctl -p
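
To confirm the setting took effect:
sysctl net.bridge.bridge-nf-call-iptables        # should print a value of 1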
 
Make the module load automatically at boot.
Create a new /etc/rc.sysinit file with the following content:
vim /etc/rc.sysinit
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done

Create the file br_netfilter.modules under /etc/sysconfig/modules/:
vi /etc/sysconfig/modules/br_netfilter.modules
modprobe br_netfilter
Make it executable: chmod 755 /etc/sysconfig/modules/br_netfilter.modules
After a reboot, check that the module is loaded: lsmod |grep br_netfilter
 
Configure the layer-3 (L3) agent, which provides routing and NAT services for self-service virtual networks:
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
cat /etc/neutron/l3_agent.ini.bak | grep -v ^# | uniq > /etc/neutron/l3_agent.ini
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

Configure the DHCP agent configuration file:
/etc/neutron/dhcp_agent.ini
Back up the original configuration file:
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/neutron/dhcp_agent.ini.bak | grep -v ^# | uniq > /etc/neutron/dhcp_agent.ini

Edit the configuration file:
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent configuration file:
/etc/neutron/metadata_agent.ini

Back up the original configuration file:
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/neutron/metadata_agent.ini.bak | grep -v ^# | uniq > /etc/neutron/metadata_agent.ini

Edit the configuration file:
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = meta_123456

(4) Configure the [neutron] section in nova.conf on the controller node (the metadata_proxy_shared_secret must match the value set in metadata_agent.ini)
vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = meta_123456

(5) On the controller node, create the ML2 plug-in symlink and sync the Neutron database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
 
(6) On the controller node, restart/start the related services and enable the Neutron services at boot
systemctl restart openstack-nova-api.service neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-l3-agent.service neutron-metadata-agent.service

2. Install the Neutron packages on the compute node
(1) Install the packages
yum install openstack-neutron-linuxbridge ebtables ipset

(2) Configure the main Neutron configuration file:
/etc/neutron/neutron.conf

Back up the original configuration file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/neutron/neutron.conf.bak | grep -v ^# | uniq > /etc/neutron/neutron.conf

Edit the configuration file:
vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

(3) Configure the Linux bridge agent configuration file:
/etc/neutron/plugins/ml2/linuxbridge_agent.ini

Back up the original configuration file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

Strip the commented ("#") lines from the original configuration file:
cd /etc/neutron/plugins/ml2
cat linuxbridge_agent.ini.bak | grep -v ^# | uniq > linuxbridge_agent.ini

Edit the configuration file:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = true
local_ip = 192.168.88.151	# tunnel (overlay) network IP of the compute node
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver =neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

(4) Enable the bridge-nf (transparent bridge filtering) feature:
vim /usr/lib/sysctl.d/00-system.conf		# change the net.bridge.* values from 0 to 1
or edit /etc/sysctl.conf and add:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
Save and exit.

Load the bridge netfilter module:
modprobe br_netfilter
Apply the settings:
/sbin/sysctl -p

 
Make the module load automatically at boot.
Create a new /etc/rc.sysinit file with the following content:
vim /etc/rc.sysinit
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done

Create the file br_netfilter.modules under /etc/sysconfig/modules/:
echo "modprobe br_netfilter" >> /etc/sysconfig/modules/br_netfilter.modules
Make it executable: chmod 755 /etc/sysconfig/modules/br_netfilter.modules
After a reboot, check that the module is loaded: lsmod |grep br_netfilter


(5) Edit the compute node's nova.conf and add the [neutron] section
vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000

(6) Restart/start the related services and enable them at boot
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service

(7) Deployment verification
On the controller node:
. /root/admin-openrc
openstack network agent list        # with self-service (option 2) networking, the metadata, DHCP, L3, and Linux bridge agents should all be listed


Horizon (dashboard) deployment

1. Install the Horizon packages on the controller node
yum install openstack-dashboard 

2. Edit the configuration file
Back up the original configuration file:
cp /etc/openstack-dashboard/local_settings  /etc/openstack-dashboard/local_settings.bak
Edit the configuration file:
vim /etc/openstack-dashboard/local_settings
Around line 188, set the host to "controller":
OPENSTACK_HOST = "controller"

Around line 38, allow all hosts to access the dashboard:
ALLOWED_HOSTS = ['*']

Around line 164, configure cache-backed sessions by adding:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
Around lines 167-168, inside the CACHES block, add:
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
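
After these edits the session/cache settings should end up looking roughly like this (a sketch; indentation follows the existing file):
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}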

 
Around line 189, use version 3 of the identity API:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Around line 73, enable multi-domain support:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Around line 64, configure the API versions:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

Around line 95, set the default domain to "Default":
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

Around line 189, set the default role to "user":
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

Around line 464, set the time zone to "Asia/Shanghai":
TIME_ZONE = "Asia/Shanghai"

Add WSGIApplicationGroup %{GLOBAL} to the following configuration file:
vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL} 


3. Restart the services
systemctl restart httpd.service
systemctl restart memcached.service

4. Log in to Horizon
http://192.168.80.150/dashboard
# 192.168.80.150 is the controller's management network IP address
Domain: default		User: admin		Password: 000000

Cinder (block storage service) deployment

Cinder component layout diagram:

First initialize the base environment: firewall, SELinux, hostname, network, yum repositories, hosts file, and so on.

Install the storage node (cinder) components (run on the cinder node)
1. Install the LVM packages
yum install lvm2 device-mapper-persistent-data 

Start the service and enable it at boot:
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

2. Configure the LVM volumes
(1) Check the disks currently attached to the system: lsblk


(2) Create the LVM physical volume on /dev/vdb
pvcreate /dev/vdb


(3) Create the cinder-volumes volume group
vgcreate cinder-volumes /dev/vdb


Reconfigure LVM to scan only the devices that contain the cinder-volumes volume group.
Edit the /etc/lvm/lvm.conf file and complete the following:
In the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices:
vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/vdb/", "r/.*/"]
Each item in the filter array starts with a for accept or r for reject, followed by a regular expression for the device name. The array must end with r/.*/ to reject all remaining devices. You can test the filter with the vgs -vvvv command. If the storage node uses LVM on its operating system disk, that device must also be added to the filter; for example, if the /dev/vda device holds the operating system:
filter = [ "a/vda/", "a/vdb/", "r/.*/"]
Similarly, if the compute nodes use LVM on their operating system disks, the filter in /etc/lvm/lvm.conf on those nodes must be modified to include only the operating system disk; for example, if /dev/vda holds the operating system:
filter = [ "a/vda/", "r/.*/"]
In this lab the operating system disk therefore also needs to be included:
filter = [ "a/vda/","a/vdb/","r/.*/" ]
 
3. Configure the cinder node and install the Cinder packages (run on the cinder node)
yum install -y centos-release-openstack-queens openstack-cinder targetcli python-keystone 

(1) Configure the Cinder configuration file:
Back up the original configuration file:
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/cinder/cinder.conf.bak | grep -v ^# | uniq > /etc/cinder/cinder.conf

Edit the configuration file:
vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.80.152			# management network IP of the cinder node
enabled_backends = lvm			# "lvm" is the backend name and can be any name
glance_api_servers = http://controller:9292

[database]										# database settings
connection = mysql+pymysql://cinder:000000@controller/cinder

[keystone_authtoken]								# Keystone authentication settings
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 000000

[lvm]
volume_driver  =  cinder.volume.drivers.lvm.LVMVolumeDriver 
volume_group  =  cinder-volumes 
iscsi_protocol  =  iscsi 
iscsi_helper  =  lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the corresponding iSCSI service. If the [lvm] section does not exist, create it.
(2) Start the services and enable them at boot
systemctl start openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service


Configure Cinder on the controller node (run on controller)
1. Configure the Cinder database
mysql -uroot -p"000000" -e "CREATE DATABASE cinder;"
 
Grant appropriate access to the cinder database, with password "000000" (matching the connection string in cinder.conf):
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';"
mysql -uroot -p"000000" -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';"
 
2. Create Cinder-related identities
Source the OpenStack admin environment script: . /root/admin-openrc

Create a cinder user (password: 000000)
openstack user create cinder --domain default --password 000000
 
Grant the admin role to the cinder user:
openstack role add --project service --user cinder admin	(no output means success)

Create the cinderv2 and cinderv3 service entities (note: the block storage service requires two service entities):
openstack service create --name cinderv2 --description "OpenStack Block Storage v2" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage v3" volumev3
 
Create the block storage service API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
 
3. Install and configure the Cinder packages (run on controller)
yum install openstack-cinder

Back up the original configuration file:
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

Strip the commented ("#") lines from the original configuration file:
cat /etc/cinder/cinder.conf.bak | grep -v ^# | uniq > /etc/cinder/cinder.conf

Edit the configuration file:
vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.80.150			# management network IP of the controller node

[database]
connection = mysql+pymysql://cinder:000000@controller/cinder 

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 000000

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4. Sync the block storage database:
su -s /bin/sh -c "cinder-manage db sync" cinder


5. Configure the compute service to use block storage
vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

6. Start/restart the following services and enable the Cinder services at boot
systemctl restart openstack-nova-api.service openstack-cinder-api.service  openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service


Cinder verification
On the controller node:
. /root/admin-openrc
openstack volume service list

Create a 5 GB volume named volume1:
openstack volume create --size 5 volume1

Check the volume status:
openstack volume list
 

Attach the volume to an instance:
1. List the instances in the tenant project:
. /root/demo-openrc
openstack server list 	# get the instance name, e.g. leon-vm01
openstack volume list 	# get the volume name

2. Attach the volume to the instance:
openstack server add volume [instance name] [volume name]

Detach the volume from the instance:
openstack server remove volume [instance name] [volume name]
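
For example, using the instance and volume names from the listings above (both are illustrative; substitute your own):
openstack server add volume leon-vm01 volume1
openstack volume list                       # the volume status changes to in-use
openstack server remove volume leon-vm01 volume1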
