The Road to OpenStack Cloud Computing - Queens Release


1. Minimal OpenStack Deployment

1.1 Overview of OpenStack Services

Service / Project / Description
Dashboard / Horizon / Provides a web-based self-service portal for interacting with the underlying OpenStack services, such as launching an instance, assigning IP addresses, and configuring access control.
Compute / Nova / Manages the lifecycle of compute instances in an OpenStack environment, including spawning, scheduling, and decommissioning virtual machines on demand.
Networking / Neutron / Provides network connectivity as a service for other OpenStack services, such as OpenStack Compute, and an API for users to define networks and attach to them. Its pluggable architecture supports many network providers and technologies.
Object Storage / Swift / Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant thanks to data replication and a scale-out architecture. It is not implemented like a file server with mountable directories; it writes objects and files to multiple drives so that the data is replicated across servers in the cluster.
Block Storage / Cinder / Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.
Identity service / Keystone / Provides authentication and authorization for the other OpenStack services, along with a catalog of endpoints for all OpenStack services.
Image service / Glance / Stores and retrieves virtual machine disk images; OpenStack Compute uses it during instance provisioning.
Telemetry / Ceilometer / Provides monitoring and metering of the OpenStack cloud for billing, benchmarking, scalability, and statistics purposes.

[Figure: relationships among the OpenStack projects]

[Figure: conceptual architecture]

[Figure: logical architecture]

1.2 Hardware Requirements

[Figure: hardware requirements]

1.2.1 Controller Node

The controller node runs the Identity service, the Image service, the management portions of the Compute and Networking services, assorted network agents, and the Dashboard. It also hosts supporting services such as an SQL database, a message queue, and NTP.
Optionally, the controller node can also run the Block Storage, Object Storage, Orchestration, and Telemetry services.

The controller node requires at least two network interfaces.

1.2.2 Compute Nodes

You can deploy more than one compute node. Each node requires at least two network interfaces.

1.2.3 Block Storage Nodes

Service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to improve performance and security.
You can deploy more than one block storage node. Each node requires at least one network interface.

1.2.4 Object Storage Nodes

Service traffic between compute nodes and these nodes uses the management network. Production environments should implement a separate storage network to improve performance and security.
This service requires two nodes, each with at least one network interface. You can deploy more than two object storage nodes.

1.3 Server Requirements and Tuning

 

[root@controller ~]# uname -r
3.10.0-693.el7.x86_64
[root@controller ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 
[root@controller ~]#  sestatus 
SELinux status:                 disabled
systemctl  disable  firewalld.service 
[root@controller ~]# systemctl stop firewalld.service 
[root@controller ~]# systemctl status firewalld.service 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since 二 2019-04-09 14:27:49 CST; 2s ago
     Docs: man:firewalld(1)
 Main PID: 674 (code=exited, status=0/SUCCESS)
Add the Aliyun yum repositories:
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install  vim wget bash-completion lrzsz nmap  nc  tree  htop iftop  net-tools -y

Time synchronization:

yum install chrony -y 
Edit the /etc/chrony.conf file to configure the time source:
 cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server controller  iburst
(Note: `server controller iburst` is the configuration for the other nodes; on the controller itself the official guide keeps the public pool servers and adds an `allow` directive for the management subnet.)

Enable the NTP service at boot:

systemctl enable chronyd.service
systemctl start chronyd.service
On the other nodes, point chrony at the controller:
 cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server controller  iburst

On the controller node, run:
 chronyc sources
 On the other nodes, run:
 [root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? controller                    0   7     0     -     +0ns[   +0ns] +/-    0ns
(`^?` means the source has not been reached yet; after a successful sync the controller line shows `^*`.)

Reference:
https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-ntp-other.html

1.3.2 Common Workflow for Configuring OpenStack Services

General installation workflow for an OpenStack service (Keystone excepted):

1) Create the database and grant privileges;

2) Create the service user in Keystone and grant it a role;

3) Create the service entity in Keystone and register the API endpoints;

4) Install the packages;

5) Edit the configuration files (database connection and so on);

6) Sync the database;

7) Start the services.
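The seven steps can be sketched as a generic shell outline; `SERVICE`, `PASS`, and the echoed command text are hypothetical placeholders that illustrate the pattern, not real invocations:

```shell
#!/bin/sh
# Generic outline of the seven steps (SERVICE/PASS and the echoed
# command text are illustrative placeholders, not executed commands).
SERVICE=glance
PASS=123456

step() { echo "[$SERVICE] $*"; }

step "1. create DB:       CREATE DATABASE $SERVICE; GRANT ... IDENTIFIED BY '$PASS'"
step "2. keystone user:   openstack user create --domain default $SERVICE"
step "3. entity+endpoint: openstack service create; openstack endpoint create"
step "4. install:         yum install openstack-$SERVICE -y"
step "5. configure:       edit /etc/$SERVICE/*.conf (database, keystone_authtoken)"
step "6. db sync:         su -s /bin/sh -c '$SERVICE-manage db_sync' $SERVICE"
step "7. start:           systemctl enable --now openstack-$SERVICE-*.service"
```

Every service below (Glance, Nova, Neutron, Cinder) follows this same shape.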

1.3.3 OpenStack Service Deployment Order

Deploy the services in the following order:

[1]  Prepare the base environment https://docs.openstack.org/install-guide/environment-packages-rdo.html
[2]  Deploy the Keystone identity service (tokens) https://docs.openstack.org/keystone/queens/install/
[3]  Deploy the Glance image service https://docs.openstack.org/glance/queens/install/
[4]  Deploy the Nova compute service (KVM) https://docs.openstack.org/nova/queens/install/
[5]  Deploy the Neutron networking service https://docs.openstack.org/neutron/queens/install/
[6]  Deploy Horizon for the web UI https://docs.openstack.org/horizon/queens/install/
[7]  Deploy the Cinder block storage service https://docs.openstack.org/cinder/queens/install/

1.3.4 Installing and Configuring the OpenStack Packages

Note: unless stated otherwise, run the following on all nodes.
yum install -y centos-release-openstack-queens       # install the Queens yum repository
yum upgrade -y                                       # update the system
yum install -y python-openstackclient                # install the OpenStack client
yum install -y openstack-selinux        # required when SELinux is enabled; SELinux is disabled in this setup

1.3.5 Installing the Database (controller node)

 yum install mariadb mariadb-server python2-PyMySQL -y 
Create the configuration file:
cat > /etc/my.cnf.d/openstack.cnf <<-'EOF'
[mysqld]
bind-address = 10.0.0.86
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
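A side note on the `<<-'EOF'` form used above: quoting the delimiter disables variable expansion inside the here-document, so a literal `$` in a config value survives intact. A minimal sketch (the /tmp path is illustrative):

```shell
# Quoted delimiter: the $HOME below is written literally, not expanded.
cat > /tmp/openstack.cnf <<-'EOF'
[mysqld]
bind-address = 10.0.0.86
max_connections = 4096
# a literal dollar sign survives: $HOME
EOF
grep -c '\$HOME' /tmp/openstack.cnf   # prints 1
```

With an unquoted delimiter (`<<EOF`), `$HOME` would have been expanded by the shell before the file was written.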

Start MariaDB:
systemctl enable mariadb.service
systemctl start mariadb.service
Run the MariaDB secure installation:

To secure the database service, run the ``mysql_secure_installation`` script. In particular, set an appropriate password for the database root user.
[root@controller ~]# mysql_secure_installation
The MySQL root password is set to 123456 here.
https://docs.openstack.org/install-guide/environment-sql-database-rdo.html

1.3.6 Installing and Configuring RabbitMQ (controller node)

1. Install the message queue component:
yum install rabbitmq-server -y
2. Enable the service at boot:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
3. Add the openstack user:
rabbitmqctl add_user openstack 123456
4. Permit configure, write, and read access for the openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Reference:
https://docs.openstack.org/install-guide/environment-messaging-rdo.html

1.3.7 Installing Memcached (controller node)

Note: the Identity service authentication mechanism uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, secure it with a combination of firewalling, authentication, and encryption.

1. Install the packages:
 yum install memcached python-memcached -y
2. Edit /etc/sysconfig/memcached:
 cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1"
OPTIONS="-l 10.0.0.86"  # changed: set to the address of the memcached host (or its subnet)
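/etc/sysconfig/memcached is a plain shell fragment that the systemd unit reads, so the edit can be sanity-checked by sourcing a copy and inspecting OPTIONS (the /tmp path is illustrative):

```shell
# Write a copy of the fragment and source it to confirm the listen address.
cat > /tmp/memcached.sysconfig <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.86"
EOF
. /tmp/memcached.sysconfig
echo "memcached will listen on ${OPTIONS#-l } port $PORT"
```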

3. Enable the service at boot:
systemctl enable memcached.service
systemctl start memcached.service
4. Check the listening services:
[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      3269/beam.smp       
tcp        0      0 10.0.0.86:3306          0.0.0.0:*               LISTEN      3454/mysqld         
tcp        0      0 10.0.0.86:11211         0.0.0.0:*               LISTEN      4832/memcached      
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3268/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3560/sendmail: acce 
tcp6       0      0 :::5672                 :::*                    LISTEN      3269/beam.smp       
tcp6       0      0 :::22                   :::*                    LISTEN      3268/sshd           
udp        0      0 0.0.0.0:68              0.0.0.0:*                           3006/dhclient       
udp        0      0 0.0.0.0:68              0.0.0.0:*                           3004/dhclient       
udp        0      0 127.0.0.1:323           0.0.0.0:*                           2937/chronyd        
udp6       0      0 ::1:323                 :::*                                2937/chronyd   

https://docs.openstack.org/install-guide/environment-memcached-rdo.html

1.3.8 Installing Etcd (controller node)

1. Install the package:
yum install etcd -y
2. Edit the /etc/etcd/etcd.conf file:
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.86:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.86:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.86:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.86:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.0.0.86:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
3. Enable and start the etcd service:
systemctl enable etcd
systemctl start etcd

https://docs.openstack.org/install-guide/environment-etcd-rdo.html

1.4 Installing the OpenStack Components

Official documentation:
https://docs.openstack.org/install-guide/openstack-services.html

1.4.1 Installing Keystone (controller node)

1.4.1.1 Create the Database

Connect to the database server as root:
mysql -u root -p
Create the keystone database:
 CREATE DATABASE keystone;
Grant proper access to the ``keystone`` database:
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';

https://docs.openstack.org/keystone/queens/install/keystone-install-obs.html

1.4.1.2 Install Keystone

yum install openstack-keystone httpd mod_wsgi

1.4.1.3 Edit /etc/keystone/keystone.conf


[database]
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
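A sketch of writing that fragment non-interactively and confirming it landed (the /tmp path is illustrative; on the node the target is /etc/keystone/keystone.conf):

```shell
# Write the two required settings and verify them with grep.
CONF=/tmp/keystone.conf
cat > "$CONF" <<'EOF'
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
EOF
grep -nE '^(connection|provider)' "$CONF"
```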

1.4.1.4 Sync the Keystone Database

su -s /bin/sh -c "keystone-manage db_sync" keystone

1.4.1.5 Initialize the Fernet Keys

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

1.4.1.6 Bootstrap the Identity Service

keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

1.4.1.7 Configure the Apache HTTP Server

1. Edit /etc/httpd/conf/httpd.conf and set the ServerName option:
ServerName controller
2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
3. Enable the service at boot:
systemctl enable httpd.service
systemctl start httpd.service
4. Configure the administrative account environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

1.4.1.8 Create a Domain, Projects, Users, and Roles

1. Create a domain:
[root@controller ~]# openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | An Example Domain                |
| enabled     | True                             |
| id          | 0604805ddd344ce58b3ea3b888fd0cb6 |
| name        | example                          |
| tags        | []                               |
+-------------+----------------------------------+
2. Create the service project:
 openstack project create --domain default   --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 98c6fb71dd0145c5adba3db5f50ab8d0 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

3. Create the demo project:
 openstack project create --domain default    --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | f22400e4d08d41bca9c79d742d4ac585 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

4. Create the demo user:
openstack user create --domain default   --password-prompt demo
User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | bf07826961bc48b48523ee10036aeb03 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

5. Create the user role:
openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | bf1766b34fee45bfbaf78351b31831a0 |
| name      | user                             |
+-----------+----------------------------------+

6. Add the user role to the demo project and user:
openstack role add --project demo --user demo user
Note: this command produces no output on success.

1.4.1.9.1 Verify Operation of the Identity Service

1. Unset the temporary environment variables:
 unset OS_AUTH_URL OS_PASSWORD

2. Request an authentication token as the admin user:
 openstack --os-auth-url http://controller:35357/v3 \
>   --os-project-domain-name Default --os-user-domain-name Default \
>   --os-project-name admin --os-username admin token issue
Password: 
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-04-11T10:32:57+0000                                                                                                                                                                |
| id         | gAAAAABcrwnJbTMQkmiveA2txxPeYrVXPoBd_85wUxIWsaYnQQGpg-nJH0uBBHMqM5TZG1NtNVO-h1tiOlOZxygYGhPGX1Jt0-y_8-MThEl_o_mDtTzO4JV1KmU9maATqcfJMjKEv_AHGhM4FWLblel2HdDhFD5__WOL2205-fOjX6Bu13eFT4E |
| project_id | 49384508fdd84f40beec710e0266ac94                                                                                                                                                        |
| user_id    | 0b5bdbe507ee41b48b50cd7379843b6b                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

3. Request an authentication token as the demo user:
 openstack --os-auth-url http://controller:5000/v3 \
>   --os-project-domain-name Default --os-user-domain-name Default \
>   --os-project-name demo --os-username demo token issue
Password: 
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-04-11T10:34:05+0000                                                                                                                                                                |
| id         | gAAAAABcrwoNsJOI9NS4m20DlMU-BeAyzGKlyXGhZofIsyfYA7R4iO0iVYyF1RdT7Nm3TqvV2dlmJdNz1ak8EdAWKugDne8SwGwTRw1AEju1Q9gxJ1WqXikYNi2ZVVyL_ectDbWXU4Pv99lNUu15haewAr0QScEHe2f9Oy07u7oQaqYnpGCl1SY |
| project_id | f22400e4d08d41bca9c79d742d4ac585                                                                                                                                                        |
| user_id    | bf07826961bc48b48523ee10036aeb03                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

1.4.1.9.2 Notes on Domains, Projects, Users, and Roles
Type / Description
Domain: a collection of projects and users; in a public or private cloud it often represents one customer
Group: a collection of some of the users within a domain
Project: a collection of IT infrastructure resources, such as instances, volumes, and images
Role: an authorization; expresses a user's permissions on a project's resources
Token: a time-limited credential for a user against a target (a project or a domain)

1.4.1.10 Create OpenStack Client Environment Scripts

1. Create the admin-openrc script:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

2. Create the demo-openrc script:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

3. Source the script and request an authentication token:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-04-11T15:07:43+0000                                                                                                                                                                |
| id         | gAAAAABcr0ovjM7EByI_y13qlBy55SvRBiAj4oAv6IQcSxtmvdm_PZ8C7k6Tl4lzft6vfo6RBYEMIkqoks_EEXHhWm7hHr5kkbWRKvD6C-nb6vbXyox3CBXobSblhEbMaz80qEtVYoOYPkmruqp2Nu_bYvKFvGzXIb6NhBhGf6IiAf73OFzGmck |
| project_id | 49384508fdd84f40beec710e0266ac94                                                                                                                                                        |
| user_id    | 0b5bdbe507ee41b48b50cd7379843b6b                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
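The openrc files are ordinary sourceable shell fragments; a quick sketch of writing one and confirming the variables land in the current shell (the /tmp path is illustrative; the values mirror admin-openrc above):

```shell
# Write an openrc fragment, source it, and confirm the client
# environment variables are exported into the current shell.
cat > /tmp/admin-openrc <<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
. /tmp/admin-openrc
echo "client will authenticate as $OS_USERNAME against $OS_AUTH_URL"
```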

1.4.2 Installing the Glance Service (controller node)


glance-api: accepts Image API calls for image discovery, retrieval, and storage (upload, download, list, delete).
glance-registry: stores, processes, and retrieves metadata about images, such as size and type.
Database: stores image metadata; you can choose the database to your preference, but most deployments use MySQL or SQLite.
Storage repository for image files: various repository types are supported, including normal file systems, Object Storage, RADOS block devices, HTTP, and Amazon S3. Note that some repositories support only read-only usage.
Metadata definition service: a common API for vendors, administrators, services, and users to define custom metadata. This metadata can be used on different types of resources, such as images, artifacts, volumes, quotas, and aggregates. A definition includes the new property's key, description, constraints, and the resource types it can be associated with.

Official reference:
https://docs.openstack.org/glance/queens/install/install-rdo.html

1.4.2.1 Create the Glance Database and Grant Access

mysql -u root -p

CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY '123456';

1.4.2.2 Source the admin Credentials and Create the Service Credentials

. admin-openrc
Create the glance user:
 openstack user create --domain default --password-prompt glance
User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 0a08f160d9d74c88b53510110ac6a0f9 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Add the admin role to the glance user and service project:
openstack role add --project service --user glance admin

1.4.2.3 Create the glance Service Entity and the Image Service API Endpoints

Create the glance service entity:
 openstack service create --name glance \
>   --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | ac1adf94d2c24e408828b393bb4f79ec |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
Create the Image service API endpoints:
openstack endpoint create --region RegionOne \
>   image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0771486331e9494cbb7b48af7ace3de3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ac1adf94d2c24e408828b393bb4f79ec |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

 openstack endpoint create --region RegionOne \
>   image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6041ef6345a54fed9ab0dabb0a737f29 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ac1adf94d2c24e408828b393bb4f79ec |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

 openstack endpoint create --region RegionOne \
>   image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6c024a6d21d647d29cd1fd2a328df7c7 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ac1adf94d2c24e408828b393bb4f79ec |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

1.4.2.4 Install and Configure the Components

/etc/glance/glance-api.conf       # accepts Image API calls for image discovery, retrieval, and storage
/etc/glance/glance-registry.conf  # stores, processes, and retrieves image metadata, such as size and type
1.4.2.4.1 Install the packages
yum install openstack-glance -y

1.4.2.4.2 Install openstack-utils for scripted configuration

The configuration files can be edited non-interactively with this tool:

yum install openstack-utils.noarch -y
[root@controller ~]# rpm -ql openstack-utils
/usr/bin/openstack-config
1.4.2.4.3 Edit the /etc/glance/glance-api.conf file
In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://glance:123456@controller/glance

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
# ...
flavor = keystone

In the [glance_store] section, configure the local file system store and the location of image files:
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Equivalent commands:
cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf

openstack-config --set /etc/glance/glance-api.conf  database  connection  mysql+pymysql://glance:123456@controller/glance

openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken  auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken  memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken password 123456
openstack-config --set /etc/glance/glance-api.conf  paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf  glance_store stores  file,http
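The `grep '^[a-Z\[]'` filter used above keeps only `[section]` headers and active `key = value` lines, dropping comments and blank lines (writing the range as `a-zA-Z` is locale-safe, unlike `a-Z`). Its effect on a toy config (the /tmp path is illustrative):

```shell
# Strip comments and blank lines from an ini-style file, keeping only
# [section] headers and lines that begin with a letter.
cat > /tmp/sample.conf <<'EOF'
[DEFAULT]
# this comment is dropped
debug = false

[database]
#connection = <None>
connection = mysql+pymysql://glance:123456@controller/glance
EOF
grep '^[a-zA-Z[]' /tmp/sample.conf
```

This is why the .bak copy is made first: the filtered file loses the commented-out defaults shipped with the package.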
1.4.2.4.4 Edit the /etc/glance/glance-registry.conf file
In the [database] section, configure database access:
[database]
...
connection = mysql+pymysql://glance:123456@controller/glance

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]
# ...
flavor = keystone

Equivalent commands:
cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf  database  connection  mysql+pymysql://glance:123456@controller/glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_domain_name  Default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken user_domain_name  Default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken password  123456
openstack-config --set /etc/glance/glance-registry.conf  paste_deploy flavor  keystone

1.4.2.5 Sync the Database

su -s /bin/sh -c "glance-manage db_sync" glance

/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1336: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
INFO  [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO  [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO  [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO  [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: queens_expand01, current revision(s): queens_expand01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO  [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO  [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: queens_contract01, current revision(s): queens_contract01
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.

Check that the database synced successfully (the count includes the header line):
mysql -uroot -p  glance -e "show tables" |wc -l 
Enter password: 
16

1.4.2.6 Start the Glance Services

systemctl enable openstack-glance-api.service   openstack-glance-registry.service
systemctl start  openstack-glance-api.service openstack-glance-registry.service

1.4.2.7 Verify Glance Operation

1. Source the admin credentials:
. admin-openrc
2. Download the test image:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
3. Upload the image to the Image service with the QCOW2 disk format, the bare container format, and public visibility so that all projects can access it:
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                     |
| container_format | bare                                                 |
| created_at       | 2019-04-12T08:36:50Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/8e53b03c-08f8-456c-b245-0f4e7e783c5c/file |
| id               | 8e53b03c-08f8-456c-b245-0f4e7e783c5c                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 49384508fdd84f40beec710e0266ac94                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 12716032                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2019-04-12T08:36:50Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

4. List the uploaded images; the image status should be active:
openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 8e53b03c-08f8-456c-b245-0f4e7e783c5c | cirros | active |
+--------------------------------------+--------+--------+
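The checksum field that glance reported above is the MD5 digest of the image file. As a quick sanity check (a sketch, not part of the official install steps), you can recompute the digest locally and compare it with the value glance stores:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum shown by `openstack image show cirros`:
# md5_of_file("cirros-0.4.0-x86_64-disk.img") should equal
# 443b7623e27ecf03dc9e01ee93f67afe if the download was not corrupted.
```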

1.4.3 Compute service (nova) deployment


OpenStack Compute interacts with OpenStack Identity for authentication, with the OpenStack Image service for disk and server images, and with the OpenStack Dashboard for the user and administrative interface. Image access is restricted by project and user, and quotas are limited per project (for example, the number of instances). OpenStack Compute scales horizontally on standard hardware and downloads images to launch instances.

OpenStack Compute consists of the following components:

nova-api service
    Accepts and responds to end-user compute API calls. The service supports the OpenStack Compute API. It enforces some policies and initiates most orchestration activities, such as running an instance.
nova-api-metadata service
    Accepts metadata requests from instances. It is generally used when running nova-network in multi-host mode. See the metadata service section of the Compute administrator guide for details.
nova-compute service
    A worker daemon that creates and terminates virtual machine instances through hypervisor APIs, for example: XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware. Processing is fairly complex: basically, the daemon accepts actions from the queue and performs a series of system commands, such as launching a KVM instance, while updating its state in the database.
nova-placement-api service
    Tracks the inventory and usage of each provider. See the Placement API documentation for details.
nova-scheduler service
    Takes a virtual machine instance request from the queue and determines on which compute server host it should run.
nova-conductor module
    Mediates interactions between the nova-compute service and the database. It eliminates direct access to the cloud database by the nova-compute service. The nova-conductor module scales horizontally; however, do not deploy it on nodes where the nova-compute service runs. See the conductor section of the configuration options for more information.
nova-consoleauth daemon
    Authorizes tokens for users that console proxies provide. See nova-novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. You can run proxies of either type against a single nova-consoleauth service in a cluster configuration. See About nova-consoleauth for information.
nova-novncproxy daemon
    Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.
nova-spicehtml5proxy daemon
    Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 clients.
nova-xvpvncproxy daemon
    Provides a proxy for accessing running instances through a VNC connection. Supports an OpenStack-specific Java client.
The queue
    A central hub for passing messages between daemons. Usually implemented with RabbitMQ, but it can also be implemented with another AMQP message queue such as ZeroMQ.
SQL database
    Stores most build-time and run-time states for a cloud infrastructure, including: available instance types, instances in use, available networks, and projects. In theory, OpenStack Compute can support any database that SQLAlchemy supports; common databases are SQLite3 (for test and development work), MySQL, MariaDB, and PostgreSQL.

Official reference: https://docs.openstack.org/nova/queens/install/controller-install-obs.html

1.4.3.1 Install and configure the controller node

1.4.3.1.1 Create the nova_api, nova, and nova_cell0 databases
1. Create the databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
2. Grant access to the databases:
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

1.4.3.1.2 Create the user in keystone and grant access
1. Create the nova user:
. admin-openrc
openstack user create --domain default --password-prompt nova
User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 91083e18891c4c05b36e12bb506b3d1f |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
2. Add the admin role to the nova user:
openstack role add --project service --user nova admin

1.4.3.1.3 Create the service entity and register the API endpoints in keystone
1. Create the nova service entity:
openstack service create --name nova \
>   --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 0f58713448fd4b9789b48ffc2c6c74c7 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

2. Create the Compute API service endpoints:
openstack endpoint create --region RegionOne \
>   compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ce7728ddff17454f840b659f213c8a60 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0f58713448fd4b9789b48ffc2c6c74c7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
 openstack endpoint create --region RegionOne \
>   compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6cda7db1af2e424e963711155720437e |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0f58713448fd4b9789b48ffc2c6c74c7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne \
>   compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0352cba1fc2541768bb9e242a1beeae1 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0f58713448fd4b9789b48ffc2c6c74c7 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

3. Create a placement service user:
openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | ceec4a4f39044371a8fbfb33a14bf063 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
4. Add the placement user to the service project with the admin role:
 openstack role add --project service --user placement admin
5. Create the Placement API service entry in the service catalog:
 openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 375d7a15748f4a78b4f984ddab906045 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

6. Create the Placement API service endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | eed373e9c2ca4740b543f1dc1463ea42 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 375d7a15748f4a78b4f984ddab906045 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e8aec5123fd7448f917200d530a56eac |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 375d7a15748f4a78b4f984ddab906045 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6dd34e225e4949f69da940c1829ffbf8 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 375d7a15748f4a78b4f984ddab906045 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
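Each service gets the same URL registered three times, once per interface (public, internal, admin). As a small illustration of that pattern (not an official tool), a sketch that generates the repetitive endpoint commands shown above:

```python
def endpoint_commands(service_type, url, region="RegionOne"):
    """Generate the three `openstack endpoint create` commands for a service,
    one per interface, mirroring the manual commands in this guide."""
    return [
        f"openstack endpoint create --region {region} {service_type} {iface} {url}"
        for iface in ("public", "internal", "admin")
    ]

# for cmd in endpoint_commands("placement", "http://controller:8778"):
#     print(cmd)
```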

1.4.3.1.4 Install the packages
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api

Package descriptions:
nova-api          # provides the API endpoints
nova-scheduler    # schedules instances onto compute hosts
nova-conductor    # performs database operations on behalf of compute nodes
nova-consoleauth  # authorizes tokens for the VNC console proxies
nova-novncproxy   # provides the web-based VNC console
nova-compute      # drives libvirtd to manage the virtual machine lifecycle

1.4.3.1.5 Edit the configuration file
1. Edit /etc/nova/nova.conf.
In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure database access:
[api_database]
# ...
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:123456@controller/nova
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

In the [DEFAULT] section, set the my_ip option to the management interface IP address of the controller node:

[DEFAULT]
# ...
my_ip = 10.0.0.86

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
# ...
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

Equivalent commands:
cp  /etc/nova/nova.conf{,.bak}
grep '^[a-zA-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  api_database connection mysql+pymysql://nova:123456@controller/nova_api
openstack-config --set /etc/nova/nova.conf  database connection mysql+pymysql://nova:123456@controller/nova
openstack-config --set /etc/nova/nova.conf  DEFAULT transport_url rabbit://openstack:123456@controller
openstack-config --set /etc/nova/nova.conf  api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf  keystone_authtoken auth_url http://controller:35357/v3
openstack-config --set /etc/nova/nova.conf  keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken password 123456
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip 10.0.0.86
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  vnc enabled true
openstack-config --set /etc/nova/nova.conf  vnc server_listen 10.0.0.86
openstack-config --set /etc/nova/nova.conf  vnc server_proxyclient_address 10.0.0.86
openstack-config --set /etc/nova/nova.conf  glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  placement os_region_name RegionOne 
openstack-config --set /etc/nova/nova.conf  placement project_domain_name Default 
openstack-config --set /etc/nova/nova.conf  placement project_name service
openstack-config --set /etc/nova/nova.conf  placement auth_type password
openstack-config --set /etc/nova/nova.conf  placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf  placement auth_url http://controller:35357/v3
openstack-config --set /etc/nova/nova.conf  placement username placement
openstack-config --set /etc/nova/nova.conf  placement password 123456

Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

systemctl restart httpd

1.4.3.1.6 Populate the databases
Populate the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
ff73e4a9-0e57-45f3-b645-4aa06901282b

Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova

1.4.3.1.7 Verify that the nova cell0 and cell1 cells are registered correctly
nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|  Name |                 UUID                 |           Transport URL            |              Database Connection                |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | ff73e4a9-0e57-45f3-b645-4aa06901282b | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+

1.4.3.1.8 Start the services and enable them at boot
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service  

Verify that the services started successfully:
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service |grep "active" |wc -l
5

1.4.3.2 Install and configure a compute node

Official reference:
https://docs.openstack.org/nova/queens/install/compute-install-rdo.html

1.4.3.2.1 Install the packages
yum install openstack-nova-compute
1.4.3.2.2 Edit the /etc/nova/nova.conf configuration file
In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:35357/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

In the [DEFAULT] section, set the my_ip option to the management interface IP address of the compute node:
[DEFAULT]
# ...
my_ip = 10.0.0.17

In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

Equivalent commands:
cp /etc/nova/nova.conf{,.bak}
grep '^[a-zA-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT transport_url rabbit://openstack:123456@controller
openstack-config --set /etc/nova/nova.conf  api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf  keystone_authtoken auth_url http://controller:35357/v3
openstack-config --set /etc/nova/nova.conf  keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken user_domain_name default 
openstack-config --set /etc/nova/nova.conf  keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken password 123456
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip 10.0.0.17
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  vnc enabled True
openstack-config --set /etc/nova/nova.conf  vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc server_proxyclient_address 10.0.0.17
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf  glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  placement os_region_name RegionOne 
openstack-config --set /etc/nova/nova.conf  placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf  placement project_name service
openstack-config --set /etc/nova/nova.conf  placement auth_type password
openstack-config --set /etc/nova/nova.conf  placement user_domain_name Default 
openstack-config --set /etc/nova/nova.conf  placement auth_url http://controller:35357/v3
openstack-config --set /etc/nova/nova.conf  placement username placement 
openstack-config --set /etc/nova/nova.conf  placement password 123456

Note: the openstack-config tool cannot expand the $my_ip variable, so use the literal IP address instead.
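Each openstack-config --set invocation above is an idempotent INI edit: create the section if needed, then set one key. A minimal Python sketch of the same section/key/value operation, using only the standard configparser module (an illustration of the behavior, not the actual tool):

```python
import configparser

def ini_set(path, section, key, value):
    """Set section/key to value in an INI file, creating the file and
    section if needed -- mirrors `openstack-config --set path section key value`."""
    cfg = configparser.ConfigParser()
    cfg.read(path)  # missing file is silently treated as empty
    if section != "DEFAULT" and not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, key, value)
    with open(path, "w") as f:
        cfg.write(f)

# e.g. ini_set("/etc/nova/nova.conf", "vnc", "server_proxyclient_address", "10.0.0.17")
```

Running it twice with different values leaves only the last value, which is why the commands above are safe to re-run.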

1.4.3.2.3 Start the services and enable them at boot
1. Determine whether your compute node supports hardware acceleration for virtual machines:
 egrep -c '(vmx|svm)' /proc/cpuinfo
 If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
 If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
Here the value is 0, so /etc/nova/nova.conf must be modified:
[libvirt]
# ...
virt_type = qemu
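The egrep check above just looks for CPU feature flags in /proc/cpuinfo: vmx is Intel VT-x and svm is AMD-V. The same decision logic, sketched in Python for clarity (the function names are illustrative, not part of nova):

```python
import re

def hw_accel_supported(cpuinfo_text):
    """Return True if the cpuinfo text advertises VT-x (vmx) or AMD-V (svm)."""
    return bool(re.search(r"\b(vmx|svm)\b", cpuinfo_text))

def pick_virt_type(cpuinfo_text):
    """Choose the nova [libvirt] virt_type: kvm with acceleration, else qemu."""
    return "kvm" if hw_accel_supported(cpuinfo_text) else "qemu"

# with open("/proc/cpuinfo") as f:
#     print(pick_virt_type(f.read()))
```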
2. Start the Compute service, including its dependencies, and configure it to start automatically when the system boots:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
3. An "AMQP server on controller:5672 is unreachable" error message likely indicates that the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node. (Here the firewall rules are simply flushed:)
iptables -F
iptables -X
iptables -Z
4. Verify that the services started successfully:
systemctl status libvirtd.service openstack-nova-compute.service |grep "active" |wc -l 
2
Both services are running.
1.4.3.2.4 Add the compute node to the cell database
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. List the service components to verify that each process started and registered successfully:
openstack compute service list
3. Discover the compute hosts:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': ff73e4a9-0e57-45f3-b645-4aa06901282b
Checking host mapping for compute host 'compute': 957e9c70-c1f1-45f4-af9e-4a0dbb0d56c5
Creating host mapping for compute host 'compute': 957e9c70-c1f1-45f4-af9e-4a0dbb0d56c5
Found 1 unmapped computes in cell: ff73e4a9-0e57-45f3-b645-4aa06901282b

Note:
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
1.4.3.2.5 Verify the services
# Verify the Identity service
openstack user list 
# Verify the Image service
openstack image list 
# Verify the Compute service
openstack compute service list
[root@controller ~]# openstack user list 
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 0a08f160d9d74c88b53510110ac6a0f9 | glance    |
| 0b5bdbe507ee41b48b50cd7379843b6b | admin     |
| 91083e18891c4c05b36e12bb506b3d1f | nova      |
| bf07826961bc48b48523ee10036aeb03 | demo      |
| ceec4a4f39044371a8fbfb33a14bf063 | placement |
+----------------------------------+-----------+
List images:
[root@controller ~]# openstack image list 
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 8e53b03c-08f8-456c-b245-0f4e7e783c5c | cirros | active |
+--------------------------------------+--------+--------+
List the service components:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  3 | nova-conductor   | controller | internal | enabled | up    | 2019-04-15T06:04:20.000000 |
|  4 | nova-consoleauth | controller | internal | enabled | up    | 2019-04-15T06:04:12.000000 |
|  5 | nova-scheduler   | controller | internal | enabled | up    | 2019-04-15T06:04:20.000000 |
|  6 | nova-compute     | compute    | nova     | enabled | up    | 2019-04-15T06:04:19.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
The output should show three service components enabled on the controller node and one service component enabled on the compute node.
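To automate this kind of check, a small sketch that parses the ASCII table printed by `openstack compute service list` and tallies the services that are up per host (the column layout is assumed to match the output shown above; in practice `--format json` would be more robust):

```python
def parse_service_table(table_text):
    """Parse an openstack-style ASCII table into a list of row dicts."""
    # Data and header lines start with "|"; border lines start with "+".
    lines = [l for l in table_text.splitlines() if l.startswith("|")]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    return [dict(zip(header, (c.strip() for c in line.strip("|").split("|"))))
            for line in lines[1:]]

def services_up_by_host(table_text):
    """Count services in state 'up' for each host."""
    counts = {}
    for row in parse_service_table(table_text):
        if row["State"] == "up":
            counts[row["Host"]] = counts.get(row["Host"], 0) + 1
    return counts
```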

# List the API endpoints in the Identity service to verify connectivity to it
 openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

Check that the cells and Placement API are working properly:
 nova-status upgrade check
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Resource Providers      |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: API Service Version     |
| Result: Success                |
| Details: None                  |
+--------------------------------+

1.4.4 Networking (neutron) service

1.4.4.1 Networking service overview

OpenStack Networking (neutron) allows you to create and attach interface devices managed by other OpenStack services to networks. Plug-ins can be implemented to accommodate different networking equipment and software, providing flexibility to OpenStack architecture and deployment.
The Networking service contains the following components:
neutron-server
    Accepts API requests and routes them to the appropriate OpenStack Networking plug-in for action.
OpenStack Networking plug-ins and agents
    Plug and unplug ports, create networks or subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product.
    The common agents are the L3 (layer 3) agent, the DHCP (dynamic host IP addressing) agent, and a plug-in agent.
Messaging queue
    Used by most OpenStack Networking installations to route information between the neutron-server and the various agents. It also acts as a database to store networking state for particular plug-ins.
OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity for its instances.

1.4.4.2 Install and configure the controller node

Official reference: https://docs.openstack.org/neutron/queens/install/environment-networking-obs.html

1.4.4.2.1 Create the neutron database and grant access
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123456';

1.4.4.2.2 Create the user in keystone and grant access
 . admin-openrc
Create the neutron user:
openstack user create --domain default --password-prompt neutron
User Password:123456
Repeat User Password:123456
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 82777eb147594bab9d1ccc4704076724 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Add the admin role to the neutron user:
 openstack role add --project service --user neutron admin
Create the service entity and register the API endpoints in keystone:
 openstack service create --name neutron \
>   --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 72b421cdb2d64ef6911f0d41a8911cb1 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

Create the Networking service API endpoints:
openstack endpoint create --region RegionOne \
>   network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c7027bc70489438499e4583e56bf8a2c |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 72b421cdb2d64ef6911f0d41a8911cb1 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]#  openstack endpoint create --region RegionOne \
>   network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4460bc2e1f77426c8a5c7842cee78f9b |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 72b421cdb2d64ef6911f0d41a8911cb1 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3d11005ec8714e91bdd06d73e71b4a6f |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 72b421cdb2d64ef6911f0d41a8911cb1 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

1.4.4.2.3 Configure the networking part (controller node)
You can deploy the Networking service using one of the two architectures represented by options 1 and 2.

Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only an administrator or other privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks.
The demo or other unprivileged user can manage self-service networks, including routers that provide connectivity between self-service and provider networks. Floating IP addresses additionally provide connectivity to instances on self-service networks from external networks such as the Internet. Self-service networks typically use overlay (tunnel) protocols such as VXLAN. Option 2 also supports attaching instances to provider networks.
This guide chooses Networking Option 2: Self-service networks.
1.4.4.2.3.1 Install the packages

yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
1.4.4.2.3.2 Configure the server component: edit /etc/neutron/neutron.conf

In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://neutron:123456@controller/neutron

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Command set:
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-zA-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:123456@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 123456
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password 123456
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
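The cp + grep pair at the top of each command set in this guide is a strip-comments idiom: back up the packaged file, then keep only section headers and key = value lines. A self-contained sketch on a throwaway file (the /tmp path is only for illustration):

```shell
# Strip-comments idiom: keep lines starting with a letter or '[',
# dropping comment and blank lines. Demonstrated on a throwaway file.
cat > /tmp/demo.conf <<'EOF'
[DEFAULT]
# a comment
core_plugin = ml2

[database]
connection = sqlite://
EOF
grep '^[a-zA-Z\[]' /tmp/demo.conf
```

Note the explicit `a-zA-Z` range: the `[a-Z]` form seen in some copies of these commands is locale-dependent and errors out under the C locale.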
1.4.4.2.3.3 Configure the layer-2 (ML2) plug-in: /etc/neutron/plugins/ml2/ml2_conf.ini
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions:

In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan

In the [ml2] section, enable VXLAN self-service networks:
[ml2]
# ...
tenant_network_types = vxlan

In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
Note: after you configure the ML2 plug-in, removing values from the ``type_drivers`` option can lead to database inconsistency.

In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security

In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider

In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true

Command set:
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-zA-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

1.4.4.2.3.4 Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:eth1
Note: the interface here is the underlying provider physical network interface (the external-network NIC, eth1 in this guide).
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = 10.0.0.86 
l2_population = true
Note: 10.0.0.86 here is the IP address of the underlying physical (management) network interface that handles overlay networks.

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Ensure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables

vim /usr/lib/sysctl.d/00-system.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

sysctl -p
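If `sysctl -p` reports "No such file or directory" for these keys, the bridge-netfilter module is likely not loaded yet; on newer CentOS 7 kernels it was split out of the bridge module. A hedged check (requires root):

```shell
# Load the bridge-netfilter module, then confirm both keys read 1.
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```

To make the module load survive reboots, it can also be listed in a file under /etc/modules-load.d/.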

Command set:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-zA-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan  enable_vxlan true 
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan  local_ip 10.0.0.86
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan  l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

1.4.4.2.3.5 Configure the layer-3 agent: /etc/neutron/l3_agent.ini
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge

cp /etc/neutron/l3_agent.ini{,.bak} 
grep '^[a-zA-Z\[]'  /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set  /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
1.4.4.2.3.6 Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit /etc/neutron/dhcp_agent.ini and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

cp  /etc/neutron/dhcp_agent.ini{,.bak}
grep '^[a-zA-Z\[]'  /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

1.4.4.2.3.7 Configure the metadata agent

The metadata agent provides configuration information, such as credentials, to instances.
Edit /etc/neutron/metadata_agent.ini and complete the following actions: in the [DEFAULT] section, configure the metadata host and the shared secret (replace METADATA_SECRET with a suitable value; the command set below uses 123456):

[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

cp /etc/neutron/metadata_agent.ini{,.bak}
grep '^[a-zA-Z\[]'  /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set  /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set  /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret 123456
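The shared secret set here must match metadata_proxy_shared_secret in the [neutron] section of /etc/nova/nova.conf (configured in the next step). A quick consistency check, assuming both files are in their default locations:

```shell
# Both matching lines should report the same secret (123456 in this guide).
grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini /etc/nova/nova.conf
```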

1.4.4.2.3.8 Configure Compute to use the Networking service

vim /etc/nova/nova.conf

[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

Command set:
openstack-config --set  /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set  /etc/nova/nova.conf neutron auth_type password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name service
openstack-config --set  /etc/nova/nova.conf neutron username neutron
openstack-config --set  /etc/nova/nova.conf neutron password 123456
openstack-config --set  /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set  /etc/nova/nova.conf neutron metadata_proxy_shared_secret 123456

1.4.4.2.3.9 Finalize the installation and start the services
1. The Networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
>   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
.........
........
INFO  [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0, migrate to pluggable ipam
INFO  [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62, add standardattr to qos policies
INFO  [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353, Add Name and Description to the networksegments table
INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586, Add binding index to RouterL3AgentBinding
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d, Remove availability ranges.
  OK

3. Restart the Compute API service:
systemctl restart openstack-nova-api.service

4. Start the Networking services and configure them to start at system boot:
systemctl enable neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
systemctl start neutron-server.service   neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
5. Verify the services (all four should be running):
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service | grep "running" | wc -l
4
6. For networking option 2, also enable and start the layer-3 service:
systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service

1.4.4.3 Install and configure the compute node

Perform the following operations on the compute node.

Official reference: https://docs.openstack.org/neutron/queens/install/compute-install-rdo.html

1.4.4.3.1 Install the Neutron components on the compute node
yum install -y openstack-neutron-linuxbridge ebtables ipset
1.4.4.3.2 Configure the common component

The common component configuration includes the authentication mechanism, message queue, and plug-in.
Edit /etc/neutron/neutron.conf and complete the following actions:

1.4.4.3.3 Configure the networking part

Use the same networking option that you chose for the controller node to configure services specific to it. Afterwards, return here and proceed to configuring Compute to use the Networking service.

Networking option 1: provider networks

Networking option 2: self-service networks

This guide chooses networking option 2: self-service networks.

1.4.4.3.3.1 Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:
In the [linux_bridge] section, map the provider network to the physical interface; in the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

Note: OVERLAY_INTERFACE_IP_ADDRESS is the IP address of the compute node's underlying physical network interface that handles overlay networks (the command set below uses 10.0.0.17).
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-zA-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth1
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.0.0.17
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

vim /usr/lib/sysctl.d/00-system.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
sysctl -p

1.4.4.3.4 Configure Compute to use the Networking service
Edit /etc/nova/nova.conf:

[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

Command set:
 openstack-config --set  /etc/nova/nova.conf neutron url http://controller:9696
 openstack-config --set  /etc/nova/nova.conf neutron auth_url http://controller:35357
 openstack-config --set  /etc/nova/nova.conf neutron auth_type password
 openstack-config --set  /etc/nova/nova.conf neutron project_domain_name default
 openstack-config --set  /etc/nova/nova.conf neutron user_domain_name default
 openstack-config --set  /etc/nova/nova.conf neutron region_name RegionOne
 openstack-config --set  /etc/nova/nova.conf neutron project_name service
 openstack-config --set  /etc/nova/nova.conf neutron username neutron
 openstack-config --set  /etc/nova/nova.conf neutron password 123456

1.4.4.3.5 Start the services
Restart the Compute service:
systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service

Enable the Linux bridge agent at boot and start it:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

1.4.4.4 Verify the services

Only the network agents are verified here; if they are all alive, the Networking service is working correctly.

 neutron agent-list 
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 1e3a4f5c-0717-47bf-acbc-d825a579fc95 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |
| 4819066d-a5b9-4b24-9d43-eaff246a60d0 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| 74c50c99-157e-4203-b7ad-8e8912fc783a | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| bded30c6-0bfd-4a1d-9c2b-ad063ebf1273 | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| d2b54a9d-b187-40a0-97a1-12c10ae28fff | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
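As the deprecation warning above notes, the neutron client is on its way out; the same verification can be done with the unified client:

```shell
# Equivalent check with the openstack CLI: expect all five agents
# listed with an alive/happy state.
openstack network agent list
```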

1.4.5 Install the Horizon service

1. Install the package:
yum install openstack-dashboard -y 
2. Edit /etc/openstack-dashboard/local_settings and complete the following actions:

Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
Allow your hosts to access the dashboard:
ALLOWED_HOSTS = ['*']
Or ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be useful for development work, but is potentially insecure and should not be used in production.

Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

Enable the Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure the API versions:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

Configure Default as the default domain for users created through the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
Configure user as the default role for users created through the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
If you chose networking option 1, disable support for layer-3 networking services:
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

Optionally, configure the time zone:

TIME_ZONE = "Asia/Shanghai"
If not already present, add the following line to /etc/httpd/conf.d/openstack-dashboard.conf:
WSGIApplicationGroup %{GLOBAL}


Finalize the installation and restart the services:

systemctl restart httpd.service memcached.service

Log in to the web UI to verify the configuration.

Access URL:
http://10.0.0.86/dashboard/auth/login/


Domain: default

Username: admin

Password: 123456

At this point, the Horizon installation is complete.
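Before opening a browser, reachability can be sanity-checked from the shell (the URL assumes the default /dashboard path on this guide's controller, 10.0.0.86):

```shell
# A 200 status code means Apache is serving the login page.
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.86/dashboard/auth/login/
```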

1.4.6 Launch the first instance

Creation process:
1) Create virtual networks
2) Create an m1.nano flavor (equivalent to defining the virtual machine's hardware configuration)
3) Generate a key pair (OpenStack connects to instances with key pairs rather than passwords)
4) Add security group rules (security groups are implemented with iptables)
5) Launch an instance (there are three ways to launch: 1. the CLI, 2. the API, 3. the Dashboard; the Dashboard itself also operates through the API)
6) Virtual networks are either provider or self-service networks: a provider network sits on the same network as the host, while a self-service network defines its own routers and is not on the host's network

1.4.6.1 Upload an Ubuntu image

 openstack image create "Ubuntu16.04" --file xenial-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 3c9ef66aea001f0cad0623ddab1f8fc1                     |
| container_format | bare                                                 |
| created_at       | 2019-04-17T12:10:13Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/0b99a562-868c-4cd2-b8f7-050b5821aac9/file |
| id               | 0b99a562-868c-4cd2-b8f7-050b5821aac9                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | Ubuntu16.04                                          |
| owner            | 49384508fdd84f40beec710e0266ac94                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 300941312                                            |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2019-04-17T12:10:18Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
View the uploaded images:
 openstack image list
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 0b99a562-868c-4cd2-b8f7-050b5821aac9 | Ubuntu16.04 | active |
| 8e53b03c-08f8-456c-b245-0f4e7e783c5c | cirros      | active |
+--------------------------------------+-------------+--------+
Delete an image:
openstack image delete 8e53b03c-08f8-456c-b245-0f4e7e783c5c
[root@controller ~]# openstack image list
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 0b99a562-868c-4cd2-b8f7-050b5821aac9 | Ubuntu16.04 | active |
+--------------------------------------+-------------+--------+

1.4.6.2 Create virtual networks

Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance-networks-selfservice.html

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance-provider.html

1. Load the environment variables:
. admin-openrc
2. Create a network.

3. Create a subnet on the network.
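Steps 2 and 3 above give no commands; a minimal sketch following the provider-network guide linked above. The allocation pool, gateway, and DNS values are illustrative assumptions for this guide's 10.0.0.0/24 management network:

```shell
# Create a flat provider network (admin credentials required).
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

# Create a subnet on it; adjust the ranges to your environment.
openstack subnet create --network provider \
  --allocation-pool start=10.0.0.100,end=10.0.0.200 \
  --dns-nameserver 223.5.5.5 --gateway 10.0.0.2 \
  --subnet-range 10.0.0.0/24 provider
```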

1.4.7 Install the Cinder service

The Block Storage service (cinder) provides block storage devices to guest instances. How storage is provisioned and consumed is determined by the Block Storage driver, or drivers in the case of a multi-backend configuration. A variety of drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and more.
The Block Storage API and scheduler services typically run on the controller nodes. Depending on the drivers used, the volume service can run on controller nodes, compute nodes, or standalone storage nodes.
Once you are able to launch instances in your OpenStack environment, follow the instructions below to add Cinder to the base environment.

1.5 Common openstack commands

openstack --help  # show command usage

1.5.1 endpoint

List the created endpoints:
openstack endpoint list
Delete an endpoint:
openstack endpoint delete 6dd34e225e4949f69da940c1829ffbf8

 openstack endpoint --help
Command "endpoint" matches:
  endpoint add project
  endpoint create
  endpoint delete
  endpoint list
  endpoint remove project
  endpoint set
  endpoint show

1.5.2 View the default compute quotas

nova quota-defaults
+----------------------+-------+
| Quota                | Limit |
+----------------------+-------+
| instances            | 10    |
| cores                | 20    |
| ram                  | 51200 |
| metadata_items       | 128   |
| key_pairs            | 100   |
| server_groups        | 10    |
| server_group_members | 10    |
+----------------------+-------+

Update the default quotas:
usage: nova quota-class-update default --<key> <value>
# examples
nova quota-class-update default --instances 1
nova quota-class-update default --ram 5120
nova quota-class-update default --cores 2

+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 1     |
| cores                       | 2     |
| ram                         | 5120  |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+

1.5.3 Common API retrieval commands

List images via the Image API port:

curl -H "X-Auth-Token:token"  -H "Content-Type: application/json"  http://10.0.0.32:9292/v2/images
Get the role list:
curl -H "X-Auth-Token:token" -H "Content-Type: application/json" http://10.0.0.11:35357/v3/roles
Get the server list:
curl -H "X-Auth-Token:token" -H "Content-Type: application/json" http://10.0.0.11:8774/v2.1/servers
Get the network list:
curl -H "X-Auth-Token:token" -H "Content-Type: application/json" http://10.0.0.11:9696/v2.0/networks
Get the subnet list:
curl -H "X-Auth-Token:token" -H "Content-Type: application/json" http://10.0.0.11:9696/v2.0/subnets
Download an image:
curl -o clsn.qcow2 -H "X-Auth-Token:token" -H "Content-Type: application/json" http://10.0.0.11:9292/v2/images/eb9e7015-d5ef-48c7-bd65-88a144c59115/file
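The "token" placeholder in the calls above can be filled from the CLI; a sketch assuming the admin-openrc file used earlier in this guide is present:

```shell
# Issue a token, then reuse it for a raw Image API call.
. admin-openrc
TOKEN=$(openstack token issue -f value -c id)
curl -s -H "X-Auth-Token: $TOKEN" http://controller:9292/v2/images
```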

Reference: official OpenStack documentation links

Release notes: https://docs.openstack.org/releasenotes/openstack-manuals/index.html
https://docs.openstack.org/install-guide/environment.html
