CentOS 7 OpenStack Installation (Kilo Release)

wxchong 2024-08-18 00:41:51

Part 1: Architecture

Part 2: Base Environment Setup

Node layout:

192.168.211.128 controller

192.168.211.135 compute1

192.168.211.130 block1

192.168.211.138 storage1

192.168.211.141 storage2

Each node has two network interfaces.

Step 1: Configure hosts and networking

On controller and compute1:

vi /etc/hosts

192.168.211.128 controller

192.168.211.135 compute1

192.168.211.130 block1

192.168.211.138 storage1

192.168.211.141 storage2

Second network interface on compute1:

For example, in /etc/sysconfig/network-scripts/ifcfg-eno33554984 (use the file that matches your actual interface name):

DEVICE=eth1
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

Verify that the nodes can ping each other by hostname and can reach the Internet.
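The same host list is appended on every node (and reused again in the Object Storage section), so it can help to generate it from one place. A small sketch in POSIX sh; the IPs are the ones assumed throughout this guide, so adjust them for your own lab:

```shell
#!/bin/sh
# Emit the /etc/hosts entries used in this guide so the identical block
# can be appended on every node.
NODES="192.168.211.128:controller
192.168.211.135:compute1
192.168.211.130:block1
192.168.211.138:storage1
192.168.211.141:storage2"

hosts_block() {
    echo "$NODES" | while IFS=: read -r ip name; do
        printf '%s %s\n' "$ip" "$name"
    done
}

hosts_block    # append this output to /etc/hosts on each node
```

On each node, run `hosts_block >> /etc/hosts` as root, then confirm connectivity with `ping -c1 compute1`.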

Step 2: Install the NTP service

yum install ntp -y

controller:

vi /etc/ntp.conf

server 127.127.1.0

fudge 127.127.1.0 stratum 8

Enable at boot and start:

systemctl enable ntpd.service

systemctl start ntpd.service

compute1:

yum install ntp -y

vi /etc/ntp.conf

server controller iburst

Enable at boot and start:

systemctl enable ntpd.service

systemctl start ntpd.service

Step 3: Install the OpenStack packages

On controller and compute1:

Enable the EPEL repository:

yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y

Enable the RDO Kilo repository:

yum install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm -y

yum upgrade -y

Reboot the machines.

Install the OpenStack SELinux policies:

yum install openstack-selinux -y

Step 4: Install and configure the database

controller:

yum install mariadb mariadb-server MySQL-python -y

Create /etc/my.cnf.d/mariadb_openstack.cnf:

[mysqld]

bind-address = 192.168.211.128

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

Enable at boot and start:

systemctl enable mariadb.service

systemctl start mariadb.service

Secure the installation:

mysql_secure_installation

Follow the prompts to set the root password, disable anonymous access, and remove the test database.

Verify:

mysql -uroot -phkdb

Step 5: Install and configure the message queue

controller:

yum install rabbitmq-server -y

Enable at boot and start:

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

Add an openstack user:

rabbitmqctl add_user openstack rabbit_pwd

Grant the openstack user configure, write, and read access:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"


Part 3: Identity Service

Step 1: Install and configure the Identity service

controller:

1. Create the database

mysql -uroot -phkdb

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone_pwd';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_pwd';

FLUSH PRIVILEGES;

exit
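The same CREATE DATABASE / GRANT pair recurs below for glance, nova, and cinder. A hypothetical helper (not a required step of the guide) can print the SQL once and be reused; pipe its output into the mysql client:

```shell
#!/bin/sh
# service_db_sql DBNAME PASSWORD - print the SQL that creates a service
# database and grants local and remote access, as done for keystone above.
service_db_sql() {
    db=$1 pw=$2
    cat <<EOF
CREATE DATABASE $db;
GRANT ALL PRIVILEGES ON $db.* TO '$db'@'localhost' IDENTIFIED BY '$pw';
GRANT ALL PRIVILEGES ON $db.* TO '$db'@'%' IDENTIFIED BY '$pw';
FLUSH PRIVILEGES;
EOF
}

service_db_sql keystone keystone_pwd
```

For example, `service_db_sql glance glance_pwd | mysql -uroot -phkdb` covers the glance section later on.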

2. Generate a random token to use as the administration token during installation

openssl rand -hex 10 > admin_token.txt

e.g. 61c1fc9404d6b44f3fe1
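This token has to be copied into keystone.conf in step 5; generating and splicing it in one pass avoids transcription mistakes. A sketch that patches a scratch copy of the file (point CONF at /etc/keystone/keystone.conf on the real controller):

```shell
#!/bin/sh
# Generate the bootstrap token and write it into a keystone.conf-style file.
CONF=${CONF:-./keystone.conf.demo}
printf '[DEFAULT]\n#admin_token = ADMIN\n' > "$CONF"   # stand-in for the real file

TOKEN=$(openssl rand -hex 10)      # same command as above: 20 hex digits
echo "$TOKEN" > admin_token.txt    # keep a copy for the later steps
sed -i "s/^#\{0,1\}admin_token *=.*/admin_token = $TOKEN/" "$CONF"

grep '^admin_token' "$CONF"
```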

3. Install the service packages

yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached -y

4. Enable at boot and start memcached

systemctl enable memcached.service
systemctl start memcached.service

5. Edit /etc/keystone/keystone.conf:

[DEFAULT]

admin_token = 61c1fc9404d6b44f3fe1

[database]

connection = mysql://keystone:keystone_pwd@controller/keystone

[memcache]

servers = localhost:11211

[token]

provider = keystone.token.providers.uuid.Provider

driver = keystone.token.persistence.backends.memcache.Token

[revoke]

driver = keystone.contrib.revoke.backends.sql.Revoke

Optional (verbose logging):

[DEFAULT]

verbose = True

debug = false


6. Populate the Identity service database

su -s /bin/sh -c "keystone-manage db_sync" keystone

Verify:

mysql -ukeystone -pkeystone_pwd

use keystone

show tables;


7. Edit /etc/httpd/conf/httpd.conf and set:

ServerName controller


8. Create /etc/httpd/conf.d/wsgi-keystone.conf with:

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LogLevel info
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>
<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LogLevel info
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>

9. Create the WSGI directory

mkdir -p /var/www/cgi-bin/keystone

10. Fetch the WSGI application file

curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo \
  | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin

11. Set ownership and permissions

chown -R keystone:keystone /var/www/cgi-bin/keystone
chmod 755 /var/www/cgi-bin/keystone/*

12. Enable at boot and start

systemctl enable httpd.service
systemctl start httpd.service


Step 2: Create projects, users, and roles

controller:

1. Configure the administration token and endpoint as environment variables

export OS_TOKEN=61c1fc9404d6b44f3fe1

export OS_URL=http://controller:35357/v2.0

2. Create the service entity

openstack service create --name keystone --description "OpenStack Identity" identity

3. Create the API endpoint

openstack endpoint create \
  --publicurl http://controller:5000/v2.0 \
  --internalurl http://controller:5000/v2.0 \
  --adminurl http://controller:35357/v2.0 \
  --region RegionOne \
  identity

4. Create the admin project

openstack project create --description "Admin Project" admin

5. Create the admin user

openstack user create --password-prompt admin

Enter the password: admin_pwd

6. Create the admin role

openstack role create admin

7. Add the admin role to the admin project and user

openstack role add --project admin --user admin admin

8. Create the service project

openstack project create --description "Service Project" service

9. Create a non-admin project

openstack project create --description "Demo Project" hk

10. Create a non-admin user

openstack user create --password-prompt hk

Enter the password: hk_pwd

11. Create the user role

openstack role create user

12. Add the user role to the hk project and user

openstack role add --project hk --user hk user
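The "create user, grant admin on the service project" pattern repeats for every service account created later (glance, nova, cinder, swift). A hypothetical dry-run wrapper; note it uses --password instead of the interactive --password-prompt used above. RUN=echo just prints the commands; set RUN to empty on the controller to execute them:

```shell
#!/bin/sh
# make_service_user NAME PASSWORD - create a service user and grant it the
# admin role on the service project. Defaults to a dry run (RUN=echo).
RUN=${RUN-echo}

make_service_user() {
    $RUN openstack user create --password "$2" "$1"
    $RUN openstack role add --project service --user "$1" admin
}

make_service_user glance glance_pwd
```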


Step 3: Verify the Identity service

controller:

1. Remove the bootstrap token mechanism

In /usr/share/keystone/keystone-dist-paste.ini, remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

2. Unset the temporary environment variables:

echo $OS_TOKEN $OS_URL

unset OS_TOKEN OS_URL

echo $OS_TOKEN $OS_URL

3. As the admin user, request an authentication token (v2.0 API)

openstack --os-auth-url http://controller:35357 \
  --os-project-name admin --os-username admin --os-auth-type password \
  token issue

Enter the password: admin_pwd

(v3 API)

openstack --os-auth-url http://controller:35357 \
  --os-project-domain-id default --os-user-domain-id default \
  --os-project-name admin --os-username admin --os-auth-type password \
  token issue

4. As the admin user, list projects

openstack --os-auth-url http://controller:35357 \
  --os-project-name admin --os-username admin --os-auth-type password \
  project list

5. As the admin user, list users

openstack --os-auth-url http://controller:35357 \
  --os-project-name admin --os-username admin --os-auth-type password \
  user list

6. As the admin user, list roles

openstack --os-auth-url http://controller:35357 \
  --os-project-name admin --os-username admin --os-auth-type password \
  role list

7. As the non-admin user, request a token

openstack --os-auth-url http://controller:5000 \
  --os-project-domain-id default --os-user-domain-id default \
  --os-project-name hk --os-username hk --os-auth-type password \
  token issue

8. As the non-admin user, try to list users (this should fail: the command requires the admin role)

openstack --os-auth-url http://controller:5000 \
  --os-project-domain-id default --os-user-domain-id default \
  --os-project-name hk --os-username hk --os-auth-type password \
  user list


Step 4: Create client environment scripts

vi admin-openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pwd
export OS_AUTH_URL=http://controller:35357/v3

vi hk-openrc.sh

export OS_TENANT_NAME=hk
export OS_USERNAME=hk
export OS_PASSWORD=hk_pwd
export OS_AUTH_URL=http://controller:5000/v2.0

Load the environment variables:

source admin-openrc.sh

Test:

openstack token issue
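The rc file can also be written from a heredoc, which keeps the export lines (and the easy-to-miss space after `export`) consistent. A sketch using the same values as above:

```shell
#!/bin/sh
# Write admin-openrc.sh and load it into the current shell.
cat > admin-openrc.sh <<'EOF'
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pwd
export OS_AUTH_URL=http://controller:35357/v3
EOF

. ./admin-openrc.sh
echo "$OS_USERNAME -> $OS_AUTH_URL"
```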


Part 4: Image Service

Step 1: Install and configure

Components:

glance-api

glance-registry

Database

Storage repository

controller:

1. Create the database

mysql -uroot -phkdb

CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance_pwd';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_pwd';

FLUSH PRIVILEGES;

exit

2. Source the admin credentials

source admin-openrc.sh

3. Create the glance user

openstack user create --password-prompt glance

Enter the password: glance_pwd

4. Add the admin role to the glance user on the service project

openstack role add --project service --user glance admin

5. Create the glance service entity

openstack service create --name glance --description "OpenStack Image service" image

6. Create the Image service API endpoints

openstack endpoint create \
  --publicurl http://controller:9292 \
  --internalurl http://controller:9292 \
  --adminurl http://controller:9292 \
  --region RegionOne \
  image

7. Install the packages

yum install openstack-glance python-glance python-glanceclient -y

8. Edit /etc/glance/glance-api.conf:

[database]

connection=mysql://glance:glance_pwd@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = glance_pwd

[paste_deploy]

flavor = keystone

[glance_store]

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

[DEFAULT]

notification_driver = noop

verbose = True

debug=true


9. Edit /etc/glance/glance-registry.conf:

[database]

connection=mysql://glance:glance_pwd@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = glance_pwd

[paste_deploy]

flavor = keystone

[DEFAULT]

notification_driver = noop

verbose = True

debug=true

10. Populate the Image service database

su -s /bin/sh -c "glance-manage db_sync" glance

11. Verify the database

mysql -uglance -pglance_pwd

use glance

show tables;

12. Enable at boot and start

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service


Step 2: Verify the Image service

controller:

1. Use version 2 of the Image API

echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh hk-openrc.sh

2. Source the admin credentials

source admin-openrc.sh

3. Create a temporary directory

mkdir /tmp/images

4. Download the CirrOS test image

wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

5. Upload the image to the Image service

glance image-create --name "cirros-0.3.4-x86_64" --file /tmp/images/cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --visibility public --progress

6. Check the image status

glance image-list

7. Remove the temporary image directory (or rename it to keep a local backup)

mv /tmp/images /tmp/images.bak

or:

rm -r /tmp/images

(The image is already stored by the Image service, so the local copy is no longer needed.)


Part 5: Compute Service

Components:

API

nova-api

nova-api-metadata

Compute core

nova-compute

nova-scheduler

nova-conductor

Networking for VMs

nova-network

Console interface

nova-consoleauth

nova-novncproxy

nova-spicehtml5proxy

nova-xvpnvncproxy

nova-cert

Image management

nova-objectstore

euca2ools client

Step 1: Install and configure the controller node

controller:

1. Create the database and grant access

mysql -uroot -phkdb

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_pwd';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_pwd';

FLUSH PRIVILEGES;

exit

2. Source the admin credentials

source admin-openrc.sh

3. Create the service credentials

a. Create the nova user

openstack user create --password-prompt nova

Enter the password: nova_pwd

b. Add the admin role

openstack role add --project service --user nova admin

c. Create the service entity

openstack service create --name nova --description "OpenStack Compute" compute

4. Create the Compute API endpoint

openstack endpoint create \
  --publicurl http://controller:8774/v2/%\(tenant_id\)s \
  --internalurl http://controller:8774/v2/%\(tenant_id\)s \
  --adminurl http://controller:8774/v2/%\(tenant_id\)s \
  --region RegionOne \
  compute

5. Install the packages

yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
  python-novaclient -y

6. Edit /etc/nova/nova.conf

[database]

connection = mysql://nova:nova_pwd@controller/nova

[DEFAULT]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = rabbit_pwd

# Identity service

[DEFAULT]

auth_strategy=keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = nova_pwd

# Management interface IP of the controller

my_ip = 192.168.211.128

#VNC proxy

vncserver_listen = 192.168.211.128

vncserver_proxyclient_address = 192.168.211.128

[glance]

host=controller

verbose=true

debug=true

7. Populate the Compute database

su -s /bin/sh -c "nova-manage db sync" nova

8. Verify the database

mysql -unova -pnova_pwd

use nova

show tables;

9. Enable at boot and start

systemctl enable openstack-nova-api.service openstack-nova-cert.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service


Step 2: Install and configure the compute node

compute1:

1. Install the packages

yum install openstack-nova-compute sysfsutils -y

2. Edit /etc/nova/nova.conf

[DEFAULT]

# Message queue

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = rabbit_pwd

# Authentication strategy

auth_strategy = keystone

[keystone_authtoken]

# Identity service endpoints

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = nova_pwd

[DEFAULT]

# Management IP of this compute node

my_ip = 192.168.211.135

# Remote console (VNC) access

vnc_enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = 192.168.211.135

novncproxy_base_url = http://controller:6080/vnc_auto.html

# Image service host

[glance]

host = controller

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

# Optional: verbose logging

verbose=true

debug=true

3. Check whether the node supports hardware-accelerated virtualization

egrep -c '(vmx|svm)' /proc/cpuinfo

The result should be greater than zero. If it is 0, the node does not support hardware acceleration; in that case edit /etc/nova/nova.conf:

[libvirt]

virt_type = qemu
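The check and the fallback can be combined into one snippet that prints the virt_type to configure. The file path argument is only there so the logic can be exercised against a sample file; on a real compute node it defaults to /proc/cpuinfo:

```shell
#!/bin/sh
# Decide the [libvirt] virt_type from the CPU virtualization flags.
pick_virt_type() {
    CPUINFO=${1:-/proc/cpuinfo}
    if [ "$(grep -E -c '(vmx|svm)' "$CPUINFO")" -gt 0 ]; then
        echo "virt_type = kvm"     # hardware acceleration available
    else
        echo "virt_type = qemu"    # software emulation fallback
    fi
}

pick_virt_type    # run on the compute node; matches the manual check above
```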

4. Enable at boot and start

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

(If startup fails, the firewall may be blocking the required ports; open the ports or stop the firewall: systemctl stop firewalld)

Step 3: Verify the service

controller:

1. Source the admin credentials

source admin-openrc.sh

2. List the service components

nova service-list

3. List the API endpoints

nova endpoints

4. List the images known to the Image service

nova image-list


Part 6: Networking Service

Model: legacy networking (nova-network)

Step 1: Configure the controller

1. Edit /etc/nova/nova.conf

[DEFAULT]

network_api_class = nova.network.api.API

security_group_api = nova

2. Restart the services

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service

Step 2: Configure compute1

1. Install the packages

yum install openstack-nova-network openstack-nova-api -y

2. Edit /etc/nova/nova.conf

[DEFAULT]

network_api_class = nova.network.api.API

security_group_api = nova

firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver

network_manager = nova.network.manager.FlatDHCPManager

network_size = 254

allow_same_net_traffic = False

multi_host = True

send_arp_for_ha = True

share_dhcp_address = True

force_dhcp_release = True

flat_network_bridge = br100

flat_interface = eth1

public_interface = eth1

# eth1 is the flat/public network interface

3. Enable at boot and start

systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service
systemctl start openstack-nova-network.service openstack-nova-metadata-api.service

Step 3: Create the initial network

controller:

1. Source the admin credentials

source admin-openrc.sh

2. Create the network

nova network-create hk-net --bridge br100 --multi-host T \
  --fixed-range-v4 10.0.0.0/26

# To delete a network:

nova network-delete 39798339-8ff9-4ec6-97da-9a6f260e9734
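Note that `network_size = 254` in the compute-node nova.conf assumes a /24 network, while the fixed range created here is a /26. Quick shell arithmetic shows what a given prefix actually provides, so the two settings can be kept consistent:

```shell
#!/bin/sh
# Address count for a CIDR prefix: 2^(32 - prefix); usable hosts exclude
# the network and broadcast addresses.
cidr_size() {
    prefix=$1
    total=$(( 1 << (32 - prefix) ))
    echo "/$prefix -> $total addresses, $(( total - 2 )) usable"
}

cidr_size 26    # the --fixed-range-v4 used above
cidr_size 24    # what network_size = 254 assumes
```

This prints "/26 -> 64 addresses, 62 usable" and "/24 -> 256 addresses, 254 usable".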

3. Verify the network

nova net-list

At this point you can launch VM instances.


Part 7: Dashboard (Web UI)

controller:

1. Install the packages

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached -y

2. Edit /etc/openstack-dashboard/local_settings

# Identity service host

OPENSTACK_HOST = "controller"

# Allow access from all hosts

ALLOWED_HOSTS = ['*']

# Use memcached for session storage

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

# Default role assigned to users through the dashboard

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Time zone (defaults to UTC)

TIME_ZONE = "Asia/Shanghai"

3. Allow httpd to connect to the OpenStack services

setsebool -P httpd_can_network_connect on

4. Fix ownership of the static files

chown -R apache:apache /usr/share/openstack-dashboard/static

5. Enable at boot and start

systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service

6. Verify the service

http://192.168.211.128/dashboard

Log in with one of the accounts created earlier:

admin / admin_pwd

hk / hk_pwd



Part 8: Block Storage Service

Components:

cinder-api

cinder-volume

cinder-scheduler

Messaging queue

Step 1: Install and configure the controller

1. Create the database

mysql -uroot -phkdb

CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder_pwd';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_pwd';

FLUSH PRIVILEGES;

exit

2. Source the admin credentials

source admin-openrc.sh

3. Create the service credentials

a. Create the cinder user

openstack user create --password-prompt cinder

Enter the password: cinder_pwd

b. Add the admin role

openstack role add --project service --user cinder admin

c. Create the service entities

openstack service create --name cinder --description "OpenStack Block Storage" volume

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

d. Create the API endpoints

openstack endpoint create \
  --publicurl http://controller:8776/v2/%\(tenant_id\)s \
  --internalurl http://controller:8776/v2/%\(tenant_id\)s \
  --adminurl http://controller:8776/v2/%\(tenant_id\)s \
  --region RegionOne \
  volume

openstack endpoint create \
  --publicurl http://controller:8776/v2/%\(tenant_id\)s \
  --internalurl http://controller:8776/v2/%\(tenant_id\)s \
  --adminurl http://controller:8776/v2/%\(tenant_id\)s \
  --region RegionOne \
  volumev2

4. Install the packages

yum install openstack-cinder python-cinderclient python-oslo-db -y

5. Copy the sample configuration file

cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf

chown -R cinder:cinder /etc/cinder/cinder.conf

6. Edit /etc/cinder/cinder.conf

[database]

connection = mysql://cinder:cinder_pwd@controller/cinder

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = rabbit_pwd

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder_pwd

[DEFAULT]

my_ip=192.168.211.128

verbose = True

[oslo_concurrency]

lock_path = /var/lock/cinder

7. Populate the database

su -s /bin/sh -c "cinder-manage db sync" cinder

8. Verify the database

mysql -ucinder -pcinder_pwd

use cinder

show tables;

9. Enable at boot and start

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service


Step 2: Install and configure the Block Storage node

IP: 192.168.211.130

New dedicated disk: /dev/sdb (the commands below use the whole disk)

hostname: block1

Configure /etc/hosts on every node and make sure the nodes can ping each other.

1. Install the NTP service

yum -y install ntp

2. Install QEMU (image format support)

yum install qemu -y

3. Install the LVM packages

yum install lvm2 -y

4. Enable at boot and start the LVM metadata service

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

5. Create the LVM physical volume

pvcreate /dev/sdb

6. Create the cinder-volumes volume group

vgcreate cinder-volumes /dev/sdb

7. Edit /etc/lvm/lvm.conf so that only /dev/sdb is scanned

filter = [ "a/sdb/", "r/.*/" ]

8. Install the packages

yum install openstack-cinder targetcli python-oslo-db python-oslo-log MySQL-python -y

9. Edit /etc/cinder/cinder.conf

[database]

connection = mysql://cinder:cinder_pwd@controller/cinder

[DEFAULT]

rpc_backend = rabbit

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = rabbit_pwd

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder_pwd

[DEFAULT]

my_ip = 192.168.211.130

[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

iscsi_protocol = iscsi

iscsi_helper = lioadm

[DEFAULT]

enabled_backends = lvm

glance_host = controller

verbose = True

[oslo_concurrency]

lock_path = /var/lock/cinder

10. Enable at boot and start

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

11. Verify the service

On controller:

a. Add the volume API version to the client environment scripts

echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh hk-openrc.sh

b. Source the admin credentials

source admin-openrc.sh

c. List the service components

cinder service-list

d. Source the non-admin credentials

source hk-openrc.sh

e. Create a 1 GB volume

cinder create --display-name hk-volume1 1

f. List available volumes

cinder list


Part 9: Object Storage Service

Components:

Proxy servers (swift-proxy-server)

Account servers (swift-account-server)

Container servers (swift-container-server)

Object servers (swift-object-server)

Various periodic processes

WSGI middleware

Step 1: Configure the controller

1. Create the swift user

openstack user create --password-prompt swift

Enter the password: swift_pwd

2. Add the admin role

openstack role add --project service --user swift admin

3. Create the service entity

openstack service create --name swift --description "OpenStack Object Storage" object-store

4. Create the endpoint

openstack endpoint create \
  --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl http://controller:8080 \
  --region RegionOne \
  object-store

5. Install the packages

yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token \
  python-keystonemiddleware memcached -y

6. Download the proxy configuration template

curl -o /etc/swift/proxy-server.conf \
  https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/kilo

7. Edit /etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server

[app:proxy-server]
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = swift

password = swift_pwd

delay_auth_decision = true

[filter:cache]
memcache_servers = 127.0.0.1:11211

Step 2: Configure the storage nodes

1. Each storage node has two new disks (/dev/sdb and /dev/sdc).

2. Make sure the nodes can ping each other by hostname; /etc/hosts on all nodes:

192.168.211.128 controller

192.168.211.135 compute1

192.168.211.130 block1

192.168.211.138 storage1

192.168.211.141 storage2

3. Install and configure the NTP service

yum -y install ntp

4. Install the utility packages

yum install xfsprogs rsync -y

5. Format the disks

mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc

6. Create the mount points

mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc

7. Edit /etc/fstab

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

8. Mount the disks

mount /srv/node/sdb
mount /srv/node/sdc
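Steps 6 through 8 repeat identically for every data disk on every storage node, so they can be generated. A sketch that stages the mount points and prints the fstab lines; ROOT defaults to the current directory for a dry run, and setting ROOT to empty on a real node writes under /srv:

```shell
#!/bin/sh
# Create mount points and print /etc/fstab entries for each data disk.
ROOT=${ROOT-.}
DEVICES="sdb sdc"

gen_fstab() {
    for d in $DEVICES; do
        mkdir -p "$ROOT/srv/node/$d"
        printf '/dev/%s /srv/node/%s xfs noatime,nodiratime,nobarrier,logbufs=8 0 2\n' "$d" "$d"
    done
}

gen_fstab    # append this output to /etc/fstab, then mount /srv/node/sdb etc.
```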

9. Configure /etc/rsyncd.conf

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

address = 192.168.211.141 (this storage node's own IP address)

[account]

max connections = 2

path = /srv/node/

readonly = false

lock file = /var/lock/account.lock

[container]

max connections = 2

path = /srv/node/

readonly = false

lock file = /var/lock/container.lock

[object]

max connections = 2

path = /srv/node/

readonly = false

lock file = /var/lock/object.lock


10. Enable at boot and start the rsync service

systemctl enable rsyncd.service
systemctl start rsyncd.service

11. Install the packages

yum install openstack-swift-account openstack-swift-container \ openstack-swift-object -y

12. Fetch the configuration templates

curl -o /etc/swift/account-server.conf \ https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/kilo

curl -o /etc/swift/container-server.conf \ https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/kilo

curl -o /etc/swift/object-server.conf \ https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/kilo

curl -o /etc/swift/container-reconciler.conf \ https://git.openstack.org/cgit/openstack/swift/plain/etc/container-reconciler.conf-sample?h=stable/kilo

curl -o /etc/swift/object-expirer.conf \ https://git.openstack.org/cgit/openstack/swift/plain/etc/object-expirer.conf-sample?h=stable/kilo

13. Edit /etc/swift/account-server.conf on each storage node

[DEFAULT]

bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (this storage node's IP)

bind_port = 6002

user = swift

swift_dir = /etc/swift

devices = /srv/node

[pipeline:main]

pipeline = healthcheck recon account-server

[filter:recon]

recon_cache_path = /var/cache/swift

14. Edit /etc/swift/container-server.conf on each storage node

[DEFAULT]

bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (this storage node's IP)

bind_port = 6001

user = swift

swift_dir = /etc/swift

devices = /srv/node

[pipeline:main]

pipeline = healthcheck recon container-server

[filter:recon]

recon_cache_path = /var/cache/swift

15. Edit /etc/swift/object-server.conf on each storage node

[DEFAULT]

bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (this storage node's IP)

bind_port = 6000

user = swift

swift_dir = /etc/swift

devices = /srv/node

[pipeline:main]

pipeline = healthcheck recon object-server

[filter:recon]

recon_cache_path = /var/cache/swift

16. Set ownership of the data directories

chown -R swift:swift /srv/node

17. Create the recon cache directory and set ownership

mkdir -p /var/cache/swift
chown -R swift:swift /var/cache/swift


Step 3: Create the initial rings

On the controller node:

a. Account ring

1. Change to the /etc/swift directory

2. Create the base account.builder file

swift-ring-builder account.builder create 10 3 1
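The three arguments to `create` are the partition power, the replica count, and min_part_hours (the minimum hours between moving a partition twice). The partition power fixes the partition count for the cluster's lifetime, so it is worth checking the number before building:

```shell
#!/bin/sh
# Partition count implied by a Swift ring partition-power value: 2^part_power.
partitions() {
    echo $(( 1 << $1 ))
}

echo "part_power 10 -> $(partitions 10) partitions, 3 replicas, min 1 h between partition moves"
```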

3. Add each storage device to the ring

swift-ring-builder account.builder add r1z1-192.168.211.138:6002/sdb 100

swift-ring-builder account.builder add r1z1-192.168.211.138:6002/sdc 100

swift-ring-builder account.builder add r1z1-192.168.211.141:6002/sdb 100

swift-ring-builder account.builder add r1z1-192.168.211.141:6002/sdc 100

4. Verify the ring contents

swift-ring-builder account.builder (should list four devices)

5. Rebalance the ring

swift-ring-builder account.builder rebalance

b. Container ring

cd /etc/swift

1. Create the container.builder file

swift-ring-builder container.builder create 10 3 1

2. Add each storage device to the ring

swift-ring-builder container.builder add r1z1-192.168.211.138:6001/sdb 100

swift-ring-builder container.builder add r1z1-192.168.211.138:6001/sdc 100

swift-ring-builder container.builder add r1z1-192.168.211.141:6001/sdb 100

swift-ring-builder container.builder add r1z1-192.168.211.141:6001/sdc 100

3. Verify

swift-ring-builder container.builder

4. Rebalance

swift-ring-builder container.builder rebalance

c. Object ring

1. Change to the /etc/swift directory

2. Create the object.builder file

swift-ring-builder object.builder create 10 3 1

3. Add each storage device to the ring

swift-ring-builder object.builder add r1z1-192.168.211.138:6000/sdb 100

swift-ring-builder object.builder add r1z1-192.168.211.138:6000/sdc 100

swift-ring-builder object.builder add r1z1-192.168.211.141:6000/sdb 100

swift-ring-builder object.builder add r1z1-192.168.211.141:6000/sdc 100

4. Verify

swift-ring-builder object.builder

5. Rebalance

swift-ring-builder object.builder rebalance

6. Distribute the generated ring files to the /etc/swift directory on every storage node:

account.ring.gz container.ring.gz object.ring.gz
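The twelve `add` commands above follow a single pattern (three rings, two storage nodes, two devices each), so they can be generated rather than typed. A dry-run sketch that prints the commands; run it inside /etc/swift and pipe to `sh` to execute:

```shell
#!/bin/sh
# Print the swift-ring-builder add command for every ring/node/device combo
# used in this guide.
gen_ring_adds() {
    for ring_port in account:6002 container:6001 object:6000; do
        ring=${ring_port%:*}
        port=${ring_port#*:}
        for ip in 192.168.211.138 192.168.211.141; do
            for dev in sdb sdc; do
                echo "swift-ring-builder $ring.builder add r1z1-$ip:$port/$dev 100"
            done
        done
    done
}

gen_ring_adds    # e.g. cd /etc/swift && gen_ring_adds | sh
```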


Step 4: Final configuration

On the controller node:

1. Fetch the configuration template

curl -o /etc/swift/swift.conf \ https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/kilo

2. Edit /etc/swift/swift.conf

[swift-hash]

swift_hash_path_suffix = hk

swift_hash_path_prefix = hk

[storage-policy:0]

name = Policy-0

default = yes

3. Copy the file to every storage node and to every node running the proxy service (here, the controller):

scp /etc/swift/swift.conf root@192.168.211.128:/etc/swift/swift.conf

4. On all nodes, fix ownership

chown -R swift:swift /etc/swift

5. On the controller, enable at boot and start the proxy (openstack-swift-proxy and memcached run on the controller):

systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service

6. On each storage node, enable at boot and start the services:

systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
  openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
  openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
  openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
  openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
  openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
  openstack-swift-object-replicator.service openstack-swift-object-updater.service

# After any configuration change, run systemctl restart with the same service lists.


Step 5: Verify the service

controller:

1. Source the non-admin credentials

source hk-openrc.sh

2. Check the swift status (stop the firewall first if it blocks the storage ports)

systemctl disable firewalld
systemctl stop firewalld

swift -V 3 stat

3. Upload a test file

swift upload hk-container1 a.txt




