**1. Error when adding a monitor**
~~~
[wlwjfx25][DEBUG ] connected to host: WLWJFX32
[wlwjfx25][INFO ] Running command: ssh -CT -o BatchMode=yes wlwjfx25
[wlwjfx25][DEBUG ] connection detected need for sudo
sudo: sorry, you must have a tty to run sudo
[ceph_deploy][ERROR ] RuntimeError: connecting to host: wlwjfx25 resulted in errors: IOError cannot send (already closed?)
~~~
**Solution:**
When deployment scripts are run under a non-root account, sudo often fails with `sudo: sorry, you must have a tty to run sudo`. Adjusting the sudo configuration fixes it:
~~~
vi /etc/sudoers   (preferably via the visudo command)
Comment out the "Defaults requiretty" line:
#Defaults requiretty
~~~
This directive makes sudo require a tty by default; commenting it out allows sudo to run from non-interactive (background) sessions.
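A minimal sketch of the same edit done non-interactively, demonstrated on a scratch copy (never modify /etc/sudoers without validating it afterwards, e.g. with `visudo -c`):

```shell
# Demonstrate the edit on a scratch file, not the real /etc/sudoers:
printf 'Defaults    requiretty\n' > /tmp/sudoers.demo
# Prefix the matching line with '#' (& re-inserts the whole match):
sed -i 's/^Defaults[[:space:]]\{1,\}requiretty/#&/' /tmp/sudoers.demo
cat /tmp/sudoers.demo
```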
**2. Error from `ceph-deploy mon create-initial`**
~~~
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
~~~
Add the following to the configuration file (ceph.conf):
~~~
[osd]
osd max object name len = 256        # required here, otherwise mon creation fails
osd max object namespace len = 64    # same as above
rbd default features = 1
~~~
**3. Cluster status is HEALTH_WARN**
~~~
[root@WLWJFX62 ~]# ceph -s
cluster e062ce71-bfb3-4895-8373-6203de2fa793
health HEALTH_WARN
too few PGs per OSD (10 < min 30)
monmap e1: 3 mons at {WLWJFX23=10.255.213.133:6789/0,WLWJFX24=10.255.213.134:6789/0,WLWJFX25=10.255.213.135:6789/0}
election epoch 10, quorum 0,1,2 WLWJFX23,WLWJFX24,WLWJFX25
mdsmap e7: 1/1/1 up {0=WLWJFX34=up:active}
osdmap e611: 145 osds: 145 up, 145 in
pgmap v1283: 512 pgs, 3 pools, 11667 bytes data, 20 objects
742 GB used, 744 TB / 785 TB avail
512 active+clean
~~~
Running `ceph health` shows the specific warning:
~~~
[root@WLWJFX62 ~]# ceph health
HEALTH_WARN too few PGs per OSD (10 < min 30)
~~~
The warning means pg_num and pgp_num need to be increased.
1. List the existing pools:
~~~
[root@WLWJFX23 ceph]# ceph osd pool stats
pool rbd id 0
nothing is going on
pool fs_data id 3
nothing is going on
pool fs_metadata id 4
nothing is going on
~~~
2. Get the current pg_num and pgp_num of each pool:
~~~
ceph osd pool get fs_data pg_num
ceph osd pool get fs_data pgp_num
ceph osd pool get fs_metadata pg_num
ceph osd pool get fs_metadata pgp_num
~~~
3. Increase pg_num and pgp_num for each pool:
~~~
ceph osd pool set fs_data pg_num 512
ceph osd pool set fs_data pgp_num 512
ceph osd pool set fs_metadata pg_num 512
ceph osd pool set fs_metadata pgp_num 512
~~~
Check again with `ceph -s`:
~~~
[root@WLWJFX23 ceph]# ceph -s
cluster e062ce71-bfb3-4895-8373-6203de2fa793
health HEALTH_WARN
too few PGs per OSD (26 < min 30)
monmap e1: 3 mons at {WLWJFX23=10.255.213.133:6789/0,WLWJFX24=10.255.213.134:6789/0,WLWJFX25=10.255.213.135:6789/0}
election epoch 10, quorum 0,1,2 WLWJFX23,WLWJFX24,WLWJFX25
mdsmap e7: 1/1/1 up {0=WLWJFX34=up:active}
osdmap e627: 145 osds: 145 up, 145 in
pgmap v1352: 1280 pgs, 3 pools, 11667 bytes data, 20 objects
742 GB used, 744 TB / 785 TB avail
1280 active+clean
~~~
If the `too few PGs per OSD (26 < min 30)` warning persists, pg_num and pgp_num need to be raised further; the chosen value should ideally be an **integer power of two**.
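As a sanity check, a commonly cited rule of thumb from the Ceph documentation of this era is roughly 100 PGs per OSD divided by the pool replica count, rounded up to the next power of two. A minimal sketch for this cluster (145 OSDs; replica size 3 is an assumption inferred from 1280 PGs × 3 / 145 ≈ 26 in the warning above):

```shell
# Rough PG target: (OSDs * 100) / replica_size, rounded up to a power of two.
# 100 PGs/OSD is a rule-of-thumb target, not a hard requirement.
osds=145
replicas=3
target=$(( osds * 100 / replicas ))   # 4833
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"
```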
4. Note that pg_num can only be increased, never decreased:
~~~
[root@mon1 ~]# ceph osd pool set rbd pg_num 64
Error EEXIST: specified pg_num 64 <= current 128
~~~
**4. Error when creating OSDs**
~~~
[ceph_deploy][ERROR ] RuntimeError: bootstrap-osd keyring not found; run 'gatherkeys'
~~~
Log in to the admin node and gather the keyrings:
~~~
ceph-deploy gatherkeys WLWJFX{64..72}
~~~
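The `{64..72}` range relies on shell brace expansion, so ceph-deploy receives one hostname per number; a quick check in bash:

```shell
# Bash expands the brace range before ceph-deploy ever sees it:
echo WLWJFX{64..72}
```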
Things to check:
1. Ceph 10.x (Jewel) expects the glib library version shipped with the CentOS 7 (1611) release; older versions produce incompatibility errors.
2. Check that the clocks on all hosts are in sync.
3. A yum repository failure:
~~~
[root@xhw342 ~]# yum -y install yum-plugin-priorities
Loaded plugins: fastestmirror
CentOS7_1611-media | 3.6 kB 00:00:00
ZStack | 3.6 kB 00:00:00
ceph-jewel | 2.9 kB 00:00:00
ceph-jewel_deprpm | 2.9 kB 00:00:00
ceph-jewel_noarch | 2.9 kB 00:00:00
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable <repoid>
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again
~~~
Solution: delete epel.repo and epel-testing.repo from /etc/yum.repos.d.
4. DNS failure while importing the Ceph release key:
~~~
[xhw342][DEBUG ] Configure Yum priorities to include obsoletes
[xhw342][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[xhw342][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[xhw342][WARNIN] curl: (6) Could not resolve host: download.ceph.com; Unknown error
[xhw342][WARNIN] error: https://download.ceph.com/keys/release.asc: import read failed(2).
[xhw342][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm --import https://download.ceph.com/keys/release.asc
~~~
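The failure above is plain DNS: the node cannot resolve download.ceph.com. One hedged workaround is a static hosts entry pointing at a reachable mirror, sketched here against a scratch file rather than the real /etc/hosts (203.0.113.10 is a documentation placeholder IP, not a real mirror address):

```shell
# Map the hostname locally; replace the placeholder IP with an actual
# internal mirror of download.ceph.com before using this for real.
echo '203.0.113.10 download.ceph.com' >> /tmp/hosts.demo
grep 'download.ceph.com' /tmp/hosts.demo
```

Fixing the resolver, or pointing ceph-deploy's install step at an internal repository (check `ceph-deploy install --help` for repo/GPG URL options in the installed version), are the cleaner long-term options.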