Official Docker documentation: https://docs.docker.com/engine/install/centos/
The three core elements of Docker: images, containers, registries
Preparation:
1. Uninstall old versions of Docker (optional; decide based on your situation):
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
If the method above fails to uninstall, try this instead.
Query the installed docker packages:
rpm -qa | grep docker
Uninstall a package:
rpm -e <package name>
2. Install yum-utils and configure the Aliyun docker repository:
sudo yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Refresh the yum cache:
yum makecache fast
Installation:
1. Install docker:
sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin
2. List the docker-ce versions available in the repository (optional):
yum list docker-ce --showduplicates | sort -r
3. Start docker:
sudo systemctl start docker
4. Verify the installation by running the command below; if the output contains "hello-world" text, it succeeded:
sudo docker run hello-world
5. To see docker version information, run:
docker version
Uninstalling:
Uninstall Docker:
sudo systemctl stop docker
sudo yum remove docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
Configuration:
Configure the Aliyun image accelerator
The accelerator is enabled by editing the daemon config file /etc/docker/daemon.json.
Replace xxxxxx.mirror.aliyuncs.com with your own accelerator address:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
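To confirm the accelerator took effect, you can inspect the daemon's registry-mirror list; a small sketch, assuming python3 is available for validating the JSON (the "Registry Mirrors" heading matches current docker info output and may differ between versions):
# check the file is valid JSON before restarting
python3 -m json.tool /etc/docker/daemon.json
# after the restart, the accelerator address should be listed
docker info | grep -A 1 "Registry Mirrors"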
Commands:
Service management commands
Start docker: systemctl start docker
Stop docker: systemctl stop docker
Restart docker: systemctl restart docker
Check docker's status: systemctl status docker
Enable start on boot: systemctl enable docker
Show docker summary information: docker info
List the pulled docker images: docker images
Options:
- -a: list all local images, including intermediate image layers
- -q: show only image IDs
Sample output and its columns:
[root@blog-tag-gg ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest feb5d9fea6a5 10 months ago 13.3kB
REPOSITORY: the image's repository
TAG: the image's version
IMAGE ID: the image's ID
CREATED: when the image was created
SIZE: the image's size
Note: if no tag is specified, latest (the newest version) is used by default.
Image commands:
Search for an image:
docker search xxxx
Options:
- --limit: list only N results (the default is 25)
- Example: docker search --limit 5 nginx
Pull an image:
docker pull xxxx (pulls the latest version)
docker pull xxxx:1.6 (pulls a specific version)
Show the space used by images/containers/volumes (similar to the Linux df command):
docker system df
[root@blog-tag-gg ~]# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 2 1 133.3MB 133.2MB (99%)
Containers 2 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
Delete an image: docker rmi <image ID or name>
Force delete: docker rmi -f <image ID or name>
Delete several images at once: docker rmi -f A B C
Delete all images (use with caution): docker rmi -f $(docker images -qa)
(docker images -qa lists all local images, IDs only.)
Dangling images:
An image whose repository name and tag are both <none> is informally called a dangling image.
Remove dangling images (use with caution; take a snapshot first, since deleted data cannot be recovered):
docker rmi $(docker images -q -f dangling=true)
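Before running the delete, it is worth previewing exactly which images the dangling filter matches; a minimal sketch using the same filter:
# list dangling images in full (REPOSITORY and TAG both <none>)
docker images -f dangling=true
# show only the IDs that the rmi command above would receive
docker images -q -f dangling=true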
Container commands:
Start a container from an image, e.g. nginx:
docker run -itdp 80:80 --name nginxtest <image name or image ID>
e.g. docker run -itdp 80:80 --name nginxtest nginx
or docker run -d --name nginx_mirrors -p 80:80 605c77e624dd
If you get the error "docker: Error response from daemon: No command specified", append /bin/bash and run again, e.g.:
docker run -itdp 80:80 --name nginxtest nginx /bin/bash
Note: nginx listens on port 80 by default. If you map 80:88, the container is unreachable from outside, because nginx inside listens on 80, not 88; you would have to enter the container and change the listen port from 80 to 88.
Create and start a container:
Format: docker run [options] <image> [command]
- --name="new container name": assign a name to the container;
- -d: run the container in the background and print the container ID, i.e. a detached daemon-style container;
- -i: run the container in interactive mode, usually together with -t;
- -t: allocate a pseudo-terminal for the container and wait for input;
- -P: map container ports to random host ports (uppercase P);
- -p: map a specific host port (lowercase p).
Start a container interactively: docker run -it <image ID> /bin/bash
Start a container with a custom name: docker run -it --name=bzyyc 0d493297b409 /bin/bash
List running containers:
docker ps
[root@blog-tag-gg ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ec7065fa89f 0d493297b409 "/bin/bash" About a minute ago Up About a minute 80/tcp, 443/tcp gracious_hoover
Options:
- -a: list all containers, both currently running and previously run
- -l: show the most recently created container
- -n: show the n most recently created containers
- -q: quiet mode, show only container IDs
Two ways to exit a container:
- exit: after entering with run, exit stops the container
- Ctrl+P+Q: after entering with run, Ctrl+P+Q detaches without stopping the container
Start a container: docker start <container ID or name>
Restart a container: docker restart <container ID or name>
Stop a container: docker stop <container ID or name>
Force-stop a container: docker kill <container ID or name>
Remove a stopped container: docker rm <container ID or name>
Force-remove a running container (dangerous): docker rm -f <container ID or name>
Remove all containers at once (dangerous)—force-removes every container, including running ones:
docker rm -f $(docker ps -a -q)
docker ps -a -q | xargs docker rm (passing the IDs as arguments)
(docker ps -a -q prints the IDs of all containers.)
Start a detached container (background, -d): docker run -d <image name>
Note: an easily confused point.
For example, after docker run -d centos the container does not appear in docker ps, and docker ps -a shows it as exited. A Docker container running in the background must have a foreground process; if the command it runs is not one that stays in the foreground (such as top or tail), the container exits on its own.
This is how Docker works. Take a web container—nginx, say. Normally you would just start the service, e.g. service nginx start, but that runs nginx as a background daemon, so no application runs in the container's foreground; such a container kills itself right after starting because it decides it has nothing left to do.
So the best solution is to run your program as a foreground process. The usual approach is interactive mode—docker run -it centos—signalling that interaction is still expected; when leaving, detach with Ctrl+P+Q.
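A quick way to see the behaviour described above, sketched with the centos image (tail -f /dev/null is a common stand-in for a long-running foreground process):
# exits almost immediately: no foreground process remains
docker run -d centos
# stays up: tail keeps running in the foreground
docker run -d --name keepalive centos tail -f /dev/null
docker ps   # keepalive shows as Up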
View a container's logs:
docker logs <container ID>
Note (to be verified): in some cases the command above returns nothing. docker logs only records output written to the terminal (stdout or stderr); output written to files cannot be shown.
For example (to be verified): a redis container can be checked this way because redis logs straight to the terminal by default, but an nginx container cannot, because the nginx service logs to files—to read those, open the nginx log files.
Show the last 2 log lines: docker logs --tail 2 <container name>
List the processes running inside a container:
docker top <container ID>
Inspect a container's internals:
docker inspect <container ID>
This prints the container's configuration in JSON format.
Enter a running container with an interactive shell:
docker exec -it <container ID> /bin/bash
docker attach <container ID>
Note the difference between exec and attach:
exec: opens a new terminal in the container and can start new processes; exit leaves without stopping the container.
attach: attaches directly to the terminal of the container's startup command without starting a new process; exit stops the container.
If you don't want the container to stop, use exec.
Copy a file from a container to the host:
Format: docker cp <container ID>:<path inside container> <host path>
Example: docker cp fb869f8045a4:/www.zfcdn.xyz.txt /root/
Copy and rename at the same time: docker cp fb869f8045a4:/www.zfcdn.xyz.txt /root/www.zfcdn.xyz.docker.txt
Exporting and importing images:
export: exports a container's contents as a tar archive (backs the container up)
import: creates a new filesystem from the contents of a tar archive and imports it as an image (restores the backup)
Export:
Format: docker export <container ID> > <file>.tar
Example: docker export fb869f8045a4 > /root/abcd.tar
Import:
Format: cat <file>.tar | docker import - <user>/<image>:<tag>
Example: cat abcd.tar | docker import - www.zfcdn.xyz/nginx:1.8
(In www.zfcdn.xyz/nginx:1.8, i.e. <user>/<image>:<tag>, the parts map to the REPOSITORY and TAG columns.)
After importing, docker images shows the image; then docker run -it 7d0a7a994226 /bin/bash starts a container from it for a look around.
Publishing a local image to Aliyun:
1. Create a namespace. 2. Create an image repository (www.zfcdn.xyz.test here is usually the image name).
Push the image:
Replace the details below with your own.
Log in to the registry:
docker login --username=261****@qq.com registry.cn-shenzhen.aliyuncs.com
Tag the image with a version (pick your own [version]; 1.5.5 is used here):
Template: docker tag [ImageId] registry.cn-shenzhen.aliyuncs.com/blog_tag_gg/www.zfcdn.xyz.test:[version]
Actual command: docker tag 72c3668f916e registry.cn-shenzhen.aliyuncs.com/blog_tag_gg/www.zfcdn.xyz.test:1.5.5
Push the image:
Template: docker push registry.cn-shenzhen.aliyuncs.com/blog_tag_gg/www.zfcdn.xyz.test:[version]
Actual command: docker push registry.cn-shenzhen.aliyuncs.com/blog_tag_gg/www.zfcdn.xyz.test:1.5.5
The push prints its progress when it finishes; afterwards the image is visible in the Aliyun console.
To pull your Aliyun image back to a local machine, log in to the server and run:
docker pull registry.cn-shenzhen.aliyuncs.com/blog_tag_gg/www.zfcdn.xyz.test:1.5.5
Setting up a local private registry
Install the registry image:
docker pull registry
Run the private registry:
docker run -d -p 5000:5000 -v /usr/local/registry:/var/lib/registry registry
Use curl to check which images the private registry holds:
curl -XGET 192.168.0.222:5000/v2/_catalog
If it returns {"repositories":[]}, the registry is currently empty.
Re-tag the mysql image to match the private-registry naming convention.
The current images:
[root@blog-tag-gg ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
www.zfcdn.xyz_nginx/nginx 1.5.5 efba7e9d4652 4 hours ago 199MB
registry.cn-shenzhen.aliyuncs.com/blog_tag_gg/www.zfcdn.xyz.test 1.5.5 efba7e9d4652 4 hours ago 199MB
nginx latest 605c77e624dd 9 months ago 141MB
mysql latest 3218b38490ce 10 months ago 516MB
registry latest b8604a3fe854 11 months ago 26.2MB
Run the following to re-tag mysql:latest as 192.168.0.222:5000/mysql_test:9.0:
docker tag mysql:latest 192.168.0.222:5000/mysql_test:9.0
Edit the config file to allow plain HTTP
Edit /etc/docker/daemon.json and add the line below. Note that the file is JSON: the comma before "insecure-registries" must not be omitted, otherwise Docker reports an error.
,"insecure-registries":["192.168.0.222:5000"]
The complete file looks like this:
{
"registry-mirrors": ["https://v8gxxxxx.mirror.aliyuncs.com"],
"insecure-registries":["192.168.0.222:5000"]
}
Push to the private registry:
docker push 192.168.0.222:5000/mysql_test:9.0
After the push, query the registry again to see the image:
curl -XGET 192.168.0.222:5000/v2/_catalog
Pull the image from the private registry:
docker pull 192.168.0.222:5000/mysql_test:9.0
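Note that the daemon must be restarted for the insecure-registries change to apply; afterwards you can confirm it was picked up. A sketch (the "Insecure Registries" heading matches current docker info output):
sudo systemctl daemon-reload
sudo systemctl restart docker
# 192.168.0.222:5000 should appear under "Insecure Registries"
docker info | grep -A 2 "Insecure Registries"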
Docker container volumes
Note (pitfall): if mounting a host directory fails with "cannot open directory /xx/: Permission denied",
the cause is insufficient privileges; the fix is to add the --privileged=true flag after the mount, and to disable SELinux.
Complete examples:
docker run -d -p 80:80 -v /blog-tag-gg/miir:/rongqi/miir --privileged=true blogtaggg
or docker run -i -t -v /soft:/soft --privileged=true 686672a1d0cc /bin/bash
Adding a container volume for data persistence:
What volumes do: they save a container's data to the host disk, giving persistent and safer storage.
Format:
docker run -it --privileged=true -v /host/absolute/path:/path/in/container --name=<name> <image> /bin/bash
The mount is read-write (rw) by default, which is equivalent to:
docker run -it --privileged=true -v /host/absolute/path:/path/in/container:rw --name=<name> <image> /bin/bash
Example:
docker run -p 80:80 -it --privileged=true -v /www.zfcdn.xyz/rong_nginx_data:/www.zfcdn.xyz/data --name=nginx_test nginx /bin/bash
-v /host/absolute/path:/path/in/container can be given multiple times to bind several directories at once—for example the log, config, and data directories.
Features:
- Volumes can share or reuse data between containers
- Changes in a volume take effect directly and in real time
- Changes in a volume are not included when the image is updated
- A volume's lifetime lasts until no container uses it anymore
docker inspect c3e6a7695923
The output contains a section like this:
"Type": "bind",
"Source": "/www.zfcdn.xyz/rong_nginx_data",
"Destination": "/www.zfcdn.xyz/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
Source: the path on the host
Destination: the path inside the container
- Changes made in the container sync to the host
- Changes made on the host sync into the container
- If the container is stopped and files on the host are modified or added, the data syncs once the container starts again.
Restricting the container to read-only access
docker run -it --privileged=true -v /host/absolute/path:/path/in/container:ro --name=<name> <image>
Files created on the host still sync into the container, but creating or modifying files inside the container fails, because the volume is mounted read-only:
root@6b6254fd024b:/gg/data# echo "test" >www.zfcdn.xyz.txt
bash: www.zfcdn.xyz.txt: Read-only file system
1、容器1完整与主机的映射
2、容器2集成容器1映射的目录docker run -it --privileged=true -v /宿主机绝对路径目录:/容器内目录 --name=U1 镜像名
范例:docker run -it --privileged=true --volumes-from 父类 --name=U2 镜像名 /bin/bash
说明:docker run -it --privileged=true --volumes-from u1 --name=U2 nginx /bin/bash
父类:表示U1,也就是--name=U1
设置后表示容器2映射的目录与容器1一样,在宿主机容器1或者容器2中读写及修改都会相互生效,即便将容器1停止,容器2的读写也会与宿主同步,将容器1启用后之前容器2的读写内容也会同步。
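A minimal sequence to verify the shared volume, assuming the U1/U2 containers above are running and /path/in/container is the mapped directory (the file names are purely illustrative):
# write a file from U1
docker exec U1 sh -c 'echo hello > /path/in/container/from_u1.txt'
# read it from U2 through the inherited volume
docker exec U2 cat /path/in/container/from_u1.txt
# stop U1; U2 keeps syncing with the host
docker stop U1
docker exec U2 sh -c 'echo bye > /path/in/container/from_u2.txt'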
Installing and deploying common software in Docker containers:
The overall steps: search for the image (docker search <image>), pull it (docker pull <image>), inspect it (docker images), start it with port mapping (docker run -itd -p 80:80 tomcat), stop the container (docker stop tomcat), remove the container (docker rm tomcat)
Installing Tomcat:
1. Search the available tomcat versions:
docker search tomcat
2. Pull the tomcat image locally (no tag means the latest version):
docker pull tomcat
3. Start tomcat in the background:
docker run -itd -p 80:8080 --name=tomcat_test tomcat /bin/bash
Notes (pitfalls):
1. After installation, port 80 is listened on by docker but unreachable, because the tomcat service inside the container hasn't started. Enter the container, go to /usr/local/tomcat/bin, and run the startup.sh script.
2. The site then returns 404 instead of the default Tomcat page, because from Tomcat 10 onward the /usr/local/tomcat/webapps directory ships empty, hence the 404. The default pages live in webapps.dist; rename it to webapps and they show up:
mv webapps ./webapps_bak
mv webapps.dist ./webapps
3. The Tomcat web root defaults to /usr/local/tomcat/webapps/ROOT
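The webapps fix can also be applied without an interactive shell; a sketch assuming the tomcat_test container from step 3:
docker exec tomcat_test sh -c 'cd /usr/local/tomcat && mv webapps webapps_bak && mv webapps.dist webapps'
# then start tomcat as described in pitfall 1 if it is not already running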
Simple mysql install (no persistent storage; for testing only):
1. Pull mysql 5.7:
docker pull mysql:5.7
[root@blog-tag-gg ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql 5.7 c20987f18b13 10 months ago 448MB
2. Create the mysql container and set the database root password:
docker run -p 3306:3306 --name=myslq_test -e MYSQL_ROOT_PASSWORD=blogtagggpw -d mysql:5.7
3. Enter the mysql container:
docker exec -it cbc87ba3f507 /bin/bash
4. Log in to the database; enter the root password at the prompt. If you can get in, everything is normal:
mysql -uroot -p
Create a database and a table with columns, and insert values:
Note 1: by default mysql allows remote root login, which is a security risk; if nothing external needs to reach mysql, it is best to disable remote root access.
mysql> create database db01;
Query OK, 1 row affected (0.00 sec)
mysql> use db01;
Database changed
mysql> create table t1(id int,name varchar(30));
Query OK, 0 rows affected (0.01 sec)
mysql> insert into t1 values(1,'zhangsan');
Query OK, 1 row affected (0.02 sec)
mysql> select * from t1;
+------+----------+
| id | name |
+------+----------+
| 1 | zhangsan |
+------+----------+
1 row in set (0.00 sec)
Note 2 (a major pitfall):
By default, inserting Chinese data fails.
For example, this insert:
insert into t1 values(2,'301免备案跳转');
fails with:
[SQL]insert into t1 values(2,'301免备案跳转');
[Err] 1366 - Incorrect string value: '\xE6\x8A\x80\xE6\x9C\xAF...' for column 'name' at row 1
Cause: the database's character encoding; it needs to be adjusted.
A further caution: check the encoding from inside the container. If you use a third-party tool such as Navicat, the tool silently converts to utf-8 for display, so the database's actual encoding appears changed when it isn't.
Check the encoding:
mysql> SHOW VARIABLES LIKE 'character%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | latin1 |
| character_set_connection | latin1 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | latin1 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)
(A third-party tool would therefore show different, converted values.) The fix is described below.
Running the mysql container with persistent storage (container volumes)
docker run -d -p 3306:3306 --privileged=true -v /blogtaggg/mysql/log:/var/log/mysql -v /blogtaggg/mysql/data:/var/lib/mysql -v /blogtaggg/mysql/conf:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=blogtagggpw --name=mysql_test mysql:5.7
The database data path and config file path inside the container can be found in /etc/mysql/mysql.cnf or in mysqld.cnf under /etc/mysql/mysql.conf.d; they vary slightly between versions, so look them up and fill in the paths for your setup.
Fixing the encoding problem:
Enter the /blogtaggg/mysql/conf directory, create a my.cnf file with the content below, then restart the mysql container. After the restart, inserting Chinese works.
[mysqld]
character_set_server=utf8
collation_server=utf8_general_ci
[mysql]
default-character-set = utf8
[mysql.server]
default-character-set = utf8
[mysqld_safe]
default-character-set = utf8
[client]
default-character-set = utf8
After the restart:
mysql> select * from t1;
+------+-----------------+
| id | name |
+------+-----------------+
| 1 | lisi |
| 2 | 301免备案跳转 |
+------+-----------------+
2 rows in set (0.00 sec)
Conclusion: verified—once persistent storage (a container volume) is set up, data is synced to the host directory; even if the mysql container is deleted, the database contents are still there after a new container is started. The encoding now:
mysql> SHOW VARIABLES LIKE 'character%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_database | utf8 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | utf8 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)
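A short way to test that conclusion, reusing the run command and volume paths from above (db01.t1 is the table created earlier):
# remove the container; the host keeps /blogtaggg/mysql/data
docker rm -f mysql_test
# start a fresh container on the same volumes
docker run -d -p 3306:3306 --privileged=true -v /blogtaggg/mysql/log:/var/log/mysql -v /blogtaggg/mysql/data:/var/lib/mysql -v /blogtaggg/mysql/conf:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=blogtagggpw --name=mysql_test mysql:5.7
# the old rows should still be there
docker exec -it mysql_test mysql -uroot -p -e 'select * from db01.t1;'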
Changing a container's package sources to the Aliyun mirrors.
Check which OS the container runs:
Different systems use different files, so try both:
cat /etc/issue
cat /etc/redhat-release
1. Debian (verified)
Some containers have no wget, vi, or mv installed, so the only way to change the sources is with echo:
cp /etc/apt/sources.list /etc/apt/sources.list.bak
echo "" > /etc/apt/sources.list
echo "deb http://mirrors.aliyun.com/debian buster main" >> /etc/apt/sources.list ;
echo "deb http://mirrors.aliyun.com/debian-security buster/updates main" >> /etc/apt/sources.list ;
echo "deb http://mirrors.aliyun.com/debian buster-updates main" >> /etc/apt/sources.list ;
Other distributions haven't been tested yet.
Simple redis install (no container volume / persistent storage):
1. Pull the redis image, version 6:
docker pull redis:6
2. Create a redis container from the image:
docker run -itd -p 6379:6379 redis:6
3. Enter the redis container:
docker exec -it 53c51c649bae /bin/bash
4. Check that redis works:
redis-cli
root@53c51c649bae:/data# redis-cli
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> get k1
"v1"
Full redis install (with a container volume for persistence)
Start the redis container with persistent storage:
docker run -p 6379:6379 --name=redis_test --privileged=true -v /app/redis/redis.conf:/etc/redis/redis.conf -v /app/redis/data:/data -d redis:6 redis-server /etc/redis/redis.conf
Note: the daemonize option in redis.conf must be no, otherwise it conflicts with docker's -d flag.
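A minimal /app/redis/redis.conf consistent with the command above might look like this (a sketch of standard redis directives; requirepass and its value are optional placeholders):
bind 0.0.0.0
port 6379
daemonize no        # must stay no, or it conflicts with docker run -d
appendonly yes      # AOF persistence, written under /data in the container
# requirepass yourpassword   # recommended when 6379 is exposed publicly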
Advanced Docker:
MySQL master/slave setup in containers:
1. Create the master container instance, on port 3307:
docker run -p 3307:3306 --name=mysql-master \
-v /mydata/mysql-master/log:/var/log/mysql \
-v /mydata/mysql-master/data:/var/lib/mysql \
-v /mydata/mysql-master/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=blogtagggpw \
-d mysql:5.7
2. Enter the /mydata/mysql-master/conf directory and create a my.cnf file with the content below.
[mysqld]
## server_id must be unique within the LAN
server_id=101
## database(s) excluded from replication
binlog-ignore-db=mysql
## enable the binary log
log-bin=mall-mysql-bin
## memory used for the binary log (per transaction)
binlog_cache_size=1M
## binary log format (mixed, statement, row)
binlog_format=mixed
## days until binary logs expire; the default 0 means no automatic cleanup
expire_logs_days=7
## skip all or specific errors met during replication so the slave side doesn't stop;
## e.g. error 1062 is a duplicate primary key, 1032 means master and slave data are inconsistent
slave_skip_errors=1062
Restart the mysql-master container:
docker restart mysql-master
After restarting, always check whether it started successfully; if it is up, both the config and the container are fine:
docker ps -a
[root@blog-tag-gg conf]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a1308309a139 mysql:5.7 "docker-entrypoint.s…" 6 minutes ago Up 4 seconds 33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp mysql-master
3. Enter the mysql-master container:
docker exec -it mysql-master /bin/bash
Run the command below, enter the database root password, and confirm you can get into the database:
mysql -uroot -p
4. Inside the mysql-master instance, create the user for data synchronization:
Create the slave user, set its password, and allow connections from any host (%):
CREATE USER 'slave'@'%' IDENTIFIED BY 'blogtagggpw';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
Result:
mysql> CREATE USER 'slave'@'%' IDENTIFIED BY 'blogtagggpw';
Query OK, 0 rows affected (0.01 sec)
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql>
5. Create the slave container instance, on port 3308:
docker run -p 3308:3306 --name mysql-slave \
-v /mydata/mysql-slave/log:/var/log/mysql \
-v /mydata/mysql-slave/data:/var/lib/mysql \
-v /mydata/mysql-slave/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=blogtagggpw \
-d mysql:5.7
Verify:
[root@blog-tag-gg conf]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
738d0b3cc403 mysql:5.7 "docker-entrypoint.s…" 25 seconds ago Up 23 seconds 33060/tcp, 0.0.0.0:3308->3306/tcp, :::3308->3306/tcp mysql-slave
a1308309a139 mysql:5.7 "docker-entrypoint.s…" 17 minutes ago Up 11 minutes 33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp mysql-master
6. Enter the /mydata/mysql-slave/conf directory and create a my.cnf file with the content below.
[mysqld]
## server_id must be unique within the LAN
server_id=102
## database(s) excluded from replication
binlog-ignore-db=mysql
## enable the binary log, in case this slave later acts as a master for other instances
log-bin=mall-mysql-slave1-bin
## memory used for the binary log (per transaction)
binlog_cache_size=1M
## binary log format (mixed, statement, row)
binlog_format=mixed
## days until binary logs expire; the default 0 means no automatic cleanup
expire_logs_days=7
## skip all or specific errors met during replication so the slave side doesn't stop;
## e.g. error 1062 is a duplicate primary key, 1032 means master and slave data are inconsistent
slave_skip_errors=1062
## relay_log configures the relay log
relay_log=mall-mysql-relay-bin
## log_slave_updates makes the slave write replicated events into its own binary log
log_slave_updates=1
## make the slave read-only (except for users with the SUPER privilege)
read_only=1
7. Restart the slave container:
docker restart mysql-slave
Then run the command below to check whether it started successfully (Up means it did):
docker ps -a
8. Check the replication status on the master.
Log in to the master container's database and run:
show master status;
The result below means the setup succeeded:
mysql> show master status;
+-----------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-----------------------+----------+--------------+------------------+-------------------+
| mall-mysql-bin.000001 | 617 | | mysql | |
+-----------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
9. Enter the slave container mysql-slave:
docker exec -it mysql-slave /bin/bash
10. Configure master/slave replication on the slave:
Log in to the database with the root password and run the command below (each option is explained afterwards):
change master to master_host='192.168.16.4',master_user='slave',master_password='blogtagggpw',master_port=3307,master_log_file='mall-mysql-bin.000001',master_log_pos=617,master_connect_retry=30;
master_host: the master database's IP address; check it with ifconfig.
master_port: the port the master database runs on;
master_user: the account created on the master for synchronizing data;
master_password: the password of that account;
master_log_file: the log file to replicate from; take the File value from the master's status;
master_log_pos: the position to start replicating from; take the Position value from the master's status;
master_connect_retry: the retry interval after a failed connection, in seconds.
11. Check the replication status on the slave:
show slave status \G;
The output below means synchronization has not started yet:
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.16.4
Master_User: slave
Master_Port: 3307
Connect_Retry: 30
Master_Log_File: mall-mysql-bin.000001
Read_Master_Log_Pos: 617
Relay_Log_File: mall-mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mall-mysql-bin.000001
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 617
Relay_Log_Space: 154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State:
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
Note the two key fields:
Slave_IO_Running: No
Slave_SQL_Running: No
12. Start replication on the slave:
mysql> start slave;
Query OK, 0 rows affected (0.38 sec)
13. Check the slave's status again; the two fields have now changed to Yes, which means replication is working:
show slave status \G;
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.16.4
Master_User: slave
Master_Port: 3307
Connect_Retry: 30
Master_Log_File: mall-mysql-bin.000003
Read_Master_Log_Pos: 154
Relay_Log_File: mall-mysql-relay-bin.000004
Relay_Log_Pos: 377
Relay_Master_Log_File: mall-mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 154
Relay_Log_Space: 812
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 101
Master_UUID: 5a4efa74-552b-11ed-a3c5-0242ac110002
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.13 sec)
ERROR:
No query specified
14. Test the replication:
Create a database and a table on the master and insert a row; then log in to the slave to see whether blogtaggg_db can be opened directly and the data queried.
mysql> create database blogtaggg_db;
Query OK, 1 row affected (0.11 sec)
mysql> use blogtaggg_db;
Database changed
mysql> create table t1(id int,name varchar(20));
Query OK, 0 rows affected (0.09 sec)
mysql> insert into t1 values(1,'www.zfcdn.xyz');
Query OK, 1 row affected (0.02 sec)
mysql> select * from t1;
+------+-------------+
| id | name |
+------+-------------+
| 1 | www.zfcdn.xyz |
+------+-------------+
1 row in set (0.00 sec)
The data has been replicated to the slave—mission accomplished:
mysql> use blogtaggg_db;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from t1;
+------+-------------+
| id | name |
+------+-------------+
| 1 | www.zfcdn.xyz |
+------+-------------+
1 row in set (0.03 sec)
Hands-on case: scaling a 3-master/3-slave Redis cluster out and back in (for data at the hundred-million scale)
3-master/3-slave cluster setup:
1. Pull the Redis image (omitted here).
2. Then run the following commands to start six redis container instances:
docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
Parameter notes:
docker run: create and run a container
--name redis-node-6: the container's name
--net host: use the host's IP and ports
--privileged=true: obtain root privileges on the host
-v /data/redis/share/redis-node-6:/data: container volume, host path:path inside docker
redis:6.0.8: the redis image and version number
--cluster-enabled yes: enable redis cluster mode
--appendonly yes: enable persistence
--port 6386: the redis port number
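Since the six docker run commands differ only in the node number and port, they can also be generated with a loop; a sketch equivalent to the commands above:
for i in 1 2 3 4 5 6; do
  docker run -d --name redis-node-$i --net host --privileged=true \
    -v /data/redis/share/redis-node-$i:/data \
    redis:6.0.8 --cluster-enabled yes --appendonly yes --port 638$i
done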
Check that they are all running (at this point the six nodes have no master/slave roles yet):
[root@blog-tag-gg ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea1e315e6661 redis:6.0.8 "docker-entrypoint.s…" 11 seconds ago Up 10 seconds redis-node-6
a2f50b8b242a redis:6.0.8 "docker-entrypoint.s…" 12 seconds ago Up 11 seconds redis-node-5
0ddf1b14abf5 redis:6.0.8 "docker-entrypoint.s…" 12 seconds ago Up 12 seconds redis-node-4
a647cff0653d redis:6.0.8 "docker-entrypoint.s…" 13 seconds ago Up 12 seconds redis-node-3
1d342b9e7d0a redis:6.0.8 "docker-entrypoint.s…" 13 seconds ago Up 12 seconds redis-node-2
19f7c6d53b20 redis:6.0.8 "docker-entrypoint.s…" 13 seconds ago Up 12 seconds redis-node-1
3. Build the master/slave relationships. Enter one of the nodes:
docker exec -it redis-node-1 /bin/bash
Then create the cluster:
redis-cli --cluster create 192.168.0.170:6381 192.168.0.170:6382 192.168.0.170:6383 192.168.0.170:6384 192.168.0.170:6385 192.168.0.170:6386 --cluster-replicas 1
Notes:
1. Replace the IP addresses and ports with your own.
2. --cluster-replicas 1 creates one slave for each master (one master, one slave—exactly three masters and three slaves).
3. --cluster means build/operate on a cluster.
When this finishes, the three masters and their slaves are assigned automatically.
The command's output:
Type yes when prompted to finish the configuration (M marks a master, S a slave):
root@hecs-76160:/data# redis-cli --cluster create 192.168.0.170:6381 192.168.0.170:6382 192.168.0.170:6383 192.168.0.170:6384 192.168.0.170:6385 192.168.0.170:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460      ---> first master; its hash slots are 0-5460
Master[1] -> Slots 5461 - 10922  ---> second master; its hash slots are 5461-10922
Master[2] -> Slots 10923 - 16383 ---> third master; its hash slots are 10923-16383
Adding replica 192.168.0.170:6385 to 192.168.0.170:6381
Adding replica 192.168.0.170:6386 to 192.168.0.170:6382   ---> these three lines are the master-to-slave mapping
Adding replica 192.168.0.170:6384 to 192.168.0.170:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots:[0-5460] (5461 slots) master
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
S: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
replicates a7d3b130cd14ad6877881cd717244ff8520444c0
Can I set the above configuration? (type 'yes' to accept):   ----> if the configuration above is correct, type yes and press Enter.
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.0.170:6381)
M: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots: (0 slots) slave
replicates a7d3b130cd14ad6877881cd717244ff8520444c0
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@hecs-76160:/data#
Check the cluster state (taking node 6381 as the example):
redis-cli -p 6381
cluster info
root@hecs-76160:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384   <- total hash slots
cluster_slots_ok:16384   <- hash slots assigned and healthy
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6   <- six known nodes
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:264
cluster_stats_messages_pong_sent:239
cluster_stats_messages_sent:503
cluster_stats_messages_ping_received:234
cluster_stats_messages_pong_received:264
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:503
127.0.0.1:6381>
Usage of the cluster nodes command:
cluster nodes
The result:
127.0.0.1:6381> cluster nodes
fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383@16383 master - 0 1667053731700 3 connected 10923-16383
b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386@16386 slave a7d3b130cd14ad6877881cd717244ff8520444c0 0 1667053730000 1 connected
fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385@16385 slave fdaa13136f44c0be3139edfc2ef48eb00579caaf 0 1667053730699 3 connected
433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384@16384 slave ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 0 1667053729000 2 connected
a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381@16381 myself,master - 0 1667053727000 1 connected 0-5460
ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382@16382 master - 0 1667053730000 2 connected 5461-10922
127.0.0.1:6381>
Notes:
1. "myself" on 6381 means the command is currently being run on the 6381 node.
2. The three servers 6383, 6382, and 6381 are masters, i.e. the master nodes.
3. Who hangs under whom? Take 6382: that master's ID is ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a, and the slave 6384 hangs under it, with slave ID 433118ba0840594c14716fba173a7895a3836662.
Master/slave failover and migration case:
Reading and writing data:
Enter a node, e.g. 6381, and run the commands below.
Question: why do the set k1 and set k4 commands fail to write?
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.0.170:6383
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> set k4 v4
(error) MOVED 8455 192.168.0.170:6382
127.0.0.1:6381>
Answer: the session is a plain single-node redis login. The slots for set k1 v1 and set k4 v4 work out to 12706 and 8455, which do not belong to the current 6381 node, so the writes fail. The others succeed because their slots happen to fall within 6381's range.
Fixing this:
To keep the routing from failing, add the -c flag to connect in cluster mode, then add the keys again:
redis-cli -p 6381 -c
root@hecs-76160:/data# redis-cli -p 6381 -c
127.0.0.1:6381> FLUSHALL   <- clear the earlier data
OK
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.0.170:6383
OK
192.168.0.170:6383> set k2 v2
-> Redirected to slot [449] located at 192.168.0.170:6381
OK
192.168.0.170:6381> set k3 v3
OK
192.168.0.170:6381> set k4 v4
-> Redirected to slot [8455] located at 192.168.0.170:6382
OK
192.168.0.170:6382>
Notes:
1. set k1 v1 hashes to slot 12706, which is on node 6383; since the session is in cluster mode, the client is redirected to 6383 automatically.
2. set k2 v2 hashes to slot 449 on node 6381, so the client is redirected from 6383 back to 6381.
3. set k3 v3 hashes to a slot that falls inside the current 6381 range, so it simply returns OK with no redirect.
4. set k4 v4 hashes to slot 8455, which is on node 6382, so the client is redirected from 6381 to 6382 and the write succeeds.
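The slot numbers in these redirects can be checked directly: redis exposes its CRC16 slot calculation through the CLUSTER KEYSLOT command. A sketch of verifying where k1 and k4 land (the values match the MOVED errors shown above):
127.0.0.1:6381> cluster keyslot k1
(integer) 12706
127.0.0.1:6381> cluster keyslot k4
(integer) 8455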
A second way to view cluster info:
redis-cli --cluster check 192.168.0.170:6381
The output:
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6381
192.168.0.170:6381 (a7d3b130...) -> 2 keys | 5461 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 5461 slots | 1 slaves.
192.168.0.170:6382 (ca8b40b5...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6381)
M: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots: (0 slots) slave
replicates a7d3b130cd14ad6877881cd717244ff8520444c0
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Failover migration:
1. Swap master 6381 with its slave: first stop the 6381 node (6381's current slave is 6386).
[root@blog-tag-gg ~]# docker stop redis-node-1
redis-node-1
[root@blog-tag-gg ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea1e315e6661 redis:6.0.8 "docker-entrypoint.s…" 25 hours ago Up 25 hours redis-node-6
a2f50b8b242a redis:6.0.8 "docker-entrypoint.s…" 25 hours ago Up 25 hours redis-node-5
0ddf1b14abf5 redis:6.0.8 "docker-entrypoint.s…" 25 hours ago Up 25 hours redis-node-4
a647cff0653d redis:6.0.8 "docker-entrypoint.s…" 25 hours ago Up 25 hours redis-node-3
1d342b9e7d0a redis:6.0.8 "docker-entrypoint.s…" 25 hours ago Up 25 hours redis-node-2
19f7c6d53b20 redis:6.0.8 "docker-entrypoint.s…" 25 hours ago Exited (0) 12 seconds ago redis-node-1
After stopping it, enter any other container, e.g. redis-node-2, and connect to the redis cluster:
[root@blog-tag-gg ~]# docker exec -it redis-node-2 /bin/bash
root@hecs-76160:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes   <- check the cluster info again
433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384@16384 slave ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 0 1667056567481 2 connected
fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383@16383 master - 0 1667056566000 3 connected 10923-16383
ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382@16382 myself,master - 0 1667056567000 2 connected 5461-10922
a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381@16381 master,fail - 1667056454231 1667056450000 1 disconnected
fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385@16385 slave fdaa13136f44c0be3139edfc2ef48eb00579caaf 0 1667056566479 3 connected
b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386@16386 master - 0 1667056565477 7 connected 0-5460
The output above shows that 6381, previously a master, is now "master,fail", while 6386, previously 6381's slave, has become a master.
The failover is complete at this point.
Restart the 6381 node and check the state again:
[root@blog-tag-gg ~]# docker exec -it redis-node-1 /bin/bash
root@hecs-76160:/data# redis-cli -p 6381 -c
127.0.0.1:6381> cluster nodes
433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384@16384 slave ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 0 1667057037443 2 connected
ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382@16382 master - 0 1667057036000 2 connected 5461-10922
fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383@16383 master - 0 1667057037000 3 connected 10923-16383
a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381@16381 myself,slave b567fd9306230772798b02845296ad3948d3e1fa 0 1667057035000 7 connected
b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386@16386 master - 0 1667057034000 7 connected 0-5460
fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385@16385 slave fdaa13136f44c0be3139edfc2ef48eb00579caaf 0 1667057036000 3 connected
After a master dies and its slave takes over, the recovered master does not get its role back: the former master 6381 is now a slave of 6386—the two swapped identities.
In other words, a master that dies and recovers comes back as a slave.
Question: how do you restore the original state, i.e. 6381 as master and 6386 as slave?
Answer: stop the 6386 node, at which point 6381 is promoted from slave back to master; then start 6386 again and it rejoins as a slave, restoring the original architecture.
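That restore procedure, sketched as commands (6386 runs in the container named redis-node-6, following the naming used throughout):
docker stop redis-node-6    # 6381 is promoted back to master
docker start redis-node-6   # 6386 rejoins as 6381's slave
# confirm the roles
docker exec -it redis-node-1 redis-cli -p 6381 cluster nodes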
Master/slave scale-out case:
Two parts: add new machines, then reassign the hash slots.
Goal: expand the current three-master/three-slave cluster to four masters and four slaves.
Start the two new instances (one master, one slave):
docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
Enter the redis-node-7 container:
docker exec -it redis-node-7 /bin/bash
Add the new 6387 to the cluster as a master:
redis-cli --cluster add-node <your IP>:6387 <your IP>:6381
6387 is the new node to be added as a master;
6381 is the guide among the existing cluster nodes—6387 pays its respects to 6381 in order to find the organization and join the cluster.
The command:
redis-cli --cluster add-node 192.168.0.170:6387 192.168.0.170:6381
It joins successfully; the output:
root@hecs-76160:/data# redis-cli --cluster add-node 192.168.0.170:6387 192.168.0.170:6381
>>> Adding node 192.168.0.170:6387 to cluster 192.168.0.170:6381
>>> Performing Cluster Check (using node 192.168.0.170:6381)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.0.170:6387 to make it join the cluster.
[OK] New node added correctly.
Check the cluster for the first time:
redis-cli --cluster check 192.168.0.170:6381
The output shows four M (master) nodes, but the new one has no slots yet: "0 keys | 0 slots | 0 slaves."
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6381
192.168.0.170:6382 (ca8b40b5...) -> 1 keys | 5462 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 5461 slots | 1 slaves.
192.168.0.170:6387 (f3825e0d...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.170:6386 (b567fd93...) -> 2 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6381)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots: (0 slots) master
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Reassign the slot numbers
Command format:
redis-cli --cluster reshard <IP address>:<port>
The actual command:
redis-cli --cluster reshard 192.168.0.170:6381
Output (explanations follow):
root@hecs-76160:/data# redis-cli --cluster reshard 192.168.0.170:6381
>>> Performing Cluster Check (using node 192.168.0.170:6381)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots: (0 slots) master
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? f3825e0d9e03fa0330207e47de958a81a2ec7a92
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all
Ready to move 4096 slots.
Source nodes:
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[0-5460] (5461 slots) master
1 additional replica(s)
Destination node:
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots: (0 slots) master
Resharding plan:
Moving slot 5461 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5462 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5463 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5464 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5465 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5466 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5467 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5468 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5469 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Moving slot 5470 from ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
How many slots do you want to move (from 1 to 16384)? 4096
Q: why 4096 slots?
A: there are now four master/slave pairs sharing 16384 slots, and 16384/4 = 4096 slots per pair.
What is the receiving node ID? f3825e0d9e03fa0330207e47de958a81a2ec7a92
Q: which ID goes here?
A: the ID of the newly added master node, e.g. 6387.
Source node #1: all
Q: what does all mean?
A: take slots from all existing master nodes.
Check the allocation a second time:
redis-cli --cluster check 192.168.0.170:6381
The output below shows each node's slots have changed.
Note: 6387's hash slots are now three ranges—[0-1364],[5461-6826],[10923-12287] (4096 in total)—and it shows 0 slaves.
Why does 6387 get three new ranges while the old masters keep contiguous ones?
A full reallocation would cost too much, so the three existing masters each carve off a share—about 1364 slots apiece—and hand it to the new 6387 node.
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6381
192.168.0.170:6382 (ca8b40b5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6387 (f3825e0d...) -> 1 keys | 4096 slots | 0 slaves.
192.168.0.170:6386 (b567fd93...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6381)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Assign the new slave 6388 to master 6387:
Command format:
redis-cli --cluster add-node <ip>:<new slave port> <ip>:<new master port> --cluster-slave --cluster-master-id <new master node ID>
(The new master node ID here is 6387's ID; use your own.)
The actual command:
redis-cli --cluster add-node 192.168.0.170:6388 192.168.0.170:6387 --cluster-slave --cluster-master-id f3825e0d9e03fa0330207e47de958a81a2ec7a92
Output:
Then check the cluster again:
root@hecs-76160:/data# redis-cli --cluster add-node 192.168.0.170:6388 192.168.0.170:6387 --cluster-slave --cluster-master-id f3825e0d9e03fa0330207e47de958a81a2ec7a92
>>> Adding node 192.168.0.170:6388 to cluster 192.168.0.170:6387
>>> Performing Cluster Check (using node 192.168.0.170:6387)
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.0.170:6388 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.0.170:6387.
[OK] New node added correctly.
Check the cluster once more:
redis-cli --cluster check 192.168.0.170:6381
Every node now looks healthy; the cluster is four masters and four slaves:
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6381
192.168.0.170:6382 (ca8b40b5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6387 (f3825e0d...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6386 (b567fd93...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6381)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
S: 81b45f21ca7d758112aeed4ef7e0343b7e312362 192.168.0.170:6388
slots: (0 slots) slave
replicates f3825e0d9e03fa0330207e47de958a81a2ec7a92
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Master/slave scale-in case
Scenario: traffic has dropped, so the cluster should return to the original three masters and three slaves by releasing the 6387/6388 pair. The slave must be deleted first, then the master.
1. Check the cluster nodes and get 6388's node ID:
redis-cli --cluster check 192.168.0.170:6382
The check output:
[root@blog-tag-gg ~]# docker exec -it redis-node-2 /bin/bash
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6382
192.168.0.170:6382 (ca8b40b5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6387 (f3825e0d...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6386 (b567fd93...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6382)
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 81b45f21ca7d758112aeed4ef7e0343b7e312362 192.168.0.170:6388
slots: (0 slots) slave
replicates f3825e0d9e03fa0330207e47de958a81a2ec7a92
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
2. Remove slave 6388 from the cluster (delete the slave before the master).
Command format:
redis-cli --cluster del-node <ip>:<slave port> <slave 6388 node ID>
The full command:
redis-cli --cluster del-node 192.168.0.170:6388 81b45f21ca7d758112aeed4ef7e0343b7e312362
It returns a successful removal:
root@hecs-76160:/data# redis-cli --cluster del-node 192.168.0.170:6388 81b45f21ca7d758112aeed4ef7e0343b7e312362
>>> Removing node 81b45f21ca7d758112aeed4ef7e0343b7e312362 from cluster 192.168.0.170:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
Run the check again; the node list now shows only three S (slave) entries:
redis-cli --cluster check 192.168.0.170:6382
3. Empty 6387's hash slots and reallocate them; in this example the freed slots are handed back to the rest of the cluster, using node 6381 as the entry point:
redis-cli --cluster reshard 192.168.0.170:6381
The command walks through several prompts:
How many slots do you want to move (from 1 to 16384)? 4096   (enter 4096)
Explanation: how many slot numbers to move—6387 held 4096 slots, so enter 4096.
What is the receiving node ID? ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
Explanation: which node receives the freed slots; to hand them to 6382, enter 6382's node ID. (Note: the receiver must be a master.)
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:f3825e0d9e03fa0330207e47de958a81a2ec7a92
Source node #2:done
Explanation: Source node #1 is the node being drained, i.e. 6387's node ID; done ends the list of source nodes.
The slots are then removed from 6387 automatically.
Check the nodes again (explanations follow the output):
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6382
192.168.0.170:6382 (ca8b40b5...) -> 2 keys | 8192 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6387 (f3825e0d...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.170:6386 (b567fd93...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6382)
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[0-1364],[5461-12287] (8192 slots) master
1 additional replica(s)
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: f3825e0d9e03fa0330207e47de958a81a2ec7a92 192.168.0.170:6387
slots: (0 slots) master
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
[ERR] Nodes don't agree about configuration!
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
1. The 6387 node still exists, but its slots now show as empty: 0 keys | 0 slots | 0 slaves.
2. The freed slots were handed to 6382, which therefore shows 8192 slots: all 4096 of 6387's slots went to 6382, doubling it to 8192—everything was given to 6382.
4. Delete 6387:
Command format:
redis-cli --cluster del-node <ip>:<port> <6387 node ID>
The full command:
redis-cli --cluster del-node 192.168.0.170:6387 f3825e0d9e03fa0330207e47de958a81a2ec7a92
It reports a successful removal:
root@hecs-76160:/data# redis-cli --cluster del-node 192.168.0.170:6387 f3825e0d9e03fa0330207e47de958a81a2ec7a92
>>> Removing node f3825e0d9e03fa0330207e47de958a81a2ec7a92 from cluster 192.168.0.170:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
5. Run the check one last time to see the node layout:
redis-cli --cluster check 192.168.0.170:6382
The output shows the cluster restored to the original three masters and three slaves:
root@hecs-76160:/data# redis-cli --cluster check 192.168.0.170:6382
192.168.0.170:6382 (ca8b40b5...) -> 2 keys | 8192 slots | 1 slaves.
192.168.0.170:6383 (fdaa1313...) -> 1 keys | 4096 slots | 1 slaves.
192.168.0.170:6386 (b567fd93...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.0.170:6382)
M: ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a 192.168.0.170:6382
slots:[0-1364],[5461-12287] (8192 slots) master
1 additional replica(s)
S: 433118ba0840594c14716fba173a7895a3836662 192.168.0.170:6384
slots: (0 slots) slave
replicates ca8b40b5ae55c32210d783b0f0cdf0965bcd5b8a
M: fdaa13136f44c0be3139edfc2ef48eb00579caaf 192.168.0.170:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
S: a7d3b130cd14ad6877881cd717244ff8520444c0 192.168.0.170:6381
slots: (0 slots) slave
replicates b567fd9306230772798b02845296ad3948d3e1fa
S: fb9fa790b29cafa1ff84c7a3173d806f72a2d4f3 192.168.0.170:6385
slots: (0 slots) slave
replicates fdaa13136f44c0be3139edfc2ef48eb00579caaf
M: b567fd9306230772798b02845296ad3948d3e1fa 192.168.0.170:6386
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Dockerfile basics
1. What is a Dockerfile?
A Dockerfile is the text file used to build a Docker image: a script made up of the instructions needed to construct the image, similar to a shell script.
Official Dockerfile reference: https://docs.docker.com/engine/reference/builder/
2. The three steps of building with a Dockerfile:
- Write the Dockerfile according to its rules
- Build the image with docker build
- Run a container from the image with docker run
Dockerfile rules:
- Every reserved instruction (keyword) must be uppercase and be followed by at least one argument.
- Reserved instructions include FROM, RUN, CMD, ENV, ADD, COPY, and more; see the official reference for the full list.
- Instructions execute in order, top to bottom.
- # marks a comment, as in Linux config files.
- Each instruction creates a new image layer and commits it.
How docker build executes a Dockerfile:
- Docker runs a container from the base image
- executes one instruction and modifies the container
- commits a new image layer, much like docker commit
- runs a new container from the just-committed image
- executes the next instruction in the Dockerfile, repeating until every instruction is done
From the point of view of application software, the Dockerfile, the Docker image, and the Docker container represent three different stages of the software:
* the Dockerfile is the software's raw material
* the Docker image is the software's deliverable
* the Docker container is the software's running state, i.e. a container instance running from the image
The Dockerfile faces development, the Docker image is the delivery standard, and the Docker container covers deployment and operations; none of the three can be spared, and together they form the cornerstone of the Docker system.
1. Dockerfile: you define a Dockerfile that specifies everything the process needs. It covers executable code or files, environment variables, dependency packages, the runtime environment, dynamic link libraries, the OS distribution, and service and kernel processes (when the application process must interact with system services and kernel processes, consider how to design the namespace permission controls), and so on;
2. Docker image: after the Dockerfile is defined, docker build produces a Docker image, and only when the image is run does it actually start providing the service;
3. Docker container: the container is what directly provides the service.
Common Dockerfile instructions:
For a reference Dockerfile, see https://github.com/docker-library/tomcat/blob/master/10.0/jdk8/corretto-al2/Dockerfile or the copy below.
1. Sample Dockerfile:
#
# NOTE: THIS DOCKERFILE IS GENERATED VIA "apply-templates.sh"
#
# PLEASE DO NOT EDIT IT DIRECTLY.
#
FROM amazoncorretto:8-al2-jdk
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
# let "Tomcat Native" live somewhere isolated
ENV TOMCAT_NATIVE_LIBDIR $CATALINA_HOME/native-jni-lib
ENV LD_LIBRARY_PATH ${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$TOMCAT_NATIVE_LIBDIR
# see https://www.apache.org/dist/tomcat/tomcat-10/KEYS
# see also "versions.sh" (https://github.com/docker-library/tomcat/blob/master/versions.sh)
ENV GPG_KEYS A9C5DF4D22E99998D9875A5110C01C5A2F6059E7
ENV TOMCAT_MAJOR 10
ENV TOMCAT_VERSION 10.0.27
ENV TOMCAT_SHA512 33c51be9410eaa0ce1393f8ce80a42a9639b68c7b7af1e9e642045614c170a12f8841ce4142933d1f4d18ba7efc85c630f91c312e959dcdc64aae396c46bdd97
RUN set -eux; \
\
# http://yum.baseurl.org/wiki/YumDB.html
if ! command -v yumdb > /dev/null; then \
yum install -y --setopt=skip_missing_names_on_install=False yum-utils; \
yumdb set reason dep yum-utils; \
fi; \
# a helper function to "yum install" things, but only if they aren't installed (and to set their "reason" to "dep" so "yum autoremove" can purge them for us)
_yum_install_temporary() { ( set -eu +x; \
local pkg todo=''; \
for pkg; do \
if ! rpm --query "$pkg" > /dev/null 2>&1; then \
todo="$todo $pkg"; \
fi; \
done; \
if [ -n "$todo" ]; then \
set -x; \
yum install -y --setopt=skip_missing_names_on_install=False $todo; \
yumdb set reason dep $todo; \
fi; \
) }; \
_yum_install_temporary gzip tar; \
\
ddist() { \
local f="$1"; shift; \
local distFile="$1"; shift; \
local mvnFile="${1:-}"; \
local success=; \
local distUrl=; \
for distUrl in \
# https://issues.apache.org/jira/browse/INFRA-8753?focusedCommentId=14735394#comment-14735394
"https://www.apache.org/dyn/closer.cgi?action=download&filename=$distFile" \
# if the version is outdated (or we're grabbing the .asc file), we might have to pull from the dist/archive :/
"https://downloads.apache.org/$distFile" \
"https://www-us.apache.org/dist/$distFile" \
"https://www.apache.org/dist/$distFile" \
"https://archive.apache.org/dist/$distFile" \
# if all else fails, let's try Maven (https://www.mail-archive.com/users@tomcat.apache.org/msg134940.html; https://mvnrepository.com/artifact/org.apache.tomcat/tomcat; https://repo1.maven.org/maven2/org/apache/tomcat/tomcat/)
${mvnFile:+"https://repo1.maven.org/maven2/org/apache/tomcat/tomcat/$mvnFile"} \
; do \
if curl -fL -o "$f" "$distUrl" && [ -s "$f" ]; then \
success=1; \
break; \
fi; \
done; \
[ -n "$success" ]; \
}; \
\
ddist 'tomcat.tar.gz' "tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz" "$TOMCAT_VERSION/tomcat-$TOMCAT_VERSION.tar.gz"; \
echo "$TOMCAT_SHA512 *tomcat.tar.gz" | sha512sum --strict --check -; \
ddist 'tomcat.tar.gz.asc' "tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz.asc" "$TOMCAT_VERSION/tomcat-$TOMCAT_VERSION.tar.gz.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $GPG_KEYS; do \
gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key"; \
done; \
gpg --batch --verify tomcat.tar.gz.asc tomcat.tar.gz; \
tar -xf tomcat.tar.gz --strip-components=1; \
rm bin/*.bat; \
rm tomcat.tar.gz*; \
command -v gpgconf && gpgconf --kill all || :; \
rm -rf "$GNUPGHOME"; \
\
# https://tomcat.apache.org/tomcat-9.0-doc/security-howto.html#Default_web_applications
mv webapps webapps.dist; \
mkdir webapps; \
# we don't delete them completely because they're frankly a pain to get back for users who do want them, and they're generally tiny (~7MB)
\
nativeBuildDir="$(mktemp -d)"; \
tar -xf bin/tomcat-native.tar.gz -C "$nativeBuildDir" --strip-components=1; \
_yum_install_temporary \
apr-devel \
gcc \
make \
openssl11-devel \
; \
( \
export CATALINA_HOME="$PWD"; \
cd "$nativeBuildDir/native"; \
aprConfig="$(command -v apr-1-config)"; \
./configure \
--libdir="$TOMCAT_NATIVE_LIBDIR" \
--prefix="$CATALINA_HOME" \
--with-apr="$aprConfig" \
--with-java-home="$JAVA_HOME" \
--with-ssl \
; \
nproc="$(nproc)"; \
make -j "$nproc"; \
make install; \
); \
rm -rf "$nativeBuildDir"; \
rm bin/tomcat-native.tar.gz; \
\
# mark any explicit dependencies as manually installed
find "$TOMCAT_NATIVE_LIBDIR" -type f -executable -exec ldd '{}' ';' \
| awk '/=>/ && $(NF-1) != "=>" { print $(NF-1) }' \
| xargs -rt readlink -e \
| sort -u \
| xargs -rt rpm --query --whatprovides \
| sort -u \
| tee "$TOMCAT_NATIVE_LIBDIR/.dependencies.txt" \
| xargs -r yumdb set reason user \
; \
\
# clean up anything added temporarily and not later marked as necessary
yum autoremove -y; \
yum clean all; \
rm -rf /var/cache/yum; \
\
# sh removes env vars it doesn't support (ones with periods)
# https://github.com/docker-library/tomcat/issues/77
find ./bin/ -name '*.sh' -exec sed -ri 's|^#!/bin/sh$|#!/usr/bin/env bash|' '{}' +; \
\
# fix permissions (especially for running as non-root)
# https://github.com/docker-library/tomcat/issues/35
chmod -R +rX .; \
chmod 777 logs temp work; \
\
# smoke test
catalina.sh version
# verify Tomcat Native is working properly
RUN set -eux; \
nativeLines="$(catalina.sh configtest 2>&1)"; \
nativeLines="$(echo "$nativeLines" | grep 'Apache Tomcat Native')"; \
nativeLines="$(echo "$nativeLines" | sort -u)"; \
if ! echo "$nativeLines" | grep -E 'INFO: Loaded( APR based)? Apache Tomcat Native library' >&2; then \
echo >&2 "$nativeLines"; \
exit 1; \
fi
EXPOSE 8080
CMD ["catalina.sh", "run"]
2. Notes on common Dockerfile instructions:
- FROM: the base image the current image is built from; it names an existing image to use as the template. The first instruction of a Dockerfile must be FROM. Format: FROM amazoncorretto:8-al2-jdk
- MAINTAINER: the author or maintainer, usually a name or email address. Format: MAINTAINER <name> (officially deprecated in favor of LABEL, but still widely seen)
- RUN: commands executed while the image is being built. Two formats: 1. shell form: RUN yum -y install vim 2. exec form: RUN ["executable","arg1","arg2"], e.g. RUN ["./test.php","dev","offline"] is equivalent to RUN ./test.php dev offline. RUN commands run at docker build time.
- EXPOSE: the port(s) the container exposes to the outside.
- WORKDIR: the working directory a terminal lands in by default after the container is created; a foothold, i.e., the default directory once you enter the container.
- USER: the user the image runs as; if unspecified, the default is root.
- ENV: sets environment variables during the image build (they remain available at runtime).
- ADD: copies files from the host into the image, automatically handling URLs and unpacking tar archives (effectively COPY plus extraction; used more often by comparison).
- COPY: similar to ADD; copies files and directories into the image, comparable to the docker cp command.
- VOLUME: declares a data volume mount point.
- CMD: the command to run when the container starts. There can be multiple CMD instructions, but only the last one takes effect, and CMD is replaced by any arguments given after docker run.
- ENTRYPOINT: also specifies a command to run when the container starts, similar to CMD, but ENTRYPOINT is not overridden by arguments after docker run; instead those arguments are passed as parameters to the program named by ENTRYPOINT. It can be combined with CMD, e.g.: ENTRYPOINT ["nginx","-c"] # fixed part, CMD ["/etc/nginx/nginx.conf"] # variable part, which together become: nginx -c /etc/nginx/nginx.conf
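A minimal sketch of that ENTRYPOINT/CMD combination (the image tag nginx_entry_demo is an illustrative assumption; nginx is only used to show how the arguments compose):
# Dockerfile
FROM nginx
# fixed part: this program and flag always run
ENTRYPOINT ["nginx", "-c"]
# variable part: default argument, replaced by anything passed after docker run
CMD ["/etc/nginx/nginx.conf"]
Build with docker build -t nginx_entry_demo . ; then docker run nginx_entry_demo executes nginx -c /etc/nginx/nginx.conf, while docker run nginx_entry_demo /etc/nginx/other.conf executes nginx -c /etc/nginx/other.conf (the CMD part is swapped out, ENTRYPOINT stays).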
Take the tomcat image above as an example:
Run:
docker run -it -p 8080:8080 2d2bccf89f53
or:
docker run -itd -p 8080:8080 2d2bccf89f53
Either way, port 8080 is listening, the container starts its own 8080, and the port is reachable from outside (the -d flag runs the container in the background; without it the container runs in the foreground and stops when the terminal closes).
But after running the following command, the host has a listener on 8080 yet the port does not respond, because nothing inside the container listens on 8080:
docker run -itd -p 8080:8080 2d2bccf89f53 /bin/bash
Why does appending /bin/bash break the port? The reason is as follows:
1. When docker run executes, it finally runs CMD ["catalina.sh", "run"], which starts the script inside the container that launches the Tomcat service.
2. With the /bin/bash argument appended, the effective order becomes:
CMD ["catalina.sh", "run"]
CMD ["/bin/bash"]
Only the last CMD takes effect, overriding the original CMD ["catalina.sh", "run"]. In other words, the container runs /bin/bash instead of the startup script, so no Tomcat service starts inside the container and nothing listens on the port.
Solutions:
1. Do not append /bin/bash when starting the container.
2. Enter the container and run the catalina.sh script manually to start the Tomcat service.
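A short sketch of solution 2, assuming the container started with /bin/bash is named mytomcat (a hypothetical name):
# open a shell in the running container
docker exec -it mytomcat /bin/bash
# start Tomcat in the background inside the container
/usr/local/tomcat/bin/startup.sh
# leaving the exec shell does not stop the container; Tomcat keeps running
exit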
Building the image you need with a Dockerfile:
Requirement: add four capabilities to the stock centos image:
- 1. the vim editor
- 2. the ifconfig command
- 3. a working JDK 8 environment
- 4. the ip command
1. Write the Dockerfile (name the file exactly Dockerfile, capital D and the rest lowercase, so docker build finds it by default):
Enter a directory, e.g. /blogtest/, and download the JDK archive there (download address: https://mirrors.yangxingzhen.com/jdk/jdk-8u171-linux-x64.tar.gz):
cd /blogtest/
wget https://mirrors.yangxingzhen.com/jdk/jdk-8u171-linux-x64.tar.gz
Run vi Dockerfile and write the following:
FROM centos:7
MAINTAINER yangmazi<www.zfcdn.xyz>
ENV MYPATH /usr/local
WORKDIR $MYPATH
#install the vim editor
RUN yum -y install vim
#install net-tools for the ifconfig command
RUN yum -y install net-tools
#install the ip command (on CentOS the package is iproute, not iproute2)
RUN yum -y install iproute
#install Java 8's glibc dependency
RUN yum -y install glibc.i686
RUN mkdir /usr/local/java
#ADD uses a path relative to the build context: it copies jdk-8u171-linux-x64.tar.gz into the container and unpacks it; the archive must sit in the same directory as the Dockerfile
ADD jdk-8u171-linux-x64.tar.gz /usr/local/java/
#configure the Java environment variables
ENV JAVA_HOME /usr/local/java/jdk1.8.0_171
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
ENV PATH $JAVA_HOME/bin:$PATH
EXPOSE 80
CMD echo $MYPATH
CMD echo "success--------------ok成功"
CMD /bin/bash
2. Build the image:
Format:
docker build -t <new-image-name>:<tag> .
Example (note the trailing dot: it means the current directory):
docker build -t centos_test:6.6 .
Run the command in the directory containing the Dockerfile; output like the following means the build succeeded:
---> c3250c024505
Step 8/17 : RUN mkdir /usr/local/java
---> Running in 968d01b0469e
Removing intermediate container 968d01b0469e
---> 39b28e0fda93
Step 9/17 : ADD jdk-8u171-linux-x64.tar.gz /usr/local/java/
---> 3c33d055d513
Step 10/17 : ENV JAVA_HOME /usr/local/java/jdk1.8.0_171
---> Running in bedd48f73cff
Removing intermediate container bedd48f73cff
---> f691fa41c364
Step 11/17 : ENV JRE_HOME $JAVA_HOME/jre
---> Running in b6a311b74590
Removing intermediate container b6a311b74590
---> bee962f68fac
Step 12/17 : ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
---> Running in aafdcb488c92
Removing intermediate container aafdcb488c92
---> e4e1c8202e87
Step 13/17 : ENV PATH $JAVA_HOME/bin:$PATH
---> Running in d53b2970bb69
Removing intermediate container d53b2970bb69
---> 6a1af3e50711
Step 14/17 : EXPOSE 80
---> Running in ec013a2776e5
Removing intermediate container ec013a2776e5
---> 8e69a03ff5ba
Step 15/17 : CMD echo $MYPATH
---> Running in 1e449e712ef6
Removing intermediate container 1e449e712ef6
---> 5f5b196abbf6
Step 16/17 : CMD echo "success--------------ok"
---> Running in eefc77bd928b
Removing intermediate container eefc77bd928b
---> 5e4cde2bfc4b
Step 17/17 : CMD /bin/bash
---> Running in 62da6abd873a
Removing intermediate container 62da6abd873a
---> abadc0f24536
Successfully built abadc0f24536
Successfully tagged centos_test:6.6
[root@blog-tag-gg blogtest]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos_test 6.6 abadc0f24536 9 seconds ago 1.24GB
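As a quick check that the four requirements are actually met (a sketch; the container name test1 is an illustrative assumption):
docker run -it --name test1 centos_test:6.6 /bin/bash
# inside the container:
vim --version | head -n 1   # requirement 1: vim
ifconfig                    # requirement 2: net-tools
java -version               # requirement 3: JDK 8
ip addr                     # requirement 4: iproute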
Dangling images:
What is a dangling image: an image whose repository and tag both show as <none>.
How they arise: something went wrong while building or otherwise operating on an image.
A dangling image has lost its value and can be deleted.
List dangling images:
docker images -f dangling=true
Delete them with docker image prune.
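A sketch of how dangling images typically appear (the tag demo:v1 is an illustrative assumption): rebuilding under the same tag strands the old image as <none>:<none>:
docker build -t demo:v1 .          # first build
# ...change the Dockerfile, then rebuild under the same tag...
docker build -t demo:v1 .          # the previous image is now dangling
docker images -f dangling=true     # list dangling images
docker image prune                 # remove them (asks for confirmation)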
Docker networking:
1. View docker networks:
docker network ls
Output (the first two modes are the most commonly used):
[root@blog-tag-gg ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
040d7bd470aa bridge bridge local
258ddf6a447a host host local
5c74c7287431 none null local
2. Create a network:
docker network create test_network
Result:
[root@blog-tag-gg ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
040d7bd470aa bridge bridge local
258ddf6a447a host host local
5c74c7287431 none null local
61ef3f4e29b9 test_network bridge local
3. Delete a network:
docker network rm test_network
4. Inspect a network:
docker network inspect test_network
What can Docker networking do?
- interconnection and communication between containers, plus port mapping
- when a container's IP changes, other containers can keep communicating with it via the service name, unaffected by the change
The four network modes:
- bridge: assigns and configures an IP for each container and connects the container to docker0 (a virtual bridge; this is the default mode)
- host: the container does not virtualize its own NIC or configure its own IP; it uses the host's IP and ports directly
- none: the container has an independent network namespace, but no network configuration is applied to it: no veth pair, no bridge connection, no IP, etc.
- container: the newly created container does not create its own NIC or configure its own IP; it shares the IP, port range, etc. of a specified container
How each mode is selected:
- bridge mode: --network bridge (the default, backed by docker0)
- host mode: --network host
- none mode: --network none
- container mode: --network container:NAME or container ID
bridge network:
1. What is bridge?
By default the Docker service creates a docker0 bridge (with an internal docker0 interface). The bridge network is named docker0; at the kernel level it connects to the other physical or virtual NICs, which puts all containers and the local host onto the same virtual network. Docker assigns an IP address and subnet mask to the docker0 interface by default, so the host and containers can reach each other through the bridge.
# inspect the bridge network's details and grep for the name field
docker network inspect bridge | grep name
2. Example:
Every container has its own NIC inside, e.g. eth0, and each container NIC pairs one-to-one with a veth interface visible in the host's ip addr output.
For example, running ip addr inside a container shows:
root@d985c2bc9055:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Running ip addr on the host shows the paired veth:
[root@blog-tag-gg ~]# ip addr |tail -n 4
8: veth214ece3@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether f2:86:42:12:e5:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::f086:42ff:fe12:e55a/64 scope link
valid_lft forever preferred_lft forever
host network:
1. What is the host network?
- The container communicates with the outside world using the host's IP address directly; no extra NAT translation is needed.
- The container does not get an independent Network Namespace; it shares one with the host. The container does not virtualize its own NIC; it uses the host's IP and ports.
docker run -d -p 8083:8080 --network host --name tomcat83 billygoo/tomcat8-jdk8
Note: this prints "WARNING: Published ports are discarded when using host network mode". It is only a warning and does not affect use:
[root@blog-tag-gg ~]# docker run -d -p 8083:8080 --network host --name tomcat83 billygoo/tomcat8-jdk8
WARNING: Published ports are discarded when using host network mode
86dbd86804b08b8e155c0cc2db82960a370d5ea5d1b308e3fa3fe79fdeb2f432
Problem:
This warning always appears when the container starts.
Cause:
When docker run is given --network=host (or --net=host) together with -p port mappings, the warning appears and the -p settings take no effect: the ports used are whatever the process binds on the host (incrementing when a port is already taken).
Solution:
Use a different network mode, e.g. --network=bridge, or simply ignore the warning.
The correct invocation:
Just drop the port mapping; afterwards docker inspect <container-name> shows the network mode as host:
docker run -d --network host --name tomcat83 billygoo/tomcat8-jdk8
none network:
1. What is the none network?
Networking is disabled; only lo remains (127.0.0.1, the local loopback). Rarely used.
In none mode, Docker performs no network configuration for the container at all. The container has no NIC, IP, or routes, only lo; you must add a NIC and configure an IP for it yourself.
2. Example:
In none mode the container has no network inside, so you cannot even install anything over the network:
docker run -d -p 8084:8080 --network none --name blogtaggg tomcat
How to modify the port and other config of an already-running container instance:
1. It is best to stop the container first, then enter the matching directory /var/lib/docker/containers/XXXXXX
2. Open and edit the config.v2.json and hostconfig.json files under that directory (make backup copies before editing, in case something goes wrong)
3. Start the container again once the changes are done.
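A sketch of the procedure for changing a published port (the container name mycontainer and the port values are illustrative; these JSON files are only re-read when the docker daemon restarts):
docker stop mycontainer
cd /var/lib/docker/containers/<full-container-id>
# back up both files before editing
cp hostconfig.json hostconfig.json.bak
cp config.v2.json config.v2.json.bak
# edit the "PortBindings" entry in hostconfig.json, e.g. change "HostPort": "80" to "88"
vi hostconfig.json
# restart the daemon so the edited files are re-read, then start the container
systemctl restart docker
docker start mycontainer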
container network:
The new container shares a network/IP configuration with an existing container rather than with the host. The newly created container does not create its own NIC or configure its own IP; it shares the specified container's IP, port range, etc. Apart from networking, everything else, such as the filesystem and process list, remains isolated between the two containers.
Example 1:
1. A pitfall:
First start a container tomcat85:
docker run -d -p 8085:8080 --name tomcat85 tomcat
Then start a container tomcat86 sharing tomcat85's network:
docker run -d -p 8086:8080 --network container:tomcat85 --name tomcat86 tomcat
which reports an error:
[root@blog-tag-gg /]# docker run -d -p 8085:8080 --name tomcat85 tomcat
[root@blog-tag-gg /]# docker run -d -p 8086:8080 --network container:tomcat85 --name tomcat86 tomcat
docker: Error response from daemon: conflicting options: port publishing and the container type network mode.
See 'docker run --help'.
The error occurs because -p port publishing cannot be combined with the container network mode; tomcat86 would share tomcat85's IP, and since both images listen on 8080 the ports would clash anyway. So this image is not well suited to demonstrating container mode.
Example 2 (a working demonstration):
Switch to a different image:
Alpine Linux is an independent, non-commercial, general-purpose Linux distribution designed for users who value security, simplicity, and resource efficiency. Many people may not have heard of this distribution, but frequent Docker users probably have: it is known for being small, simple, and secure, which makes it an excellent choice for a base image; tiny but complete, the image is under 6 MB, ideal for container packaging.
docker pull alpine
1. Start the first alpine container:
docker run -it --name alpine1 alpine /bin/sh
2. Start the second container, sharing the first container's network:
docker run -it --network container:alpine1 --name alpine2 alpine /bin/sh
3. Verify: run ip addr in each container; the network setup is identical.
4. Note: once alpine1 is stopped or removed, alpine2's NIC configuration disappears as well, reverting to a none-like state (only lo).
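To compare the two from the host (a sketch; run in another terminal while both containers are up):
# both commands should print the same eth0 address, e.g. 172.17.0.x
docker exec alpine1 ip addr show eth0
docker exec alpine2 ip addr show eth0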
Custom networks:
About --link: the docker docs have deprecated this feature in newer versions; it can be ignored.
1. Why use a custom network?
Answer: docker starts in bridge mode by default; containers in the same mode can ping each other by IP, but the IP addresses are not fixed: when a container is stopped and recreated, its IP can change, and if the IP changes your service can no longer reach the other container's data. In the default bridge network, pinging by container (service) name does not work either.
Custom networks:
1. Custom networks use the bridge driver by default.
2. Create a custom network:
docker network create blog_tag_gg_network_Test
Check the result:
[root@blog-tag-gg /]# docker network ls
NETWORK ID NAME DRIVER SCOPE
f1404287fa8a blog_tag_gg_network_Test bridge local
040d7bd470aa bridge bridge local
258ddf6a447a host host local
5c74c7287431 none null local
61ef3f4e29b9 test_network bridge local
3. Start new containers attached to the custom network:
docker run -d -p 8081:8080 --network blog_tag_gg_network_Test --name tomcat81 tomcat
docker run -d -p 8082:8080 --network blog_tag_gg_network_Test --name tomcat82 tomcat
Now, entering either container, ping tomcat82 or ping tomcat81 succeeds.
A custom network maintains the hostname-to-IP mapping for you (both the IP and the name are reachable).
Reminder: in production, do not hard-code IPs; use the service name of a container on the same network.
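A quick verification sketch, run from the host while tomcat81 and tomcat82 are up (the official tomcat image may not ship ping, hence the getent fallback):
# resolve the peer's name via the custom network's embedded DNS
docker exec -it tomcat81 getent hosts tomcat82
# if the image includes ping, this verifies connectivity by name
docker exec -it tomcat81 ping -c 3 tomcat82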
Docker-Compose container orchestration:
1. What is Compose orchestration?
Answer: Compose is a tool from Docker, Inc. for managing an application made up of multiple Docker containers. You define a YAML configuration file, docker-compose.yml, describing how the containers relate to each other; then a single command starts or stops all of them together.
2. What is Compose orchestration good for?
Answer: docker recommends running only one service per container; since a container itself consumes very few resources, it is best to split each service out separately. But that raises a problem:
if I need to deploy many services at once, do I really have to write a Dockerfile for each, build each image, and create each container by hand? That would be exhausting, so docker provides docker-compose as a multi-service deployment tool.
For example, a typical web microservice project needs, besides the web service container itself, a backend mysql database container, a redis server, a eureka registry, perhaps a load-balancer container, and so on...
Compose lets the user define a group of related application containers as one project via a single docker-compose.yml template file (YAML format).
You can easily define a multi-container application in one config file, then install all of the application's dependencies and finish the build with a single command. Docker-Compose solves the problem of how containers are managed and orchestrated relative to one another.
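As a minimal sketch of what such a file can look like (the service names, images, and port values are illustrative assumptions, not from the original):
# docker-compose.yml: one web service plus its database (hypothetical)
version: "3"
services:
  web:
    image: tomcat
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative value only
With this file in the current directory, docker compose up -d starts both containers on a shared network in which web can reach the database at the hostname db.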
3. How to install Compose?
Reference: https://docs.docker.com/compose/compose-file/compose-file-v3/
Download instructions: https://docs.docker.com/compose/install/
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
or, to make the plugin executable for all users (when it is installed under /usr/local/lib/docker/cli-plugins):
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
Check the installation and version:
docker compose version
Uninstall Compose:
rm $DOCKER_CONFIG/cli-plugins/docker-compose
or, to remove the plugin installed for all users:
rm /usr/local/lib/docker/cli-plugins/docker-compose
Compose core concepts:
1. One file: docker-compose.yml
2. Two elements:
- Service: an individual application container instance, e.g. an order microservice, an inventory microservice, a mysql container, an nginx container.
- Project: a complete business unit composed of a group of related application containers, defined in the docker-compose.yml file.
Three steps to use Compose:
- Write a Dockerfile for each microservice and build the corresponding image.
- Use docker-compose.yml to define the complete business unit, arranging every container service of the overall application.
- Finally, run docker-compose up to start and run the whole application, completing one-command deployment.
docker-compose -h # show help
docker-compose up # start all docker-compose services
docker-compose up -d # start all docker-compose services in the background
docker-compose down # stop and remove containers, networks, volumes, and images
docker-compose exec <service-id> # enter a container instance, e.g. docker-compose exec <service id as written in docker-compose.yml> /bin/bash
docker-compose ps # list all running containers orchestrated by this docker-compose file
docker-compose top # show the processes of the orchestrated containers
docker-compose logs <service-id> # view a container's log output
docker-compose config # check the configuration
docker-compose config -q # check the configuration, printing output only when there are problems
docker-compose restart # restart services
docker-compose start # start services
docker-compose stop # stop services