Mounting iSCSI on CentOS

Date: 2018-12-05  Category: Tutorial

`All of the following was done on CentOS 7`

### The goal

First, add a new 10 GB virtual disk to the target host as the test subject: /dev/sdb. Then add a second NIC for multipathing, with IPs 172.16.10.243 and 172.16.10.213. Give the client a matching second NIC, with IPs 172.16.10.242 and 172.16.10.212. Finally, complete a multipath iSCSI mount on the client.

```
+--------------------+                                      +--------------------+
|  [ iSCSI Target ]  | 172.16.10.213          172.16.10.212 | [ iSCSI Initiator ]|
|      storage0      +------------------+-------------------+       virthm       |
|                    | 172.16.10.243          172.16.10.242 |                    |
+--------------------+                                      +--------------------+
```

# Target side

#### Disk

```bash
[root@storage0 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd116e8ef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20971519    10484736   83  Linux
```

#### Configure the target service

```bash
[root@storage0 ~]# yum -y update
[root@storage0 ~]# yum -y install targetcli
[root@storage0 ~]# systemctl enable target.service
[root@storage0 ~]# systemctl restart target.service
[root@storage0 ~]# targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

// Default initial state
/> ls
o- / ............................................................. [...]
  o- backstores .................................................. [...]
  | o- block ....................................... [Storage Objects: 0]
  | o- fileio ...................................... [Storage Objects: 0]
  | o- pscsi ....................................... [Storage Objects: 0]
  | o- ramdisk ..................................... [Storage Objects: 0]
  o- iscsi ................................................. [Targets: 0]
  o- loopback .............................................. [Targets: 0]

// Create the backing store from the partition
/> backstores/block create iscsi_disk dev=/dev/sdb1
Created block storage object iscsi_disk using /dev/sdb1.

// Create the iSCSI target
/> iscsi/ create iqn.2018-12.net.sujx:storage0
Created target iqn.2018-12.net.sujx:storage0.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

// Current state
/> ls
o- / ............................................................. [...]
  o- backstores .................................................. [...]
  | o- block ....................................... [Storage Objects: 1]
  | | o- iscsi_disk ....... [/dev/sdb1 (0 bytes) write-thru deactivated]
  | |   o- alua ........................................ [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............ [ALUA state: Active/optimized]
  | o- fileio ...................................... [Storage Objects: 0]
  | o- pscsi ....................................... [Storage Objects: 0]
  | o- ramdisk ..................................... [Storage Objects: 0]
  o- iscsi ................................................. [Targets: 1]
  | o- iqn.2018-12.net.sujx:storage0 .......................... [TPGs: 1]
  |   o- tpg1 .................................... [no-gen-acls, no-auth]
  |     o- acls ............................................... [ACLs: 0]
  |     o- luns ............................................... [LUNs: 0]
  |     o- portals ......................................... [Portals: 1]
  |       o- 0.0.0.0:3260 .......................................... [OK]
  o- loopback .............................................. [Targets: 0]

// Create the ACL for the initiator
/> iscsi/iqn.2018-12.net.sujx:storage0/tpg1/acls create iqn.2018-12.net.sujx:virthm
Created Node ACL for iqn.2018-12.net.sujx:virthm

// Create the LUN to be exported
/> iscsi/iqn.2018-12.net.sujx:storage0/tpg1/luns create /backstores/block/iscsi_disk
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2018-12.net.sujx:virthm

// Delete the default catch-all portal
/> iscsi/iqn.2018-12.net.sujx:storage0/tpg1/portals delete 0.0.0.0 3260
Deleted network portal 0.0.0.0:3260

// Create a portal on the first IP
/> iscsi/iqn.2018-12.net.sujx:storage0/tpg1/portals create 172.16.10.243 3260
Using default IP port 3260
Created network portal 172.16.10.243:3260.

// Create the second portal, for multipath
/> iscsi/iqn.2018-12.net.sujx:storage0/tpg1/portals create 172.16.10.213 3260
Using default IP port 3260
Created network portal 172.16.10.213:3260.

// Current state
/> ls /
o- / ............................................................. [...]
  o- backstores .................................................. [...]
  | o- block ....................................... [Storage Objects: 1]
  | | o- iscsi_disk ......... [/dev/sdb1 (0 bytes) write-thru activated]
  | |   o- alua ........................................ [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............ [ALUA state: Active/optimized]
  | o- fileio ...................................... [Storage Objects: 0]
  | o- pscsi ....................................... [Storage Objects: 0]
  | o- ramdisk ..................................... [Storage Objects: 0]
  o- iscsi ................................................. [Targets: 1]
  | o- iqn.2018-12.net.sujx:storage0 .......................... [TPGs: 1]
  |   o- tpg1 .................................... [no-gen-acls, no-auth]
  |     o- acls ............................................... [ACLs: 1]
  |     | o- iqn.2018-12.net.sujx:virthm ............. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ............... [lun0 block/iscsi_disk (rw)]
  |     o- luns ............................................... [LUNs: 1]
  |     | o- lun0 ....... [block/iscsi_disk (/dev/sdb1) (default_tg_pt_gp)]
  |     o- portals ......................................... [Portals: 2]
  |       o- 172.16.10.213:3260 .................................... [OK]
  |       o- 172.16.10.243:3260 .................................... [OK]
  o- loopback .............................................. [Targets: 0]

// Save the configuration
/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit

// Reload the service
[root@storage0 ~]# systemctl restart target.service
```

Note: if you do not delete the default 0.0.0.0 portal first, creating a portal on a specific IP fails with the error "Could not create NetworkPortal in configFS".

### Firewall configuration

```bash
[root@storage0 ~]# firewall-cmd --permanent --add-service=iscsi-target
[root@storage0 ~]# firewall-cmd --reload
```

# Initiator side

The initiator host is named virthm, with the IP addresses 172.16.10.242 and 172.16.10.212.

## Install the initiator

```bash
// Install iscsid
[root@virthm ~]# yum install -y iscsi-initiator-utils

// Enable the service at boot
[root@virthm ~]# systemctl enable iscsid

// Set the initiator name to match the ACL created on the target
[root@virthm ~]# vim /etc/iscsi/initiatorname.iscsi
[root@virthm ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2018-12.net.sujx:virthm

// Start the service
[root@virthm ~]# systemctl restart iscsid

// Discover the iSCSI target; both portals are reported
[root@virthm ~]# iscsiadm --mode discoverydb --type sendtargets --portal 172.16.10.243 --discover
172.16.10.213:3260,1 iqn.2018-12.net.sujx:storage0
172.16.10.243:3260,1 iqn.2018-12.net.sujx:storage0
```

## Log in to the iSCSI target

```bash
// Log in via 172.16.10.243
[root@virthm ~]# iscsiadm --mode node --targetname iqn.2018-12.net.sujx:storage0 --portal 172.16.10.243:3260 --login
Logging in to [iface: default, target: iqn.2018-12.net.sujx:storage0, portal: 172.16.10.243,3260] (multiple)
Login to [iface: default, target: iqn.2018-12.net.sujx:storage0, portal: 172.16.10.243,3260] successful.

// Log in via 172.16.10.213
[root@virthm ~]# iscsiadm --mode node --targetname iqn.2018-12.net.sujx:storage0 --portal 172.16.10.213:3260 --login
Logging in to [iface: default, target: iqn.2018-12.net.sujx:storage0, portal: 172.16.10.213,3260] (multiple)
Login to [iface: default, target: iqn.2018-12.net.sujx:storage0, portal: 172.16.10.213,3260] successful.
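
// Optional (not part of the original walkthrough): mark the node records for
// automatic login, so both sessions are restored at boot without a manual --login
[root@virthm ~]# iscsiadm --mode node --targetname iqn.2018-12.net.sujx:storage0 --op update --name node.startup --value automatic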

// Check the result
[root@virthm ~]# cat /proc/partitions
major minor  #blocks  name

   8        0    9437184 sda
   8        1    1048576 sda1
   8        2    8387584 sda2
 253        0    7438336 dm-0
 253        1     946176 dm-1
   8       16   10484736 sdb
   8       32   10484736 sdc
// Two new disk devices, sdb and sdc, have appeared
```

### Multipathing the iSCSI target

If you have no need for multipath, skip ahead to formatting and mounting the disk.

```bash
// Install the service
[root@virthm ~]# yum install -y device-mapper-multipath

// Load the kernel modules
[root@virthm ~]# modprobe dm-multipath
[root@virthm ~]# modprobe dm-round-robin
[root@virthm ~]# modprobe dm-service-time
[root@virthm ~]# systemctl enable multipathd.service
[root@virthm ~]# systemctl restart multipathd.service

// Check the service status
[root@virthm ~]# systemctl status multipathd.service
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
Condition: start condition failed at Wed 2018-12-05 13:27:29 CST; 5s ago
           ConditionPathExists=/etc/multipath.conf was not met

// The service ships without a configuration file, so create one from the packaged example
[root@virthm ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/
[root@virthm ~]# systemctl restart multipathd.service

// The service is now healthy
[root@virthm ~]# systemctl status multipathd.service
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-12-05 13:31:33 CST; 6s ago
  Process: 40923 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 40921 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 40917 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 40926 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─40926 /sbin/multipathd

// List block devices; the mpatha multipath device now exists
[root@virthm ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0    9G  0 disk
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0    8G  0 part
  ├─centos-root 253:0    0  7.1G  0 lvm   /
  └─centos-swap 253:1    0  924M  0 lvm   [SWAP]
sdb               8:16   0   10G  0 disk
└─mpatha        253:2    0   10G  0 mpath
sdc               8:32   0   10G  0 disk
└─mpatha        253:2    0   10G  0 mpath

// Inspect mpatha
[root@virthm ~]# ll /dev/mapper/mpatha
lrwxrwxrwx 1 root root 7 Dec  5 13:31 /dev/mapper/mpatha -> ../dm-2

// Multipath topology
[root@virthm ~]# multipath -ll
mpatha (360014053b2066ff78fc41e9aaa1f6e88) dm-2 LIO-ORG ,iscsi_disk
size=10.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 3:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 4:0:0:0 sdc 8:32 active ready running
```

## Add the disk

With the sessions logged in and multipath in place, partition the disk and then format it.

```bash
// Partition the disk
[root@virthm ~]# fdisk /dev/mapper/mpatha
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x9225e14a.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (8192-20969471, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-20969471, default 20969471):
Using default value 20969471
Partition 1 of type Linux and of size 10 GiB is set

// Re-read the partition table
[root@virthm ~]# partprobe /dev/mapper/mpatha

// Format the new partition
[root@virthm ~]# mkfs.xfs /dev/mapper/mpatha1
meta-data=/dev/mapper/mpatha1    isize=512    agcount=4, agsize=655040 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2620160, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```

### Mount the disk persistently

Mount the new disk at a fixed path.

```bash
// Create the mount point
[root@virthm ~]# mkdir /data

// Look up the partition's UUID
[root@virthm ~]# blkid | grep mpatha1
/dev/mapper/mpatha1: UUID="c7572b34-64c5-4f05-8c93-978f851e3ff2" TYPE="xfs"

[root@virthm ~]# vim /etc/fstab
/dev/mapper/centos-root   /     xfs   defaults          0 0
UUID=fa89a3f5-88c5-4ac6-b346-678bd7fdc5d4 /boot xfs defaults 0 0
/dev/mapper/centos-swap   swap  swap  defaults          0 0
UUID=c7572b34-64c5-4f05-8c93-978f851e3ff2 /data xfs defaults,_netdev 0 0

[root@virthm ~]# mount -a
[root@virthm ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs       7.1G  2.8G  4.4G  40% /
devtmpfs                devtmpfs  899M     0  899M   0% /dev
tmpfs                   tmpfs     910M     0  910M   0% /dev/shm
tmpfs                   tmpfs     910M   21M  889M   3% /run
tmpfs                   tmpfs     910M     0  910M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     182M     0  182M   0% /run/user/0
/dev/mapper/mpatha1     xfs       10G    33M   10G   1% /data

[root@virthm ~]# sync
[root@virthm ~]# reboot

// The mount survives a reboot
[root@virthm ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs       7.1G  2.8G  4.3G  40% /
devtmpfs                devtmpfs  899M     0  899M   0% /dev
tmpfs                   tmpfs     910M     0  910M   0% /dev/shm
tmpfs                   tmpfs     910M  9.5M  901M   2% /run
tmpfs                   tmpfs     910M     0  910M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
/dev/mapper/mpatha1     xfs       10G    33M   10G   1% /data
tmpfs                   tmpfs     182M     0  182M   0% /run/user/0
```

Note: an iSCSI device's fstab entry must carry the _netdev option to mark it as a network device. Without it, boot takes a very long time and drops into maintenance mode, because the filesystem check runs before the network is up and cannot find the device.

With that, the iSCSI mount is complete.

# END
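
The fstab rule from the note above can be captured in a small helper. This is an illustrative sketch, not part of the original walkthrough; `fstab_entry` is a made-up name, and the UUID and mount point are the ones used in this tutorial:

```shell
#!/usr/bin/env bash
# Build an fstab entry for a network block device. The _netdev option
# defers the mount until networking is up, avoiding the boot hang
# described in the note above.
fstab_entry() {
    local uuid="$1" mountpoint="$2" fstype="$3"
    printf 'UUID=%s %s %s defaults,_netdev 0 0\n' "$uuid" "$mountpoint" "$fstype"
}

fstab_entry c7572b34-64c5-4f05-8c93-978f851e3ff2 /data xfs
# → UUID=c7572b34-64c5-4f05-8c93-978f851e3ff2 /data xfs defaults,_netdev 0 0
```

Appending that output to /etc/fstab (after checking it by eye) is less error-prone than hand-typing a long UUID.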
Enabling site-wide SSL on a shared virtual host

Date: 2018-11-30  Category: Tutorial

`The services below are provided by Alibaba Cloud`

## The result

This site runs on Alibaba Cloud's cheapest shared virtual hosting plan, at 50 RMB a year (200 MB of web space, a 200 MB database, and 10 GB of yearly traffic). To keep up with the times, it now serves everything over SSL: the virtual host, the CDN, and the image host all use certificate-encrypted HTTPS links.

The visible payoff is the green padlock in every major desktop and mobile browser.

This setup uses Alibaba Cloud's SSL certificate service, CDN acceleration, virtual hosting, DNS resolution, and CloudMonitor, plus Qiniu's CDN object storage with HTTPS.

## SSL certificates

Alibaba Cloud's security console currently offers free one-year Symantec trial certificates, issued one per request, and each domain can obtain separate certificates for three subdomains. This site obtained two: one for the www subdomain that serves visitors, and one for the cdn subdomain that serves images.

## Enable HTTPS on the virtual host

Log in to the Alibaba Cloud console, open the virtual host management panel, enable HTTPS under domain management, and select the certificate issued above.

## Enable HTTPS on the image host

Log in to the Qiniu console and enable HTTPS under domain management.

## Enable HTTPS on the CDN

Then enable HTTPS back-to-origin on the CDN page.

### Set the CDN CNAME alias

### Point the CDN at port 443

## END
Quickly setting up Docker monitoring

Date: 2018-11-29  Category: Tutorial

`All of the following runs on CentOS 7, logged in as root`

# Deployment

For convenience and repeatability, everything is deployed with scripts. The demo environment is three VMs, with the IPs 172.16.10.230, 172.16.10.231, and 172.16.10.232.



## Prepare the monitored nodes and sample applications

On each node, start the sysdig local monitor, the Weave Scope graphical monitor, the cAdvisor container monitor, and the Prometheus node exporter:

```bash
#!/bin/bash
# Clean environment
docker stop `docker ps -a -q`
docker rm `docker ps -a -q`

# Monitoring agents
scope launch 172.16.10.230 172.16.10.231 172.16.10.232
docker run -itd --name sysdig-$HOSTNAME --privileged \
    -v /dev:/host/dev \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    -v /boot:/boot:ro \
    -v /usr:/host/usr:ro \
    -v /lib/modules:/host/lib/modules:ro \
    sysdig/sysdig
docker run -itd --name cadvisor-$HOSTNAME \
    --volume=/:/rootfs:ro \
    --volume=/var/run:/var/run:rw \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:ro \
    --publish=8080:8080 --net=host \
    google/cadvisor:latest
docker run -itd --name prom-node-$HOSTNAME -p 9100:9100 \
    -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" \
    --net=host prom/node-exporter \
    --path.procfs /host/proc --path.sysfs /host/sys \
    --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

# Sample containers to monitor
docker run -itd --name centos-$HOSTNAME centos
docker run -itd --name httpd-$HOSTNAME -p 80:80 httpd

# Show running containers
docker ps
```

Plenty of monitoring data is now visible:



## Prepare the server

With the node agents running, start the Prometheus server and the Grafana front end on the main host (172.16.10.230):

```bash
#!/bin/bash
cd /root

# Write the Prometheus config (YAML does not allow tabs for indentation)
touch prometheus.yml
tee prometheus.yml <<- 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'codelab-monitor'
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090','localhost:8080','localhost:9100','172.16.10.232:8080','172.16.10.232:9100','172.16.10.231:8080','172.16.10.231:9100']
EOF

# Prometheus server
docker run -itd --name prometheus \
    -v /root/prometheus.yml:/etc/prometheus/prometheus.yml \
    -p 9090:9090 --net=host prom/prometheus

# Grafana
docker run -itd --name grafana-$HOSTNAME -p 3000:3000 \
    -e "GF_SERVER_ROOT_URL=http://grafana.server.name" \
    -e "GF_SECURITY_ADMIN_PASSWORD=secret" \
    --net=host grafana/grafana

# Show all containers
docker ps -a
```

The server side now looks like this:



The collected metrics can then be imported into Grafana for front-end dashboards.

With that, a basic Docker monitoring setup is complete.
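
The static_configs target list above follows a fixed pattern: every monitored host exposes cAdvisor on :8080 and node-exporter on :9100. It can therefore be generated instead of typed by hand. This is an illustrative sketch; `gen_targets` is a made-up helper, not part of the original scripts:

```shell
#!/usr/bin/env bash
# Emit the quoted, comma-separated Prometheus target list for a set of
# hosts, each scraped on :8080 (cAdvisor) and :9100 (node-exporter).
gen_targets() {
    local out="" host
    for host in "$@"; do
        out+="'${host}:8080','${host}:9100',"
    done
    printf '%s\n' "${out%,}"   # strip the trailing comma
}

gen_targets 172.16.10.230 172.16.10.231 172.16.10.232
```

The output can be pasted straight into the `targets:` line of prometheus.yml, which removes one easy source of typos when the node list grows.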