paulwong

          A Robust MongoDB Cluster: Using Replica Sets as Shards

          1.    MongoDB Sharding + Replica Sets

          A robust cluster design:

          multiple config servers; multiple mongos servers; every shard is a replica set; the write concern (w) set correctly

          Architecture diagram

          Notes:

          1.   In this lab, all mongod instances run on a single machine, distinguished by different ports and dbpath directories.

          2.   There are 9 mongod instances in total, organized into three replica sets (shard1, shard2, shard3), each with 3 members (1 primary, 2 secondaries).

          3.   The number of mongos processes is unlimited. It is recommended to run a mongos on each application server, so that each application server talks to its own local mongos; if one server goes down, the other application servers can still talk to their own mongos instances.

          4.   This lab simulates 2 application servers (2 mongos processes).

          5.   In production, every shard should be a replica set, so that the failure of a single server does not take the whole shard down.
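On the "set w correctly" point: with three-member shards, a write concern of w: "majority" needs acknowledgement from 2 of 3 members before a write is considered durable. A minimal sketch of that rule (the helper name is mine for illustration, not a MongoDB API):

```javascript
// Sketch: is a write acknowledged under a given write concern?
// w can be a number of members, or "majority" (more than half the replica set).
function writeSatisfied(acks, w, replSetSize) {
  if (w === "majority") return acks >= Math.floor(replSetSize / 2) + 1;
  return acks >= w;
}
```

For this lab's 3-member sets, "majority" means 2 acknowledgements, so an acknowledged write survives the loss of any single node.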

          Deployment environment

          Create the directories

          ~]# mkdir -p /data/mongodb/shard{1,2,3}/node{1,2,3}

          ~]# mkdir -p /data/mongodb/shard{1,2,3}/logs

          ~]# ls /data/mongodb/shard*

          /data/mongodb/shard1:

          logs  node1  node2  node3

          /data/mongodb/shard2:

          logs  node1  node2  node3

          /data/mongodb/shard3:

          logs  node1  node2  node3

          ~]# mkdir -p /data/mongodb/config/logs

          ~]# mkdir -p /data/mongodb/config/node{1,2,3}

          ~]# ls /data/mongodb/config/

          logs  node1  node2  node3

          ~]# mkdir -p /data/mongodb/mongos/logs
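The brace expansions above produce a 3×3 layout (3 shards × 3 data nodes). A throwaway sketch enumerating the resulting data directories:

```javascript
// Enumerate the 9 data directories created by mkdir -p /data/mongodb/shard{1,2,3}/node{1,2,3}
const dirs = [];
for (const s of [1, 2, 3])
  for (const n of [1, 2, 3])
    dirs.push(`/data/mongodb/shard${s}/node${n}`);
```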

          Start the config servers

          Config server plan:

          dbpath                          logpath                                   port
          /data/mongodb/config/node1      /data/mongodb/config/logs/node1.log       10000
          /data/mongodb/config/node2      /data/mongodb/config/logs/node2.log       20000
          /data/mongodb/config/node3      /data/mongodb/config/logs/node3.log       30000

          #Start the 3 instances per the plan: the same as starting a single config server, just repeated 3 times

          ~]# mongod --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000

          ~]# mongod --dbpath /data/mongodb/config/node2 --logpath /data/mongodb/config/logs/node2.log --logappend --fork --port 20000

          ~]# mongod --dbpath /data/mongodb/config/node3 --logpath /data/mongodb/config/logs/node3.log --logappend --fork --port 30000

          ~]# ps -ef|grep mongod|grep -v grep

          root      3983     1  0 11:10 ?        00:00:03 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000

          root      4063     1  0 11:13 ?        00:00:02 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node2 --logpath /data/mongodb/config/logs/node2.log --logappend --fork --port 20000

          root      4182     1  0 11:17 ?        00:00:03 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node3 --logpath /data/mongodb/config/logs/node3.log --logappend --fork --port 30000

          Start the router (mongos) servers

          Mongos server plan:

          dbpath      logpath                                   port
          (none)      /data/mongodb/mongos/logs/node1.log       40000
          (none)      /data/mongodb/mongos/logs/node2.log       50000

          #The number of mongos processes is unlimited; typically each application server runs one mongos

          ~]#mongos --port 40000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos1.log  --logappend --fork

          ~]#mongos --port 50000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos2.log  --logappend --fork

          ~]# ps -ef|grep mongos|grep -v grep

          root      4421     1  0 11:29 ?        00:00:00 /usr/local/mongodb/bin/mongos --port 40000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos1.log --logappend --fork

          root      4485     1  0 11:29 ?        00:00:00 /usr/local/mongodb/bin/mongos --port 50000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos2.log --logappend  --fork

          Configure the replica sets

          Following the plan, configure and start the three replica sets shard1, shard2 and shard3.

          #shard1 is used here to illustrate the configuration

          #Start the three mongod processes

          ~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node1 --logpath /data/mongodb/shard1/logs/node1.log --logappend --fork --port 10001

          ~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node2 --logpath /data/mongodb/shard1/logs/node2.log --logappend --fork --port 10002

          ~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node3 --logpath /data/mongodb/shard1/logs/node3.log --logappend --fork --port 10003

          #Initialize the replica set shard1

          ~]# /usr/local/mongodb/bin/mongo --port 10001

          MongoDB shell version: 2.6.6

          connecting to: 127.0.0.1:10001/test

          > use admin

          switched to db admin

          > rsconf={

          ...   "_id" : "shard1",

          ...   "members" : [

          ...       {

          ...           "_id" : 0,

          ...           "host" : "192.168.211.217:10001"

          ...       }

          ...   ]

          ... }

          {

                  "_id" : "shard1",

                  "members" : [

                          {

                                  "_id" : 0,

                                  "host" : "192.168.211.217:10001"

                          }

                  ]

          }

          >  rs.initiate(rsconf)

          {

                  "info" : "Config now saved locally.  Should come online in about a minute.",

                  "ok" : 1

          }

          > rs.add("192.168.211.217:10002")

          { "ok" : 1 }

          shard1:PRIMARY> rs.add("192.168.211.217:10003")

          { "ok" : 1 }

          shard1:PRIMARY>  rs.conf()

          {

                  "_id" : "shard1",

                  "version" : 3,

                  "members" : [

                          {

                                  "_id" : 0,

                                  "host" : "192.168.211.217:10001"

                          },

                          {

                                  "_id" : 1,

                                  "host" : "192.168.211.217:10002"

                          },

                          {

                                  "_id" : 2,

                                  "host" : "192.168.211.217:10003"

                          }

                  ]

          }
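As an alternative to rs.add()-ing members one at a time, the full member list can be passed to rs.initiate() in a single call. A sketch using this lab's shard2 hosts and ports (the config document mirrors the rs.conf() output format above):

```javascript
// Full replica-set config, suitable for passing to rs.initiate() in one step.
// Hosts/ports follow this lab's plan for shard2.
const shard2conf = {
  _id: "shard2",
  members: [
    { _id: 0, host: "192.168.211.217:20001" },
    { _id: 1, host: "192.168.211.217:20002" },
    { _id: 2, host: "192.168.211.217:20003" },
  ],
};
// In a mongo shell connected to port 20001 you would then run:
//   rs.initiate(shard2conf)
```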

          Configure shard2 and shard3 as replica sets in the same way as shard1.

          #The final replica set configurations are as follows:

          shard3:PRIMARY> rs.conf()

          {

                  "_id" : "shard3",

                  "version" : 3,

                  "members" : [

                          {

                                  "_id" : 0,

                                  "host" : "192.168.211.217:30001"

                          },

                          {

                                  "_id" : 1,

                                  "host" : "192.168.211.217:30002"

                          },

                          {

                                  "_id" : 2,

                                  "host" : "192.168.211.217:30003"

                          }

                  ]

          }

          ~]# /usr/local/mongodb/bin/mongo --port 20001

          MongoDB shell version: 2.6.6

          connecting to: 127.0.0.1:20001/test

          shard2:PRIMARY> rs.conf()

          {

                  "_id" : "shard2",

                  "version" : 3,

                  "members" : [

                          {

                                  "_id" : 0,

                                  "host" : "192.168.211.217:20001"

                          },

                          {

                                  "_id" : 1,

                                  "host" : "192.168.211.217:20002"

                          },

                          {

                                  "_id" : 2,

                                  "host" : "192.168.211.217:20003"

                          }

                  ]

          }

          ~]# /usr/local/mongodb/bin/mongo --port 10001

          MongoDB shell version: 2.6.6

          connecting to: 127.0.0.1:10001/test

          shard1:PRIMARY> rs.conf()

          {

                  "_id" : "shard1",

                  "version" : 3,

                  "members" : [

                          {

                                  "_id" : 0,

                                  "host" : "192.168.211.217:10001"

                          },

                          {

                                  "_id" : 1,

                                  "host" : "192.168.211.217:10002"

                          },

                          {

                                  "_id" : 2,

                                  "host" : "192.168.211.217:10003"

                          }

                  ]

          }

          The mongo-related processes and ports at this point are as follows:


          #This matches the environment plan table exactly

          Add the (replica set) shards

          #Connect to a mongos and switch to admin; you must connect to a router node here

          ~]# /usr/local/mongodb/bin/mongo --port 40000

          MongoDB shell version: 2.6.6

          connecting to: 127.0.0.1:40000/test

          mongos> use admin

          switched to db admin

          mongos> db

          admin

          mongos> db.runCommand({"addShard":"shard1/192.168.211.217:10001"})

          { "shardAdded" : "shard1", "ok" : 1 }

          mongos> db.runCommand({"addShard":"shard2/192.168.211.217:20001"})

          { "shardAdded" : "shard2", "ok" : 1 }

          mongos> db.runCommand({"addShard":"shard3/192.168.211.217:30001"})

          { "shardAdded" : "shard3", "ok" : 1 }

          mongos> db.runCommand({listshards:1})

          {

                  "shards" : [

                          {

                                  "_id" : "shard1",

                                  "host" : "shard1/192.168.211.217:10001,192.168.211.217:10002,192.168.211.217:10003"

                          },

                          {

                                  "_id" : "shard2",

                                  "host" : "shard2/192.168.211.217:20001,192.168.211.217:20002,192.168.211.217:20003"

                          },

                          {

                                  "_id" : "shard3",

                                  "host" : "shard3/192.168.211.217:30001,192.168.211.217:30002,192.168.211.217:30003"

                          }

                  ],

                  "ok" : 1

          }

          Enable sharding for databases and collections

          Enable sharding on a database with the command:

          > db.runCommand( { enablesharding : "<dbname>" } );

          Running this command lets the database span shards; if you skip this step, the database stays on a single shard.

          Once database sharding is enabled, the database's different collections will be stored on different shards,

          but each individual collection still lives entirely on one shard. To make a single collection shard its data, an additional operation on the collection is required.

          #Example: enable sharding for the test database; connect to a mongos process

          ~]# /usr/local/mongodb/bin/mongo --port 50000

          MongoDB shell version: 2.6.6

          connecting to: 127.0.0.1:50000/test

          mongos> use admin

          switched to db admin

          mongos> db.runCommand({"enablesharding":"test"})

          { "ok" : 1 }

          To make a single collection shard its data, assign the collection a shard key with the following command:

          > db.runCommand( { shardcollection : "<namespace>", key : <shardKeyPattern> } );

          Note:  a. The system automatically creates an index on the shard key of a sharded collection (the user can also create it in advance).

                   b. A sharded collection can have only one unique index, on the shard key; other unique indexes are not allowed.
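Note (b) can be sketched as a validation rule: whether a proposed unique index is acceptable given the shard key. This is a simplified sketch, and the helper name is mine; in real MongoDB the precise rule is that a unique index must be prefixed by the shard key fields.

```javascript
// Simplified check: a unique index on a sharded collection is only allowed
// if the shard-key fields form a prefix of the index key pattern.
function uniqueIndexAllowed(shardKey, indexKey) {
  const skFields = Object.keys(shardKey);
  const ixFields = Object.keys(indexKey);
  return skFields.every((field, i) => ixFields[i] === field);
}
```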

          #Shard the collection test.yujx

          mongos> use admin

          switched to db admin

          mongos> db.runCommand({"shardcollection":"test.yujx","key":{"_id":1}})

          { "collectionsharded" : "test.yujx", "ok" : 1 }

          Generate test data

          mongos> use test

          #Generate test data; this is only to illustrate the point, and you do not necessarily need this many documents

          mongos> for(var i=1;i<=888888;i++) db.yujx.save({"id":i,"a":123456789,"b":888888888,"c":100000000})

          mongos> db.yujx.count()

          271814             #the number of test documents used in this experiment

          Check collection sharding

          mongos> db.yujx.stats()

          {

                  "sharded" : true,

                  "systemFlags" : 1,

                  "userFlags" : 1,

                  "ns" : "test.yujx",

                  "count" : 271814,

                  "numExtents" : 19,

                  "size" : 30443168,

                  "storageSize" : 51773440,

                  "totalIndexSize" : 8862784,

                  "indexSizes" : {

                          "_id_" : 8862784

                  },

                  "avgObjSize" : 112,

                  "nindexes" : 1,

                  "nchunks" : 4,

                  "shards" : {

                          "shard1" : {

                                  "ns" : "test.yujx",

                                  "count" : 85563,

                                  "size" : 9583056,

                                  "avgObjSize" : 112,

                                  "storageSize" : 11182080,

                                  "numExtents" : 6,

                                  "nindexes" : 1,

                                  "lastExtentSize" : 8388608,

                                  "paddingFactor" : 1,

                                  "systemFlags" : 1,

                                  "userFlags" : 1,

                                  "totalIndexSize" : 2796192,

                                  "indexSizes" : {

                                          "_id_" : 2796192

                                  },

                                  "ok" : 1

                          },

                          "shard2" : {

                                  "ns" : "test.yujx",

                                  "count" : 180298,

                                  "size" : 20193376,

                                  "avgObjSize" : 112,

                                  "storageSize" : 37797888,

                                  "numExtents" : 8,

                                  "nindexes" : 1,

                                  "lastExtentSize" : 15290368,

                                  "paddingFactor" : 1,

                                  "systemFlags" : 1,

                                  "userFlags" : 1,

                                  "totalIndexSize" : 5862192,

                                  "indexSizes" : {

                                          "_id_" : 5862192

                                  },

                                  "ok" : 1

                          },

                          "shard3" : {

                                  "ns" : "test.yujx",

                                  "count" : 5953,

                                  "size" : 666736,

                                  "avgObjSize" : 112,

                                  "storageSize" : 2793472,

                                  "numExtents" : 5,

                                  "nindexes" : 1,

                                  "lastExtentSize" : 2097152,

                                  "paddingFactor" : 1,

                                  "systemFlags" : 1,

                                  "userFlags" : 1,

                                  "totalIndexSize" : 204400,

                                  "indexSizes" : {

                                          "_id_" : 204400

                                  },

                                  "ok" : 1

                          }

                  "ok" : 1

          }

          #Now connect to each shard's primary and query the number of documents it holds:


          The counts match the collection sharding stats shown above.

          Note: secondary members do not serve queries by default; run rs.slaveOk() (setSlaveOk) in the secondary's shell first.

          Check database sharding

          mongos> db.printShardingStatus()


          #Or connect to the config database through mongos and query it directly:

          mongos> db.shards.find()

          { "_id" : "shard1", "host" : "shard1/192.168.211.217:10001,192.168.211.217:10002,192.168.211.217:10003" }

          { "_id" : "shard2", "host" : "shard2/192.168.211.217:20001,192.168.211.217:20002,192.168.211.217:20003" }

          { "_id" : "shard3", "host" : "shard3/192.168.211.217:30001,192.168.211.217:30002,192.168.211.217:30003" }

          mongos> db.databases.find()

          { "_id" : "admin", "partitioned" : false, "primary" : "config" }

          { "_id" : "test", "partitioned" : true, "primary" : "shard3" }

          mongos> db.chunks.find()

          { "_id" : "test.yujx-_id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("54b8b58ddb2797bbb973a718") }, "shard" : "shard1" }

          { "_id" : "test.yujx-_id_ObjectId('54b8b58ddb2797bbb973a718')", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b58ddb2797bbb973a718") }, "max" : { "_id" : ObjectId("54b8b599db2797bbb973be59") }, "shard" : "shard3" }

          { "_id" : "test.yujx-_id_ObjectId('54b8b599db2797bbb973be59')", "lastmod" : Timestamp(4, 1), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b599db2797bbb973be59") }, "max" : { "_id" : ObjectId("54b8b6cfdb2797bbb9767ea2") }, "shard" : "shard2" }

          { "_id" : "test.yujx-_id_ObjectId('54b8b6cfdb2797bbb9767ea2')", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b6cfdb2797bbb9767ea2") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard1" }

          Single-point-of-failure analysis

           Since this experiment was done as an introduction to MongoDB, and simulating each failure would take too long, failure scenarios are not listed one by one here. For a failure-scenario analysis, see:
          http://blog.itpub.net/27000195/viewspace-1404402/

          posted on 2015-12-18 14:03 by paulwong, category: MONGODB
