
          Building a highly available sharded cluster with MongoDB replica sets: the Replica Sets + Sharding setup process

          References:  http://mongodb.blog.51cto.com/1071559/740131  http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/#sharding-setup-shard-collection

          Thanks to Mr.Sharp, who gave me a lot of very useful advice.

          Concepts
          A sharded cluster has the following components: shards, query routers and config servers.

          Shards:
          A shard is a MongoDB instance that holds a subset of a collection’s data. Each shard is either a single
          mongod instance or a replica set. In production, all shards are replica sets.

          Config Servers:
          Each config server is a mongod instance that holds metadata about the cluster. The metadata
          maps chunks to shards.

          Routing Instances:
          Each router is a mongos instance that routes the reads and writes from applications to the shards.
          Applications do not access the shards directly.

          Sharding is the only solution for some classes of deployments. Use sharded clusters if:
          1 your data set approaches or exceeds the storage capacity of a single MongoDB instance.
          2 the size of your system’s active working set will soon exceed the capacity of your system’s maximum RAM.
          3 a single MongoDB instance cannot meet the demands of your write operations, and all other approaches have
          not reduced contention.

           

           

          Begin the deployment and setup

          Deploy a Sharded Cluster
          1 Start the Config Server Database Instances
           1.1 create data directories for each of the three config server instances
           mkdir -p /opt/data/configdb1 /opt/data/configdb2 /opt/data/configdb3


           1.2 Start the three config server instances. Start each by issuing a command using the following syntax:

           (ignore this for now; the instances are started from config files in 1.4 below)

           1.3 config file
           [root@472322 configdb]# vim /etc/mongodb/37018.conf 
           # This is an example config file for MongoDB
           dbpath = /opt/data/configdb2
           port = 37018
           rest = true
           fork = true
           logpath = /var/log/mongodb12.log
           logappend = true
           directoryperdb = true
           configsvr = true

           [root@472322 configdb]# vim /etc/mongodb/37017.conf 
           # This is an example config file for MongoDB
           dbpath = /opt/data/configdb1
           port = 37017
           rest = true
           fork = true
           logpath = /var/log/mongodb11.log
           logappend = true
           directoryperdb = true
           configsvr = true

            [root@472322 configdb]# vim /etc/mongodb/37019.conf 
           # This is an example config file for MongoDB
           dbpath = /opt/data/configdb3
           port = 37019
           rest = true
           fork = true
           logpath = /var/log/mongodb13.log
           logappend = true
           directoryperdb = true
           configsvr = true


           1.4 start the config server instances
           /db/mongodb/bin/mongod -f /etc/mongodb/37019.conf
           /db/mongodb/bin/mongod -f /etc/mongodb/37018.conf
           /db/mongodb/bin/mongod -f /etc/mongodb/37017.conf
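           A quick optional sanity check (a sketch, assuming all three instances forked successfully): ping each config server from the mongo shell.
           /db/mongodb/bin/mongo --port 37017 --eval "printjson(db.adminCommand({ping:1}))"
           /db/mongodb/bin/mongo --port 37018 --eval "printjson(db.adminCommand({ping:1}))"
           /db/mongodb/bin/mongo --port 37019 --eval "printjson(db.adminCommand({ping:1}))"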

           

          2 Start the mongos Instances
           The mongos instances are lightweight and do not require data directories. You can run a mongos instance on a system that runs other cluster components,
           such as on an application server or a server running a mongod process. By default, a mongos instance runs on port 27017.
           /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019
           or run " nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 & "
           
           2.1 start mongos on default port 27017
           [root@472322 ~]# nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 &
           Tue Aug  6 07:40:09 /db/mongodb/bin/mongos db version v2.0.1, pdfile version 4.5 starting (--help for usage)
           Tue Aug  6 07:40:09 git version: 3a5cf0e2134a830d38d2d1aae7e88cac31bdd684
           Tue Aug  6 07:40:09 build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
           Tue Aug  6 07:40:09 SyncClusterConnection connecting to [20.10x.91.119:37017]
           Tue Aug  6 07:40:09 SyncClusterConnection connecting to [20.10x.91.119:37018]
           Tue Aug  6 07:40:09 SyncClusterConnection connecting to [20.10x.91.119:37019]
           Tue Aug  6 07:40:09 [Balancer] about to contact config servers and shards
           Tue Aug  6 07:40:09 [websvr] admin web console waiting for connections on port 28017
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37017]
           Tue Aug  6 07:40:09 [mongosMain] waiting for connections on port 27017
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37018]
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37019]
           Tue Aug  6 07:40:09 [Balancer] config servers and shards contacted successfully
           Tue Aug  6 07:40:09 [Balancer] balancer id: 472322.ea.com:27017 started at Aug  6 07:40:09
           Tue Aug  6 07:40:09 [Balancer] created new distributed lock for balancer on 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37017]
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37018]
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37019]
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37017]
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37018]
           Tue Aug  6 07:40:09 [Balancer] SyncClusterConnection connecting to [20.10x.91.119:37019]
           Tue Aug  6 07:40:09 [LockPinger] creating distributed lock ping thread for 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 and process 472322.ea.com:27017:1375774809:1804289383 (sleeping for 30000ms)
           Tue Aug  6 07:40:10 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8598caa3e21e9888bd3
           Tue Aug  6 07:40:10 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.

           Tue Aug  6 07:40:20 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8648caa3e21e9888bd4
           Tue Aug  6 07:40:20 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
           Tue Aug  6 07:40:30 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a86e8caa3e21e9888bd5
           Tue Aug  6 07:40:30 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.
           Tue Aug  6 07:40:40 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' acquired, ts : 5200a8788caa3e21e9888bd6
           Tue Aug  6 07:40:40 [Balancer] distributed lock 'balancer/472322.ea.com:27017:1375774809:1804289383' unlocked.

           2.2 Start the other mongos instances on the appointed ports 27018 and 27019
           nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 --port 27018 --chunkSize 1 --logpath /var/log/mongos2.log --fork &
           nohup /db/mongodb/bin/mongos --configdb 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019 --port 27019 --chunkSize 1 --logpath /var/log/mongos3.log --fork &
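           The same options can also go into a config file to keep the command line short (a sketch; the file name /etc/mongodb/mongos27018.conf is hypothetical):
           # /etc/mongodb/mongos27018.conf (hypothetical name)
           configdb = 20.10x.91.119:37017,20.10x.91.119:37018,20.10x.91.119:37019
           port = 27018
           chunkSize = 1
           logpath = /var/log/mongos2.log
           logappend = true
           fork = true
           # then start it with: /db/mongodb/bin/mongos -f /etc/mongodb/mongos27018.conf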

           

          3 [root@472322 ~]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port 27017
             The replica sets have not been created yet, so that comes next.

           

          4 Prepare the replica sets
           4.1 The first replica set (rpl1)
           4.1.1 create the data directories
           mkdir -p /opt/db/mongodb-data/db27
           mkdir -p /opt/db/mongodb-data/db28
           mkdir -p /opt/db/mongodb-data/db29
           
           4.1.2 set the config file 27027.conf
           [root@472322 mongodb]# vi  27027.conf
           dbpath =/opt/db/mongodb-data/db27
           port = 27027
           rest = true
           fork = true
           logpath = /var/log/mongodb27.log
           logappend = true
           replSet = rpl1
           #diaglog = 3
           profile = 3
           slowms=50
           oplogSize=4000
           
           4.1.3 set the config file 27029.conf
           [root@472322 mongodb]#  vi  27029.conf
           # This is an example config file for MongoDB
           dbpath =/opt/db/mongodb-data/db29
           port = 27029
           rest = true
           fork = true
           logpath = /var/log/mongodb29.log
           logappend = true
           replSet = rpl1
           #diaglog = 3
           profile = 3
           slowms=50
           oplogSize=4000
           
           4.1.4 set the config file 27028.conf
           [root@472322 mongodb]#  vi  27028.conf
           # This is an example config file for MongoDB
           dbpath =/opt/db/mongodb-data/db28
           port = 27028
           rest = true
           fork = true
           logpath = /var/log/mongodb28.log
           logappend = true
           replSet = rpl1
           #diaglog = 3
           profile = 3
           slowms=50
           oplogSize=4000
           
           4.1.5 start replica set
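            First start the three mongod instances with the config files above and connect to one of them (a sketch, assuming the files are saved under /etc/mongodb/ like the config server files):
            /db/mongodb/bin/mongod -f /etc/mongodb/27027.conf
            /db/mongodb/bin/mongod -f /etc/mongodb/27028.conf
            /db/mongodb/bin/mongod -f /etc/mongodb/27029.conf
            /db/mongodb/bin/mongo --port 27027
            Then define and initiate the replica set configuration in that shell: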
           config = {_id: 'rpl1', members: [
             {_id: 0, host: '127.0.0.1:27027'},
             {_id: 1, host: '127.0.0.1:27028'},
             {_id: 3, host: '127.0.0.1:27029', arbiterOnly: true}
            ]};
            > config = {_id: 'rpl1', members: [
            ... {_id: 0, host: '127.0.0.1:27027'},
            ... {_id: 1, host: '127.0.0.1:27028'},
            ... {_id: 3, host: '127.0.0.1:27029', arbiterOnly: true}
            ... ]};
            {
              "_id" : "rpl1",
              "members" : [
                {
                  "_id" : 0,
                  "host" : "127.0.0.1:27027"
                },
                {
                  "_id" : 1,
                  "host" : "127.0.0.1:27028"
                },
                {
                  "_id" : 3,
                  "host" : "127.0.0.1:27029",
                  "arbiterOnly" : true
                }
              ]
            }

            rs.initiate(config);
            > rs.initiate(config);
            {
              "info" : "Config now saved locally.  Should come online in about a minute.",
              "ok" : 1
            }
            rs.status();
            PRIMARY> rs.status();
            {
              "set" : "rpl1",
              "date" : ISODate("2013-08-06T09:18:39Z"),
              "myState" : 1,
              "members" : [
                {
                  "_id" : 0,
                  "name" : "127.0.0.1:27027",
                  "health" : 1,
                  "state" : 1,
                  "stateStr" : "PRIMARY",
                  "optime" : {
                    "t" : 1375780672000,
                    "i" : 1
                  },
                  "optimeDate" : ISODate("2013-08-06T09:17:52Z"),
                  "self" : true
                },
                {
                  "_id" : 1,
                  "name" : "127.0.0.1:27028",
                  "health" : 1,
                  "state" : 2,
                  "stateStr" : "SECONDARY",
                  "uptime" : 29,
                  "optime" : {
                    "t" : 1375780672000,
                    "i" : 1
                  },
                  "optimeDate" : ISODate("2013-08-06T09:17:52Z"),
                  "lastHeartbeat" : ISODate("2013-08-06T09:18:38Z"),
                  "pingMs" : 0
                },
                {
                  "_id" : 3,
                  "name" : "127.0.0.1:27029",
                  "health" : 1,
                  "state" : 7,
                  "stateStr" : "ARBITER",
                  "uptime" : 31,
                  "optime" : {
                    "t" : 0,
                    "i" : 0
                  },
                  "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                  "lastHeartbeat" : ISODate("2013-08-06T09:18:38Z"),
                  "pingMs" : 0
                }
              ],
              "ok" : 1
            }
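             A quick way to double-check the member hosts as stored in the replica set configuration (note they are 127.0.0.1 here, which will matter when adding the set as a shard in step 5):
             rs.conf();   // prints the saved configuration, including each member's host string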

           
           4.2 The second replica set (rpl2)
           Do the same as in 4.1 for ports 27037, 27038, 27039 (data directories db37/db38/db39, replSet = rpl2), then initiate:
            config = {_id: 'rpl2', members: [
             {_id: 0, host: '127.0.0.1:27037'},
             {_id: 1, host: '127.0.0.1:27038'},
             {_id: 3, host: '127.0.0.1:27039', arbiterOnly: true}
            ]};
            rs.initiate(config);
            rs.status();

           

          5 Add Shards to the Cluster
           A shard can be a standalone mongod or a replica set. In a production environment, each shard should be a replica set.
           5.1 From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:
           [root@472322 mongodb]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port 27017
           MongoDB shell version: 2.0.1
           connecting to: 20.10x.91.119:27017/test
           mongos> 
              
           5.2 Add the replica set rpl1 as a shard
           sh.addShard( "rpl1/127.0.0.1:27027" );
           mongos> sh.addShard( "rpl1/127.0.0.1:27027" );
           command failed: {
             "ok" : 0,
             "errmsg" : "can't use localhost as a shard since all shards need to communicate. either use all shards and configdbs in localhost or all in actual IPs  host: 127.0.0.1:27027 isLocalHost:1"
           }
           Tue Aug  6 09:41:16 uncaught exception: error { "$err" : "can't find a shard to put new db on", "code" : 10185 }
           mongos> 
           We can't use localhost or 127.0.0.1 here, so switch to the real IP address "20.10x.91.119".
           mongos> sh.addShard( "rpl1/20.10x.91.119:27027" );
           command failed: {
             "ok" : 0,
             "errmsg" : "in seed list rpl1/20.10x.91.119:27027, host 20.10x.91.119:27027 does not belong to replica set rpl1"
           }
           Tue Aug  6 09:42:26 uncaught exception: error { "$err" : "can't find a shard to put new db on", "code" : 10185 }
           mongos> 
           sh.addShard( "rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029" );
           
           So the replica sets were configured with 127.0.0.1 as the member hosts, and 20.10x.91.119 is not recognized as a member of rpl1. That is wrong; the replica sets have to be reconfigured with the real IP addresses.

           5.2.1 delete all db files, restart the mongod instances, then configure the replica sets again
            
            config = {_id: 'rpl1', members: [
             {_id: 0, host: '20.10x.91.119:27027'},
             {_id: 1, host: '20.10x.91.119:27028'},
             {_id: 3, host: '20.10x.91.119:27029', arbiterOnly: true}
             ]};
            rs.initiate(config);
            rs.status();
            
            config = {_id: 'rpl2', members: [
             {_id: 0, host: '20.10x.91.119:27037'},
             {_id: 1, host: '20.10x.91.119:27038'},
             {_id: 3, host: '20.10x.91.119:27039', arbiterOnly: true}
             ]};
            rs.initiate(config);
            rs.status();
           
           5.2.2 Add the replica set as a shard again; this time it works.
           [root@472322 ~]# /db/mongodb/bin/mongo --host 20.10x.91.119 --port 27017
           MongoDB shell version: 2.0.1
           connecting to: 20.10x.91.119:27017/test
           mongos> sh.addShard( "rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029" );
           mongos>
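           To confirm the shard was registered, run listShards from the mongos shell (shown with its output in 7.3 below):
           db.adminCommand( { listShards: 1 } );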

           

          6 Enable Sharding for a Database
           Before you can shard a collection, you must enable sharding for the collection’s database.
           Enabling sharding for a database does not redistribute data, but it makes it possible to shard the collections in that database.
             6.1 From a mongo shell, connect to the mongos instance. Issue a command using the following syntax:
           sh.enableSharding("<database>")

             6.2 Optionally, you can enable sharding for a database using the enableSharding command, which uses the following syntax:
           db.runCommand( { enableSharding: <database> } )
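           For example, for the test database that is sharded later in step 7.4, either form works:
           sh.enableSharding("test");
           db.adminCommand( { enableSharding: "test" } );   // adminCommand runs it against the admin database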

           

          7  Convert a Replica Set to a Replicated Sharded Cluster
           A brief overview: the procedure, from a high level, is as follows:
           7.1 Create or select a 3-member replica set and insert some data into a collection. (done)
           7.2 Start the config databases and create a cluster with a single shard. (done)
           7.3 Create a second replica set with three new mongod instances. (done)
           7.4 Add the second replica set as a shard in the cluster. (not done yet)
           7.5 Enable sharding on the desired collection or collections. (not done yet)

            Now the actual operations:
           7.2 Add the second replica set as a shard in the cluster.
           sh.addShard( "rpl2/20.10x.91.119:27037,20.10x.91.119:27038,20.10x.91.119:27039" );
           It works.
           
           7.3  Verify that both shards are properly configured by running the listShards command.
           mongos> use admin
           switched to db admin
           mongos> db.runCommand({listShards:1})
           {
             "shards" : [
               {
                 "_id" : "rpl1",
                 "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028,20.10x.91.119:27029"
               },
               {
                 "_id" : "rpl2",
                 "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038,20.10x.91.119:27039"
               }
             ],
             "ok" : 1
           }
           mongos> 
           
           7.4 Insert 1,000,000 documents into test.tickets and shard the tickets collection.
           mongos> db.runCommand( { enableSharding : "test" } );
           { "ok" : 1 }
           mongos> use admin
           mongos> db.runCommand( { shardCollection : "test.tickets", key : {"number":1} });
           {
             "proposedKey" : {
               "number" : 1
             },
             "curIndexes" : [
               {
                 "v" : 1,
                 "key" : {
                   "_id" : 1
                 },
                 "ns" : "test.tickets",
                 "name" : "_id_"
               }
             ],
             "ok" : 0,
             "errmsg" : "please create an index over the sharding key before sharding."
           }
           mongos> 
           To shard on a key, an index on the shard key must be created first:
           db.tickets.ensureIndex({ip:1});
           db.runCommand( { shardCollection : "test.tickets", key : {"ip":1} })
           
           mongos> db.runCommand( { shardCollection : "test.tickets", key : {"ip":1} })
           { "collectionsharded" : "test.tickets", "ok" : 1 }
           mongos> 
           The collection tickets is now sharded!
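           A quick way to confirm (the same check used for tc3 in 7.8.3): the collection stats should now report "sharded" : true.
           db.tickets.stats();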
           
           7.5 check in mongos
           use test;
           db.stats();
           db.printShardingStatus();
           There are no data chunks on rpl2, so something is wrong with the sharded collection.
           
           
           7.6 I decided to re-add the second shard: first remove it, then add it again.
           7.6.1 Use the listShards command, as in the following:
           db.adminCommand( { listShards: 1 } );
           to get the shard name
           
           7.6.2 Remove Chunks from the Shard
           db.runCommand( { removeShard: "rpl2" } );
           mongos> use admin
           switched to db admin
           mongos> db.runCommand( { removeShard: "rpl2" } );
           {
             "msg" : "draining started successfully",
             "state" : "started",
             "shard" : "rpl2",
             "ok" : 1
           }
           mongos> 
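            Running the same command again later reports the draining progress; repeat it until the returned state is "completed".
            db.runCommand( { removeShard: "rpl2" } );   // "state" : "ongoing" while chunks are moved off, "completed" when done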
           
           check
           db.adminCommand( { listShards: 1 } ); -- it is ok. (rpl2 was then added back as a shard with the same sh.addShard command as in 7.2.) However, when I insert data into tickets directly on mongo 27027, the primary of the first shard, the data is not distributed to the second shard.
           Why? I asked HYG, and he told me that data must be inserted through the mongos window; otherwise it will not be distributed. So I will try that next.
           
           7.7 Insert data in the mongos command window
           db.tc2.ensureIndex({xx:1});
           db.runCommand( { shardCollection : "test.tc2", key : {"xx":1} })
           for( var i = 1; i < 2000000; i++ ) db.tc2.insert({ "xx":i, "fapp" : "84eb9fb556074d6481e31915ac2427f0",  "dne" : "ueeDEhIB6tmP4cfY43NwWvAenzKWx19znmbheAuBl4j39U8uFXS1QGi2GCMHO7L21szgeF6Iquqmnw8kfJbvZUs/11RyxcoRm+otbUJyPPxFkevzv4SrI3kGxczG6Lsd19NBpyskaElCTtVKxwvQyBNgciXYq6cO/8ntV2C6cwQ=",  "eml" : "q5x68h3qyVBqp3ollJrY3XEkXECjPEncXhbJjga+3hYoa4zYNhrNmBN91meL3o7jsBI/N6qe2bb2BOnOJNAnBMDzhNmPqJKG/ZVLfT9jpNkUD/pQ4oJENMv72L2GZoiyym2IFT+oT3N0KFhcv08b9ke9tm2EHTGcBsGg1R40Ah+Y/5z89OI4ERmI/48qjvaw",  "uid" : "sQt92NUPr3CpCVnbpotU2lqRNVfZD6k/9TGW62UT7ExZYF8Dp1cWIVQoYNQVyFRLkxjmCoa8m6DiLiL/fPdG1k7WYGUH4ueXXK2yfVn/AGUk3pQbIuh7nFbqZCrAQtEY7gU0aIGC4sotAE8kghvCa5qWnSX0SWTViAE/esaWORo=",  "agt" : "PHP/eaSSOPlugin",  "sd" : "S15345853557133877",  "pt" : 3795,  "ip" : "211.223.160.34",  "av" : "http://secure.download.dm.origin.com/production/avatar/prod/1/599/40x40.JPEG",  "nex" : ISODate("2013-01-18T04:00:32.41Z"),  "exy" : ISODate("2013-01-18T01:16:32.015Z"),  "chk" : "sso4648609868740971",  "aid" : "Ciyvab0tregdVsBtboIpeChe4G6uzC1v5_-SIxmvSLLINJClUkXJhNvWkSUnajzi8xsv1DGYm0D3V46LLFI-61TVro9-HrZwyRwyTf9NwYIyentrgAY_qhs8fh7unyfB",  "tid" : "rUqFhONysi0yA==13583853927872",  "date" : ISODate("2012-02-17T01:16:32.787Z"),  "v" : "2.0.0",  "scope" : [],  "rug" : false,  "schk" : true,  "fjs" : false,  "sites" : [{      "name" : "Origin.com",      "id" : "nb09xrt8384147bba27c2f12e112o8k9",      "last" : ISODate("2013-01-17T01:16:32.787Z"),      "_id" : ObjectId("50f750f06a56028661000f20")    }]});
           
           Watch the sharding status with the command db.printShardingStatus():
           mongos> db.printShardingStatus();
           --- Sharding Status --- 
             sharding version: { "_id" : 1, "version" : 3 }
             shards:
             {  "_id" : "rpl1",  "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
             {  "_id" : "rpl2",  "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
             databases:
             {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
             {  "_id" : "test",  "partitioned" : true,  "primary" : "rpl1" }
               test.tc chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
               test.tc2 chunks:
                   rpl1    62
                   rpl2    61
                 too many chunks to print, use verbose if you want to force print
               test.tickets chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
             {  "_id" : "adin",  "partitioned" : false,  "primary" : "rpl1" }

           mongos> 
           
           To see the detail behind "too many chunks to print, use verbose if you want to force print", pass a verbose argument:
           db.printShardingStatus("vvvv");
           mongos> db.printShardingStatus("vvvv")
           --- Sharding Status --- 
             sharding version: { "_id" : 1, "version" : 3 }
             shards:
             {  "_id" : "rpl1",  "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
             {  "_id" : "rpl2",  "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
             databases:
             {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
             {  "_id" : "test",  "partitioned" : true,  "primary" : "rpl1" }
               test.tc chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
               test.tc2 chunks:
                   rpl1    62
                   rpl2    61
                 { "xx" : { $minKey : 1 } } -->> { "xx" : 1 } on : rpl1 { "t" : 59000, "i" : 1 }
                 { "xx" : 1 } -->> { "xx" : 2291 } on : rpl1 { "t" : 2000, "i" : 2 }
                 { "xx" : 2291 } -->> { "xx" : 4582 } on : rpl1 { "t" : 2000, "i" : 4 }
                 { "xx" : 4582 } -->> { "xx" : 6377 } on : rpl1 { "t" : 2000, "i" : 6 }
                 { "xx" : 6377 } -->> { "xx" : 8095 } on : rpl1 { "t" : 2000, "i" : 8 }
                 { "xx" : 8095 } -->> { "xx" : 9813 } on : rpl1 { "t" : 2000, "i" : 10 }
                 { "xx" : 9813 } -->> { "xx" : 11919 } on : rpl1 { "t" : 2000, "i" : 12 }
                 { "xx" : 11919 } -->> { "xx" : 14210 } on : rpl1 { "t" : 2000, "i" : 14 }
                 { "xx" : 14210 } -->> { "xx" : 18563 } on : rpl1 { "t" : 2000, "i" : 16 }
                 { "xx" : 18563 } -->> { "xx" : 23146 } on : rpl1 { "t" : 2000, "i" : 18 }
                 { "xx" : 23146 } -->> { "xx" : 27187 } on : rpl1 { "t" : 2000, "i" : 20 }
                 { "xx" : 27187 } -->> { "xx" : 31770 } on : rpl1 { "t" : 2000, "i" : 22 }
                 { "xx" : 31770 } -->> { "xx" : 35246 } on : rpl1 { "t" : 2000, "i" : 24 }
                 { "xx" : 35246 } -->> { "xx" : 38683 } on : rpl1 { "t" : 2000, "i" : 26 }
                 { "xx" : 38683 } -->> { "xx" : 42120 } on : rpl1 { "t" : 2000, "i" : 28 }
                 { "xx" : 42120 } -->> { "xx" : 45557 } on : rpl1 { "t" : 2000, "i" : 30 }
                 { "xx" : 45557 } -->> { "xx" : 48994 } on : rpl1 { "t" : 2000, "i" : 32 }
                 { "xx" : 48994 } -->> { "xx" : 53242 } on : rpl1 { "t" : 2000, "i" : 34 }
                 { "xx" : 53242 } -->> { "xx" : 62409 } on : rpl1 { "t" : 2000, "i" : 36 }
                 { "xx" : 62409 } -->> { "xx" : 71576 } on : rpl1 { "t" : 2000, "i" : 38 }
                 { "xx" : 71576 } -->> { "xx" : 80743 } on : rpl1 { "t" : 2000, "i" : 40 }
                 { "xx" : 80743 } -->> { "xx" : 89910 } on : rpl1 { "t" : 2000, "i" : 42 }
                 { "xx" : 89910 } -->> { "xx" : 99077 } on : rpl1 { "t" : 2000, "i" : 44 }
                 { "xx" : 99077 } -->> { "xx" : 119382 } on : rpl1 { "t" : 2000, "i" : 46 }
                 { "xx" : 119382 } -->> { "xx" : 133133 } on : rpl1 { "t" : 2000, "i" : 48 }
                 { "xx" : 133133 } -->> { "xx" : 146884 } on : rpl1 { "t" : 2000, "i" : 50 }
                 { "xx" : 146884 } -->> { "xx" : 160635 } on : rpl1 { "t" : 2000, "i" : 52 }
                 { "xx" : 160635 } -->> { "xx" : 178457 } on : rpl1 { "t" : 2000, "i" : 54 }
                 { "xx" : 178457 } -->> { "xx" : 200508 } on : rpl1 { "t" : 2000, "i" : 56 }
                 { "xx" : 200508 } -->> { "xx" : 214259 } on : rpl1 { "t" : 2000, "i" : 58 }
                 { "xx" : 214259 } -->> { "xx" : 228010 } on : rpl1 { "t" : 2000, "i" : 60 }
                 { "xx" : 228010 } -->> { "xx" : 241761 } on : rpl1 { "t" : 2000, "i" : 62 }
                 { "xx" : 241761 } -->> { "xx" : 255512 } on : rpl1 { "t" : 2000, "i" : 64 }
                 { "xx" : 255512 } -->> { "xx" : 279630 } on : rpl1 { "t" : 2000, "i" : 66 }
                 { "xx" : 279630 } -->> { "xx" : 301723 } on : rpl1 { "t" : 2000, "i" : 68 }
                 { "xx" : 301723 } -->> { "xx" : 317196 } on : rpl1 { "t" : 2000, "i" : 70 }
                 { "xx" : 317196 } -->> { "xx" : 336533 } on : rpl1 { "t" : 2000, "i" : 72 }
                 { "xx" : 336533 } -->> { "xx" : 359500 } on : rpl1 { "t" : 2000, "i" : 74 }
                 { "xx" : 359500 } -->> { "xx" : 385354 } on : rpl1 { "t" : 2000, "i" : 76 }
                 { "xx" : 385354 } -->> { "xx" : 400837 } on : rpl1 { "t" : 2000, "i" : 78 }
                 { "xx" : 400837 } -->> { "xx" : 422259 } on : rpl1 { "t" : 2000, "i" : 80 }
                 { "xx" : 422259 } -->> { "xx" : 444847 } on : rpl1 { "t" : 2000, "i" : 82 }
                 { "xx" : 444847 } -->> { "xx" : 472084 } on : rpl1 { "t" : 2000, "i" : 84 }
                 { "xx" : 472084 } -->> { "xx" : 490796 } on : rpl1 { "t" : 2000, "i" : 86 }
                 { "xx" : 490796 } -->> { "xx" : 509498 } on : rpl1 { "t" : 2000, "i" : 88 }
                 { "xx" : 509498 } -->> { "xx" : 534670 } on : rpl1 { "t" : 2000, "i" : 90 }
                 { "xx" : 534670 } -->> { "xx" : 561927 } on : rpl1 { "t" : 2000, "i" : 92 }
                 { "xx" : 561927 } -->> { "xx" : 586650 } on : rpl1 { "t" : 2000, "i" : 94 }
                 { "xx" : 586650 } -->> { "xx" : 606316 } on : rpl1 { "t" : 2000, "i" : 96 }
                 { "xx" : 606316 } -->> { "xx" : 632292 } on : rpl1 { "t" : 2000, "i" : 98 }
                 { "xx" : 632292 } -->> { "xx" : 650179 } on : rpl1 { "t" : 2000, "i" : 100 }
                 { "xx" : 650179 } -->> { "xx" : 670483 } on : rpl1 { "t" : 2000, "i" : 102 }
                 { "xx" : 670483 } -->> { "xx" : 695898 } on : rpl1 { "t" : 2000, "i" : 104 }
                 { "xx" : 695898 } -->> { "xx" : 719853 } on : rpl1 { "t" : 2000, "i" : 106 }
                 { "xx" : 719853 } -->> { "xx" : 734751 } on : rpl1 { "t" : 2000, "i" : 108 }
                 { "xx" : 734751 } -->> { "xx" : 757143 } on : rpl1 { "t" : 2000, "i" : 110 }
                 { "xx" : 757143 } -->> { "xx" : 773800 } on : rpl1 { "t" : 2000, "i" : 112 }
                 { "xx" : 773800 } -->> { "xx" : 796919 } on : rpl1 { "t" : 2000, "i" : 114 }
                 { "xx" : 796919 } -->> { "xx" : 814262 } on : rpl1 { "t" : 2000, "i" : 116 }
                 { "xx" : 814262 } -->> { "xx" : 837215 } on : rpl1 { "t" : 2000, "i" : 118 }
                 { "xx" : 837215 } -->> { "xx" : 855766 } on : rpl1 { "t" : 2000, "i" : 120 }
                 { "xx" : 855766 } -->> { "xx" : 869517 } on : rpl1 { "t" : 2000, "i" : 122 }
                 { "xx" : 869517 } -->> { "xx" : 883268 } on : rpl2 { "t" : 59000, "i" : 0 }
                 { "xx" : 883268 } -->> { "xx" : 897019 } on : rpl2 { "t" : 58000, "i" : 0 }
                 { "xx" : 897019 } -->> { "xx" : 919595 } on : rpl2 { "t" : 57000, "i" : 0 }
                 { "xx" : 919595 } -->> { "xx" : 946611 } on : rpl2 { "t" : 56000, "i" : 0 }
                 { "xx" : 946611 } -->> { "xx" : 966850 } on : rpl2 { "t" : 55000, "i" : 0 }
                 { "xx" : 966850 } -->> { "xx" : 989291 } on : rpl2 { "t" : 54000, "i" : 0 }
                 { "xx" : 989291 } -->> { "xx" : 1008580 } on : rpl2 { "t" : 53000, "i" : 0 }
                 { "xx" : 1008580 } -->> { "xx" : 1022331 } on : rpl2 { "t" : 52000, "i" : 0 }
                 { "xx" : 1022331 } -->> { "xx" : 1036082 } on : rpl2 { "t" : 51000, "i" : 0 }
                 { "xx" : 1036082 } -->> { "xx" : 1060888 } on : rpl2 { "t" : 50000, "i" : 0 }
                 { "xx" : 1060888 } -->> { "xx" : 1088121 } on : rpl2 { "t" : 49000, "i" : 0 }
                 { "xx" : 1088121 } -->> { "xx" : 1101872 } on : rpl2 { "t" : 48000, "i" : 0 }
                 { "xx" : 1101872 } -->> { "xx" : 1122160 } on : rpl2 { "t" : 47000, "i" : 0 }
                 { "xx" : 1122160 } -->> { "xx" : 1143537 } on : rpl2 { "t" : 46000, "i" : 0 }
                 { "xx" : 1143537 } -->> { "xx" : 1168372 } on : rpl2 { "t" : 45000, "i" : 0 }
                 { "xx" : 1168372 } -->> { "xx" : 1182123 } on : rpl2 { "t" : 44000, "i" : 0 }
                 { "xx" : 1182123 } -->> { "xx" : 1201952 } on : rpl2 { "t" : 43000, "i" : 0 }
                 { "xx" : 1201952 } -->> { "xx" : 1219149 } on : rpl2 { "t" : 42000, "i" : 0 }
                 { "xx" : 1219149 } -->> { "xx" : 1232900 } on : rpl2 { "t" : 41000, "i" : 0 }
                 { "xx" : 1232900 } -->> { "xx" : 1247184 } on : rpl2 { "t" : 40000, "i" : 0 }
                 { "xx" : 1247184 } -->> { "xx" : 1270801 } on : rpl2 { "t" : 39000, "i" : 0 }
                 { "xx" : 1270801 } -->> { "xx" : 1294343 } on : rpl2 { "t" : 38000, "i" : 0 }
                 { "xx" : 1294343 } -->> { "xx" : 1313250 } on : rpl2 { "t" : 37000, "i" : 0 }
                 { "xx" : 1313250 } -->> { "xx" : 1336332 } on : rpl2 { "t" : 36000, "i" : 0 }
                 { "xx" : 1336332 } -->> { "xx" : 1358840 } on : rpl2 { "t" : 35000, "i" : 0 }
                 { "xx" : 1358840 } -->> { "xx" : 1372591 } on : rpl2 { "t" : 34000, "i" : 0 }
                 { "xx" : 1372591 } -->> { "xx" : 1386342 } on : rpl2 { "t" : 33000, "i" : 0 }
                 { "xx" : 1386342 } -->> { "xx" : 1400093 } on : rpl2 { "t" : 32000, "i" : 0 }
                 { "xx" : 1400093 } -->> { "xx" : 1418372 } on : rpl2 { "t" : 31000, "i" : 0 }
                 { "xx" : 1418372 } -->> { "xx" : 1440590 } on : rpl2 { "t" : 30000, "i" : 0 }
                 { "xx" : 1440590 } -->> { "xx" : 1461034 } on : rpl2 { "t" : 29000, "i" : 0 }
                 { "xx" : 1461034 } -->> { "xx" : 1488305 } on : rpl2 { "t" : 28000, "i" : 0 }
                 { "xx" : 1488305 } -->> { "xx" : 1510326 } on : rpl2 { "t" : 27000, "i" : 0 }
                 { "xx" : 1510326 } -->> { "xx" : 1531986 } on : rpl2 { "t" : 26000, "i" : 0 }
                 { "xx" : 1531986 } -->> { "xx" : 1545737 } on : rpl2 { "t" : 25000, "i" : 0 }
                 { "xx" : 1545737 } -->> { "xx" : 1559488 } on : rpl2 { "t" : 24000, "i" : 0 }
                 { "xx" : 1559488 } -->> { "xx" : 1576755 } on : rpl2 { "t" : 23000, "i" : 0 }
                 { "xx" : 1576755 } -->> { "xx" : 1596977 } on : rpl2 { "t" : 22000, "i" : 0 }
                 { "xx" : 1596977 } -->> { "xx" : 1619863 } on : rpl2 { "t" : 21000, "i" : 0 }
                 { "xx" : 1619863 } -->> { "xx" : 1633614 } on : rpl2 { "t" : 20000, "i" : 0 }
                 { "xx" : 1633614 } -->> { "xx" : 1647365 } on : rpl2 { "t" : 19000, "i" : 0 }
                 { "xx" : 1647365 } -->> { "xx" : 1668372 } on : rpl2 { "t" : 18000, "i" : 0 }
                 { "xx" : 1668372 } -->> { "xx" : 1682123 } on : rpl2 { "t" : 17000, "i" : 0 }
                 { "xx" : 1682123 } -->> { "xx" : 1695874 } on : rpl2 { "t" : 16000, "i" : 0 }
                 { "xx" : 1695874 } -->> { "xx" : 1711478 } on : rpl2 { "t" : 15000, "i" : 0 }
                 { "xx" : 1711478 } -->> { "xx" : 1738684 } on : rpl2 { "t" : 14000, "i" : 0 }
                 { "xx" : 1738684 } -->> { "xx" : 1758083 } on : rpl2 { "t" : 13000, "i" : 0 }
                 { "xx" : 1758083 } -->> { "xx" : 1773229 } on : rpl2 { "t" : 12000, "i" : 0 }
                 { "xx" : 1773229 } -->> { "xx" : 1794767 } on : rpl2 { "t" : 11000, "i" : 0 }
                 { "xx" : 1794767 } -->> { "xx" : 1808518 } on : rpl2 { "t" : 10000, "i" : 0 }
                 { "xx" : 1808518 } -->> { "xx" : 1834892 } on : rpl2 { "t" : 9000, "i" : 0 }
                 { "xx" : 1834892 } -->> { "xx" : 1848643 } on : rpl2 { "t" : 8000, "i" : 0 }
                 { "xx" : 1848643 } -->> { "xx" : 1873297 } on : rpl2 { "t" : 7000, "i" : 0 }
                 { "xx" : 1873297 } -->> { "xx" : 1887048 } on : rpl2 { "t" : 6000, "i" : 2 }
                 { "xx" : 1887048 } -->> { "xx" : 1911702 } on : rpl2 { "t" : 6000, "i" : 3 }
                 { "xx" : 1911702 } -->> { "xx" : 1918372 } on : rpl2 { "t" : 5000, "i" : 0 }
                 { "xx" : 1918372 } -->> { "xx" : 1932123 } on : rpl2 { "t" : 7000, "i" : 2 }
                 { "xx" : 1932123 } -->> { "xx" : 1959185 } on : rpl2 { "t" : 7000, "i" : 3 }
                 { "xx" : 1959185 } -->> { "xx" : 1972936 } on : rpl2 { "t" : 9000, "i" : 2 }
                 { "xx" : 1972936 } -->> { "xx" : 1999999 } on : rpl2 { "t" : 9000, "i" : 3 }
                 { "xx" : 1999999 } -->> { "xx" : { $maxKey : 1 } } on : rpl2 { "t" : 2000, "i" : 0 }
               test.tickets chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
             {  "_id" : "adin",  "partitioned" : false,  "primary" : "rpl1" }

           mongos> 
           
           7.7.2 Verify that the connection is to a sharded cluster
           [root@472322 ~]# /db/mongodb/bin/mongo --port 27027
           MongoDB shell version: 2.0.1
           connecting to: 127.0.0.1:27027/test
           PRIMARY> db.runCommand({ isdbgrid:1 });
           {
             "errmsg" : "no such cmd: isdbgrid",
             "bad cmd" : {
               "isdbgrid" : 1
             },
             "ok" : 0
           }
           PRIMARY> 
           There is an error message because the command was run in the wrong window; it has to be run in the mongos window.
           
           [root@472322 ~]# /db/mongodb/bin/mongo --port 27017
           MongoDB shell version: 2.0.1
           connecting to: 127.0.0.1:27017/test
           mongos> db.runCommand({ isdbgrid:1 });
           { "isdbgrid" : 1, "hostname" : "472322.ea.com", "ok" : 1 }
           mongos> 
           OK, "isdbgrid" : 1 means we are connected to mongos, i.e. a sharded cluster.
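           Another way to tell, which does not error on a plain mongod, is db.isMaster(): when connected to a mongos its output includes "msg" : "isdbgrid".
           db.isMaster();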
           
           7.8 Shard a previously unsharded collection
           7.8.1 prepare a collection in the mongos window

           for( var i = 1; i < 2000000; i++ ) db.tc3.insert({ "xx":i, "fapp" : "84eb9fb556074d6481e31915ac2427f0",  "dne" : "ueeDEhIB6tmP4cfY43NwWvAenzKWx19znmbheAuBl4j39U8uFXS1QGi2GCMHO7L21szgeF6Iquqmnw8kfJbvZUs/11RyxcoRm+otbUJyPPxFkevzv4SrI3kGxczG6Lsd19NBpyskaElCTtVKxwvQyBNgciXYq6cO/8ntV2C6cwQ=",  "eml" : "q5x68h3qyVBqp3ollJrY3XEkXECjPEncXhbJjga+3hYoa4zYNhrNmBN91meL3o7jsBI/N6qe2bb2BOnOJNAnBMDzhNmPqJKG/ZVLfT9jpNkUD/pQ4oJENMv72L2GZoiyym2IFT+oT3N0KFhcv08b9ke9tm2EHTGcBsGg1R40Ah+Y/5z89OI4ERmI/48qjvaw",  "uid" : "sQt92NUPr3CpCVnbpotU2lqRNVfZD6k/9TGW62UT7ExZYF8Dp1cWIVQoYNQVyFRLkxjmCoa8m6DiLiL/fPdG1k7WYGUH4ueXXK2yfVn/AGUk3pQbIuh7nFbqZCrAQtEY7gU0aIGC4sotAE8kghvCa5qWnSX0SWTViAE/esaWORo=",  "agt" : "PHP/eaSSOPlugin",  "sd" : "S15345853557133877",  "pt" : 3795,  "ip" : "211.223.160.34",  "av" : "http://secure.download.dm.origin.com/production/avatar/prod/1/599/40x40.JPEG",  "nex" : ISODate("2013-01-18T04:00:32.41Z"),  "exy" : ISODate("2013-01-18T01:16:32.015Z"),  "chk" : "sso4648609868740971",  "aid" : "Ciyvab0tregdVsBtboIpeChe4G6uzC1v5_-SIxmvSLLINJClUkXJhNvWkSUnajzi8xsv1DGYm0D3V46LLFI-61TVro9-HrZwyRwyTf9NwYIyentrgAY_qhs8fh7unyfB",  "tid" : "rUqFhONysi0yA==13583853927872",  "date" : ISODate("2012-02-17T01:16:32.787Z"),  "v" : "2.0.0",  "scope" : [],  "rug" : false,  "schk" : true,  "fjs" : false,  "sites" : [{      "name" : "Origin.com",      "id" : "nb09xrt8384147bba27c2f12e112o8k9",      "last" : ISODate("2013-01-17T01:16:32.787Z"),      "_id" : ObjectId("50f750f06a56028661000f20")    }]});
           db.tc3.ensureIndex({xx:1});
           Check whether the collection is sharded:
           db.tc3.stats();
           mongos> db.tc3.stats();
           {
             "sharded" : false,
             "primary" : "rpl1",
             "ns" : "test.tc3",
             "count" : 1999999,
             "size" : 2439998940,
             "avgObjSize" : 1220.00008000004,
             "storageSize" : 2480209920,
             "numExtents" : 29,
             "nindexes" : 2,
             "lastExtentSize" : 417480704,
             "paddingFactor" : 1,
             "flags" : 1,
             "totalIndexSize" : 115232544,
             "indexSizes" : {
               "_id_" : 64909264,
               "xx_1" : 50323280
             },
             "ok" : 1
           }
           mongos> 
           
           7.8.2 Enable sharding; try to shard the collection using a hashed shard key "xx"
           sh.shardCollection( "test.tc3", { xx: "hashed" } ) or  db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
           mongos>  sh.shardCollection( "test.tc3", { xx: "hashed" } ) ;
           command failed: { "ok" : 0, "errmsg" : "shard keys must all be ascending" }
           shard keys must all be ascending
           mongos> 
           Why did it fail? I checked http://docs.mongodb.org/manual/reference/method/sh.shardCollection/
           and found "New in version 2.4: Use the form {field: "hashed"} to create a hashed shard key. Hashed shard keys may not be compound indexes.",
           and the current version is 2.0.1, so I have to use the other form to enable sharding.
           mongos> db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
           { "ok" : 0, "errmsg" : "access denied - use admin db" }
           mongos> use admin
           switched to db admin
           mongos> db.runCommand( { shardCollection : "test.tc3", key : {"xx":1} })
           { "collectionsharded" : "test.tc3", "ok" : 1 }
           mongos> 
           Nice, sharding enabled successfully.
           
           7.8.3 See the sharding status again
           mongos> use test;
           switched to db test
           mongos> db.tc3.stats();
           {
             "sharded" : true,  -- nice, it is a sharding now
             "flags" : 1,
             "ns" : "test.tc3",
             "count" : 2005360,
             "numExtents" : 48,
             "size" : 2446539448,
             "storageSize" : 2979864576,
             "totalIndexSize" : 116573408,
             "indexSizes" : {
               "_id_" : 65105488,
               "xx_1" : 51467920
             },
             "avgObjSize" : 1220.0001236685682,
             "nindexes" : 2,
             "nchunks" : 73,
             "shards" : {
               "rpl1" : {
                 "ns" : "test.tc3",
                 "count" : 1647809,
                 "size" : 2010326980,
                 "avgObjSize" : 1220,
                 "storageSize" : 2480209920,
                 "numExtents" : 29,
                 "nindexes" : 2,
                 "lastExtentSize" : 417480704,
                 "paddingFactor" : 1,
                 "flags" : 1,
                 "totalIndexSize" : 94956064,
                 "indexSizes" : {
                   "_id_" : 53487392,
                   "xx_1" : 41468672
                 },
                 "ok" : 1
               },
               "rpl2" : {
                 "ns" : "test.tc3",
                 "count" : 357551,
                 "size" : 436212468,
                 "avgObjSize" : 1220.0006936073455,
                 "storageSize" : 499654656,
                 "numExtents" : 19,
                 "nindexes" : 2,
                 "lastExtentSize" : 89817088,
                 "paddingFactor" : 1,
                 "flags" : 1,
                 "totalIndexSize" : 21617344,
                 "indexSizes" : {
                   "_id_" : 11618096,
                   "xx_1" : 9999248
                 },
                 "ok" : 1
               }
             },
             "ok" : 1
           }
           mongos>  
           
           mongos> db.printShardingStatus();
           --- Sharding Status --- 
             sharding version: { "_id" : 1, "version" : 3 }
             shards:
             {  "_id" : "rpl1",  "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
             {  "_id" : "rpl2",  "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
             databases:
             {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
             {  "_id" : "test",  "partitioned" : true,  "primary" : "rpl1" }
               test.tc chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
               test.tc2 chunks:
                   rpl1    62
                   rpl2    61
                 too many chunks to print, use verbose if you want to force print
               test.tc3 chunks:
                   rpl2    19
                   rpl1    54
                 too many chunks to print, use verbose if you want to force print  
                  -- see this: there are 19 chunks on rpl2, so data is migrating to rpl2 now.
               test.tickets chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
             {  "_id" : "adin",  "partitioned" : false,  "primary" : "rpl1" }

           mongos>

           

          8 Add a third new replica set to the sharded cluster.
           8.1 config file
           8.1.1 create the data directory
           mkdir -p /opt/db/mongodb-data/db47
           mkdir -p /opt/db/mongodb-data/db48 
           mkdir -p /opt/db/mongodb-data/db49 
           
           8.1.2 set the config file 27047.conf
           [root@472322 ~]# cd /etc/mongodb/
           [root@472322 mongodb]# vi  27047.conf
           dbpath =/opt/db/mongodb-data/db47
           port = 27047
           rest = true
           fork = true
           logpath = /var/log/mongodb47.log
           logappend = true
           replSet = rpl3
           #diaglog = 3
           profile = 3
           slowms=50
           oplogSize=4000
           
           8.1.3 set the config file 27048.conf
           [root@472322 mongodb]#  vi  27048.conf
           # This is an example config file for MongoDB
           dbpath =/opt/db/mongodb-data/db48
           port = 27048
           rest = true
           fork = true
           logpath = /var/log/mongodb48.log
           logappend = true
           replSet = rpl3
           #diaglog = 3
           profile = 3
           slowms=50
           oplogSize=4000 
           
           8.1.4 set the config file 27049.conf
           [root@472322 mongodb]#  vi  27049.conf
           # This is an example config file for MongoDB
           dbpath =/opt/db/mongodb-data/db49
           port = 27049
           rest = true
           fork = true
           logpath = /var/log/mongodb49.log
           logappend = true
           replSet = rpl3
           #diaglog = 3
           profile = 3
           slowms=50
           oplogSize=4000 
           
           8.2 start the mongod instances
           /db/mongodb/bin/mongod -f /etc/mongodb/27047.conf
           /db/mongodb/bin/mongod -f /etc/mongodb/27048.conf
           /db/mongodb/bin/mongod -f /etc/mongodb/27049.conf
           
           8.3 start replica set
           [root@472322 mongodb]# /db/mongodb/bin/mongo --port 27047
           MongoDB shell version: 2.0.1
           connecting to: 127.0.0.1:27047/test
           >
           config = {_id: 'rpl3', members: [
             {_id: 0, host: '20.10x.91.119:27047'},
             {_id: 1, host: '20.10x.91.119:27048'},
             {_id: 3, host: '20.10x.91.119:27049', arbiterOnly: true}
            ]};

           > config = {_id: 'rpl3', members: [
           ... {_id: 0, host: '20.10x.91.119:27047'},
           ... {_id: 1, host: '20.10x.91.119:27048'},
           ... {_id: 3, host: '20.10x.91.119:27049', arbiterOnly: true}
           ... ]};
           {
             "_id" : "rpl3",
             "members" : [
               {
                 "_id" : 0,
                 "host" : "20.10x.91.119:27047"
               },
               {
                 "_id" : 1,
                 "host" : "20.10x.91.119:27048"
               },
               {
                 "_id" : 3,
                 "host" : "20.10x.91.119:27049",
                 "arbiterOnly" : true
               }
             ]
           }
           > rs.initiate(config);
           {
             "info" : "Config now saved locally.  Should come online in about a minute.",
             "ok" : 1
           }
           > rs.status();
           {
             "set" : "rpl3",
             "date" : ISODate("2013-08-09T06:53:57Z"),
             "myState" : 1,
             "members" : [
               {
                 "_id" : 0,
                 "name" : "20.10x.91.119:27047",
                 "health" : 1,
                 "state" : 1,
                 "stateStr" : "PRIMARY",
                 "optime" : {
                   "t" : 1376031176000,
                   "i" : 1
                 },
                 "optimeDate" : ISODate("2013-08-09T06:52:56Z"),
                 "self" : true
               },
               {
                 "_id" : 1,
                 "name" : "20.10x.91.119:27048",
                 "health" : 1,
                 "state" : 2,
                 "stateStr" : "SECONDARY",
                 "uptime" : 49,
                 "optime" : {
                   "t" : 1376031176000,
                   "i" : 1
                 },
                 "optimeDate" : ISODate("2013-08-09T06:52:56Z"),
                 "lastHeartbeat" : ISODate("2013-08-09T06:53:56Z"),
                 "pingMs" : 0
               },
               {
                 "_id" : 3,
                 "name" : "20.10x.91.119:27049",
                 "health" : 1,
                 "state" : 7,
                 "stateStr" : "ARBITER",
                 "uptime" : 55,
                 "optime" : {
                   "t" : 0,
                   "i" : 0
                 },
                 "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                 "lastHeartbeat" : ISODate("2013-08-09T06:53:56Z"),
                 "pingMs" : 0
               }
             ],
             "ok" : 1
           }
           Okay, the third replica set is ready now.
           
           8.4 Add the third replica set to the sharded cluster
           8.4.1 run sh.addShard( "rpl3/20.10x.91.119:27047,20.10x.91.119:27048,20.10x.91.119:27049" ) in mongos
           [root@472322 mongodb]# /db/mongodb/bin/mongo --port 27018
           MongoDB shell version: 2.0.1
           connecting to: 127.0.0.1:27018/test
           mongos> use admin
           switched to db admin
           mongos> sh.addShard( "rpl3/20.10x.91.119:27047,20.10x.91.119:27048,20.10x.91.119:27049" );
           mongos>
           
           8.4.2 Check the tc3 collection to see whether it has chunks on the rpl3 shard
           mongos> use test;
           switched to db test
           mongos> db.tc3.stats();
           {
             "sharded" : true,
             "flags" : 1,
             "ns" : "test.tc3",
             "count" : 2027503,
             "numExtents" : 71,
             "size" : 2473554192,
             "storageSize" : 4191793152,
             "totalIndexSize" : 127970752,
             "indexSizes" : {
               "_id_" : 70027440,
               "xx_1" : 57943312
             },
             "avgObjSize" : 1220.0002623917203,
             "nindexes" : 2,
             "nchunks" : 73,
             "shards" : {
               "rpl1" : {
                 "ns" : "test.tc3",
                 "count" : 844832,
                 "size" : 1030695040,
                 "avgObjSize" : 1220,
                 "storageSize" : 2480209920,
                 "numExtents" : 29,
                 "nindexes" : 2,
                 "lastExtentSize" : 417480704,
                 "paddingFactor" : 1,
                 "flags" : 1,
                 "totalIndexSize" : 48712608,
                 "indexSizes" : {
                   "_id_" : 27438656,
                   "xx_1" : 21273952
                 },
                 "ok" : 1
               },
               "rpl2" : {
                 "ns" : "test.tc3",
                 "count" : 852623,
                 "size" : 1040200396,
                 "avgObjSize" : 1220.000394078039,
                 "storageSize" : 1301745664,
                 "numExtents" : 24,
                 "nindexes" : 2,
                 "lastExtentSize" : 223506432,
                 "paddingFactor" : 1,
                 "flags" : 1,
                 "totalIndexSize" : 52677968,
                 "indexSizes" : {
                   "_id_" : 28313488,
                   "xx_1" : 24364480
                 },
                 "ok" : 1
               },
               "rpl3" : {
                 "ns" : "test.tc3", -- nice, and there are chunks in the new third rpl3 sharding.
                 "count" : 330048,
                 "size" : 402658756,
                 "avgObjSize" : 1220.0005938530153,
                 "storageSize" : 409837568,
                 "numExtents" : 18,
                 "nindexes" : 2,
                 "lastExtentSize" : 74846208,
                 "paddingFactor" : 1,
                 "flags" : 1,
                 "totalIndexSize" : 26580176,
                 "indexSizes" : {
                   "_id_" : 14275296,
                   "xx_1" : 12304880
                 },
                 "ok" : 1
               }
             },
             "ok" : 1
           }
           mongos>

           8.4.3 Check the overall sharding status
           mongos> db.printShardingStatus();
           --- Sharding Status --- 
             sharding version: { "_id" : 1, "version" : 3 }
             shards:
             {  "_id" : "rpl1",  "host" : "rpl1/20.10x.91.119:27027,20.10x.91.119:27028" }
             {  "_id" : "rpl2",  "host" : "rpl2/20.10x.91.119:27037,20.10x.91.119:27038" }
             {  "_id" : "rpl3",  "host" : "rpl3/20.10x.91.119:27047,20.10x.91.119:27048" }
             databases:
             {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
             {  "_id" : "test",  "partitioned" : true,  "primary" : "rpl1" }
               test.tc chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
               test.tc2 chunks:
                    rpl3    21 -- there are now chunks on rpl3
                   rpl1    51
                   rpl2    51
                 too many chunks to print, use verbose if you want to force print
               test.tc3 chunks:
                   rpl2    27
                    rpl3    20  -- there are now chunks on rpl3
                   rpl1    26
                 too many chunks to print, use verbose if you want to force print
               test.tickets chunks:
                   rpl1    1
                 { "ip" : { $minKey : 1 } } -->> { "ip" : { $maxKey : 1 } } on : rpl1 { "t" : 1000, "i" : 0 }
             {  "_id" : "adin",  "partitioned" : false,  "primary" : "rpl1" }

           mongos> 
           
          Question 1: How do you change the shard key of an existing sharded collection? I can't find any method.


          Question 2: There are 3 mongos instances; how can a virtual IP be used to automatically point to one of the three mongos servers?

           Sharp: There is no need to use a VIP. In the Java driver, switching between mongos instances is automatic as long as all of them are configured; the driver implements the high availability itself. And there is no need to start with nohup, because fork is already the parameter for running in the background.
          I don't know whether your machines have a NUMA architecture; if they do, mongod needs to be started in a NUMA-aware way (numactl).

          posted on 2015-12-18 13:54 by paulwong, filed under: MONGODB
