RackHD MongoDB version status
Configuration | MongoDB Version
---|---
RackHD with Ubuntu 14.04 | 2.4.9
RackHD with Ubuntu 16.04 | 2.6.10
MongoDB latest (as of 2017-10) | 3.4.9
RackHD is developed against the default Ubuntu 14.04 and 16.04 environments, so the MongoDB versions it ships with are relatively old, while MongoDB itself has evolved from 2.4 to 2.6, 3.0, 3.2, and the latest 3.4. Some users may want to run the latest MongoDB version, so we need to verify RackHD with MongoDB 3.4.9 on both Ubuntu 14.04 and 16.04.
Major new features in newer MongoDB versions
MongoDB 2.6.x
Key new features include:
- Aggregation Enhancements
- Text Search Integration
- Insert and Update Improvements
- Query Engine Improvements
MongoDB can now use index intersection to fulfill queries supported by more than one index. In previous versions, MongoDB could use only a single index to fulfill most queries.
db.orders.find( { item: "abc123", qty: { $gt: 15 } } )
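For instance (a sketch reusing the orders query above; the index definitions below are assumptions, not part of the original example), two single-field indexes allow 2.6 to intersect them to serve that query:
db.orders.createIndex( { item: 1 } )   // single-field index on item
db.orders.createIndex( { qty: 1 } )    // single-field index on qty; 2.6 can intersect both indexes for the query above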
- Security Improvements
MongoDB 2.6 enhances support for secure deployments through improved SSL support, x.509-based authentication, an improved authorization system with more granular controls, centralized credential storage, and improved user management tools.
- YAML Configuration File Format
MongoDB 2.6 supports a YAML-based configuration file format in addition to the previous configuration file format.
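A minimal sketch of the YAML format (the paths and port are illustrative Ubuntu-style defaults, not RackHD-specific settings):
storage:
  dbPath: /var/lib/mongodb            # data directory
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log   # log file
net:
  port: 27017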
- usePowerOf2Sizes is now the default allocation strategy for all new collections.
- /etc/mongorc.js
Global mongorc.js file which the mongo shell evaluates upon start-up.
Reference link: https://docs.mongodb.com/manual/release-notes/2.6/
MongoDB 3.0
Key features include support for the WiredTiger storage engine, a pluggable storage engine API, the SCRAM-SHA-1 authentication mechanism, and improved explain functionality.
- MMAPv1 remains the default storage engine in MongoDB 3.0.
- The MMAPv1 storage engine adds support for collection-level locking.
- The default allocation strategy for collections in instances that use MMAPv1 is power-of-2 allocation.
- WiredTiger Configuration
- The 3.0 WiredTiger storage engine provides document-level locking and compression.
- The WiredTiger storage engine is available in the 64-bit builds.
- With WiredTiger, MongoDB supports compression for all collections and indexes. Compression minimizes storage use at the expense of additional CPU.
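A minimal sketch of opting in to WiredTiger on 3.0 (the dbpath is illustrative; in 3.0 the engine must be selected explicitly because MMAPv1 is still the default):
mongod --storageEngine wiredTiger --dbpath /var/lib/mongodb   # start mongod with the WiredTiger engine
Note that an existing MMAPv1 data directory cannot be reused directly; data has to be dumped and restored (or resynced from a replica set member) when switching engines.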
- MongoDB Tools Enhancements
- mongostat and mongotop can now return output in JSON format with the --json option (see the sketch below).
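For example (host and port are illustrative):
mongostat --json --host localhost --port 27017 -n 1   # print one sample of server statistics as JSON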
Reference link: https://docs.mongodb.com/manual/release-notes/3.0/
MongoDB 3.2
Key features include WiredTiger as the default storage engine, replication election enhancements, config servers as replica sets, readConcern, and document validation.
- WiredTiger as Default
- WiredTiger Default Cache Size: the larger of 60% of RAM minus 1 GB, or 1 GB. The default can be overridden at startup (see the sketch at the end of this list).
- Text Search Enhancements
- New Storage Engines - inMemory Storage Engine
- Available in MongoDB Enterprise only. Other than some metadata, the in-memory storage engine does not maintain any on-disk data. By avoiding disk I/O, the in-memory storage engine allows for more predictable latency of database operations.
- Starting in 3.2, MongoDB deprecates its HTTP interface.
- Starting in MongoDB 3.2, 32-bit binaries are deprecated and will be unavailable in future releases.
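If the 60%-of-RAM default is too large, for example when MongoDB shares a host with RackHD services, the cache can be capped at startup; a sketch (the 2 GB figure is arbitrary):
mongod --wiredTigerCacheSizeGB 2   # cap the WiredTiger internal cache at 2 GB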
MongoDB 3.4
Key features include linearizable read concern, views, and collation.
- Many aggregation enhancements
- New Aggregation Stages for Recursive Search ($graphLookup), Faceted Search, Reshaping Documents, and Count ($count); see the sketch after this list
- New Aggregation Array/String/Date Operators
- New Aggregation Control Flow Expression
- New Monitoring Aggregation Sources
- New Type Operator
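As a small sketch of one of the new stages, $count (reusing the orders example from the 2.6 section above; the output value is illustrative):
db.orders.aggregate( [
    { $match: { qty: { $gt: 15 } } },   // filter first
    { $count: "largeOrders" }           // then emit a single document, e.g. { "largeOrders": 42 }
] )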
- Collation
Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks. You can specify collation for a collection or a view, an index, or specific operations that support collation.
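A minimal sketch (the collection name, field, and locale are illustrative):
db.createCollection( "names", { collation: { locale: "fr" } } )    // default collation for the collection
db.names.find( { city: "Tours" } ).collation( { locale: "fr" } )   // collation for a specific operation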
- Views
Read-only views are defined by an aggregation pipeline on a source collection; commands such as find() against a view return the results of that pipeline.
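A minimal sketch (the source collection, view name, and fields are illustrative):
db.createView( "managers", "employees", [ { $match: { role: "manager" } }, { $project: { name: 1, dept: 1 } } ] )
db.managers.find()   // queries the view; only the projected fields (plus _id) of matching documents are returned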
- MongoDB Tools
MongoDB introduces mongoreplay, a workload capture and analysis tool that replaces mongosniff. You can use mongoreplay to inspect and record commands sent to a MongoDB instance, and then replay the commands back onto another host at a later time.
- WiredTiger
- WiredTiger Default Cache Size: the larger of 50% of (RAM minus 1 GB), or 256 MB
- General Enhancements
- Added systemd support in distributions.
- The mongo shell adds a --disableJavaScriptProtection flag that allows fields of type javascript and javascriptWithScope to be automatically marshalled to JavaScript functions.
Move secure erase overlay to RackHD static file folder
The secure erase overlay is named secure.erase.overlay.cpio.gz by default. It should be moved to the RackHD static file folder.
By default the RackHD static file folder is /var/renasar/on-http/static/http/common/. Users can also set up an independent static file server; in that case the overlay should be moved to the user-specified static file server path. For more details on static file server setup, please refer to:
http://rackhd.readthedocs.io/en/latest/rackhd/static_file_server.html
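For example, with the default folder above (a sketch; adjust the source path to wherever the overlay file is located):
cp secure.erase.overlay.cpio.gz /var/renasar/on-http/static/http/common/   # copy the overlay into the RackHD static file folder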
Get driveId catalog
Use the command below to get the driveId catalog for a specified node:
curl <server>/api/current/nodes/<nodeId>/catalogs/driveId
Disk parameters for secure erase should be retrieved from the RackHD driveId catalog. If a RAID operation is done outside RackHD, please re-run discovery on the node:
curl -X POST -H 'Content-Type: application/json' -d '{"name": "Graph.Discovery", "options":{"defaults":{"nodeId": "55b6afba024fd1b349afc148"}}}' <server>/api/current/nodes/55b6afba024fd1b349afc148/workflows
Below is an example of a RackHD driveId catalog; you can use either devName or identifier to identify a disk in the secure erase payload.
{
    "createdAt": "2016-09-30T07:38:09.861Z",
    "data": [
        {
            "devName": "sdg",
            "esxiWwid": "t10.ATA_____SATADOM2DSL_3ME__________________________TW02PTHF482935730079",
            "identifier": 0,
            "linuxWwid": "/dev/disk/by-id/ata-SATADOM-SL_3ME_TW02PTHF482935730079",
            "scsiId": "6:0:0:0",
            "virtualDisk": ""
        }
    ],
    "id": "eadd5581-382b-490b-b70d-a845cf590493",
    "node": "57ee15ff09011929051819e1",
    "source": "driveId",
    "updatedAt": "2016-09-30T07:38:09.861Z"
}
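To pull just the usable disk identifications out of the catalog, something like the following works (a sketch; it assumes jq is installed and uses the same endpoint as above):
curl <server>/api/current/nodes/<nodeId>/catalogs/driveId | jq '.data[] | {devName, identifier}'   # print devName/identifier for each drive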
Inject SKU packs
Dell servers have a different secure erase workflow from other servers. To do secure erase on a Dell server, the user has to inject the related SKU pack with the command below if the node has a SKU id:
curl -T pack.tar.gz <server>/api/current/skus/<skuid>/pack
The user can also create a SKU with a pack if the node doesn't have a SKU id:
curl -X POST --data-binary @pack.tar.gz <server>/api/current/skus/pack
Run secure erase workflow
Run the secure erase workflow with the command below:
curl -X POST -H 'Content-Type: application/json' -d @params.json <server>/api/current/nodes/<identifier>/workflows?name=Graph.Drive.SecureErase
Below is an example of params.json:
{
    "options": {
        "drive-secure-erase": {
            "eraseSettings": [
                {
                    "disks": ["sdb"],
                    "tool": "sg_format",
                    "arg": "0"
                },
                {
                    "disks": ["sda"],
                    "tool": "scrub",
                    "arg": "nnsa"
                }
            ]
        },
        "disk-scan-delay": {
            "duration": 10000
        }
    }
}
For more details on secure erase workflow and its required payload parameters, please refer to:
http://rackhd.readthedocs.io/en/latest/rackhd/secure_erase.html
Besides doing the secure erase itself, the secure erase workflow will also update drive-related catalogs such as the driveId and megaraid-related sources.
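To check whether the workflow is still running on a node, the generic workflow query can be used (a sketch; this assumes the standard RackHD workflow API, please verify against the reference above):
curl <server>/api/current/nodes/<identifier>/workflows?active=true   # list active workflows on the node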
Get secure erase progress
Secure erase is a long-running task that can take hours or even days; do not power cycle a node before secure erase completes. During the secure erase task, RackHD reports erase progress via AMQP every minute. You can subscribe to progress messages on the RackHD server using the AMQP info below:
Exchange: on.events
Routing Key: graph.progress.updated.information.<graphId>.<nodeId>
RackHD provides a tool to filter AMQP messages at the link below:
https://github.com/RackHD/on-tools/tree/master/dev_tools
You can also subscribe to AMQP messages via webhook. For more details on RackHD webhooks and AMQP events, please refer to:
http://rackhd.readthedocs.io/en/latest/rackhd/event_notification.html
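As a sketch of the webhook route (the listener URL is illustrative, and the exact endpoint and payload fields should be verified against the event notification document above):
curl -X POST -H 'Content-Type: application/json' -d '{"url": "http://<listener-ip>:9999/listener"}' <server>/api/current/hooks   # register a webhook that receives RackHD events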