Version bump to 1.0.9
wolf31o2 committed May 14, 2014
1 parent 1825bfa commit 81b21fb
Showing 2 changed files with 3 additions and 3 deletions.
metadata.json (2 additions, 2 deletions)
@@ -1,7 +1,7 @@
{
"name": "hadoop",
"description": "Installs/Configures Hadoop (HDFS/YARN/MRv2), HBase, Hive, Oozie, Pig, and ZooKeeper",
"long_description": "# hadoop cookbook\n\n[![Cookbook Version](http://img.shields.io/cookbook/v/hadoop.svg)](https://community.opscode.com/cookbooks/hadoop)\n[![Build Status](http://img.shields.io/travis/continuuity/hadoop_cookbook.svg)](http://travis-ci.org/continuuity/hadoop_cookbook)\n\n# Requirements\n\nThis cookbook may work on earlier versions, but these are the minimal tested versions.\n\n* Chef 11.4.0+\n* CentOS 6.4+\n* Ubuntu 12.04+\n\nThis cookbook assumes that you have a working Java installation. It has been tested using version `1.21.2` of the `java` cookbook, using Oracle Java 6. If you plan on using Hive with a database other than the embedded Derby, you will need to provide it and set it up prior to starting Hive Metastore service.\n\n# Usage\n\nThis cookbook is designed to be used with a wrapper cookbook or a role with settings for configuring Hadoop. The services should work out of the box on a single host, but little validation is done that you have made a working Hadoop configuration. The cookbook is attribute-driven and is suitable for use via either `chef-client` or `chef-solo` since it does not use any server-based functionality. The cookbook defines service definitions for each Hadoop service, but it does not enable or start them, by default.\n\nFor more information, read the [Wrapping this cookbook](https://github.com/continuuity/hadoop_cookbook/wiki/Wrapping-this-cookbook) wiki entry.\n\n# Attributes\n\nAttributes for this cookbook define the configuration files for Hadoop and its various services. Hadoop configuration files are XML files, with name/value property pairs. The attribute name determines which file the property is placed and the property name. The attribute value is the property value. The attribute `hadoop['core_site']['fs.defaultFS']` will configure a property named `fs.defaultFS` in `core-site.xml` in `hadoop['conf_dir']`. All attribute values are taken as-is and only minimal configuration checking is done on values. 
It is up to the user to provide a valid configuration for your cluster.\n\nAttribute Tree | File | Location \n-------------- | ---- | --------\nhadoop['capacity_scheduler'] | capacity-scheduler.xml | `hadoop['conf_dir']`\nhadoop['container_executor'] | container-executor.cfg | `hadoop['conf_dir']`\nhadoop['core_site'] | core-site.xml | `hadoop['conf_dir']`\nhadoop['fair_scheduler'] | fair-scheduler.xml | `hadoop['conf_dir']`\nhadoop['hadoop_env'] | hadoop-env.sh | `hadoop['conf_dir']`\nhadoop['hadoop_metrics'] | hadoop-metrics.properties | `hadoop['conf_dir']`\nhadoop['hadoop_policy'] | hadoop-policy.xml | `hadoop['conf_dir']`\nhadoop['hdfs_site'] | hdfs-site.xml | `hadoop['conf_dir']`\nhadoop['log4j'] | log4j.properties | `hadoop['conf_dir']`\nhadoop['mapred_site'] | mapred-site.xml | `hadoop['conf_dir']`\nhadoop['yarn_env'] | yarn-env.sh | `hadoop['conf_dir']`\nhadoop['yarn_site'] | yarn-site.xml | `hadoop['conf_dir']`\nhbase['hadoop_metrics'] | hadoop-metrics.properties | `hbase['conf_dir']`\nhbase['hbase_env'] | hbase-env.sh | `hbase['conf_dir']`\nhbase['hbase_policy'] | hbase-policy.xml | `hbase['conf_dir']`\nhbase['hbase_site'] | hbase-site.xml | `hbase['conf_dir']`\nhbase['log4j'] | log4j.properties | `hbase['conf_dir']`\nhive['hive_env'] | hive-env.sh | `hive['conf_dir']`\nhive['hive_site'] | hive-site.xml | `hive['conf_dir']`\noozie['oozie_site'] | oozie-site.xml | `oozie['conf_dir']`\nzookeeper['log4j'] | log4j.properties | `zookeeper['conf_dir']`\nzookeeper['zoocfg'] | zoo.cfg | `zookeeper['conf_dir']`\n\n## Distribution Attributes\n\n* `hadoop['distribution']` - Specifies which Hadoop distribution to use, currently supported: cdh, hdp. Default `hdp`\n* `hadoop['distribution_version']` - Specifies which version of `hadoop['distribution']` to use. Default `2.0` if `hadoop['distribution']` is `hdp` and `5` if `hadoop['distribution']` is `cdh`\n\n### APT-specific settings\n\n* `hadoop['apt_repo_url']` - Provide an alternate apt installation source location. If you change this attribute, you are expected to provide a path to a working repo for the `hadoop['distribution']` used. Default: `nil`\n* `hadoop['apt_repo_key_url']` - Provide an alternative apt repository key source location. Default `nil`\n\n### RPM-specific settings\n\n* `hadoop['yum_repo_url']` - Provide an alternate yum installation source location. If you change this attribute, you are expected to provide a path to a working repo for the `hadoop['distribution']` used. Default: `nil`\n* `hadoop['yum_repo_key_url']` - Provide an alternative yum repository key source location. Default `nil`\n\n## Global Configuration Attributes\n\n* `hadoop['conf_dir']` - The directory used inside `/etc/hadoop` and used via the alternatives system. Default `conf.chef`\n* `hbase['conf_dir']` - The directory used inside `/etc/hbase` and used via the alternatives system. Default `conf.chef`\n* `hive['conf_dir']` - The directory used inside `/etc/hive` and used via the alternatives system. Default `conf.chef`\n* `oozie['conf_dir']` - The directory used inside `/etc/oozie` and used via the alternatives system. Default `conf.chef`\n* `zookeeper['conf_dir']` - The directory used inside `/etc/zookeeper` and used via the alternatives system. Default `conf.chef`\n\n## Default Attributes\n\n* `hadoop['core_site']['fs.defaultFS']` - Sets URI to HDFS NameNode. Default `hdfs://localhost`\n* `hadoop['yarn_site']['yarn.resourcemanager.hostname']` - Sets hostname of YARN ResourceManager. 
Default `localhost`\n* `hive['hive_site']['javax.jdo.option.ConnectionURL']` - Sets JDBC URL. Default `jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true`\n* `hive['hive_site']['javax.jdo.option.ConnectionDriverName']` - Sets JDBC Driver. Default `org.apache.derby.jdbc.EmbeddedDriver`\n\n# Recipes\n\n* `default.rb` - Sets up configuration and `hadoop-client` packages.\n* `hadoop_hdfs_checkconfig` - Ensures the HDFS configuration meets required parameters.\n* `hadoop_hdfs_datanode` - Sets up an HDFS DataNode.\n* `hadoop_hdfs_ha_checkconfig` - Ensures the HDFS configuration meets requirements for High Availability.\n* `hadoop_hdfs_journalnode` - Sets up an HDFS JournalNode.\n* `hadoop_hdfs_namenode` - Sets up an HDFS NameNode.\n* `hadoop_hdfs_secondarynamenode` - Sets up an HDFS Secondary NameNode.\n* `hadoop_hdfs_zkfc` - Sets up HDFS Failover Controller, required for automated NameNode failover.\n* `hadoop_yarn_nodemanager` - Sets up a YARN NodeManager.\n* `hadoop_yarn_proxyserver` - Sets up a YARN Web Proxy.\n* `hadoop_yarn_resourcemanager` - Sets up a YARN ResourceManager.\n* `hbase` - Sets up configuration and `hbase` packages.\n* `hbase_checkconfig` - Ensures the HBase configuration meets required parameters.\n* `hbase_master` - Sets up an HBase Master.\n* `hbase_regionserver` - Sets up an HBase RegionServer.\n* `hbase_rest` - Sets up an HBase REST interface.\n* `hbase_thrift` - Sets up an HBase Thrift interface.\n* `hive` - Sets up configuration and `hive` packages.\n* `hive_metastore` - Sets up Hive Metastore metadata repository.\n* `hive_server` - Sets up a Hive Thrift service.\n* `hive_server2` - Sets up a Hive Thrift service with Kerberos and multi-client concurrency support.\n* `oozie` - Sets up an Oozie server.\n* `oozie_client` - Sets up an Oozie client.\n* `pig` - Installs pig interpreter.\n* `repo` - Sets up package manager repositories for specified `hadoop['distribution']`\n* `zookeeper` - Sets up `zookeeper` package.\n* `zookeeper_server` - Sets up a ZooKeeper server.\n\n# Author\n\nAuthor:: Continuuity, Inc. (<[email protected]>)\n\n# Testing\n\nThis cookbook has several ways to test it. It includes code tests, which are done using `foodcritic`, `rubocop`, and `chefspec`.\nIt, also, includes functionality testing, provided by `vagrant`.\n\n```text\nrake foodcritic\nrake rubocop\nrake chefspec\nrake vagrant\n```\n\nThis cookbook requires the `vagrant-omnibus` and `vagrant-berkshelf` Vagrant plugins to be installed.\n\n# License\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this software except in compliance with the License. You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n",
"long_description": "# hadoop cookbook\n\n[![Cookbook Version](http://img.shields.io/cookbook/v/hadoop.svg)](https://community.opscode.com/cookbooks/hadoop)\n[![Build Status](http://img.shields.io/travis/continuuity/hadoop_cookbook.svg)](http://travis-ci.org/continuuity/hadoop_cookbook)\n\n# Requirements\n\nThis cookbook may work on earlier versions, but these are the minimal tested versions.\n\n* Chef 11.4.0+\n* CentOS 6.4+\n* Ubuntu 12.04+\n\nThis cookbook assumes that you have a working Java installation. It has been tested using version `1.21.2` of the `java` cookbook, using Oracle Java 6. If you plan on using Hive with a database other than the embedded Derby, you will need to provide it and set it up prior to starting Hive Metastore service.\n\n# Usage\n\nThis cookbook is designed to be used with a wrapper cookbook or a role with settings for configuring Hadoop. The services should work out of the box on a single host, but little validation is done that you have made a working Hadoop configuration. The cookbook is attribute-driven and is suitable for use via either `chef-client` or `chef-solo` since it does not use any server-based functionality. The cookbook defines service definitions for each Hadoop service, but it does not enable or start them, by default.\n\nFor more information, read the [Wrapping this cookbook](https://github.com/continuuity/hadoop_cookbook/wiki/Wrapping-this-cookbook) wiki entry.\n\n# Attributes\n\nAttributes for this cookbook define the configuration files for Hadoop and its various services. Hadoop configuration files are XML files, with name/value property pairs. The attribute name determines which file the property is placed and the property name. The attribute value is the property value. The attribute `hadoop['core_site']['fs.defaultFS']` will configure a property named `fs.defaultFS` in `core-site.xml` in `hadoop['conf_dir']`. All attribute values are taken as-is and only minimal configuration checking is done on values. 
It is up to the user to provide a valid configuration for your cluster.\n\nAttribute Tree | File | Location \n-------------- | ---- | --------\nhadoop['capacity_scheduler'] | capacity-scheduler.xml | `hadoop['conf_dir']`\nhadoop['container_executor'] | container-executor.cfg | `hadoop['conf_dir']`\nhadoop['core_site'] | core-site.xml | `hadoop['conf_dir']`\nhadoop['fair_scheduler'] | fair-scheduler.xml | `hadoop['conf_dir']`\nhadoop['hadoop_env'] | hadoop-env.sh | `hadoop['conf_dir']`\nhadoop['hadoop_metrics'] | hadoop-metrics.properties | `hadoop['conf_dir']`\nhadoop['hadoop_policy'] | hadoop-policy.xml | `hadoop['conf_dir']`\nhadoop['hdfs_site'] | hdfs-site.xml | `hadoop['conf_dir']`\nhadoop['log4j'] | log4j.properties | `hadoop['conf_dir']`\nhadoop['mapred_site'] | mapred-site.xml | `hadoop['conf_dir']`\nhadoop['yarn_env'] | yarn-env.sh | `hadoop['conf_dir']`\nhadoop['yarn_site'] | yarn-site.xml | `hadoop['conf_dir']`\nhbase['hadoop_metrics'] | hadoop-metrics.properties | `hbase['conf_dir']`\nhbase['hbase_env'] | hbase-env.sh | `hbase['conf_dir']`\nhbase['hbase_policy'] | hbase-policy.xml | `hbase['conf_dir']`\nhbase['hbase_site'] | hbase-site.xml | `hbase['conf_dir']`\nhbase['log4j'] | log4j.properties | `hbase['conf_dir']`\nhive['hive_env'] | hive-env.sh | `hive['conf_dir']`\nhive['hive_site'] | hive-site.xml | `hive['conf_dir']`\noozie['oozie_site'] | oozie-site.xml | `oozie['conf_dir']`\nzookeeper['log4j'] | log4j.properties | `zookeeper['conf_dir']`\nzookeeper['zoocfg'] | zoo.cfg | `zookeeper['conf_dir']`\n\n## Distribution Attributes\n\n* `hadoop['distribution']` - Specifies which Hadoop distribution to use, currently supported: cdh, hdp. Default `hdp`\n* `hadoop['distribution_version']` - Specifies which version of `hadoop['distribution']` to use. Default `2.0` if `hadoop['distribution']` is `hdp` and `5` if `hadoop['distribution']` is `cdh`\n\n### APT-specific settings\n\n* `hadoop['apt_repo_url']` - Provide an alternate apt installation source location. If you change this attribute, you are expected to provide a path to a working repo for the `hadoop['distribution']` used. Default: `nil`\n* `hadoop['apt_repo_key_url']` - Provide an alternative apt repository key source location. Default `nil`\n\n### RPM-specific settings\n\n* `hadoop['yum_repo_url']` - Provide an alternate yum installation source location. If you change this attribute, you are expected to provide a path to a working repo for the `hadoop['distribution']` used. Default: `nil`\n* `hadoop['yum_repo_key_url']` - Provide an alternative yum repository key source location. Default `nil`\n\n## Global Configuration Attributes\n\n* `hadoop['conf_dir']` - The directory used inside `/etc/hadoop` and used via the alternatives system. Default `conf.chef`\n* `hbase['conf_dir']` - The directory used inside `/etc/hbase` and used via the alternatives system. Default `conf.chef`\n* `hive['conf_dir']` - The directory used inside `/etc/hive` and used via the alternatives system. Default `conf.chef`\n* `oozie['conf_dir']` - The directory used inside `/etc/oozie` and used via the alternatives system. Default `conf.chef`\n* `zookeeper['conf_dir']` - The directory used inside `/etc/zookeeper` and used via the alternatives system. Default `conf.chef`\n\n## Default Attributes\n\n* `hadoop['core_site']['fs.defaultFS']` - Sets URI to HDFS NameNode. Default `hdfs://localhost`\n* `hadoop['yarn_site']['yarn.resourcemanager.hostname']` - Sets hostname of YARN ResourceManager. 
Default `localhost`\n* `hive['hive_site']['javax.jdo.option.ConnectionURL']` - Sets JDBC URL. Default `jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true`\n* `hive['hive_site']['javax.jdo.option.ConnectionDriverName']` - Sets JDBC Driver. Default `org.apache.derby.jdbc.EmbeddedDriver`\n\n# Recipes\n\n* `default.rb` - Sets up configuration and `hadoop-client` packages.\n* `hadoop_hdfs_checkconfig` - Ensures the HDFS configuration meets required parameters.\n* `hadoop_hdfs_datanode` - Sets up an HDFS DataNode.\n* `hadoop_hdfs_ha_checkconfig` - Ensures the HDFS configuration meets requirements for High Availability.\n* `hadoop_hdfs_journalnode` - Sets up an HDFS JournalNode.\n* `hadoop_hdfs_namenode` - Sets up an HDFS NameNode.\n* `hadoop_hdfs_secondarynamenode` - Sets up an HDFS Secondary NameNode.\n* `hadoop_hdfs_zkfc` - Sets up HDFS Failover Controller, required for automated NameNode failover.\n* `hadoop_yarn_nodemanager` - Sets up a YARN NodeManager.\n* `hadoop_yarn_proxyserver` - Sets up a YARN Web Proxy.\n* `hadoop_yarn_resourcemanager` - Sets up a YARN ResourceManager.\n* `hbase` - Sets up configuration and `hbase` packages.\n* `hbase_checkconfig` - Ensures the HBase configuration meets required parameters.\n* `hbase_master` - Sets up an HBase Master.\n* `hbase_regionserver` - Sets up an HBase RegionServer.\n* `hbase_rest` - Sets up an HBase REST interface.\n* `hbase_thrift` - Sets up an HBase Thrift interface.\n* `hive` - Sets up configuration and `hive` packages.\n* `hive_metastore` - Sets up Hive Metastore metadata repository.\n* `hive_server` - Sets up a Hive Thrift service.\n* `hive_server2` - Sets up a Hive Thrift service with Kerberos and multi-client concurrency support.\n* `oozie` - Sets up an Oozie server.\n* `oozie_client` - Sets up an Oozie client.\n* `pig` - Installs pig interpreter.\n* `repo` - Sets up package manager repositories for specified `hadoop['distribution']`\n* `zookeeper` - Sets up `zookeeper` package.\n* `zookeeper_server` - Sets up a ZooKeeper server.\n\n# Author\n\nAuthor:: Continuuity, Inc. (<[email protected]>)\n\n# Testing\n\nThis cookbook has several ways to test it. It includes code tests, which are done using `foodcritic`, `rubocop`, and `chefspec`.\nIt, also, includes functionality testing, provided by `vagrant`.\n\n```text\nrake foodcritic\nrake rubocop\nrake chefspec\nrake vagrant\n```\n\nThis cookbook requires the `vagrant-omnibus` and `vagrant-berkshelf` Vagrant plugins to be installed.\n\n# License\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this software except in compliance with the License. You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n",
"maintainer": "Continuuity, Inc.",
"maintainer_email": "[email protected]",
"license": "Apache 2.0",
@@ -42,5 +42,5 @@
},
"recipes": {
},
"version": "1.0.8"
"version": "1.0.9"
}
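The README embedded in the diff above describes the cookbook's attribute-driven model: each attribute maps to a name/value property in a configuration file under the matching `conf_dir`. As a minimal, hypothetical sketch of that model (not part of this commit; the hostnames are placeholders), a wrapper cookbook or role might set:

```ruby
# Hypothetical wrapper-cookbook attribute file, following the model in the
# embedded README: hadoop['core_site'][...] renders into core-site.xml and
# hadoop['yarn_site'][...] into yarn-site.xml, both in hadoop['conf_dir'].
default['hadoop']['core_site']['fs.defaultFS'] = 'hdfs://namenode.example.com'      # placeholder host
default['hadoop']['yarn_site']['yarn.resourcemanager.hostname'] = 'rm.example.com'  # placeholder host
```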
metadata.rb (1 addition, 1 deletion)
@@ -4,7 +4,7 @@
license 'Apache 2.0'
description 'Installs/Configures Hadoop (HDFS/YARN/MRv2), HBase, Hive, Oozie, Pig, and ZooKeeper'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
-version '1.0.8'
+version '1.0.9'

depends 'yum', '>= 3.0'
depends 'apt'
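Since the README recommends consuming this cookbook through a wrapper, a downstream cookbook would typically pin the bumped release in its own metadata.rb. The snippet below is an illustrative sketch, not part of this commit; the wrapper name and version are hypothetical:

```ruby
# Hypothetical wrapper cookbook's metadata.rb pinning the release from this commit.
name    'hadoop_wrapper'     # hypothetical wrapper cookbook name
version '0.1.0'              # hypothetical wrapper version
depends 'hadoop', '= 1.0.9'  # the cookbook version introduced by this commit
```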
