2.1.5 #77

Open · wants to merge 7 commits into base: 2.1.5

3 changes: 3 additions & 0 deletions 20_glusterfs_hadoop_sudoers
@@ -0,0 +1,3 @@
# GlusterFS sudo settings for multi-tenancy
Defaults:%hadoop !requiretty
%hadoop ALL=NOPASSWD:/usr/bin/getfattr -m . -n trusted.glusterfs.pathinfo *
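
For context, this rule lets members of the hadoop group query the brick locations backing a file without a password or a tty. A minimal sanity check might look like the following (the mount point and file name are examples, not part of this change):

# As a user in the hadoop group, fetch the pathinfo xattr for a file on the gluster mount;
# this is the exact command pattern whitelisted above.
sudo /usr/bin/getfattr -m . -n trusted.glusterfs.pathinfo /mnt/glusterfs/some_file
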
53 changes: 41 additions & 12 deletions README
@@ -139,14 +139,11 @@ FOR HACKERS

* Source Layout (./src/)

org.apache.hadoop.fs.glusterfs/GlusterFSBrickClass.java
org.apache.hadoop.fs.glusterfs/GlusterFSXattr.java <--- Fetch/Parse Extended Attributes of a file
org.apache.hadoop.fs.glusterfs/GlusterFUSEInputStream.java <--- Input Stream (instantiated during open() calls)
org.apache.hadoop.fs.glusterfs/GlusterFSBrickRepl.java
org.apache.hadoop.fs.glusterfs/GlusterFUSEOutputStream.java <--- Output Stream (instantiated during creat() calls)
org.apache.hadoop.fs.glusterfs/GlusterFileSystem.java <--- Entry Point for the plugin (extends Hadoop FileSystem class)
For the overall architecture, see the link below. Currently, we use the Hadoop RawLocalFileSystem as
the basis and wrap it with the GlusterVolume class. That class is then used by the
Hadoop 1.x (GlusterFileSystem) and Hadoop 2.x (GlusterFs) adapters.

org.gluster.test.AppTest.java <--- Your test cases go here (if any :-))
https://forge.gluster.org/hadoop/pages/Architecture

./tools/build-deploy-jar.py <--- Build and Deployment Script
./conf/core-site.xml <--- Sample configuration file
@@ -160,9 +157,6 @@ org.gluster.test.AppTest.java <--- Your test cases go here (if
JENKINS
-------

At the moment, you need to run Jenkins as root - this can be done by changing the user the Jenkins service runs as (see below).
This is because of the mount command issued by the GlusterFileSystem.

#Method 1) Modify JENKINS_USER in /etc/sysconfig/jenkins
JENKINS_USER=root
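
As a sketch of Method 1 in scripted form (assuming a stock /etc/sysconfig/jenkins; paths and service management differ by distribution):

#Switch the Jenkins service user to root, then restart the service
sudo sed -i 's/^JENKINS_USER=.*/JENKINS_USER="root"/' /etc/sysconfig/jenkins
sudo service jenkins restart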

@@ -181,8 +175,43 @@ The unit tests read test resources from glusterconfig.properties - a file which
1) edit your .bashrc, or else at your terminal run :

export GLUSTER_MOUNT=/mnt/glusterfs
export HCFS_FILE_SYSTEM_CONNECTOR=org.apache.hadoop.hcfs.test.connector.glusterfs.GlusterFileSystemTestConnector
export HCFS_FILE_SYSTEM_CONNECTOR=org.apache.hadoop.fs.test.connector.glusterfs.GlusterFileSystemTestConnector
export HCFS_CLASSNAME=org.apache.hadoop.fs.glusterfs.GlusterFileSystem

(in Eclipse - see below, you will add these in the "Run Configurations" menu,
under VM arguments, prefixed with -D, for example, "-DGLUSTER_MOUNT=x -DHCFS_FILE_SYSTEM_CONNECTOR=y ...")

2) run:
mvn package
mvn clean package

3) The jar artifact will be in target/
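
Putting steps 1-3 together, a non-interactive build might look like this (the mount point is an example; adjust it to your environment):

#Export the test settings, build, and confirm the jar was produced
export GLUSTER_MOUNT=/mnt/glusterfs
export HCFS_FILE_SYSTEM_CONNECTOR=org.apache.hadoop.fs.test.connector.glusterfs.GlusterFileSystemTestConnector
export HCFS_CLASSNAME=org.apache.hadoop.fs.glusterfs.GlusterFileSystem
mvn clean package
ls target/*.jar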

DEVELOPING
----------

0) Create a mock gluster mount:

#Create a raw disk image and format it...
sudo mkdir -p /export
sudo truncate -s 1G /export/debugging_fun.brick
sudo mkfs.xfs /export/debugging_fun.brick

#Make a mount point for it and mount it as a loopback fs
sudo mkdir -p /mnt/mybrick
sudo mount -o loop /export/debugging_fun.brick /mnt/mybrick

#Now make a brick directory on it, and also a mount point for gluster itself
sudo mkdir /mnt/mybrick/glusterbrick
sudo mkdir /mnt/glusterfs
MNT="/mnt/glusterfs"
BRICK="/mnt/mybrick/glusterbrick"

#Create a gluster volume that writes to the brick, and start it
sudo gluster volume create HadoopVol $(hostname):$BRICK
sudo gluster volume start HadoopVol

#Mount the volume at the gluster mount point
sudo mount -t glusterfs $(hostname):HadoopVol $MNT
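
Once the volume is mounted, a quick sanity check of the mock setup might be (HadoopVol and the paths are the values used in the walkthrough above):

#Confirm the volume is started and that writes on the mount land on the brick
sudo gluster volume info HadoopVol
df -h $MNT
echo "hello gluster" | sudo tee $MNT/smoke_test.txt
ls /mnt/mybrick/glusterbrick/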

1) Run "mvn eclipse:eclipse", and import the project into Eclipse.

2) Add the exported env variables above via Run Configurations, as described in the section above.

3) Develop and run unit tests as you would for any other Java app.
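
As one further (illustrative) example, a single test class can be run from the command line via Surefire; the class name below is a placeholder - substitute a real test from src/test, and make sure the environment variables above are exported first:

#Run just one test class
mvn test -Dtest=AppTest
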
60 changes: 0 additions & 60 deletions glusterfs-hadoop.spec

This file was deleted.

10 changes: 9 additions & 1 deletion glusterfs-hadoop.spec.tmpl
@@ -1,5 +1,6 @@
Name: rhs-hadoop
Version: $version
# release number is automatically updated when the source version is the same
Release: $release
#if $epoch
Epoch: $epoch
@@ -32,6 +33,7 @@ in the hadoop configuration files and loaded at runtime as the FileSystem implem
rm -rf %{buildroot}
/bin/mkdir -p %{buildroot}%{_javadir}
/bin/mkdir -p %{buildroot}%{hadoop_libdir}
/bin/mkdir -p %{buildroot}%{_sysconfdir}/sudoers.d

#for $i, $artifact in $enumerate($all_artifacts)
#if $artifact.endswith('.jar')
@@ -40,6 +42,9 @@ rm -rf %{buildroot}
#end if
#end for

# install sudoers file into /etc/sudoers.d/
install -m 644 20_glusterfs_hadoop_sudoers %{buildroot}%{_sysconfdir}/sudoers.d/

%clean
rm -rf %{buildroot}

@@ -51,9 +56,12 @@ rm -rf %{buildroot}
%{hadoop_libdir}/$artifact
#end if
#end for
%{_sysconfdir}/sudoers.d/20_glusterfs_hadoop_sudoers

%changelog
* Wed Jan 9 2014 Jay Vyas <[email protected]> 2.1.4 renamed
* Wed Feb 05 2014 Jeff Vance <[email protected]> 2.1.5-2
- installs the sudoers file. BZ 1059986
* Wed Jan 9 2014 Jay Vyas <[email protected]> 2.1.4-1
- rename to rhs-hadoop for release

* Fri Nov 22 2013 Jay Vyas <[email protected]> 2.1.4
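
To sanity-check the packaging change, the built RPM can be inspected and the installed sudoers fragment validated; the RPM file name below is an example:

#Confirm the sudoers file is in the payload, then check its syntax after installation
rpm -qlp rhs-hadoop-2.1.5-*.rpm | grep sudoers
sudo visudo -cf /etc/sudoers.d/20_glusterfs_hadoop_sudoers
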
2 changes: 1 addition & 1 deletion pom.xml
@@ -5,7 +5,7 @@
<artifactId>glusterfs</artifactId>
<packaging>jar</packaging>
<version>2.1.5</version>
<name>glusterfs</name>
<name>glusterfs-hadoop</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>