GlusterFS
GlusterFS is free software originally developed by Gluster, Inc., which was acquired by Red Hat, Inc. in 2011. With this software we can build very large storage by combining multiple storage servers over an interconnect such as 1G Ethernet or InfiniBand. In simple terms, we aggregate multiple storage servers into one large pool of storage that clients can access.
Advantages of GlusterFS:-
- Open source
- Scales storage capacity up to several petabytes
- High performance and I/O throughput
- Can be deployed on commodity hardware servers
The following types of volume can be created in your GlusterFS environment:-
- Distributed
- Replicated
- Striped
- Distributed Striped
- Distributed Replicated
- Distributed Striped Replicated
- Striped Replicated
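The type is chosen at volume-creation time. As a rough sketch (the volume and brick names here are hypothetical, and the striped variants use the `stripe` option from the GlusterFS 3.x era this article targets), the `replica` and `stripe` counts select the type:

```
# Distributed (the default): files are spread across the bricks
gluster volume create dist-vol server1:/bricks/b1 server2:/bricks/b1

# Replicated: every file is mirrored on both bricks
gluster volume create repl-vol replica 2 server1:/bricks/b1 server2:/bricks/b1

# Striped: each file is split into chunks across the bricks
gluster volume create stripe-vol stripe 2 server1:/bricks/b1 server2:/bricks/b1
```

Combinations such as Distributed Replicated simply use more bricks than the `replica` count, so replica sets are themselves distributed.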
NOTE:- A brick is an export directory on a server.
1) Distributed
This volume type simply spreads files evenly across the available bricks in a volume. The most basic GlusterFS volume type is a "distribute only" volume: if I write 100 files to a two-server volume, on average fifty will end up on one server and fifty on the other. This is faster than a "replicated" volume, but it provides no redundancy.
Now we are going to configure a distributed volume using 2 servers, which can then be accessed from GlusterFS clients.
OS Version:- RHEL 6.5
Server1 Hostname:- server1.example.com
Server1 IP:- 172.66.249.15
Server2 Hostname:- server2.example.com
Server2 IP:- 172.66.249.16
Client Hostname:- client.example.com
Client IP:- 172.66.249.17
Step 1:- Log in to Server 1 and configure it as follows.
First, download the repo file:-
Then install the necessary components:-
Start the glusterd service.
Add IP and hostname entries to the /etc/hosts file; this file maps host names to their IP addresses.
Format and mount the partition.
[NOTE:- Configure the same settings in server2 ]
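For completeness, the equivalent session on Server 1 (mirroring the Server 2 commands in Step 2; the brick disk is assumed to be /dev/sdb on this host as well) would look roughly like this:

```
[root@server1 ~]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo -P /etc/yum.repos.d
[root@server1 ~]# yum -y install glusterfs-server
[root@server1 ~]# /etc/init.d/glusterd start
[root@server1 ~]# vi /etc/hosts        # add the same three host entries as in Step 2
[root@server1 ~]# mkfs.ext4 /dev/sdb
[root@server1 ~]# mkdir -p /server1/disk1
[root@server1 ~]# mount /dev/sdb /server1/disk1
```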
STEP 2:- Log in to Server 2 and configure it as follows.
[root@server2 ~]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo -P /etc/yum.repos.d
[root@server2 ~]# yum -y install glusterfs-server
[root@server2 ~]# /etc/init.d/glusterd start
Starting glusterd: [ OK ]
[root@server2 ~]# vi /etc/hosts
172.66.249.15 server1.example.com server1
172.66.249.16 server2.example.com server2
172.66.249.17 client.example.com client
[root@server2 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c7fe4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2         501      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             502       20480    20458496   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/mapper/vg_server1-lv_root: 18.8 GB, 18798870528 bytes
255 heads, 63 sectors/track, 2285 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_server1-lv_swap: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 26.8 GB, 26843545600 bytes
64 heads, 32 sectors/track, 25600 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
# Format the new disk and create a directory to hold the GlusterFS brick
[root@server2 ~]# mkfs.ext4 /dev/sdb
[root@server2 ~]# mkdir -p /server2/disk2
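One step the walkthrough leaves implicit: before a volume spanning both servers can be created, they must be joined into a trusted storage pool, and if the freshly formatted disk is to back the brick it must be mounted. Assuming the layout above, that would look roughly like:

```
[root@server2 ~]# mount /dev/sdb /server2/disk2
[root@server1 ~]# gluster peer probe server2
[root@server1 ~]# gluster peer status
```

`gluster peer status` should report the other node as connected before you proceed to create the volume.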
Create a gluster volume by using this command:-
[root@server1 ~]# gluster volume create My_GFS_Volume server1:/server1/disk1/volume1 server2:/server2/disk2/volume2
Next, you must start the volume; only after that can it be accessed from a client.
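Starting the volume and mounting it on the client might look like this (the client mount point /mnt/gluster is an assumption; any empty directory works):

```
[root@server1 ~]# gluster volume start My_GFS_Volume
[root@server1 ~]# gluster volume info My_GFS_Volume

# On the client: install the FUSE client and mount the volume
[root@client ~]# yum -y install glusterfs glusterfs-fuse
[root@client ~]# mkdir -p /mnt/gluster
[root@client ~]# mount -t glusterfs server1:/My_GFS_Volume /mnt/gluster
```

Files written under /mnt/gluster will then be distributed across the two bricks, roughly half on each server.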