Provisioning Storm

  This document covers the procedure for building a Storm cluster using the StormStarter script. When executed against a Red Hat-based (yum) system, this script will give you a working Storm cluster.

Essentially, to get a working cluster, you will:
  1. Set up 5 VMs (any odd number will do, but 5 is recommended)
  2. git clone https://github.com/scott-mead/storm
  3. Configure the cluster.yml config file
  4. Configure SSH
  5. Run the ‘StormStarter’ script

Set up VMs

Nodes

  Building a cluster with 5 VMs will allow you to withstand two node failures: a 5-node ZooKeeper ensemble keeps its majority quorum (3 of 5) even after losing 2 nodes. Always build clusters in odd numbers so that ZooKeeper's quorum-keeping operations can elect a master.

Salt

  I’m using salt-cloud to quickly provision EC2 nodes. You can use whichever method you’d like, but if you don’t have a tool, I highly recommend salt-cloud.
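
  As a rough sketch, assuming you already have an EC2 provider and profile configured for salt-cloud (the profile name storm_ec2 and the instance names below are placeholders), launching five nodes is a single command:

```sh
# Launch five EC2 instances in parallel from a pre-configured salt-cloud profile.
# "storm_ec2" and the instance names are placeholders; substitute your own.
salt-cloud -P -p storm_ec2 storm1 storm2 storm3 storm4 storm5
```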

Clone StormStarter

  Get a copy of the ‘StormStarter’ git repository from https://github.com/scott-mead/storm
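
  For example (the checkout lands in a directory named storm, following git's default of using the repository name):

```sh
git clone https://github.com/scott-mead/storm
cd storm
```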


Configure cluster.yml

  This is your primary config file.  Look at the example and modify it to your needs.

The configuration file contains a group of servers that you will be addressing.
```yaml
group_name:
  username: "sshusername"
  node1:
    hostname: "externally addressable name"
    int_ip: "host internal ip"
    int_name: "cluster1"
```

Example config

```yaml
dev_cluster:
  username: "ec2-user"
  node1:
    hostname: "c1.example.com"
    int_ip: "192.168.185.1"
    int_name: "cluster1"
  node2:
    hostname: "c2.example.com"
    int_ip: "192.168.185.2"
    int_name: "cluster2"
  node3:
    hostname: "c3.example.com"
    int_ip: "192.168.185.3"
    int_name: "cluster3"

prod_cluster:
  username: "ec2-user"
  node1:
    hostname: "p1.example.com"
    int_ip: "192.168.185.1"
    int_name: "cluster1"
  node2:
    hostname: "p2.example.com"
    int_ip: "192.168.185.2"
    int_name: "cluster2"
  node3:
    hostname: "p3.example.com"
    int_ip: "192.168.185.3"
    int_name: "cluster3"
```

- group_name is what you will pass to StormStarter -g
  - hostname: should be the externally addressable EC2 name, e.g. ec2-52-109-12-100.compute-1.amazonaws.com
  - int_ip: the internal (a.k.a. non-public / private) IP address; this will be used by the nodes to communicate with one another
  - int_name: an internal name to be set up by StormStarter. If you change this, you will need to modify most of the chef-managed configuration files within the package. I do NOT recommend this.

Configure SSH

  Edit the $HOME/.ssh/config file, add the key for your AWS hosts, and disable host key prompting.

```
Host *aws*
    IdentityFile /home/user/myawskey.pem
    StrictHostKeyChecking no
```

You can either use wildcards as I did above, or list each host individually.

```
Host ec2-52-109-12-100.compute-1.amazonaws.com
    IdentityFile /home/user/myawskey.pem
    StrictHostKeyChecking no

Host ec2-52-109-12-101.compute-1.amazonaws.com
    IdentityFile /home/user/myawskey.pem
    StrictHostKeyChecking no
```

Once you have set this up, validate that you can ssh to each host without a password prompt.

Run StormStarter

  Required options:

  -c <path to config file>
  -g <group name>

  The group name is the top-level key of the YAML tree, so if we had configured:
```yaml
dev_cluster:
  username: "ec2-user"
  node1:
    hostname: "c1.example.com"
    int_ip: "192.168.185.1"
    int_name: "cluster1"
  node2:
    hostname: "c2.example.com"
    int_ip: "192.168.185.2"
    int_name: "cluster2"
  node3:
    hostname: "c3.example.com"
    int_ip: "192.168.185.3"
    int_name: "cluster3"

prod_cluster:
  username: "ec2-user"
  node1:
    hostname: "p1.example.com"
    int_ip: "192.168.185.1"
    int_name: "cluster1"
  node2:
    hostname: "p2.example.com"
    int_ip: "192.168.185.2"
    int_name: "cluster2"
  node3:
    hostname: "p3.example.com"
    int_ip: "192.168.185.3"
    int_name: "cluster3"
```

I could use either ‘dev_cluster’ or ‘prod_cluster’ as the group name.

Action:
  You have 4 possible actions (example invocations follow this list):
  1. provision
     - This will copy files, clone the repository, and run the install
     - You can run this against a system with an existing install to update configs
  2. start
     - This will connect to each node and run service supervisord start
  3. stop
     - This will connect to each node and run service supervisord stop
  4. restart
     - This will connect to each node and run service supervisord restart
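
  For example, provisioning and then starting the dev_cluster group might look like the sketch below. The exact script name and the way the action is passed (shown here as a trailing argument) are assumptions based on this README; check the script's usage output for the real syntax.

```sh
# Provision (or re-provision) every node listed under dev_cluster
./StormStarter -c ./cluster.yml -g dev_cluster provision

# Bring the cluster up once provisioning finishes
./StormStarter -c ./cluster.yml -g dev_cluster start
```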
