Configuring a deployment v1.2
To configure the ordering nodes, frontends, peers, and clients, you can use the prepare_<TYPE>_defaults.sh scripts contained in the ./docker_images/ folder to obtain the default configuration files for each type of principal (where <TYPE> is either orderingnode, frontend, peer, or cli). This will create a new folder named ./<TYPE>_material where you can review and edit the configuration files before incorporating them into containers. Once the configuration files are ready, they can be used in their respective containers by mounting them as volumes. The following steps describe how to create a new Fabric deployment with multiple organizations using our ordering service. It is assumed you have entry-level knowledge of both Docker and Hyperledger Fabric and that you have already gone through the quick start guide. You can also download the files on which this guide is based here.
The first thing to do is to create the crypto material for the organizations and the genesis block for the system channel. You will need to use the official cryptogen tool and the configtxgen tool provided by us. You can access these tools from within a bftsmart/fabric-tools container (or, alternatively, by copying the tools from the container to your machine). You should create organizations both for the ordering service and for the peers that will run chaincodes. In the case of the ordering service, one organization should be created for the ordering nodes and another for the frontends. An example configuration file for such a setup is:
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Organizations associated with the ordering service
# ---------------------------------------------------------------------------
OrdererOrgs:
  # -------------------------------------------------------------------------
  # Ordering nodes
  # -------------------------------------------------------------------------
  - Name: OrderingNodes
    Domain: node.bft
    Template:
      Count: 4
      Hostname: "{{.Index}}"
  # -------------------------------------------------------------------------
  # Frontends
  # -------------------------------------------------------------------------
  - Name: Frontends
    Domain: frontend.bft
    Specs:
      - Hostname: 1000
      - Hostname: 2000
# ---------------------------------------------------------------------------
# "PeerOrgs" - Organizations for the endorsing/committing peers
# ---------------------------------------------------------------------------
PeerOrgs:
  - Name: LaSIGE
    Domain: lasige.bft
    EnableNodeOUs: true
    Template:
      Count: 2
      Hostname: "{{.Index}}.peer"
    Users:
      Count: 1
  - Name: IBM
    Domain: ibm.bft
    EnableNodeOUs: true
    Template:
      Count: 2
      Hostname: "{{.Index}}.peer"
    Users:
      Count: 1
You can download the file represented above here.
Using the example above, the cryptogen tool will generate certificates and private keys for the organizations OrderingNodes (comprising 4 ordering nodes), Frontends (comprising 2 frontends), LaSIGE, and IBM (each comprising 2 peers and a single client besides the administrator). The generated crypto material will look as follows:
$ ls crypto-config/ordererOrganizations/node.bft/orderers/
0.node.bft 1.node.bft 2.node.bft 3.node.bft
$ ls crypto-config/ordererOrganizations/frontend.bft/orderers/
1000.frontend.bft 2000.frontend.bft
$ ls crypto-config/peerOrganizations/lasige.bft/peers/
0.peer.lasige.bft 1.peer.lasige.bft
$ ls crypto-config/peerOrganizations/lasige.bft/users/
[email protected] [email protected]
$ ls crypto-config/peerOrganizations/ibm.bft/peers/
0.peer.ibm.bft 1.peer.ibm.bft
$ ls crypto-config/peerOrganizations/ibm.bft/users/
[email protected] [email protected]
The profiles to be included in the configtx.yaml file used to generate the genesis block should be:
Profiles:
  BFTGenesis:
    <<: *ChannelDefaults
    Orderer:
      <<: *OrdererDefaults
      OrdererType: bftsmart
      Addresses:
        - 1000.frontend.bft:7050
        - 2000.frontend.bft:7050
      Organizations:
        - *OrderingNodes
        - *Frontends
    Consortiums:
      BFTConsortium:
        Organizations:
          - *LaSIGE
          - *IBM
  BFTChannel:
    Consortium: BFTConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *LaSIGE
        - *IBM
An important detail is that the parameter Orderer->Addresses must be a list of the frontend addresses, not of the ordering nodes. The ordering nodes stand behind the frontends and do not interact directly with any other Fabric component.
Moreover, the organizations should be specified as follows:
Organizations:
  - &OrderingNodes
    Name: OrderingNodes
    ID: NodesMSP
    MSPDir: crypto-config/ordererOrganizations/node.bft/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('NodesMSP.member')"
      Writers:
        Type: Signature
        Rule: "AND('NodesMSP.member','NodesMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('NodesMSP.admin')"
  - &Frontends
    Name: Frontends
    ID: FrontendsMSP
    MSPDir: crypto-config/ordererOrganizations/frontend.bft/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('FrontendsMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('FrontendsMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('FrontendsMSP.admin')"
  - &LaSIGE
    Name: LaSIGE
    ID: LaSIGEMSP
    MSPDir: crypto-config/peerOrganizations/lasige.bft/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('LaSIGEMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('LaSIGEMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('LaSIGEMSP.admin')"
    AnchorPeers:
      - Host: 0.peer.lasige.bft
        Port: 7051
  - &IBM
    Name: IBM
    ID: IBMMSP
    MSPDir: crypto-config/peerOrganizations/ibm.bft/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('IBMMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('IBMMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('IBMMSP.admin')"
    AnchorPeers:
      - Host: 0.peer.ibm.bft
        Port: 7051
You can download the file represented above here.
Notice that the Writers policy for the OrderingNodes organization uses the rule "AND('NodesMSP.member','NodesMSP.member')". This makes committing peers demand that blocks include 2 signatures from the ordering nodes when the BlockValidation policy hierarchy is evaluated. Since we are dealing with Byzantine faults, a single block signature does not suffice. Because this particular example is designed to withstand a single Byzantine fault (f=1), we set the policy to demand f+1 signatures.
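As a quick sanity check of this rule, the required signature count for a given fault threshold can be computed directly; a minimal sketch of the f+1 bound described above:

```shell
# With up to f Byzantine ordering nodes, f+1 signatures guarantee that at
# least one signature comes from a correct node.
f=1
echo "required block signatures: $((f + 1))"
```

For f=1 this yields 2 signatures, which is exactly what the AND rule above encodes.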
As already mentioned in the quick start guide, the genesis block must be created using the configtxgen tool provided in the bftsmart/fabric-tools image. Once the genesis block is created, it must be disseminated across all ordering nodes and frontends using remote copying tools such as scp or docker cp. Do not attempt to generate the genesis block at each host instead of copying it from a single location, because configtxgen cannot deterministically generate the same file even when all parameters are the same.
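Because of this non-determinism, it is worth verifying that every host ended up with byte-identical copies, e.g. by comparing checksums. The sketch below simulates this with a local copy (the file contents and names are illustrative stand-ins; in a real deployment the second copy would sit on a remote host reached via scp or ssh):

```shell
# Stand-in for the real genesis block produced by configtxgen
echo "genesis-bytes" > genesisblock
# Stand-in for "scp genesisblock user@remote-host:" plus a remote checksum
cp genesisblock genesisblock.remote-copy
# The checksums must match; a mismatch means a host regenerated its own
# (different) block instead of receiving the disseminated one
sha256sum genesisblock genesisblock.remote-copy
```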
Edit the ./<TYPE>_material/config/hosts.config file with the IDs, IP addresses, and ports of each host running an ordering node. This file must be identical across all ordering nodes, and should look like this:
#server id, address and port (the ids from 0 to n-1 are the service replicas)
0 0.node.bft 11000
1 1.node.bft 11000
2 2.node.bft 11000
3 3.node.bft 11000
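Since every entry follows the same "<id> <address> <port>" pattern, the file can be generated with a small loop; a sketch assuming the hostname scheme and port used in this example:

```shell
# Generate hosts.config entries for n ordering nodes (ids 0..n-1), using
# the <id>.node.bft hostname pattern and port 11000 from this example
n=4
for id in $(seq 0 $((n - 1))); do
  echo "$id $id.node.bft 11000"
done > hosts.config
cat hosts.config
```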
Edit the GENESIS and MSPID parameters in the ./<TYPE>_material/config/node.config file. GENESIS should be the path to the genesis block within the container. MSPID should be set to the MSP ID of the respective organization, matching the ID fields in configtx.yaml: "NodesMSP" at the ordering nodes and "FrontendsMSP" at the frontends.
Place the private key of the frontend/ordering node, as well as the certificates of all ordering nodes and frontends, in ./<TYPE>_material/config/keys. Rename each certificate generated with cryptogen to cert<ID>.pem, and the private key to keystore.pem. The contents of the directory should look as follows:
$ ls orderingnode_material/config/keys/
cert0.pem cert1000.pem cert1.pem cert2000.pem cert2.pem cert3.pem keystore.pem
$ ls frontend_material/config/keys/
cert0.pem cert1000.pem cert1.pem cert2000.pem cert2.pem cert3.pem keystore.pem
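The renaming itself can be scripted. The sketch below only simulates it with dummy files (the echo lines stand in for certificates taken from the crypto-config tree); what matters is the cert<ID>.pem naming convention:

```shell
mkdir -p keys
# IDs of the 4 ordering nodes and 2 frontends in this example
for id in 0 1 2 3 1000 2000; do
  echo "certificate of principal $id" > "signcert-$id.pem"  # dummy stand-in
  mv "signcert-$id.pem" "keys/cert$id.pem"                  # rename to cert<ID>.pem
done
ls keys
```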
If you are familiar with the BFT-SMaRt library, you may have noticed that we are placing the keys associated with the Fabric deployment in the same place as the keys used by the library. This is because the ordering service is designed to make the library and the application share the same set of keys, instead of requiring developers to manage two independent sets of keys.
To set up the number of ordering nodes present in the system and the number of Byzantine faults to withstand, you need to edit the system.servers.num, system.servers.f, and system.initial.view parameters in the ./<TYPE>_material/config/system.config file. For instance, to tolerate a single Byzantine fault (which requires a total of 4 nodes), the aforementioned parameters would look as follows:
system.servers.num = 4
system.servers.f = 1
system.initial.view = 0,1,2,3
On the other hand, if you wish to withstand up to 3 Byzantine faults, the parameters would look as follows:
system.servers.num = 10
system.servers.f = 3
system.initial.view = 0,1,2,3,4,5,6,7,8,9
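These values follow the usual BFT replication bound of n = 3f + 1 replicas, so both the node count and the initial view can be derived from f; a minimal sketch:

```shell
f=3
n=$(( 3 * f + 1 ))            # 3f+1 replicas are needed to tolerate f Byzantine faults
view=$(seq -s, 0 $((n - 1)))  # the initial view lists all replica ids 0..n-1
echo "system.servers.num = $n"
echo "system.servers.f = $f"
echo "system.initial.view = $view"
```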
You can also configure the ordering service to tolerate either Byzantine faults or classic crash faults by setting the system.bft parameter to true or false, respectively.
To configure the number of frontends present in the system, it is necessary to generate the genesis block with the appropriate list in the Orderer->Addresses parameter of the configtx.yaml file (as previously mentioned). It is also necessary to append each frontend ID to the RECEIVERS parameter in the ./orderingnode_material/config/node.config file.
To configure each individual frontend, the orderer.yaml file must be edited as follows:
General:
  GenesisFile: /bft-config/genesisblock
  LocalMSPDir: /bft-config/fabric/msp
  LocalMSPID: FrontendsMSP
Once this is done, you can supply these configuration files to the respective containers, using volumes to incorporate them into the file system and environment variables to force the container to adopt this configuration. Assuming that each configuration is located at /home/jcs/<TYPE>_material across all hosts and that the chosen folder for them inside the containers is /bft-config/:
#At the hosts for the ordering nodes
$ docker run -i -t --rm --network=bft_network --name=<NODE ID>.node.bft -e NODE_CONFIG_DIR=/bft-config/config/ -v /home/jcs/orderingnode_material/:/bft-config/ bftsmart/fabric-orderingnode:amd64-1.2.0 <NODE ID>
#At the hosts for the frontends
$ docker run -i -t --rm --network=bft_network --name=<FRONTEND ID>.frontend.bft -e FRONTEND_CONFIG_DIR=/bft-config/config/ -e FABRIC_CFG_PATH=/bft-config/fabric/ -v /home/jcs/frontend_material/:/bft-config/ bftsmart/fabric-frontend:amd64-1.2.0 <FRONTEND ID>
The containers for the peers are still configured the same way as in standard Fabric deployments: either by editing the core.yaml file or by defining environment variables matching the structure of that file. Nonetheless, it is still necessary to mount the crypto material in the container. In the case of this example configuration, the parameters to be modified are peer->gossip->bootstrap, peer->gossip->endpoint, peer->mspConfigPath, and peer->localMspId. This means that for each peer <ID> at organization <ORG>, the values of these parameters should be:
peer:
  gossip:
    bootstrap: <ID>.peer.<ORG>.bft:7051
    endpoint: <ID>.peer.<ORG>.bft:7051
  mspConfigPath: /bft-config/fabric/msp
  localMspId: <ORG>MSP
Each peer can then be deployed as follows:
$ docker create -i -t --rm --network=bridge --name=<ID>.peer.lasige.bft -e FABRIC_CFG_PATH=/bft-config/fabric/ -v /home/jcs/peer_material/:/bft-config/ -v /var/run/:/var/run/ hyperledger/fabric-peer:amd64-1.2.0
$ docker network connect bft_network <ID>.peer.<ORG>.bft
$ docker start -a <ID>.peer.<ORG>.bft
If you prefer, you can also use environment variables to define the above parameters instead of editing core.yaml. Furthermore, make sure that Docker can be correctly accessed from inside the container by checking the vm->endpoint parameter in core.yaml (or by correctly setting $CORE_VM_ENDPOINT). If the peers are supposed to access Docker using UNIX sockets, make sure the host machine creates the socket file at /var/run/docker.sock and that the value of the parameter is set to unix:///var/run/docker.sock.
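As an illustration of the environment-variable form, the four peer parameters above map to CORE_-prefixed variable names derived from the core.yaml structure (shown here for peer 0 of the LaSIGE organization in this example):

```shell
# Fabric maps core.yaml keys to environment variables, e.g.
# peer->gossip->bootstrap becomes CORE_PEER_GOSSIP_BOOTSTRAP
export CORE_PEER_GOSSIP_BOOTSTRAP=0.peer.lasige.bft:7051
export CORE_PEER_GOSSIP_ENDPOINT=0.peer.lasige.bft:7051
export CORE_PEER_MSPCONFIGPATH=/bft-config/fabric/msp
export CORE_PEER_LOCALMSPID=LaSIGEMSP
echo "$CORE_PEER_LOCALMSPID"
```

In a container deployment these would be passed to docker run with -e instead of being exported on the host.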
In the case of the clients, the parameters of importance are peer->address, peer->mspConfigPath, and peer->localMspId (peer->address is supplied via the CORE_PEER_ADDRESS environment variable in the deployment command below):
peer:
  mspConfigPath: /bft-config/fabric/msp
  localMspId: <ORG>MSP
And the deployment looks like:
docker run -i -t --rm --network=bft_network -e FABRIC_CFG_PATH=/bft-config/fabric/ -v /home/jcs/cli_material/:/bft-config/ -e CORE_PEER_ADDRESS=<ID>.peer.<ORG>.bft:7051 bftsmart/fabric-tools:amd64-1.2.0
If you expect to create the channel creation transaction and the anchor peer transactions with this container, you should also include a configtx.yaml file like the one described earlier:
Organizations:
  - &LaSIGE
    Name: LaSIGE
    ID: LaSIGEMSP
    MSPDir: /bft-config/fabric/LaSIGE/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('LaSIGEMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('LaSIGEMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('LaSIGEMSP.admin')"
    AnchorPeers:
      - Host: 0.peer.lasige.bft
        Port: 7051
  - &IBM
    Name: IBM
    ID: IBMMSP
    MSPDir: /bft-config/fabric/IBM/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('IBMMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('IBMMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('IBMMSP.admin')"
    AnchorPeers:
      - Host: 0.peer.ibm.bft
        Port: 7051
You can download the file represented above here.
Keep in mind that the MSPDir parameter must be updated to a valid path within the container, such as /bft-config/fabric/<ORG>/msp.