Project name: feitnomore/hyperledger-fabric-kubernetes
Project URL: https://github.com/feitnomore/hyperledger-fabric-kubernetes
Primary language: Go 94.2%

Blockchain Solution with Hyperledger Fabric + Hyperledger Explorer on Kubernetes

Maintainers: feitnomore

This is a simple guide to help you implement a complete blockchain solution using Hyperledger Fabric v1.3 with Hyperledger Explorer v0.3.7 on top of a Kubernetes platform. This solution also uses CouchDB as the peers' backend, Apache Kafka topics for the orderers, and an NFS server (network file system) to share data between the components.

Note: Kafka/Zookeeper are running outside Kubernetes.

WARNING: Use it at your own risk.

BACKGROUND

A few weeks back, I decided to take a look at Hyperledger Fabric's approach to blockchain, as it is a technology that has been seeing increasing use and is backed by giant tech companies such as IBM and Oracle.
When I started looking at it, I found lots of scripts like …

INTRODUCTION

We're going to build a complete Hyperledger Fabric v1.3 environment with a CA, an Orderer and 4 organizations. To achieve scalability and high availability on the Orderer, we're going to use Kafka. Each organization will have 2 peers, and each peer will have its own CouchDB instance. We're also going to deploy Hyperledger Explorer v0.3.7 with its PostgreSQL database.

ARCHITECTURE

Infrastructure view

For this environment we're going to use a 3-node Kubernetes cluster, a 3-node Apache Zookeeper cluster (for Kafka), a 4-node Apache Kafka cluster and an NFS server. All the machines will be on the same network. For the Kubernetes cluster we'll have the following machines:

kubenode01.local.parisi.biz
kubenode02.local.parisi.biz
kubenode03.local.parisi.biz

Note: This is a home Kubernetes environment; however, most of what is covered here should apply to any cloud provider that offers Kubernetes-compatible services.

For Apache Zookeeper we'll have the following machines:

zookeeper1.local.parisi.biz
zookeeper2.local.parisi.biz
zookeeper3.local.parisi.biz

Note: Zookeeper is needed by Apache Kafka.

For Apache Kafka we'll have the following machines:

kafka1.local.parisi.biz
kafka2.local.parisi.biz
kafka3.local.parisi.biz
kafka4.local.parisi.biz

Note: We're using Kafka version 1.0 for Hyperledger compatibility.

For the NFS server we'll have:

storage.local.parisi.biz

Note: Check this link for a quick guide on NFS server setup.

The image below represents the environment infrastructure:

Note: It's important that the whole environment keeps its time in sync, as we're dealing with transactions and shared storage. Please make sure all clocks are in sync; I encourage you to use NTP on your servers.

Fabric Logical view

This environment will have a CA and an Orderer as Kubernetes deployments:

blockchain-ca
blockchain-orderer

We'll also have 4 organizations, with each organization having 2 peers, organized in the following deployments:

blockchain-org1peer1
blockchain-org1peer2
blockchain-org2peer1
blockchain-org2peer2
blockchain-org3peer1
blockchain-org3peer2
blockchain-org4peer1
blockchain-org4peer2

The image below represents this logical view:

Explorer Logical view

We're going to have Hyperledger Explorer as a WebUI for our environment. Hyperledger Explorer will run in 2 deployments, as below:

blockchain-explorer-db
blockchain-explorer-app

The image below represents this logical view:

Detailed view

Hyperledger Fabric Orderer will connect itself to the Kafka servers as in the image below:

Each Hyperledger Fabric Peer will have its own CouchDB instance running as a sidecar and will connect to our NFS shared storage:

Note: Although it's not depicted above, the CA, Orderer and Explorer deployments will also have access to the NFS shared storage, as they need the artifacts that we're going to store there.

IMPLEMENTATION

Step 1: Checking the environment

First let's make sure we have the Kubernetes environment up & running:

kubectl get nodes

Step 2: Setting up shared storage

Now, assuming the NFS server is up & running and has the correct permissions, we're going to create our kubernetes/fabric-pv.yaml file:

kind: PersistentVolume
apiVersion: v1
metadata:
name: fabric-pv
labels:
type: local
name: fabricfiles
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
path: /nfs/fabric
server: storage.local.parisi.biz
    readOnly: false

Note: The NFS server is running on storage.local.parisi.biz.

Now let's apply the above configuration:

kubectl apply -f kubernetes/fabric-pv.yaml

After that we'll need to create our kubernetes/fabric-pvc.yaml file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: fabric-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
selector:
matchLabels:
      name: fabricfiles

Note: We're binding to our previously created PersistentVolume through the fabricfiles label selector.

Now let's apply the above configuration:

kubectl apply -f kubernetes/fabric-pvc.yaml

Step 3: Launching a Fabric Tools helper pod

In order to perform some operations on the environment, like file management, peer configuration and artifact generation, we'll need a helper pod. Create the kubernetes/fabric-tools.yaml file:

apiVersion: v1
kind: Pod
metadata:
name: fabric-tools
spec:
volumes:
- name: fabricfiles
persistentVolumeClaim:
claimName: fabric-pvc
- name: dockersocket
hostPath:
path: /var/run/docker.sock
containers:
- name: fabrictools
image: hyperledger/fabric-tools:amd64-1.3.0
imagePullPolicy: Always
command: ["sh", "-c", "sleep 48h"]
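    # "sleep 48h" keeps the helper container alive; the pod does no work
    # on its own and exists only so we can kubectl exec into it.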
env:
- name: TZ
value: "America/Sao_Paulo"
- name: FABRIC_CFG_PATH
value: "/fabric"
volumeMounts:
- mountPath: /fabric
name: fabricfiles
- mountPath: /host/var/run/docker.sock
      name: dockersocket

Note: It's important to have the same timezone across the whole network; check the TZ environment variable.

After creating the file, let's apply it to our Kubernetes cluster:

kubectl apply -f kubernetes/fabric-tools.yaml

Make sure the fabric-tools pod is running:

kubectl get pods

Now, assuming the pod is running, let's create the directory that will hold our configuration files:

kubectl exec -it fabric-tools -- mkdir /fabric/config

Step 4: Loading the config files into the storage

1 - Configtx

Create the config/configtx.yaml file:

---
Organizations:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/example.com/msp
AdminPrincipal: Role.MEMBER
- &Org1
Name: Org1MSP
ID: Org1MSP
MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
AdminPrincipal: Role.MEMBER
AnchorPeers:
- Host: blockchain-org1peer1
Port: 30110
- Host: blockchain-org1peer2
Port: 30110
- &Org2
Name: Org2MSP
ID: Org2MSP
MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
AdminPrincipal: Role.MEMBER
AnchorPeers:
- Host: blockchain-org2peer1
Port: 30110
- Host: blockchain-org2peer2
Port: 30110
- &Org3
Name: Org3MSP
ID: Org3MSP
MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
AdminPrincipal: Role.MEMBER
AnchorPeers:
- Host: blockchain-org3peer1
Port: 30110
- Host: blockchain-org3peer2
Port: 30110
- &Org4
Name: Org4MSP
ID: Org4MSP
MSPDir: crypto-config/peerOrganizations/org4.example.com/msp
AdminPrincipal: Role.MEMBER
AnchorPeers:
- Host: blockchain-org4peer1
Port: 30110
- Host: blockchain-org4peer2
Port: 30110
Orderer: &OrdererDefaults
OrdererType: kafka
Addresses:
- blockchain-orderer:31010
BatchTimeout: 1s
BatchSize:
MaxMessageCount: 50
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
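    # A block is cut by whichever limit is reached first: the 1s
    # BatchTimeout, MaxMessageCount (50), or the byte limits above.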
Kafka:
Brokers:
- kafka1.local.parisi.biz:9092
- kafka2.local.parisi.biz:9092
- kafka3.local.parisi.biz:9092
- kafka4.local.parisi.biz:9092
Organizations:
Application: &ApplicationDefaults
Organizations:
Profiles:
FourOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
- *Org3
- *Org4
FourOrgsChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
- *Org3
        - *Org4

Note: The file reflects the topology discussed in the architecture presented before.

Now let's copy the file we just created to our shared filesystem:

kubectl cp config/configtx.yaml fabric-tools:/fabric/config/

2 - Crypto-config

Create the config/crypto-config.yaml file:

OrdererOrgs:
- Name: Orderer
Domain: example.com
Specs:
- Hostname: orderer
PeerOrgs:
- Name: Org1
Domain: org1.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org2
Domain: org2.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org3
Domain: org3.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org4
Domain: org4.example.com
Template:
Count: 2
Users:
      Count: 1

Let's copy the file to our shared filesystem:

kubectl cp config/crypto-config.yaml fabric-tools:/fabric/config/

3 - Chaincode

Now let's copy our chaincode to the shared filesystem:

kubectl cp config/chaincode/ fabric-tools:/fabric/config/

Step 5: Creating the necessary artifacts

1 - cryptogen

kubectl exec -it fabric-tools -- /bin/bash
cryptogen generate --config /fabric/config/crypto-config.yaml
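```shell
# Illustration only (runnable anywhere, not part of the session): with
# "Template: Count: 2" in crypto-config.yaml, cryptogen generates
# material for two peers per organization, i.e. these hostnames:
for org in org1 org2 org3 org4; do
  for i in 0 1; do
    echo "peer${i}.${org}.example.com"
  done
done
```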
exit

Now we're going to copy our files to the correct path and rename the key files:

kubectl exec -it fabric-tools -- /bin/bash
cp -r crypto-config /fabric/
for file in $(find /fabric/ -iname '*_sk'); do echo $file; dir=$(dirname $file); mv ${dir}/*_sk ${dir}/key.pem; done
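```shell
# The loop above gives every cryptogen private key (random name ending
# in _sk) the predictable name key.pem. Safe demo on a scratch dir:
tmp=$(mktemp -d)
touch "$tmp/0123456789abcdef_sk"
mv "$tmp"/*_sk "$tmp/key.pem"
ls "$tmp"    # key.pem
rm -rf "$tmp"
```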
exit

2 - configtxgen

kubectl exec -it fabric-tools -- /bin/bash
cp /fabric/config/configtx.yaml /fabric/
cd /fabric
configtxgen -profile FourOrgsOrdererGenesis -outputBlock genesis.block
exit

3 - Anchor Peers

kubectl exec -it fabric-tools -- /bin/bash
cd /fabric
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org1MSPanchors.tx -channelID channel1 -asOrg Org1MSP
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org2MSPanchors.tx -channelID channel1 -asOrg Org2MSP
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org3MSPanchors.tx -channelID channel1 -asOrg Org3MSP
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org4MSPanchors.tx -channelID channel1 -asOrg Org4MSP
exit

Note: The generated files will be used later to update the channel configuration with the respective Anchor Peers. This step is important for Hyperledger Fabric Service Discovery to work properly.

4 - Fix Permissions

kubectl exec -it fabric-tools -- /bin/bash
chmod a+rx /fabric/* -R
exit

Step 6: Setting up Fabric CA

Create the kubernetes/blockchain-ca_deploy.yaml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: blockchain-ca
spec:
replicas: 1
template:
metadata:
labels:
name: ca
spec:
volumes:
- name: fabricfiles
persistentVolumeClaim:
claimName: fabric-pvc
containers:
- name: ca
image: hyperledger/fabric-ca:amd64-1.3.0
command: ["sh", "-c", "fabric-ca-server start -b admin:adminpw -d"]
env:
- name: TZ
value: "America/Sao_Paulo"
- name: FABRIC_CA_SERVER_CA_NAME
value: "CA1"
- name: FABRIC_CA_SERVER_CA_CERTFILE
value: /fabric/crypto-config/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
- name: FABRIC_CA_SERVER_CA_KEYFILE
value: /fabric/crypto-config/peerOrganizations/org1.example.com/ca/key.pem
- name: FABRIC_CA_SERVER_DEBUG
value: "true"
- name: FABRIC_CA_SERVER_TLS_ENABLED
value: "false"
- name: FABRIC_CA_SERVER_TLS_CERTFILE
value: /certs/ca0a-cert.pem
- name: FABRIC_CA_SERVER_TLS_KEYFILE
value: /certs/ca0a-key.pem
- name: GODEBUG
value: "netdns=go"
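        # netdns=go forces Go's pure-Go DNS resolver, which handles
        # Kubernetes service-name resolution more reliably than cgo.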
volumeMounts:
- mountPath: /fabric
          name: fabricfiles

Note: The CA uses our shared filesystem.

Now let's apply the configuration:

kubectl apply -f kubernetes/blockchain-ca_deploy.yaml

Create the kubernetes/blockchain-ca_svc.yaml file:

apiVersion: v1
kind: Service
metadata:
name: blockchain-ca
labels:
run: blockchain-ca
spec:
type: ClusterIP
selector:
name: ca
ports:
- protocol: TCP
port: 30054
targetPort: 7054
name: grpc
- protocol: TCP
port: 7054
    name: grpc1

Now, apply the configuration:

kubectl apply -f kubernetes/blockchain-ca_svc.yaml

Step 7: Setting up Fabric Orderer

Create the kubernetes/blockchain-orderer_deploy.yaml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: blockchain-orderer
spec:
replicas: 3
template:
metadata:
labels:
name: orderer
spec:
volumes:
- name: fabricfiles
persistentVolumeClaim:
claimName: fabric-pvc
containers:
- name: orderer
image: hyperledger/fabric-orderer:amd64-1.3.0
command: ["sh", "-c", "orderer"]
env:
- name: TZ
value: "America/Sao_Paulo"
- name: ORDERER_CFG_PATH
value: /fabric/
- name: ORDERER_GENERAL_LEDGERTYPE
value: file
- name: ORDERER_FILELEDGER_LOCATION
value: /fabric/ledger/orderer
- name: ORDERER_GENERAL_BATCHTIMEOUT
value: 1s
- name: ORDERER_GENERAL_BATCHSIZE_MAXMESSAGECOUNT
value: "10"
- name: ORDERER_GENERAL_MAXWINDOWSIZE
value: "1000"
- name: CONFIGTX_GENERAL_ORDERERTYPE
value: kafka
- name: CONFIGTX_ORDERER_KAFKA_BROKERS
value: "kafka1.local.parisi.biz:9092,kafka2.local.parisi.biz:9092,kafka3.local.parisi.biz:9092,kafka4.local.parisi.biz:9092"
- name: ORDERER_KAFKA_RETRY_SHORTINTERVAL
value: 1s
- name: ORDERER_KAFKA_RETRY_SHORTTOTAL
value: 30s
- name: ORDERER_KAFKA_VERBOSE
value: "true"
- name: CONFIGTX_ORDERER_ADDRESSES
value: "blockchain-orderer:31010"
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_LISTENPORT
value: "31010"
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
- name: ORDERER_GENERAL_LOCALMSPID
value: OrdererMSP
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /fabric/genesis.block
- name: ORDERER_GENERAL_GENESISPROFILE
value: initial
- name: ORDERER_GENERAL_TLS_ENABLED
value: "false"
- name: GODEBUG
value: "netdns=go"
- name: ORDERER_GENERAL_LEDGERTYPE
value: "ram"
volumeMounts:
- mountPath: /fabric
          name: fabricfiles

Note: Because we're dealing with transactions, timezones need to be in sync everywhere.

Let's apply the configuration:

kubectl apply -f kubernetes/blockchain-orderer_deploy.yaml

Create the file …