
Project name:

feitnomore/hyperledger-fabric-kubernetes

Repository URL:

https://github.com/feitnomore/hyperledger-fabric-kubernetes

Primary language:

Go 94.2%

Introduction:

Blockchain Solution with Hyperledger Fabric + Hyperledger Explorer on Kubernetes

Maintainers: feitnomore

This is a simple guide to help you implement a complete Blockchain solution using Hyperledger Fabric v1.3 with Hyperledger Explorer v0.3.7 on top of a Kubernetes platform. This solution also uses CouchDB as the peers' backend, Apache Kafka topics for the orderers, and an NFS server (Network File System) to share data between the components.

Note: Kafka/Zookeeper are running outside Kubernetes.

WARNING: Use it at your own risk.

BACKGROUND

A few weeks back, I decided to take a look at Hyperledger Fabric's approach to Blockchain, as it is a technology that has seen increasing use and is backed by giant tech companies like IBM and Oracle. When I started looking at it, I found lots of scripts like start.sh, stop.sh, byfn.sh and eyfn.sh. To me those seemed like "magic", and everyone I talked to said I should use them. While those scripts let me get started fast, I had a lot of trouble figuring out what was going on behind the scenes, and a really hard time customizing the environment or running anything different from the samples. At that point I decided to start digging and to build a complete Blockchain environment, step by step, in order to see the details of how it works and how it can be achieved. This GitHub repository is the result of my studies.

INTRODUCTION

We're going to build a complete Hyperledger Fabric v1.3 environment with a CA, an Orderer and 4 Organizations. In order to achieve scalability and high availability on the Orderer, we're going to use Kafka. Each Organization will have 2 peers, and each peer will have its own CouchDB instance. We're also going to deploy Hyperledger Explorer v0.3.7 with its PostgreSQL database.

ARCHITECTURE

Infrastructure view

For this environment we're going to use a 3-node Kubernetes cluster, a 3-node Apache Zookeeper cluster (for Kafka), a 4-node Apache Kafka cluster and an NFS server. All the machines are going to be on the same network. For the Kubernetes cluster we'll have the following machines:

kubenode01.local.parisi.biz
kubenode02.local.parisi.biz
kubenode03.local.parisi.biz

Note: This is a home Kubernetes environment; however, most of what is covered here should apply to any cloud provider offering Kubernetes-compatible services.

For Apache Zookeeper we'll have the following machines:

zookeeper1.local.parisi.biz
zookeeper2.local.parisi.biz
zookeeper3.local.parisi.biz

Note: Zookeeper is needed by Apache Kafka.
Note: Apache Kafka should be 1.0 for Hyperledger compatibility.
Note: Check this link for a quick guide on Kafka/Zookeeper cluster.
Note: We're using 3 Zookeeper nodes as the minimum stated in Hyperledger Fabric Kafka Documentation.

For Apache Kafka we'll have the following machines:

kafka1.local.parisi.biz
kafka2.local.parisi.biz
kafka3.local.parisi.biz
kafka4.local.parisi.biz

Note: We're using Kafka 1.0 version for Hyperledger compatibility.
Note: Check this link for a quick guide on Kafka/Zookeeper cluster.
Note: We're using 4 Kafka nodes as the minimum stated in Hyperledger Fabric Kafka Documentation.
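The Hyperledger Fabric Kafka documentation also recommends a few broker settings so that in-flight blocks are never lost or truncated. The fragment below is a sketch of the relevant server.properties entries for a 4-broker cluster; the exact values follow that guidance and are not taken from this repository, so treat them as a starting point:

```properties
# server.properties (fragment) - settings recommended by the Fabric Kafka guide
unclean.leader.election.enable=false  # never promote an out-of-sync replica
default.replication.factor=3          # N, with N < K (K = 4 brokers here)
min.insync.replicas=2                 # M, with 1 < M < N
log.retention.ms=-1                   # disable time-based log pruning
message.max.bytes=103809024           # must exceed AbsoluteMaxBytes (99 MB) plus overhead
replica.fetch.max.bytes=103809024     # keep in line with message.max.bytes
```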

For the NFS Server we'll have:

storage.local.parisi.biz

Note: Check this link for a quick guide on NFS Server setup
Note: Crypto materials, configuration files and some scripts will be saved on this shared filesystem.
Note: Each peer will have its own CouchDB as Ledger, meaning the data will be saved there, and not on this NFS Server.

The image below represents the environment infrastructure:

slide1.jpg

Note: It's important to keep the clocks in sync across the whole environment, as we're dealing with transactions and shared storage. Please make sure all servers have their time synchronized. I encourage you to use NTP on your servers. In my environment, ntpdate runs from a cron job.
Note: Kafka, Zookeeper and the NFS Server are running outside Kubernetes.
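As a concrete example of the time-sync note above, a root crontab entry like the one below keeps the clock adjusted with ntpdate (the schedule and NTP server are assumptions; adjust them to your environment):

```
# m   h  dom mon dow  command
*/30  *  *   *   *    /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1
```

A better long-term option is running an ntpd or chrony daemon, which disciplines the clock continuously instead of stepping it periodically.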

Fabric Logical view

This environment will have a CA and an Orderer as Kubernetes deployments:

blockchain-ca
blockchain-orderer

We'll also have 4 organizations, with each organization having 2 peers, organized in the following deployments:

blockchain-org1peer1
blockchain-org1peer2
blockchain-org2peer1
blockchain-org2peer2
blockchain-org3peer1
blockchain-org3peer2
blockchain-org4peer1
blockchain-org4peer2

The image below represents this logical view:

slide2.jpg

Explorer Logical view

We're going to have Hyperledger Explorer as a Web UI for our environment. Hyperledger Explorer will run as the 2 deployments below:

blockchain-explorer-db
blockchain-explorer-app

The image below represents this logical view:

slide3.jpg

Detailed view

The Hyperledger Fabric Orderer will connect to the Kafka servers as shown in the image below:

slide4.jpg

Each Hyperledger Fabric Peer will have its own CouchDB instance running as a sidecar and will connect to our NFS shared storage:

slide5.jpg

Note: Although it's not depicted above, the CA, Orderer and Explorer deployments will also have access to the NFS shared storage, as they need the artifacts we're going to store there.

IMPLEMENTATION

Step 1: Checking environment

First, let's make sure our Kubernetes environment is up & running:

kubectl get nodes

Step 2: Setting up shared storage

Now, assuming the NFS server is up & running with the correct permissions, we're going to create our PersistentVolume. First let's create the file kubernetes/fabric-pv.yaml like the example below:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: fabric-pv
  labels:
    type: local
    name: fabricfiles
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/fabric
    server: storage.local.parisi.biz
    readOnly: false

Note: NFS Server is running on storage.local.parisi.biz and the shared filesystem is /nfs/fabric. We're using fabricfiles as the name for this PersistentVolume.

Now let's apply the above configuration:

kubectl apply -f kubernetes/fabric-pv.yaml

After that we'll need to create a PersistentVolumeClaim. To do that, we'll create file kubernetes/fabric-pvc.yaml as below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fabric-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      name: fabricfiles

Note: We're using our previously created fabricfiles as the selector here.

Now let's apply the above configuration:

kubectl apply -f kubernetes/fabric-pvc.yaml
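Before moving on, it's worth confirming that the claim actually bound to our volume; both objects should report a Bound status:

```shell
# Both STATUS columns should read "Bound"
kubectl get pv fabric-pv
kubectl get pvc fabric-pvc
```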

Step 3: Launching a Fabric Tools helper pod

In order to perform some operations on the environment like file management, peer configuration and artifact generation, we'll need a helper Pod running fabric-tools. For that we'll create file kubernetes/fabric-tools.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: fabric-tools
spec:
  volumes:
  - name: fabricfiles
    persistentVolumeClaim:
      claimName: fabric-pvc
  - name: dockersocket
    hostPath:
      path: /var/run/docker.sock
  containers:
    - name: fabrictools
      image: hyperledger/fabric-tools:amd64-1.3.0
      imagePullPolicy: Always
      command: ["sh", "-c", "sleep 48h"]
      env:
      - name: TZ
        value: "America/Sao_Paulo"
      - name: FABRIC_CFG_PATH
        value: "/fabric"
      volumeMounts:
        - mountPath: /fabric
          name: fabricfiles
        - mountPath: /host/var/run/docker.sock
          name: dockersocket

Note: It's important to have the same timezone across the whole network. Check the TZ environment variable.

After creating the file, let's apply it to our kubernetes cluster:

kubectl apply -f kubernetes/fabric-tools.yaml

Make sure the fabric-tools Pod is running before we continue:

kubectl get pods

Now, assuming the fabric-tools Pod is running, let's create a config directory on our shared filesystem to hold our files:

kubectl exec -it fabric-tools -- mkdir /fabric/config

Step 4: Loading the config files into the storage

1 - Configtx
Now we're going to create the file config/configtx.yaml with our network configuration, like the example below:

---
Organizations:

    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
        AdminPrincipal: Role.MEMBER

    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
        AdminPrincipal: Role.MEMBER
        AnchorPeers:
            - Host: blockchain-org1peer1
              Port: 30110
            - Host: blockchain-org1peer2
              Port: 30110

    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
        AdminPrincipal: Role.MEMBER
        AnchorPeers:
            - Host: blockchain-org2peer1
              Port: 30110
            - Host: blockchain-org2peer2
              Port: 30110

    - &Org3
        Name: Org3MSP
        ID: Org3MSP
        MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
        AdminPrincipal: Role.MEMBER
        AnchorPeers:
            - Host: blockchain-org3peer1
              Port: 30110
            - Host: blockchain-org3peer2
              Port: 30110

    - &Org4
        Name: Org4MSP
        ID: Org4MSP
        MSPDir: crypto-config/peerOrganizations/org4.example.com/msp
        AdminPrincipal: Role.MEMBER
        AnchorPeers:
            - Host: blockchain-org4peer1
              Port: 30110
            - Host: blockchain-org4peer2
              Port: 30110

Orderer: &OrdererDefaults

    OrdererType: kafka
    Addresses:
        - blockchain-orderer:31010

    BatchTimeout: 1s
    BatchSize:
        MaxMessageCount: 50
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB

    Kafka:
        Brokers:
            - kafka1.local.parisi.biz:9092
            - kafka2.local.parisi.biz:9092
            - kafka3.local.parisi.biz:9092
            - kafka4.local.parisi.biz:9092

    Organizations:

Application: &ApplicationDefaults

    Organizations:

Profiles:

    FourOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
                    - *Org3
                    - *Org4
    FourOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
                - *Org3
                - *Org4

Note: The file reflects the topology discussed in the architecture section above.
Note: Pay attention to the Kafka broker URLs.
Note: It's important to have the Anchor Peers configuration here, as it impacts Hyperledger Fabric Service Discovery.
Note: BatchTimeout and BatchSize directly impact the performance of your environment in terms of transaction throughput.

Now let's copy the file we just created to our shared filesystem:

kubectl cp config/configtx.yaml fabric-tools:/fabric/config/

2 - Crypto-config
Now let's create the file config/crypto-config.yaml like below:

OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 2
    Users:
      Count: 1
  - Name: Org4
    Domain: org4.example.com
    Template:
      Count: 2
    Users:
      Count: 1

Let's copy the file to our shared filesystem:

kubectl cp config/crypto-config.yaml fabric-tools:/fabric/config/

3 - Chaincode
It's time to copy our example chaincode to the shared filesystem. In this case we'll be using the balance-transfer example:

kubectl cp config/chaincode/ fabric-tools:/fabric/config/

Step 5: Creating the necessary artifacts

1 - cryptogen
Time to generate our crypto material:

kubectl exec -it fabric-tools -- /bin/bash
cryptogen generate --config /fabric/config/crypto-config.yaml
exit

Now we're going to copy our files to the correct path and rename the key files:

kubectl exec -it fabric-tools -- /bin/bash
cp -r crypto-config /fabric/
for file in $(find /fabric/ -iname "*_sk"); do echo $file; dir=$(dirname $file); mv ${file} ${dir}/key.pem; done
exit

2 - configtxgen
Now we're going to copy the artifacts to the correct path and generate the genesis block:

kubectl exec -it fabric-tools -- /bin/bash
cp /fabric/config/configtx.yaml /fabric/
cd /fabric
configtxgen -profile FourOrgsOrdererGenesis -outputBlock genesis.block
exit

3 - Anchor Peers
Let's create the Anchor Peers configuration files using configtxgen:

kubectl exec -it fabric-tools -- /bin/bash
cd /fabric
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org1MSPanchors.tx -channelID channel1 -asOrg Org1MSP
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org2MSPanchors.tx -channelID channel1 -asOrg Org2MSP
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org3MSPanchors.tx -channelID channel1 -asOrg Org3MSP
configtxgen -profile FourOrgsChannel -outputAnchorPeersUpdate ./Org4MSPanchors.tx -channelID channel1 -asOrg Org4MSP
exit

Note: The generated files will be used later to update channel configuration with the respective Anchor Peers. This step is important for Hyperledger Fabric Service Discovery to work properly.
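For reference, the generated .tx files are applied later with peer channel update, once channel1 exists. The sketch below shows the shape of that call for Org1, run from the fabric-tools Pod; the CORE_PEER_* values are assumptions and must point at the Org1 admin MSP:

```shell
# As the Org1 admin, after channel1 has been created and joined
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_MSPCONFIGPATH=/fabric/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
peer channel update -o blockchain-orderer:31010 -c channel1 -f /fabric/Org1MSPanchors.tx
```

Repeat with the matching MSP and .tx file for each of the other organizations.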

4 - Fix Permissions
We need to fix the file permissions on our shared filesystem now:

kubectl exec -it fabric-tools -- /bin/bash
chmod a+rx /fabric/* -R
exit

Step 6: Setting up Fabric CA

Create the kubernetes/blockchain-ca_deploy.yaml file with the following Deployment description:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: blockchain-ca
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: ca
    spec:
      volumes:
      - name: fabricfiles
        persistentVolumeClaim:
          claimName: fabric-pvc

      containers:
      - name: ca
        image: hyperledger/fabric-ca:amd64-1.3.0
        command: ["sh", "-c", "fabric-ca-server start -b admin:adminpw -d"]
        env:
        - name: TZ
          value: "America/Sao_Paulo"
        - name: FABRIC_CA_SERVER_CA_NAME
          value: "CA1"
        - name: FABRIC_CA_SERVER_CA_CERTFILE
          value: /fabric/crypto-config/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
        - name: FABRIC_CA_SERVER_CA_KEYFILE
          value: /fabric/crypto-config/peerOrganizations/org1.example.com/ca/key.pem
        - name: FABRIC_CA_SERVER_DEBUG
          value: "true"
        - name: FABRIC_CA_SERVER_TLS_ENABLED
          value: "false"
        - name: FABRIC_CA_SERVER_TLS_CERTFILE
          value: /certs/ca0a-cert.pem
        - name: FABRIC_CA_SERVER_TLS_KEYFILE
          value: /certs/ca0a-key.pem
        - name: GODEBUG
          value: "netdns=go"
        volumeMounts:
        - mountPath: /fabric
          name: fabricfiles

Note: The CA uses our shared filesystem.
Note: The timezone configuration is important for certificate validation and expiration.

Now let's apply the configuration:

kubectl apply -f kubernetes/blockchain-ca_deploy.yaml

Create the file kubernetes/blockchain-ca_svc.yaml with the following Service description:

apiVersion: v1
kind: Service
metadata:
  name: blockchain-ca
  labels:
    run: blockchain-ca
spec:
  type: ClusterIP
  selector:
    name: ca
  ports:
  - protocol: TCP
    port: 30054
    targetPort: 7054
    name: grpc
  - protocol: TCP
    port: 7054
    name: grpc1

Now, apply the configuration:

kubectl apply -f kubernetes/blockchain-ca_svc.yaml

Step 7: Setting up Fabric Orderer

Create the file kubernetes/blockchain-orderer_deploy.yaml with the following Deployment description:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: blockchain-orderer
spec:
  replicas: 3
  template:
    metadata:
      labels:
        name: orderer
    spec:
      volumes:
      - name: fabricfiles
        persistentVolumeClaim:
          claimName: fabric-pvc

      containers:
      - name: orderer
        image: hyperledger/fabric-orderer:amd64-1.3.0
        command: ["sh", "-c", "orderer"]
        env:
        - name: TZ
          value: "America/Sao_Paulo"
        - name: ORDERER_CFG_PATH
          value: /fabric/
        - name: ORDERER_GENERAL_LEDGERTYPE
          value: file
        - name: ORDERER_FILELEDGER_LOCATION
          value: /fabric/ledger/orderer
        - name: ORDERER_GENERAL_BATCHTIMEOUT
          value: 1s
        - name: ORDERER_GENERAL_BATCHSIZE_MAXMESSAGECOUNT
          value: "10"
        - name: ORDERER_GENERAL_MAXWINDOWSIZE
          value: "1000"
        - name: CONFIGTX_GENERAL_ORDERERTYPE
          value: kafka
        - name: CONFIGTX_ORDERER_KAFKA_BROKERS
          value: "kafka1.local.parisi.biz:9092,kafka2.local.parisi.biz:9092,kafka3.local.parisi.biz:9092,kafka4.local.parisi.biz:9092"
        - name: ORDERER_KAFKA_RETRY_SHORTINTERVAL
          value: 1s
        - name: ORDERER_KAFKA_RETRY_SHORTTOTAL
          value: 30s
        - name: ORDERER_KAFKA_VERBOSE
          value: "true"
        - name: CONFIGTX_ORDERER_ADDRESSES
          value: "blockchain-orderer:31010"
        - name: ORDERER_GENERAL_LISTENADDRESS
          value: 0.0.0.0
        - name: ORDERER_GENERAL_LISTENPORT
          value: "31010"
        - name: ORDERER_GENERAL_LOGLEVEL
          value: debug
        - name: ORDERER_GENERAL_LOCALMSPDIR
          value: /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
        - name: ORDERER_GENERAL_LOCALMSPID
          value: OrdererMSP
        - name: ORDERER_GENERAL_GENESISMETHOD
          value: file
        - name: ORDERER_GENERAL_GENESISFILE
          value: /fabric/genesis.block
        - name: ORDERER_GENERAL_GENESISPROFILE
          value: initial
        - name: ORDERER_GENERAL_TLS_ENABLED
          value: "false"
        - name: GODEBUG
          value: "netdns=go"
        volumeMounts:
        - mountPath: /fabric
          name: fabricfiles

Note: Because we're dealing with transactions, time zones need to be in sync everywhere.
Note: The Orderer also uses our shared filesystem.
Note: The Orderer is using Kafka.
Note: The Kafka brokers previously set in configtx.yaml are now listed in the CONFIGTX_ORDERER_KAFKA_BROKERS environment variable.
Note: We're using a deployment with 3 Orderers.

Let's apply the configuration:

kubectl apply -f kubernetes/blockchain-orderer_deploy.yaml

Create the file kubernetes/blockchain-orderer_svc.yaml with the following Service description:
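A minimal sketch consistent with the port configured above (31010, matching ORDERER_GENERAL_LISTENPORT and the CONFIGTX_ORDERER_ADDRESSES value) and with the conventions of the CA Service would be; this is an assumption modeled on the other manifests, not the repository's exact file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blockchain-orderer
  labels:
    run: blockchain-orderer
spec:
  type: ClusterIP
  selector:
    name: orderer
  ports:
  - protocol: TCP
    port: 31010
    targetPort: 31010
    name: grpc
```

Apply it as with the previous manifests:

kubectl apply -f kubernetes/blockchain-orderer_svc.yaml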

