algolia.json
[{"author":"KAI CHU CHUNG","authorlink":"https://kaichu.io","banner":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/img/gae-custom-ws-0.png","categories":["App Engine"],"content":"在設計 API server 的時候會有遇到即時訊息傳遞的需求,同步可以用 GRPC 建立連線來溝通,為了降低系統的耦合性,可以選擇非同步的方式。而 PubSub 結合 websocket 是常用的方式。對於一位 Gopher 來說,NATS 是 CNCF 下面中關於訊息傳遞的開源專案且對 Golang 友善(比 Kafka 好多了 XD),選擇 NATS 的 PubSub 功能搭配 websocket 好像也是一個合理的選擇 在 Google App Engine 上搭建整個系統需要幾個知識點,讓我們一個一個來解釋,最後會附上完整的程式碼 Google App Engine 有一個很棒的功能是非常容易的建立 service,每一個 service 可以類比成 microservice。現在已經支援了 Python, Java, Node.js, PHP, Runy, Go 等幾種程式語言,也可以在 standard, flex, custom runtime (打包成 Docker 就不受到程式語言限制了) 中進行混搭,怎麼搭配就看題目進行選擇 不囉嗦,先看整個架構圖 這邊我們有 3 個 service + 1 個 Google compute engine instance default (us-central1): 每一個 Google App Engine 一定要有一個 default service 且要第一個進行部署 add (us-central1): 核的的 service,提供 2 個 API,sum 和 concat ws (us-central1): 透過 NATS 的 client library + gorilla websocket 來實作 NATS (asia-east1-b): NATS 的 server ","date":"2020-05-15","fuzzywordcount":600,"objectID":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/:0:0","originallink":null,"summary":"透過 Google App Engine 和 NATS 建立 Websocket PUBSUB 伺服器","tags":["GDGCloud Taipei","gcp","GAE","Google compute engine","Golang","websocket","NATS","CloudBuild"],"title":"Establishing a Websocket PUBSUB server with NATS and Google App Engine","translator":null,"uri":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/","weight":0,"wordcount":540},{"author":"KAI CHU CHUNG","authorlink":"https://kaichu.io","banner":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/img/gae-custom-ws-0.png","categories":["App Engine"],"content":"知識點 ## Google App Engine 上實作 websocket 只能使用 flex or custom runtime 這個是一個基本限制,如果在 Google App Engine 上有建立 websocket 的需求,只能選擇 flex or custom runtime. Google 官網有好幾個程式語言的範例1 ## 部著 NATS server 稍早提過,NATS 是 CNCF 下面中關於訊息傳遞的開源專案且可以視為 cloud native (rock),部署一個 NATS server 非常簡單。docker 就可以跑了,在 Google Cloud Platfrom 上我們可以透過 Cloud Deployment Manager 一鍵部署一個 NATS Certified by Bitnami 部著成功之後可以查看到相關的訊息,包含要連線的密碼 ## Google App Engine Access NATS server via Serverless VPC 當我們一開始建立 Google App Engine 專案時我問我們要部署在什麼 region northamerica-northeast1 (Montréal) us-central (Iowa) us-west2 (Los Angeles) us-west3 (Salt Lake City) us-east1 (South Carolina) us-east4 (Northern Virginia) southamerica-east1 (São Paulo) europe-west (Belgium) europe-west2 (London) europe-west3 (Frankfurt) europe-west6 (Zürich) asia-northeast1 (Tokyo) asia-northeast2 (Osaka) asia-northeast3 (Seoul) asia-east2 (Hong Kong) asia-south1 (Mumbai) australia-southeast1 (Sydney) asia 中日本,韓國,香港都有,台灣就是沒有,表示哭哭 當我們使用 standard runtime 建立的應用程式有需要跟我們自己建立的 Google compute engine instance 進行溝通時,就必需透過 VPC 進行連線,阿不是在 GCP 專案下的機器是相通的嗎? 一個簡單的判別方式,如果服務可以讓你設定 network 相關的設定就是;Google app engine standard runtime app.yaml 並沒有 network 相關可以配置的設定 (flex, custom runtime 中有)。而在 standard runtime 的 beta 中可以讓我們在 app.yaml 透過指定 vpc_access_connector 來 Configuring Serverless VPC Access 存取 Google compute engine2 上相關的資源 ## 透過 cloudbuild 部署整個 app engine application 需要的啟用的 API及權限 $ gcloud app GROUP | COMMAND [GCLOUD_WIDE_FLAG ...] 
## Deploying the whole app engine application through cloudbuild: required APIs and permissions $ gcloud app GROUP | COMMAND [GCLOUD_WIDE_FLAG ...] Google app engine is deployed with the gcloud command. With only a few services you can deploy by hand, but once the service count grows, manual deployment becomes exhausting, and Cloud Build is a simple way out. One thing to watch out for: locally you deploy under your gcloud auth identity, but in cloud build the deployment runs as the cloud build service account ([email protected]), so the relevant APIs must be enabled and the corresponding permissions granted to the cloud build service account, or the build will fail. APIs that need to be enabled explicitly: Cloud Build API App Engine Admin API: just enable it directly in the Cloud build settings Serverless VPC Access API cloudbuild.yaml timeout: 1200s # 20 mins steps: - id: deploy website name: gcr.io/cloud-builders/gcloud args: - app - deploy - website/app.yaml - --version=$SHORT_SHA - --project=$PROJECT_ID - -q - id: deploy add service name: gcr.io/cloud-builders/gcloud args: - beta - app - deploy - cmd/add/app.yaml - --version=$SHORT_SHA - --project=$PROJECT_ID - -q - id: build ws name: gcr.io/cloud-builders/docker entrypoint: bash args: - -exc - | docker build --tag gcr.io/$PROJECT_ID/ws:$COMMIT_SHA --tag gcr.io/$PROJECT_ID/ws:$SHORT_SHA --file Dockerfile.ws . docker push gcr.io/$PROJECT_ID/ws:$COMMIT_SHA docker push gcr.io/$PROJECT_ID/ws:$SHORT_SHA - id: deploy ws service name: gcr.io/cloud-builders/gcloud args: - beta - app - deploy - cmd/ws/app.yaml - --version=$SHORT_SHA - --project=$PROJECT_ID - --image-url=gcr.io/$PROJECT_ID/ws:$SHORT_SHA - -q - id: deploy dispatch name: gcr.io/cloud-builders/gcloud args: - app - deploy - dispatch.yaml The flow in cloud build is: deploy website: default service, golang standard runtime deploy add: add service, golang standard runtime build ws docker image and push to gcr.io deploy ws service, golang custom runtime update dispatch The following roles must be granted to the cloud build service account: App Engine Admin: lets cloud build deploy Google app engine Cloud Build Service Account: (default) Compute Network User: access the network Serverless VPC Access User: the “vpcaccess.connectors.use” permission is required. 
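The post does not inline the ws service code itself (it lives in the repo linked below). As a rough sketch of the technique it describes — bridging a NATS subscription to clients through gorilla websocket — something like the following, where the subject name, route, and environment variable are assumptions:

```go
// Sketch of the ws service: forward messages from a NATS subject to websocket clients.
// Subject name ("updates"), the /ws route, and NATS_URL are illustrative placeholders.
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/gorilla/websocket"
	"github.com/nats-io/nats.go"
)

var upgrader = websocket.Upgrader{
	// GAE fronts the service; origin checking is relaxed here for brevity.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func main() {
	// NATS_URL points at the GCE-hosted server, reachable through the VPC connector.
	nc, err := nats.Connect(os.Getenv("NATS_URL"))
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		// Push every message published on "updates" down this websocket.
		sub, err := nc.Subscribe("updates", func(m *nats.Msg) {
			if err := conn.WriteMessage(websocket.TextMessage, m.Data); err != nil {
				conn.Close()
			}
		})
		if err != nil {
			conn.Close()
			return
		}
		defer sub.Unsubscribe()
		// Block until the client disconnects.
		for {
			if _, _, err := conn.ReadMessage(); err != nil {
				return
			}
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```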
","date":"2020-05-15","fuzzywordcount":600,"objectID":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/:1:0","originallink":null,"summary":"透過 Google App Engine 和 NATS 建立 Websocket PUBSUB 伺服器","tags":["GDGCloud Taipei","gcp","GAE","Google compute engine","Golang","websocket","NATS","CloudBuild"],"title":"Establishing a Websocket PUBSUB server with NATS and Google App Engine","translator":null,"uri":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/","weight":0,"wordcount":540},{"author":"KAI CHU CHUNG","authorlink":"https://kaichu.io","banner":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/img/gae-custom-ws-0.png","categories":["App Engine"],"content":"demo Google app engine 還有一個佛心的部份就是自帶 HTTPS,所以我們實作的 websocket entrypoint 也可從 ws:// 直接轉成 wss:// (rock) ","date":"2020-05-15","fuzzywordcount":600,"objectID":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/:2:0","originallink":null,"summary":"透過 Google App Engine 和 NATS 建立 Websocket PUBSUB 伺服器","tags":["GDGCloud Taipei","gcp","GAE","Google compute engine","Golang","websocket","NATS","CloudBuild"],"title":"Establishing a Websocket PUBSUB server with NATS and Google App Engine","translator":null,"uri":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/","weight":0,"wordcount":540},{"author":"KAI CHU CHUNG","authorlink":"https://kaichu.io","banner":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/img/gae-custom-ws-0.png","categories":["App Engine"],"content":"repo https://github.com/cage1016/gae-custom-ws ","date":"2020-05-15","fuzzywordcount":600,"objectID":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/:3:0","originallink":null,"summary":"透過 Google App Engine 和 NATS 建立 Websocket PUBSUB 伺服器","tags":["GDGCloud Taipei","gcp","GAE","Google compute engine","Golang","websocket","NATS","CloudBuild"],"title":"Establishing a Websocket PUBSUB server with NATS and Google App Engine","translator":null,"uri":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/","weight":0,"wordcount":540},{"author":"KAI CHU CHUNG","authorlink":"https://kaichu.io","banner":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/img/gae-custom-ws-0.png","categories":["App Engine"],"content":"Reference Creating Persistent Connections with WebSockets, 換程式語言也對應到相關的範例 ↩︎ 現在 vpc_access_connector 屬於 beta,所以在需要使用 gcloud beta app deploy ... ↩︎ ","date":"2020-05-15","fuzzywordcount":600,"objectID":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/:4:0","originallink":null,"summary":"透過 Google App Engine 和 NATS 建立 Websocket PUBSUB 伺服器","tags":["GDGCloud Taipei","gcp","GAE","Google compute engine","Golang","websocket","NATS","CloudBuild"],"title":"Establishing a Websocket PUBSUB server with NATS and Google App Engine","translator":null,"uri":"/blog/establishing-a-websocket-pubsub-server-with-nats-and-google-app-engine/","weight":0,"wordcount":540},{"author":"industrialclouds.net","authorlink":null,"banner":null,"categories":["kubernetes"],"content":"某些時候,我們會需要透過DNS的方式來對應外部服務的domain name位置,而某些應用中,這些domain name可能在不同的環境會對應到不同的地方,此時,我們在傳統作業方式會透過/etc/hosts的編輯方式來讓該主機可以對應到外部服務位置…. 
{"author":"industrialclouds.net","authorlink":null,"banner":null,"categories":["kubernetes"],"content":"Sometimes we need DNS to resolve the domain names of external services, and in some applications those domain names resolve to different places in different environments. Traditionally we would handle this by editing /etc/hosts so the host can reach the external service address…. Since version 1.7, K8S supports overriding hosts entries… First we need to know which DNS names map to which IPs; then the hostAliases spec describes the ip-to-hostname mappings. Below is a simple example with an nginx host: once hostAliases is attached, the desired overrides show up in /etc/hosts inside the host… hosts.yaml apiVersion: v1 kind: Pod metadata: name: hostaliases-pod spec: hostAliases: - ip: \"127.0.0.1\" hostnames: - \"foo.local\" - \"bar.local\" - ip: \"10.1.2.3\" hostnames: - \"foo.remote\" - \"bar.remote\" containers: - name: cat-hosts image: nginx Next, we bring up the hosts.yaml above with create… # kubectl create -f hosts.yaml pod \"hostaliases-pod\" created Finally, we can log into the pod and inspect the resulting /etc/hosts… # kubectl exec -it hostaliases-pod bash root@hostaliases-pod:/# root@hostaliases-pod:/# cat /etc/hosts # Kubernetes-managed hosts file. 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet fe00::0 ip6-mcastprefix fe00::1 ip6-allnodes fe00::2 ip6-allrouters 10.0.0.11 hostaliases-pod 127.0.0.1 foo.local 127.0.0.1 bar.local 10.1.2.3 foo.remote 10.1.2.3 bar.remote root@hostaliases-pod:/# From: IndustrialClouds.net ","date":"2017-08-14","fuzzywordcount":100,"objectID":"/blog/hosts-domain-ip-mapping/:0:0","originallink":null,"summary":"Sometimes we need DNS to resolve the domain names of external services, and in some applications those names resolve to different places in different environments; traditionally we would handle this by editing /etc/hosts on the host.... In K8S, hosts overriding is supported from version 1.7 onward...","tags":["kubernetes","domain"],"title":"Mapping a domain to an IP via hosts entries","translator":null,"uri":"/blog/hosts-domain-ip-mapping/","weight":0,"wordcount":88},{"author":"industrialclouds.net","authorlink":null,"banner":"/blog/deploy-db-ap-use-yourls/img/cover.png","categories":["Kubernetes"],"content":"Next we deploy a mysql database plus an application that connects to it, to observe what GKE does at the network layer. We follow the kubernetes way of setting up a mysql service (article: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/). First we create the disk space mysql needs, which can be done with: gcloud compute disks create --size=20GB mysql-disk Then mysql.yaml configures mysql and its mounts… apiVersion: v1 kind: PersistentVolume metadata: name: mysql-pv spec: capacity: storage: 20Gi accessModes: - ReadWriteOnce gcePersistentDisk: pdName: mysql-disk fsType: ext4 --- apiVersion: v1 kind: Service metadata: name: mysql spec: ports: - port: 3306 selector: app: mysql clusterIP: None --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: \"\" resources: requests: storage: 20Gi --- apiVersion: apps/v1beta1 kind: Deployment metadata: name: mysql spec: strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql env: # Use secret in real usage - name: MYSQL_ROOT_PASSWORD value: 1qaz2wsx - name: MYSQL_DATABASE value: yourls ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim The configuration above uses a PV (Persistent Volume) and a PVC (Persistent Volume Claim) for the MySQL Pod and then exposes MySQL as a Service… We can bring it all up with: $ kubectl --namespace production create -f mysql.yaml persistentvolume \"mysql-pv\" created service \"mysql\" created persistentvolumeclaim \"mysql-pv-claim\" created deployment \"mysql\" created Next we create the yourls service (yourls is an open-source URL shortener that maps short URLs and ships with quite decent admin and statistics tools!). We skip the Dockerfile build steps for yourls here (the DB settings and some nginx settings are baked straight into the image) and assume the docker image has already been pushed to the gcr.io image registry. Below is the yaml for yourls; exactly as with the MySQL database, we mount a volume for yourls to store its qrcode images. The whole yourls-service.yaml: apiVersion: v1 kind: PersistentVolume metadata: name: yourls-pv spec: capacity: storage: 40Gi accessModes: - 
ReadWriteOnce gcePersistentDisk: pdName: yourls-disk fsType: ext4 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: yourls-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: \"\" resources: requests: storage: 40Gi --- apiVersion: v1 kind: Service metadata: name: yourls-server labels: app: yourls-server spec: ports: - port: 80 targetPort: 80 type: LoadBalancer selector: app: yourls-server sessionAffinity: ClientIP loadBalancerIP: 123.123.123.123 --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: yourls-server spec: replicas: 1 template: metadata: labels: app: yourls-server spec: containers: - name: yourls-server image: gcr.io/your-project-id/yourl:v1.0 ports: - containerPort: 80 volumeMounts: - name: yourls-persistent-storage mountPath: /var/www/html/data volumes: - name: yourls-persistent-storage persistentVolumeClaim: claimName: yourls-pv-claim The yaml pins a loadBalancerIP, which can be paired with a static external IP on GCP so that hooking it up to DNS later is more convenient. With the yaml done, the yourls server can be created with: $ kubectl --namespace production create -f yourls-service.yaml persistentvolume \"yourls-pv\" created persistentvolumeclaim \"yourls-pv-claim\" created service \"yourls-server\" created deployment \"yourls-server\" created Once it is created, check the firewall under network again: a rule opening port 80 for yourls is already in place, so the yourls service can now be reached over http :) Besides the firewall rule, GKE also built the Load Balancer for us while creating the service… Since a GKE service maps onto GCP's TCP Load Balancer, the following setup appears under Network > Load Balancer… The red exclamation mark shows up because a GCP TCP Load Balancer can optionally be given a Health Check, and by default the TCP Load Balancer that GKE creates for a service has none attached… This does not affect the GKE service, though; yourls is still reachable on port 80… Finally, if the plan is to go global and have the GKE service span multiple data centers (GCP's HTTP Load Balancer provides a Global IP served over Anycast for a more robust connection), traffic can be served through the HTTP Load Balancer, which on the Kubernetes side corresponds to the Ingress resource. Below is the Ingress yaml… apiVersion: extensions/v1beta1 kind: Ingress metadata: name: yourls-server-ingress annotations: kubernetes.io/ingress.global-static-ip-name: \"ip-ingress-yourls\" spec: ","date":"2017-07-12","fuzzywordcount":400,"objectID":"/blog/deploy-db-ap-use-yourls/:0:0","originallink":null,"summary":"Next we deploy a mysql database plus an application that connects to it, to observe what GKE does at the network layer, following the kubernetes mysql guide (https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/); first we create the disk mysql needs","tags":["mysql","kubernetes","database","GKE"],"title":"Deploying an application with a DB and an AP tier - the Yourls service as an example","translator":null,"uri":"/blog/deploy-db-ap-use-yourls/","weight":0,"wordcount":345},
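The Ingress manifest above is cut off at spec: in this index dump. Assuming it simply routes all traffic to the yourls-server service from the earlier yaml, a minimal completion might look like this (the backend fields are an inference, not the original file):

```yaml
# Hypothetical completion of the truncated Ingress; names follow the manifests above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: yourls-server-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "ip-ingress-yourls"
spec:
  # Default backend: send everything to the yourls Service on port 80.
  backend:
    serviceName: yourls-server
    servicePort: 80
```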
","date":"2017-06-08","fuzzywordcount":300,"objectID":"/blog/weddingcnp-via-gcp/:0:0","originallink":"https://kaichu.io/posts/weddingcnp-via-gcp/","summary":"Cage \u0026 Ping wedding 是一個我們為結婚喜宴處理朋友出席報名相關事宜特別開發的網站,所有的服務全部建構在 Google Cloud Platform 上","tags":["wedding","GAE","Vue"],"title":"Weddingcnp via Gcp","translator":null,"uri":"/blog/weddingcnp-via-gcp/","weight":0,"wordcount":205},{"author":"KAI CHU CHUNG","authorlink":"http://kaichu.io","banner":"/blog/weddingcnp-via-gcp/img/weddingcnp-via-gpc-0_2.png","categories":["COMPUTE"],"content":"weddingcnp architecture 上面是 weddingcnp 的架構圖,整個網站完全是架構在 Google App Engine 上,透過 dispatch.yaml 的設定將流量切為服務前端靜態網頁(golang + vue.js + auth0)及後端 endpoint API 的部份。前後端為不同的 instance, 可以容易在 Google App Engine 的管理介面中計對前後端別分進行版控 Enpoints API 作為接收前端送過來的資料,並接報名相關資料儲存到 Google DataStore, 並自動將使用者的 Avatar 儲存到 Google Cloud Storage 並將所有的名單透過 Google client API 轉存一份至 Google Drive 方便後序處理。只要有人報名會自動透過 sendgrid 寄通知到自己的信箱,不需一直去盯著 Google Spreadsheet 上面的名單有沒有增加 整個構架的實作細節我會分為五個部份來說明, 先把標題寫出來,內容會陸續的補上 1. weddingcnp 專案架構切分 使用 Google App Engine golang standard runtime 來作為網站的服務器,選擇使用吃資源較少的 golang,機器平均開機後的記憶體大約 200-300 MB, 效能比 Python 的好太多了 使用 dispatch.yaml 來進行服務的切分,將 endpoint API 的部份導至另外的 instance 作處理 2. weddingcnp 前端頁面設計實作 前端使用 echo 框架搭配 template 來產出頁面 3. weddingcnp endpointAPI 設計實作 利用 dispatch.yaml 來指定 endpoint API 實作的 Service,這邊基本 endpoint API 熟悉度使用 Python 版本,service 的另一個好處是可以再同一個專案下使用混合的語言來發開,這兒的例子是前端使用 golang, endpoint API 的部份使用 Python,搭配 Flexible 的環境也可以的,彈性非常的高 4. weddingcnp 前端 vue.js 設計實作 本來是打算使用 react.js 來實作前端,不過太花時間了,所以選擇了 vue.js 來快速實作出介接 endpoint API 前端的表單 5. weddingcnp edm 寄送 sendgrid 在收集到名單時,可以發通知給朋友。這兒的例子是我們的婚紗照片上線時,就發了 EDM 通知告訴朋友快點上來看。使用的是 sendgrid 來發信,透過 sendgrid 的模版、client API 讓發 html 的 EDM 輕鬆多了 weddingcnp 系例傳送門 weddingcnp via GCP 簡介 weddingcnp via GCP - 1. 專案架構切分 weddingcnp via GCP - 2. 前端頁面設計實作 weddingcnp via GCP - 3. endpointAPI 設計實作 weddingcnp 前端 vue.js 設計實作 weddingcnp edm 寄送 sendgrid ","date":"2017-06-08","fuzzywordcount":300,"objectID":"/blog/weddingcnp-via-gcp/:1:0","originallink":"https://kaichu.io/posts/weddingcnp-via-gcp/","summary":"Cage \u0026 Ping wedding 是一個我們為結婚喜宴處理朋友出席報名相關事宜特別開發的網站,所有的服務全部建構在 Google Cloud Platform 上","tags":["wedding","GAE","Vue"],"title":"Weddingcnp via Gcp","translator":null,"uri":"/blog/weddingcnp-via-gcp/","weight":0,"wordcount":205}]