
📖 Akka Betting House

    • ✅ Cloud-Native Spring Boot and Akka Development Environment and Startup Template
    • ✅ Concurrency and Resiliency Patterns in Saga Transactions for Spring Boot and Akka Microservices
    • ✅ Part 5: Reliable Long-Running Saga Business Processes
    • ✅ Reactive, Scalable, Persistent, Concurrent and Resilient Actor System with Akka Event Sourcing Framework
    • ✅ Kafka Dynamic Transaction Processing Queues for Long-Running Saga Business Processes
  • 📖 This Cloud-Native Full-Stack Developer Template provides a fully functional Development and Production Environment with an Akka Cluster, a Kafka Cluster and Spring Boot Microservices deployed to a Kubernetes Cluster
  • 📖 Swagger UI REST API Gateway
  • 📖 Spring Cloud Gateway with Spring Security and Keycloak Authorization Server
  • 📖 Event-Sourcing Framework based on a Reactive, Concurrent, Scalable and Persistent Akka Actor System
  • 📖 Akka Actor System with gRPC and REST API Server
  • 📖 Kafka Cluster integrated with the Akka Actor System
  • 📖 Long-Running Saga Business Process Management Framework based on Akka Actor System State Machine and Kafka Dynamic Topics (a Kafka integration sketch follows this list)
  • 📖 Spring Boot Microservices integrated with the Akka Actor System as gRPC and REST API Clients
  • 📖 Spring Boot gRPC Client with Protobuf Maven Plugin
  • 📖 Spring Cloud OpenFeign REST API Client
  • 📖 PostgreSQL persistence storage for the Akka Actor Environment
  • 📖 E2E Concurrency and Resiliency Tests with Spring Boot, Kafka UI, Spring JDBC and Spring Cloud OpenFeign REST API Client
  • 📖 Keycloak Authorization Server integrated with Swagger UI and Spring Security
  • 📖 Reactive and Event-Driven Programming Paradigm based on the Akka Actor System State Machine and the Scala Programming Language
  • 📖 Local Kubernetes Development Environment with Skaffold
  • 📖 Production Kubernetes Development Environment with Skaffold (work in progress)
  • 📖 Github Actions CI/CD GitOps pipeline (work in progress)
  • 📖 Azure Terraform Infrastructure with AKS Kubernetes Cluster and Private Container Registry (work in progress)
  • 📖 Full Technology Stack:
    • ✅ Swagger UI
    • ✅ Spring Cloud Gateway
    • ✅ Kafka Kubernetes Cluster with Strimzi Operator
    • ✅ Akka Kubernetes Cluster
    • ✅ Event-Driven Microservices with Spring Boot, Akka Actor System, Kafka and PostgreSQL
    • ✅ Event-Sourcing Persistence with Akka Actor System, Kafka and PostgreSQL
    • ✅ E2E Concurrency and Resiliency Tests with Spring Boot, Kafka UI, Spring JDBC and Spring Cloud OpenFeign REST API Client
    • ✅ Keycloak Authorization Server
    • ✅ Terraform (work in progress)
    • ✅ Kubernetes
    • ✅ Github Actions (work in progress)
    • ✅ Github Secrets and envsubst Environment Variables parser (work in progress)
    • ✅ Kubernetes Secrets and Configmap Variables (work in progress)
    • ✅ Local Kubernetes Development Environment with Skaffold
    • ✅ Production Kubernetes Development Environment with Skaffold (work in progress)
    • ✅ Custom Kubernetes Manifest Generation for Local and Production Environments with sh scripts
    • ✅ Custom Skaffold Manifest Generation for Local and Production Environments with sh scripts
    • ✅ Hot reload of Docker Containers for Local and Production Environments with Skaffold
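
For illustration, the following minimal Scala sketch shows the kind of Akka and Kafka integration referred to above, using the Alpakka Kafka connector. It is not code from this repository: the topic name, consumer group and broker address are assumptions, and the records are only printed to keep the example self-contained.

import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

object MarketEventsConsumer {
  def main(args: Array[String]): Unit = {
    // An actor system is required to run the Kafka consumer stream
    implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "betting-house")

    // Assumed local broker and illustrative topic/group names
    val settings = ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("betting-house-consumer")

    // Stream records from Kafka into the actor system; a real setup would use
    // committableSource plus Committer.sink for at-least-once processing
    Consumer
      .plainSource(settings, Subscriptions.topics("market-projection"))
      .map(record => record.value())
      .runWith(Sink.foreach(value => println(s"Consumed: $value")))
  }
}

In the actual project such a stream would feed persistent actors, and the topic names are created dynamically per saga instance (see the Kafka Dynamic Topics items above).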

📖 Links

📖 Step By Step Guide

Step-01: Prepare Your Github Account (optional, only for production CD Environment)

  • make sure you have your own Github Account

Step-02: Create New Github Repository (optional, only for production CD Environment)

  • Clone this repository and copy the source code to your new repository

Step-03: Prepare Your Azure Account (optional, only for production Environment)

  • make sure you have your own Azure Account with enough permissions (Sign Up for a Free Trial, if you don't have one)

Step-04: Prepare Source Code and Github Actions Workflow (work in progress, optional, only for production CD Environment):

  • Edit ".github/workflows/deploy-*.yaml" files: replace "master" with the name of your main branch (you can change default main branch name in github repository settings)

  • Edit "k8s/prod/ingress-srv.yaml" file: replace "skycomposer.net" with the name of your registered domain (see Step-05 and Azure Production Environment Setup for more details)

Step-05: Register your domain (optional, only for production Environment):

  • You need a registered domain to provide a TLS connection with a trusted Certificate Authority.

  • For more details on setting up TLS on AKS Ingress with LetsEncrypt, see this article: https://medium.com/@jainchirag8001/tls-on-aks-ingress-with-letsencrypt-f42d65725a3. It shows how to configure TLS on an Azure Kubernetes Cluster with LetsEncrypt for any registered domain, including domains managed with AWS Route 53.

  • Make sure that you know how to create a Hosted Zone and an A record with your domain provider.

  • For more details, see Azure Production Environment Setup

Step-06: Read the book "Akka in Action, Second Edition" and try out examples from this book (if you want to learn more about Akka Actor System and "Akka Betting House" example):

  • If you have further questions about Akka Actor System and "Akka Betting House", refer to this book: https://www.manning.com/books/akka-in-action-second-edition
  • I strongly recommend you finish this book before following this guide!
  • This guide only helps you deploy the environment to a local or Azure cloud Kubernetes cluster, enable the Github Actions CD pipeline (work in progress) and configure the local and production Kubernetes development environments with Skaffold
  • All information about the Akka Actor System, Scala development, the Akka Event-Sourcing State Machine, the Akka scalable and persistent Cluster and the "Akka Betting House" example is explained in depth in this book!
  • The "Skyglass Akka Betting House" repository is based on the example from the book and further developed to implement the Long-Running Saga Business Process of Bet Settlement with an Akka Persistent State Machine and Kafka Dynamic Transaction Processing Queues (a simplified sketch of such a saga follows this list)
  • The "Skyglass Akka Betting House" repository also contains additional examples of Scalability, Concurrency, Resilience, Event-Sourcing, State Management, Long-Running Saga Business Processes, Idempotent Consumers and Retryable Error Handling based on the "Akka Betting House" Business Domain

Local Kubernetes Environment Setup with Skaffold:

  • Create local Kubernetes Cluster. If you have Docker Desktop, just go to Settings -> Kubernetes -> Enable Kubernetes -> Apply & Restart

  • Switch context to local Kubernetes Cluster. If you have Docker Desktop, just go to Kubernetes Context and select "docker-desktop"

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml

These commands install the nginx ingress controller in your local Kubernetes cluster. You need the nginx ingress controller for the local Kubernetes Ingress resource to work correctly (see k8s/local/ingress-srv.yaml for more information on the local Kubernetes Ingress resource)

  • create env folder in the root of the project

  • create .env.local file in env folder and provide the following parameters:

CONTAINER_REGISTRY="bettinghouse.azurecr.io" (for local environment you can keep this name, for production environment provide your own container registry, see **Azure Production Environment Setup** for more details)
DOCKER_FILE_NAME="Dockerfile"
DOCKER_PUSH="false"
VERSION="latest"
BASE_URL="http://ingress-nginx-controller.ingress-nginx.svc.cluster.local"
  • Don't Panic! The env folder is included in .gitignore. You will not reveal your secrets with git commit! :)
  • Note: for the local development environment CONTAINER_REGISTRY can be any prefix, but it is recommended to use your container registry name for consistency with the production environment
  • BASE_URL for the local Kubernetes cluster uses the cluster-internal address of the ingress-nginx service. Please don't change it! If your nginx ingress controller is installed correctly, this URL will work as expected.
sh skaffold-local.sh

This script builds the Docker images and starts the local Kubernetes environment with hot reloading of your code changes

  • open localhost in your Browser and make sure that Sign Up and Sign In work and that you are able to Create a Ticket and buy it

  • optionally, create 2 test accounts, create a ticket with one account and buy the ticket with the other account

  • Use "magic" payment card with number 4242 4242 4242 4242 for unlimited payment. :)

  • If the payment is successful, you will see your order with status complete in the Orders tab

  • Note: if the payment is successful, you will not see the ticket in the Tickets tab anymore! All tickets in this app have a quantity of one, which means that only one user can buy a ticket! You can test concurrency control by trying to buy the same ticket with several users. The first user will succeed, the others will fail to buy the ticket!

  • Congratulations! You successfully tested the Ticketing App locally!

Azure Production Kubernetes Environment Setup with Skaffold:

  • create a terraform.auto.tfvars file in the infra folder and provide the following parameters:
kubernetes_version= "1.29.2"
app_name = "{provide_your_own_globally_unique_name}" 
location = "westeurope" (use any other azure location, for example, "germanywestcentral", if you have any issues with "westeurope")
  • log in to Azure Cloud with the az login CLI

  • cd to infra folder

  • replace bettinghouse with your own globally unique name (see files container-registry.tf, kubernetes-cluster.tf and resource-group.tf)

  • run terraform init and terraform apply --auto-approve

  • after the script is successfully finished, run the following command:

az aks get-credentials --resource-group {app_name} --name {app_name}
  • Make sure that your context is switched from local Kubernetes Cluster to Azure Kubernetes Cluster. If you have Docker Desktop, just open Kubernetes Context and make sure that the name of the context corresponds to your Azure Kubernetes Cluster

  • run kubectl get pods and make sure that kubectl works correctly and returns 0 resources

  • log in to the Azure Container Registry with the following command:

docker login {login_server}
  • you can find docker login server, username and password in Azure Cloud (go to Container Registry -> Settings -> Access Keys)

  • create env folder in the root of the project

  • in env folder create .env.prod file and set the following environment variables:

CONTAINER_REGISTRY="bettinghouse.azurecr.io"  (provide your own globally unique container registry)
DOCKER_FILE_NAME="Dockerfile-prod"
DOCKER_PUSH="true"
VERSION="latest"
BASE_URL="https://skycomposer.net" (provide your own domain name, see `Step-05` and notes below for more details)

JWT_KEY="$JWT_KEY"
STRIPE_KEY="$STRIPE_KEY"
  • Don't Panic! The env folder is included in .gitignore. You will not reveal your secrets with git commit! :)

  • Make sure you set your own values for CONTAINER_REGISTRY, BASE_URL, JWT_KEY and STRIPE_KEY

  • JWT_KEY can be generated with the command openssl rand -base64 32

  • STRIPE_KEY can be found in your Stripe Account (Developers -> API Keys -> Secret Key -> Reveal test key)

  • register your domain and enable TLS on AKS Ingress with LetsEncrypt: https://medium.com/@jainchirag8001/tls-on-aks-ingress-with-letsencrypt-f42d65725a3

  • Make sure you provide your email for the CA ClusterIssuer Kubernetes resource (see more details in the article)

  • Make sure you installed ingress controller with helm (see more details in the article)

  • Make sure you installed all other kubernetes resources and followed other instructions in the article

  • You can find the production Ingress Kubernetes Resource in k8s/prod/ingress-srv.yaml. This resource is applied by the skaffold-prod.sh or skaffold-dev.sh scripts. Make sure that you replace skycomposer.net with your registered domain name

  • run sh skaffold-dev.sh

  • this script builds the Docker images, pushes them to the Azure Container Registry and deploys them to the production Kubernetes cluster with hot reloading of your code changes

  • run kubectl get pods and make sure that all pods are in the Running state

  • open the https URL with your registered domain in your Browser and make sure that Sign Up and Sign In work and that you are able to Create a Ticket and buy it

  • optionally, create 2 test accounts, create a ticket with one account and buy the ticket with the other account

  • Use "magic" payment card with number 4242 4242 4242 4242 for unlimited payment. :)

  • If the payment is successful, you will see your order with status complete in the Orders tab

  • Note: if the payment is successful, you will not see the ticket in the Tickets tab anymore! All tickets in this app have a quantity of one, which means that only one user can buy a ticket! You can test concurrency control by trying to buy the same ticket with several users. The first user will succeed, the others will fail to buy the ticket!

  • Congratulations! You successfully tested the Ticketing App in production!

  • run sh skaffold-prod.sh to deploy final changes to production

  • The only difference between sh skaffold-prod.sh and sh skaffold-dev.sh is that sh skaffold-dev.sh allows hot reloading of your code changes in production! Make any code change in your IDE and you will immediately see this change in production!

  • If you run sh skaffold-dev.sh, you will see logs in real time. After you close the CLI window, all Kubernetes resources will be destroyed! Therefore, to deploy final changes to production, use sh skaffold-prod.sh. You will not have hot reloading with sh skaffold-prod.sh, but the Kubernetes resources will not be destroyed after you close the CLI window.

Github Actions Deployment Pipeline Setup

  • create the following Github Secrets (Go to Your Repository -> Settings -> Secrets and Variables -> Actions -> New Repository Secret):
CONTAINER_REGISTRY=... (Azure Container Registry)
KUBE_CONFIG=... (Base64-encoded ~/.kube/config file contents)
REGISTRY_UN=... (Azure Container Registry Username)
REGISTRY_PW=... (Azure Container Registry Password)
  • you can find values for CONTAINER_REGISTRY, REGISTRY_UN and REGISTRY_PW in Azure Cloud (go to Container Registry -> Settings -> Access Keys)

  • you can get the value of KUBE_CONFIG with this command cat ~/.kube/config | base64 (make sure you switched context to Azure Production Kubernetes Cluster before running this command!)

  • make any code changes (for example change SkyComposer to SkyComposer 2 in client/components/header.js file)

  • push changes with git add ., git commit -m "test changes" and git push origin

  • go to "Your repository -> Actions" and make sure that the Deployment Pipeline is automatically started and successfully finished

  • this pipeline builds the changed Docker image, pushes it to the container registry and deploys the changed image with a new version to the Kubernetes cluster

  • open https link for your registered domain in your Browser and make sure that you can see SkyComposer 2 title on the top left

  • Congratulations! You successfully tested Ticketing App code changes with the Github Actions Deployment Pipeline!

📖 Akka Actor System Overview

Classic vs Reactive Programming

    • ✅ The Classic Programming Paradigm handles blocking I/O operations inefficiently, wasting thread resources
    • ✅ While waiting for the response, the thread sits blocked without doing any useful work
    • ✅ A more efficient use of resources is to send the request without waiting for the response and be notified when the response arrives
    • ✅ Reactive Programming is a variation of the Observer Pattern: a thread sends a request and is notified about the response later
    • ✅ Reactive Frameworks allow the thread to continue after sending the request and handle another queued job while waiting for the response
    • ✅ But the Reactive programming paradigm, despite its power, comes with many challenges
    • ✅ With the Classic programming paradigm, blocks of code are executed synchronously and sequentially
    • ✅ With the Classic programming paradigm, ACID transactions are easy to implement
    • ✅ But with Reactive Programming, synchronous and sequential execution is challenging and, even where possible, provides no advantage, because you end up with the same blocking operation
    • ✅ With Reactive Programming you write event-driven functional handlers without knowing on which thread and in which context they will be executed
    • ✅ Functional handler F1 might be executed on one thread while functional handler F2 is executed on a different thread with a different context, possibly even in a different instance of the reactive application
    • ✅ The Event-Driven nature of Reactive Programming fits naturally with an Event Sourcing Framework
    • ✅ Event Sourcing is an alternative to classic ACID Transactions
    • ✅ An Event Sourcing Framework provides eventual consistency by writing append-only events to durable event storage, which acts as the single source of truth
    • ✅ Actor Frameworks deal with persistent, event-sourced micro-processes known as Actors, each with its own ID, state and business logic
    • ✅ Any two messages sent to an actor with the same ID must be processed sequentially: Reactive Actor Frameworks guarantee thread safety for actors with the same ID
    • ✅ Reactive Actor Frameworks provide efficient resource usage by executing queued Actor Message Handlers on the next available thread from the pool
    • ✅ Reactive Actor Frameworks allow concurrent event handling for actors with different IDs
    • ✅ Reactive Actor Frameworks guarantee thread safety for handling events of actors with the same ID
    • ✅ If an application instance crashes, persistent actor state can be restored from the database
    • ✅ Actor state can be restored by replaying all its events from the beginning, or starting from the latest snapshot
    • ✅ Akka Actor System is a powerful reactive actor framework, available in Java or Scala, that enables all of the above-mentioned Actor System capabilities
    • ✅ Akka Actor System provides actor persistence, scalability, effective resource usage and a strongly typed reactive programming paradigm, helping programmers focus on the business logic
    • ✅ The Akka Actor System framework provides a powerful persistent state machine for each actor and allows the Akka Cluster to be scaled, if necessary, to deal with millions of actors, each with its own state
    • ✅ Millions of actors can exist in Akka Cluster memory; each actor has a very low memory footprint, in contrast to JVM threads
    • ✅ An Akka State Machine handles commands, persists events, safely updates its state and prevents commands from being handled in the wrong state (see the sketch at the end of this overview)
    • ✅ Akka Cluster requires persistent storage for Event Sourcing events and metadata; supported databases include PostgreSQL, Oracle, Microsoft SQL Server and Cassandra
    • ✅ The JDBC and R2DBC plugins enable integration of Akka Event Handlers with JDBC- and R2DBC-compliant databases
    • ✅ The Alpakka plugin enables integration of Akka Event Handlers with Kafka
    • ✅ Integration of Akka with Kafka enables reliable processing of millions of events
    • ✅ An Akka State Machine integrated with Kafka and persistent storage is a powerful combination for building Long-Running Saga Transactions, also known as Business Processes
    • ✅ Each Business Process Instance is an actor of some type, with its own unique ID, state-transition business logic and persistent state
    • ✅ Other powerful features of Akka Actors include context sharing, sharding, actor hierarchy, and error handling with retries, timeouts or scheduled events
    • ✅ A parent actor can create child actors and stops all of its children when it stops; a parent can also notify all its children, and children can notify their parent

ACID Transactions with Reactive Programming are challenging to implement
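
To make the state-machine bullets above concrete, here is a minimal Scala sketch of an Akka persistent state machine built with EventSourcedBehavior. The wallet-like domain and every name in it are illustrative assumptions, not code taken from this repository:

import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior, RetentionCriteria}

object Wallet {
  sealed trait Command
  final case class ReserveFunds(amount: Int) extends Command
  final case class AddFunds(amount: Int) extends Command

  sealed trait Event
  final case class FundsReserved(amount: Int) extends Event
  final case class FundsAdded(amount: Int) extends Event

  final case class State(balance: Int)

  def apply(walletId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(s"wallet-$walletId"),
      emptyState = State(balance = 0),
      // The command handler validates commands against the current state
      // and decides which events to persist; it never mutates state itself
      commandHandler = (state: State, command: Command) =>
        command match {
          case ReserveFunds(amount) if amount <= state.balance =>
            Effect.persist(FundsReserved(amount))
          case ReserveFunds(_) =>
            Effect.none // reject: not allowed in the current state
          case AddFunds(amount) =>
            Effect.persist(FundsAdded(amount))
        },
      // The event handler is the only place where state changes; it is also
      // used to restore the actor by replaying events or the latest snapshot
      eventHandler = (state: State, event: Event) =>
        event match {
          case FundsReserved(amount) => state.copy(balance = state.balance - amount)
          case FundsAdded(amount)    => state.copy(balance = state.balance + amount)
        }
    ).withRetention(RetentionCriteria.snapshotEvery(numberOfEvents = 100, keepNSnapshots = 2))
}

Because commands for the same persistence id are handled one at a time and events are appended to durable storage before the state is updated, this is the mechanism behind the thread-safety, recovery and snapshot points listed above.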
